TAALES 2.2 is out: automatic analysis of lexical sophistication, Windows and Mac

From the TAALES website:

Kyle, K. & Crossley, S. A. (2015). Automatically assessing lexical sophistication: Indices, tools, findings, and application. TESOL Quarterly 49(4), pp. 757-786. doi: 10.1002/tesq.194

TAALES is a tool that measures over 400 classic and new indices of lexical sophistication, and includes indices related to a wide range of sub-constructs. TAALES indices have been used to inform models of second language (L2) speaking proficiency, first language (L1) and L2 writing proficiency, spoken and written lexical proficiency, genre differences, and satirical language.

Starting with version 2.2, TAALES provides comprehensive index diagnostics, including text-level coverage output (i.e., the percent of words/bigrams/trigrams in a text covered by the index) AND individual word/bigram/trigram index coverage information.
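
The text-level coverage figure described above can be sketched as a simple proportion: the share of a text's tokens that appear in an index's word list. The word list and text below are toy examples for illustration, not actual TAALES data or its tokenization.

```python
# Toy index word list and text (illustrative only, not real TAALES data).
index_words = {"the", "cat", "sat", "on", "mat"}
tokens = "the cat sat on the small mat".split()

# Text-level coverage: percent of tokens found in the index's word list.
covered = sum(1 for t in tokens if t in index_words)
coverage = 100 * covered / len(tokens)
print(f"{coverage:.1f}% of tokens covered")  # 6 of 7 tokens
```

Word/bigram/trigram coverage diagnostics of this kind make it easy to judge how much of a given text an index score is actually based on.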

TAALES takes plain text files as input (it will process all plain text files in a particular folder) and produces a comma separated values (.csv) spreadsheet that is easily read by any spreadsheet software.
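
Because the output is an ordinary .csv file, it can also be consumed programmatically. Below is a minimal sketch of reading such a spreadsheet with Python's standard csv module; the filenames and column names are hypothetical, not TAALES's actual output schema.

```python
import csv
import io

# Hypothetical TAALES-style output: one row per input text file,
# one column per index (column names invented for illustration).
sample_output = io.StringIO(
    "Filename,word_frequency_log,bigram_coverage\n"
    "essay_01.txt,3.12,0.87\n"
    "essay_02.txt,2.54,0.91\n"
)

rows = list(csv.DictReader(sample_output))

# Rank texts by a (hypothetical) mean log word frequency index:
# lower mean log frequency = rarer vocabulary, listed first here.
ranked = sorted(rows, key=lambda r: float(r["word_frequency_log"]))
print([r["Filename"] for r in ranked])
```

The same file opens directly in any spreadsheet software, as noted above.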

You can find all the info on the TAALES website. Windows and Mac versions are available for free.

The Conference on #NLP KONVENS: new deadline

CFP

KONVENS 2016
http://www.linguistics.rub.de/konvens16/

The Conference on Natural Language Processing (“Konferenz zur Verarbeitung natürlicher Sprache”, KONVENS) aims at offering a broad perspective on current research and developments within the interdisciplinary field of natural language processing. It allows researchers from all disciplines relevant to this field of research to present their work. The conference will take place September 19–21, 2016 in Bochum (Germany). We are pleased to announce that John Nerbonne and Barbara Plank will give invited talks at the conference.

Call for Papers

We welcome original, unpublished contributions on research, development, applications and evaluation, covering all areas of natural language processing, ranging from basic questions to practical implementations of natural language resources, components and systems.

The special theme of the 13th KONVENS is: “Processing non-standard data — commonalities and differences”.

A wide range of data can be considered “non-standard” because it deviates in one way or the other from standard written data such as newspaper texts. Examples include:
* data produced by language learners
* historical data
* data from social media
* (transcriptions of) spoken data

We especially encourage the submission of contributions comparing different types of non-standard data and their properties, focusing on their impact on natural language processing. For example, a feature common to many types of non-standard data is the use of non-standard spelling. However, spelling variation in learner data, as compared to historical data, has very different causes and most likely results in very different types of non-standard spellings.

Topics that we would like to see addressed include:
* Common properties shared by (many) types of non-standard data, e.g. non-standard spelling, data sparseness, features of orality
* Impact of the commonalities and differences of non-standard data on the methods and tools applied to the data, e.g. normalization vs. tool adaptation, evaluation without a gold standard, etc.

Important Dates
NEW: June 7, 2016  Paper submissions due
NEW: July 18, 2016 Notification of acceptance
August 15, 2016    Camera-ready copy due
September 19–21, 2016  Conference

Formats

We welcome two types of contributions:
* Full papers for oral presentation (8 pages plus references)
* Short papers for presentation as posters (4 pages plus references)

Short papers/posters can be combined with a system demonstration. Reviews will be anonymous. Accepted full and short papers will be published in the conference proceedings.

Submissions must conform to the formatting guidelines, and must be made electronically through the conference website (see https://www.linguistics.ruhr-uni-bochum.de/konvens16/call/index.html#formatting-guidelines).

The conference languages are English and German. We encourage the submission of contributions in English.

CFP: Language and the new (instant) media

The 2016 PLIN Day is hosted by the Linguistics Research Unit of UCLouvain in Belgium.

After last year’s successful edition on Lexical complexity, this year’s topic is ‘Language and the new (instant) media’. The PLIN Day will take place on 12 May 2016 in Louvain-la-Neuve.

More information and registration (free for all Belgian participants) are available on the official website.

The main objective of the workshop is to bring together specialists from a number of different but related fields to discuss the specificities of language in the new media. The workshop will thus offer a view of different approaches to language in the new media. The event will be structured around five keynote presentations and poster sessions. We are happy to welcome the following keynote speakers:
Patricia Bou-Franch (Universitat de València)
Walter Daelemans (Universiteit Antwerpen)
Elisabeth Stark (University of Zurich)
Caroline Tagg (The Open University)
Olga Volckaert-Legrier (Université Toulouse Jean Jaurès)

The poster sessions, which will include time for a short oral presentation of each poster, offer a forum for numerous other research trends. If you’re a PhD student, you’re eligible for the Best Poster Award!

Posters may deal with any of the following linguistic domains:
* Discourse analysis
* Language norms and contacts
* Communication
* Sociolinguistics
* Psycholinguistics
* Corpus linguistics
* Natural language processing
* Language statistics

We also invite companies that develop research or research-based applications concerning language and the new media to submit a poster proposal.

Important dates:
Deadline for poster proposal submissions: 31 January 2016
Notification of acceptance: 1 March 2016
Submission of PowerPoint presentations for the poster boost session: 1 May 2016

We are also happy to inform you that the Annual Linguistic Day of the Linguistic Society of Belgium will be held at UCL on 13 May 2016, the day after the PLIN Day (http://www.uclouvain.be/en-528988.html).

Best regards,

Convenors:
Louise-Amélie Cougnon (Girsef – Cental), Barbara De Cock (Valibel – Discours et variation) and Cédrick Fairon (Cental)

Follow @plindayucl on Twitter for the latest news!

Official Website: http://www.plindayucl.com/

Prof. Cédrick Fairon
Directeur
Centre de traitement automatique du langage (CENTAL)
Place Blaise Pascal, 1, bte L3.03.12 B-1348-Louvain-la-Neuve
cedrick.fairon@uclouvain.be
Tél. 32 (0)10 47 37 88 – Fax 32 (0)10 47 26 06
www.uclouvain.be/cental
www.facebook.com/ucl.cental
twitter.com/cfairon

Adam Kilgarriff: a selection of papers and talks

Some readings to remember one of the most influential corpus linguists of the 20th and 21st centuries.

Using corpora for language research

https://www.sketchengine.co.uk/documentation/attachment/wiki/AK/Papers/SkE_for_lingResearch2013.ppt?format=raw

Googleology is bad science

http://www.kilgarriff.co.uk/Publications/2007-K-CL-Googleology.pdf

Grammar is to meaning as the law is to good behaviour. Corpus Linguistics and Linguistic Theory 3 (2): 195-198.

http://www.kilgarriff.co.uk/Publications/2007-K-CLLT-grammarlaw.doc

Lexicoder: automated content analysis of text

Lexicoder is Java-based, multi-platform software for automated content analysis of text. It was developed by Lori Young and Stuart Soroka and programmed by Mark Daku (initially at McGill University; now at Penn, Michigan, and McGill, respectively).

The current version of the software (2.0) is freely available for academic use only. Additions and revisions will be released here as they become available. In addition, the Lexicoder Sentiment Dictionary, a dictionary designed to capture the sentiment of political texts, is available formatted for Lexicoder or WordStat, and is also adaptable to other content-analytic software. Work on Topic Dictionaries, based on the Policy Agendas coding scheme, is also underway.
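
Dictionary-based content analysis of the kind the Lexicoder Sentiment Dictionary supports can be sketched as follows. The word lists and the net-tone formula here are purely illustrative; they are not actual LSD entries or Lexicoder's own scoring procedure.

```python
# Toy positive/negative word lists standing in for a sentiment
# dictionary such as the LSD (real entries not reproduced here).
positive = {"support", "progress", "success"}
negative = {"crisis", "failure", "attack"}

def net_tone(text: str) -> float:
    """Illustrative net tone: (positive hits - negative hits) per 100 tokens."""
    tokens = text.lower().split()
    pos = sum(t in positive for t in tokens)
    neg = sum(t in negative for t in tokens)
    return 100 * (pos - neg) / len(tokens)

print(net_tone("the reform was a success despite the budget crisis"))
```

Real dictionaries handle negation, multi-word entries, and preprocessing that this sketch omits, but the core idea is the same: counting matches against curated word lists.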

Via the WebGenre R&D Group on LinkedIn.