@CALICOnsortium conference 2017 Multilingualism and Digital Literacies NAU May 16-20

 


CALICO 2017 34th ANNUAL CONFERENCE Multilingualism and Digital Literacies

Northern Arizona University
May 16-20
DEADLINE FOR PROPOSALS: OCTOBER 31, 2016
Workshops: Tuesday, May 16 – Wednesday, May 17, and Saturday, May 20
Opening Reception and Keynote: Wednesday, May 17
Presentation Sessions: Thursday, May 18 and Friday, May 19
Technology Showcase and Poster Session: Thursday, May 18

https://calico.org/page.php?id=690

CALICO is a professional organization whose members research the informed and innovative uses of technology in foreign/second language learning and teaching. CALICO’s conferences bring together educators, administrators, materials developers, researchers, government representatives, vendors of hardware and software, and others interested in the field of computer-assisted language learning. Proposals may explore the conference theme or address any area of technology pertaining to language learning and teaching. Presentations may be in either traditional or practitioner research styles, grounded in theory and/or methodology, covering topics in language acquisition and integration of software and technology into the learning environment. A formal paper need not accompany a presentation at the conference. However, presenters are encouraged to submit a formal paper for review to the CALICO Journal, on the same topic (or any other).
Following the proposal guidelines, each submission must include the following information: title, type of presentation, a 100-word abstract, a 300-word description, presenter/co-presenter contact information, and technology needs.
Five types of presentation formats are available:
Workshop (pre- or post-conference, hands-on; half-day, full-day, or two-day, at the presenter's choice)
Technology Showcase (a two-hour informal event during one evening of the conference)
Poster Session (in conjunction with the Showcase)
Presentation (30-minute individual presentation)
Panel discussion (90-minute session designed for multiple presentations and presenters on a specific topic)

13th Corpus Linguistics in the South 26 November

 

Corpus Linguistics in the South 13 SCALE AND GRAIN IN CORPUS LINGUISTICS

University of Suffolk, Waterfront Lecture Theatre 1
Saturday 26 November 2016

Programme
10:00 – 10:45 Opening coffee/refreshments and discussion

10:45 Brief welcoming remarks

11:00 – 11:30 The Hillary Clinton emails: corpus linguistics meets the real world
Rachele de Felice, University College London

11:30 – 12:00 Grain and scale: Looking at small data sets in broader sociocultural contexts
Colleen Cotter, Lisa McEntee-Atalianis and Danniella Samos
Queen Mary University of London and (LMA) Birkbeck, University of London

12:30 – 1:00 Obviously native: uses of adverbs in native and advanced learner language in spoken English
Pascual Pérez-Paredes and Camino Bueno
University of Cambridge and (CB) Universidad Pública de Navarra

Break for lunch at cafés surrounding the Waterfront building

2:00 – 2:30 Corpus linguistics and news representations: a corpus-assisted framing analysis of mental health and arts participation messages in the British press
Dimitrinka Atanasova and Nelya Koteyko, Queen Mary University of London

2:30 – 3:00 From colony to text: the Twitter essay as a theoretical and corpor(e)al challenge
Diana ben-Aaron, University of Suffolk

3:00 Brief closing remarks
If you would like to attend, please RSVP to Dr Diana ben-Aaron at d.ben-aaron@uos.ac.uk by 24 November. As always with CLS, there is no charge for participants. Light refreshments will be provided and an informal dinner meetup will be arranged for those arriving on Friday night.
The University of Suffolk is located on the Ipswich waterfront, within walking distance of the train station (ca 75 mins to London) and the National Express coach stop. A scalable map, campus map and links to other information are available on the University of Suffolk website. There are a number of inexpensive hotels in Ipswich and we are happy to advise on practical arrangements.

*********************************

First post below

We are pleased to announce that the 13th Corpus Linguistics in the South event will take place on Saturday, 26 November 2016, at the University of Suffolk in Ipswich. For this session we would like to continue the focus on theory and methodology, asking:

– How do we select data sets and units of analysis?
– How is this influenced by scale of resources?
– How does this affect our findings?
– How do these objects of study relate to speaker/reader interactions with the original texts?
– How can we ensure that our analyses bear relevance to these interactions?

Corpus work has enabled the identification of new linguistic objects of study, as well as the re-examination of pre-existing categories in syntax, semantics, varieties and genres. Advances in data processing have enlarged our ability to investigate new categories. However, if corpus linguistic findings are to be relevant for other branches of linguistics, we need to problematise the correspondence between our methodological choices and the way the texts are used in situ by users or populations. This is particularly relevant as digital texts enable new kinds of displays and uses. With some kinds of new media, such as games, basic default units of analysis may be difficult to define. Even with more traditional texts there are questions to be asked about our categories, such as: what is a meaningful unit of time in diachronic research?

These questions offer the opportunity to dig deeper into previous CLS topics, such as small and large corpora as discussed at Sussex last spring, as well as public and professional discourse, and social media. Thus we welcome proposals which respond to any of the questions above, or other questions relating to the construction and role of categories in our analysis.

Presentations should be 30 minutes in length, and will be followed by time for discussion. If you would like to participate, please send a short (250-word) abstract by 15 October to d.ben-aaron@ucs.ac.uk, as an attachment without name or affiliation. Notification of acceptance will be sent at the beginning of November.

Contact person:

Dr Diana ben-Aaron
Lecturer in English
University of Suffolk, Neptune Quay, Ipswich IP4 1QJ
@diana180 | d.ben-aaron@uos.ac.uk | www.uos.ac.uk/english

#CFP NLP for learning and teaching Traitement Automatique des Langues

:::::::::::::::::::::::::::::::::::

Through the corpora list

::::::::::::::::::::::::::::::::::::

TAL Journal: 2016 Volume 57-3

Call for papers

Topic: NLP for learning and teaching

Foreign Language Learning and Teaching is one of the fields where the introduction of information and communication technologies (ICT) has proved particularly fruitful. It is thus no wonder that Computer-Assisted Language Learning (CALL) was among the first fields, from the 1960s onwards, to integrate insights and techniques from Natural Language Processing (NLP) to create intelligent computer-assisted learning environments. Since then, various other fields and disciplines have also incorporated NLP into electronic learning environments to support self-directed learning, blended learning or classroom teaching. Overall, NLP has contributed to better learning environments and to the development of research in the related fields, improving integrated systems and broadening the range of questions those fields can address.

Today, online learning tools, Massive Open Online Courses (MOOCs), Small Private Online Courses, Computer-Assisted Pronunciation Teaching (CAPT) systems, Computer-Assisted Instruction systems for mathematics, sign language learning applications, or Intelligent Tutoring Systems (ITS), among many others, are heavy “consumers” of NLP, or are about to become so.

Integrating NLP into these systems makes it possible to analyse, process and repurpose the linguistic content of data for learning purposes, to create more advanced educational resources, and to make communication with the learner more relevant in a teaching context.

The aspects of NLP most frequently involved are analysis of learners’ responses, feedback provision, automated generation of exercises, and the monitoring of learning progress. Other aspects related to learning and teaching also involve NLP, such as plagiarism detection, writing support, use of learner corpora or parallel corpora to detect and resolve errors, or adaptive learning systems integrating ontologies for the associated domains.
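As a concrete illustration of one item in this list, automated exercise generation, here is a minimal, hypothetical Python sketch: it blanks out a target vocabulary word in a sentence to produce a cloze item. The function name, word list and example sentence are invented for illustration; a real CAI system would add POS tagging, frequency information and distractor selection.

    # Minimal cloze (gap-fill) generator -- illustrative sketch only.
    import random
    import re

    def make_cloze(sentence, target_words, seed=0):
        """Blank out one target word; return (exercise, answer)."""
        rng = random.Random(seed)
        # Split into word and non-word runs so punctuation and spacing survive.
        tokens = re.findall(r"\w+|\W+", sentence)
        candidates = [i for i, t in enumerate(tokens) if t.lower() in target_words]
        if not candidates:
            return sentence, ""                    # nothing to blank out
        i = rng.choice(candidates)
        answer = tokens[i]
        tokens[i] = "_" * max(4, len(answer))      # gap sized to the hidden word
        return "".join(tokens), answer

    exercise, key = make_cloze(
        "The committee postponed the decision until next week.",
        target_words={"postponed", "decision"},
    )
    print(exercise)        # prints the sentence with one target word gapped
    print("Answer:", key)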

The contribution of NLP to these systems is generally regarded as positive. It must be recognized, however, that only a handful of such applications have reached the general public as commercial software. In most cases, the systems never left the laboratory and have a limited range of use, sometimes only as a proof of concept. Is this due, as many believe, to the high production cost of NLP resources? Is it because of the current quality of NLP results? Is it a consequence of the strategy for integrating NLP into these applications?

The goal of this issue of Traitement Automatique des Langues dedicated to “NLP for learning and teaching” is to summarize the contribution of NLP to instructional systems, both at the theoretical level (opportunities, limitations, integration methods) and at the practical level of producing learning systems or system components.

Authors are invited to submit papers on all aspects of integrating NLP into Computer-Assisted Instruction (CAI) systems for a given discipline, as well as on tools useful for this task, in particular regarding, but not limited to, the following issues and tasks:

  • Contribution of (written or spoken) NLP to CAI systems.
  • Needs and requirements of NLP techniques and methods for instructional systems design.
  • Instructional design methodology for NLP-based CAI systems.
  • Presentation of systems and learning tools involving NLP.
  • Collection and use of language corpora for pedagogical purposes using NLP.
  • Use of learner corpora and error annotation using NLP.
  • Automated evaluation of learner writing and short answers using NLP.
  • (Semi-)automated diagnostic assessment and remedial help.
  • Design and setting up of activities involving NLP.
  • Language resources for NLP-based instruction and learning.
  • Automated selection of text resources based on pedagogical criteria.
  • Development, presentation and use of linguistic and metalinguistic information for pedagogical purposes.
  • Learner modelling based on the learner’s linguistic output.
  • Approaches and methods for plagiarism detection.

Position papers and state of the art papers are also welcome.

Language

Papers can be written in French or in English. Submissions in English will only be accepted if at least one of the authors is not a native speaker of French.

Submission guidelines

Submitted papers should be 20 to 25 pages long. Any exception to this length should be discussed with the guest editors in advance.

Authors are invited to submit their paper as a PDF file at http://tal-57-3.sciencesconf.org/ by clicking on “Soumission d’un article”, after registering and logging in on SciencesConf.org.

The TAL Journal follows a double-blind peer-reviewing process. All submissions must be carefully anonymized.

Stylesheets are available online on the journal website: http://www.atala.org/IMG/zip/tal-style.zip .

Important dates

  • Paper submission deadline: 28 October, 2016
  • Notification to the authors after first review: 17 February, 2017
  • Notification to the authors after second review: 28 April, 2017
  • Publication: September 2017

Journal

Traitement Automatique des Langues is an international journal published since 1960 by ATALA (Association pour le traitement automatique des langues) with the support of CNRS. It is now published online, with immediate open access to published papers and annual print on demand. This does not change its editorial and reviewing process.

Guest editors

  • Georges Antoniadis, Université Grenoble-Alpes, LIDILEM, France
  • Piet Desmet, KU Leuven, iMinds-ITEC, Belgium

Editorial Board

  • Véronique Aubergé, LIG, Université Grenoble-Alpes, France
  • Yves Bestgen, IPSY, Université Catholique de Louvain, Belgium
  • Eric Bruillard, STEF, ENS Cachan, France
  • Cristelle Cavalla, DILTEC, Université Sorbonne Nouvelle, France
  • Thierry Chanier, LRL, Université Blaise Pascal de Clermont Ferrand, France
  • Françoise Demaizière, Université Paris Diderot, France
  • Philippe Dessus, LSE, Université Grenoble-Alpes, France
  • Sylvain Detey, Waseda University, Japan
  • Walt Detmar Meurers, Universität Tübingen, Germany
  • Maxine Eskenazi, Carnegie Mellon University, USA
  • Cédrick Fairon, CENTAL, Université Catholique de Louvain, Belgium
  • Dan Flickinger, LinGO Laboratory, Stanford University, USA
  • Nuria Gala, LIF, Aix-Marseille Université, France
  • Sylviane Granger, CECL, Université Catholique de Louvain, Belgium
  • Natalie Kübler, CLILLAC-ARP, Université Paris Diderot, France
  • Jean-Marc Labat, LIP6, Université Pierre-et-Marie-Curie, France
  • Patrice Pognan, PLIDAM, INALCO, France
  • Mathias Schulze, University of Waterloo, Canada
  • Isabel Trancoso, Instituto Superior Técnico, Portugal
  • Stefan Trausan-Matu, Universitatea Politehnica din Bucuresti, Romania
  • Elena Volodina, University of Gothenburg, Sweden
  • Virginie Zampa, LIDILEM, Université Grenoble-Alpes, France
  • Michael Zock, LIF, Aix-Marseille Université, France

 

*************************************************

Georges ANTONIADIS

Professor of Computational Linguistics

Director of the Department of Language Sciences & French as a Foreign Language (FLE)

Head of the Language Industries master's programme

UFR LLASIC / LIDILEM laboratory

Université Grenoble-Alpes, Stendhal building

CS 40700

38058 Grenoble cedex 9

Tel.: +33 (0)4 76 82 77 61  Fax: +33 (0)4 76 82 41 26

Email: Georges.Antoniadis@univ-grenoble-alpes.fr

http://lidilem.u-grenoble3.fr/membres/

eLex 2017: Lexicography from Scratch submission deadline 1 Feb 2017

The fifth biennial conference on electronic lexicography, eLex 2017, will take place at the Holiday Inn Leiden, Netherlands, from 19 to 21 September 2017.

The conference aims to investigate state-of-the-art technologies and methods for automating the creation of dictionaries. Over the past two decades, advances in NLP techniques have enabled the automatic extraction of different kinds of lexicographic information from corpora and other (digital) resources. As a result, key lexicographic tasks, such as finding collocations, definitions, example sentences, and translations, are increasingly being transferred from humans to machines. Automating the creation of dictionaries is highly relevant, especially for under-resourced languages, where dictionaries need to be compiled from scratch and where users cannot wait for years, often decades, for the dictionary to be “completed”. Key questions to be discussed are: What are the best practices for automatic data extraction, crowdsourcing and data visualisation? How far can we get with Lexicography from scratch, and what is the role of the lexicographer in this process?
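As a rough illustration of how one of these lexicographic tasks can be automated, the hypothetical Python sketch below scores bigrams in a tiny invented text with pointwise mutual information (PMI), a common way to surface collocation candidates from a corpus; real pipelines work on much larger corpora and add part-of-speech filtering and more robust association measures.

    # Collocation candidates via pointwise mutual information (PMI) -- sketch only.
    import math
    import re
    from collections import Counter

    def collocations(text, min_count=2, top_n=10):
        tokens = re.findall(r"[a-z]+", text.lower())
        unigrams = Counter(tokens)
        bigrams = Counter(zip(tokens, tokens[1:]))
        n = len(tokens)
        scored = []
        for (w1, w2), c in bigrams.items():
            if c < min_count:
                continue
            # PMI = log2( P(w1, w2) / (P(w1) * P(w2)) )
            pmi = math.log2((c / n) / ((unigrams[w1] / n) * (unigrams[w2] / n)))
            scored.append(((w1, w2), pmi))
        return sorted(scored, key=lambda item: item[1], reverse=True)[:top_n]

    sample = ("The dictionary entry lists strong tea as a typical collocation. "
              "Speakers say strong tea, not powerful tea, and corpus counts of "
              "phrases such as strong tea make that preference visible.")
    for pair, score in collocations(sample):
        print(pair, round(score, 2))    # e.g. ('strong', 'tea') with a high PMI score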

Important dates

February 1st, 2017: abstract submissions
March 15th, 2017: reviews of abstracts
May 15th, 2017: submission of full papers
June 15th, 2017: reviews of full papers
June 25th, 2017: camera-ready copies submissions

Call for papers here: https://elex.link/elex2017/call-for-papers/