
The Linguist 56,1 – February/March 2017

The Linguist is a languages magazine for professional linguists, translators, interpreters, language professionals, language teachers, trainers, students and academics, with articles on translation, interpreting, business, government and technology.

Issue link: https://thelinguist.uberflip.com/i/786024


FEATURES thelinguist.uberflip.com DECEMBER 2016/JANUARY 2017 The Linguist 25

yet completely ascendant, and the delegates believed that Esperanto could fill the vacuum, starting in schools and telegraph offices and spreading inexorably as both the medium and the message of international cooperation and world peace.

It's easy to see why the League was so enamoured. Modern, efficient, progressive, Esperanto embodied a certain spirit of early 20th-century internationalism. Channelling what he called "the spirit of European languages", Zamenhof had forged a hybrid of Romance, Germanic and Slavic elements, streamlined for maximum transparency (no irregular verbs!) but distinctive and ingenious when it came to word formation. Chains of prefixes and suffixes work virtuoso wonders in Esperanto, with -eg- making anything bigger, -et- making anything smaller, and mal- turning anything into its opposite. More thoroughly and elegantly than the English suffix '-ly', Esperanto's -e transforms any noun into an adverb; Kiel vivi vegane is an Esperanto pamphlet whose title translates as 'How to Live Veganly'. There is no lack of idioms, slang or linguistic colour.

Though the League of Nations eventually sent the Esperantists packing after three years of debate – official French opposition was apparently decisive – Zamenhof's followers would ultimately outlast the League itself. While it never achieved the fina venko ('final victory') projected by its more devoted acolytes, Esperanto is today "a living language with a worldwide community", reports Esther Schor in her fascinating new history Bridge of Words: Esperanto and the dream of a universal language. The language has survived derision, repression and the onslaught of global English, and it still has an estimated million-plus speakers in a hundred-odd countries. Probably in decline numerically speaking, Esperanto is still flourishing online, with 230,000 Wikipedia articles and counting.
There is even a small world of denaskuloj – native Esperanto speakers from birth. No other language constructed largely by a single individual has ever matched Esperanto in popularity, or seems likely to any time soon.

Why Esperanto – and not Lingua Ignota, Volapük, Interlingua, or any of a thousand other constructed languages? As 'conlanger' David Peterson, who invented Dothraki for Game of Thrones, points out in The Art of Language Invention, every consciously created language bears the imprint of its era. Medieval languages for addressing God, like the mystic polymath Hildegard von Bingen's Lingua Ignota, gave way to "philosophical languages" in the 17th and 18th centuries, which sought to encode the structure of all knowledge. (Imagine the Dewey Decimal System as a spoken tongue.) Today the hobbyists of the Language Creation Society, inspired by sci-fi and fantasy but ever more informed about Earth's linguistic diversity, share 'artlangs' online.

Esperanto is a product of the late 19th and early 20th centuries, an era that saw a rage for constructed languages linked to some kind of cause or hope for social reform. The Eastern European Jewish milieu in which Zamenhof was raised was uniquely hospitable to language planning; he was born just a year after and a few hundred miles away from Eliezer Ben-Yehuda, the founder of modern Hebrew. Before the runaway success of Dr Esperanto's International Language, his 1887 opus, now universally known as the Unua Libro ('First Book'), Zamenhof was trying his hand at Zionist activism and Yiddish language reform. For a meagre living, he made eyeglasses for the Warsaw poor. The Unua Libro humbly pitched Esperanto as "an official and commercial dialect", an easy cipher designed to save its speakers time and money.
A century before Linux, the language was avowedly open-source: there was a one-year open comment period during which anyone could vote on proposed changes, and Zamenhof declared that "the future of the international language is no longer more in my hands than in the hands of any other friend of this sacred idea".

The timing was fortunate, too: Esperanto launched right after the collapse of Volapük, invented in 1879 on divine inspiration by the German Catholic priest Martin Schleyer. Saddled with endless, intricate verb endings, Volapük apparently never transcended its user base of "male, educated, German-speaking Catholics", according to Schor.

Esperanto's centre of gravity moved early on to France, but Zamenhof retained at least nominal leadership until his death in 1917, issuing reforms and fighting off a schism with the offshoot language Ido. By then, the Universal Esperanto Association (UEA) and the Akademio de Esperanto were leading the way, managing a perennial rift between those who proclaimed the language's political and ideological neutrality and those who linked it to the interna ideo, "an undefined feeling or hope", in Zamenhof's words, that would eventually lead to "a special and completely defined political-religious program". Core tenets of Zamenhof's little-heeded programme of Hillelism, later called Homaranismo ('Humanity-ism'), were belief in a higher power, individual conscience and moral reciprocity (Rabbi Hillel's 'Golden Rule'). For other, more secular-minded Esperantists, the 'internal idea' was socialism, pacifism or anti-nationalism.
Esperanto has always been weak in the United States, though not for lack of trying by its pioneer, the Irish immigrant Richard Geoghegan, a stenographer who, in his spare time, wrote a dictionary and grammar

INTERNATIONAL COOPERATION A Spanish street named in tribute to Esperanto (main image); (right) books in the language at the World Esperanto Congress; and (far right) the seventh congress in Antwerp, 1911

ESPERANTO: an idea with a future?
Ross Perlin considers the continued relevance of the invented language on the eve of its 130th anniversary

When the League of Nations first convened in 1920, a universal language was on the agenda. Delegates from a dozen countries, including Brazil, China, Haiti and India, declared their hope "that children of all nations from now on would know at least two languages, their own mother tongue and an easy means of international communication". The most likely candidate for such an "international auxiliary language" was Esperanto, launched only 33 years earlier by Polish-Jewish ophthalmologist Ludwik Zamenhof. At its height, the lingvo internacia was embraced by working-class Jews, French intellectuals, East Asian leftists, Baha'i believers, Shinto sectarians and Brazilian spiritists. French was in decline, English not

A positive response to Brexit
Two cracking letters in this issue (TL55,6) from Helen Stubbs Pugin and David Smith. However, I find myself at odds with David in his concern over the future of language teaching and Brexit. I would have thought that, if anything, now is the time for UK modern foreign languages (MFL) teaching to pick up the gauntlet that has been thrown before it.
Thanks to the referendum, the UK now finds itself on the back foot, having to try harder than ever before to secure a future-proof trading relationship with its European neighbours. This does not mean giving in to an imagined paradise in which learning foreign languages is no longer necessary; it means a new, powerful motivation to put languages at the forefront of national education, coupled with a renewed initiative aimed at broadening, not narrowing, our horizons. Neither should our young people be allowed to drift away from the cultural and social benefits of healthy involvement with our friends on the continent, let alone further afield. Brexit threatens to legitimise the worst in our attitude to foreigners (excellently expressed in David's letter) and this cannot be allowed to happen. The response must be a positive one: the challenge must be seen as an opportunity, and the ball is very much in MFL's court.
Nigel Pearce MCIL

This issue's star letter writer wins a BBC Active Talk Complete self-taught course.

Deciphering Swiss German
The article 'Inside Switzerland' (TL55,6) recalled my experience of Swiss German as a young translator working at the English Institute in Basle over 50 years ago. The needs of ordinary social contact meant learning the dialect, but there were very few books available on the subject. The only one I could find was Reded Schwîzertütsch, published by Payot, and I worked through this diligently with the help of my Swiss-born spouse to correct me. Friends told me that prior to 1914, post offices, tax offices and banks all spoke Standard German but that they then switched to Swiss German to stress their 'Swissness'. It wasn't easy, but the prize came one day in a shop in Basle, where, after struggling to express myself in a language which has lost many of the complications of Standard German, I was asked, unexpectedly, "Entschuldiged Sie, chömmed Sie useme andere Kanton?" ('Excuse me, do you come from another Kanton?').
John Catling MCIL

and on a probabilistic model of the target language, learned from a large monolingual corpus of texts. Such 'learning' is done in a so-called 'training' phase. In a second 'tuning' phase, system developers work out the optimal weight that should be assigned to each model to get the best outcome. When the system is called upon to translate new text (in a third phase called 'decoding'), it searches for the most probable target-language sentence given a particular source sentence, the models it has learned and the weights assigned to them. SMT systems thus have a tripartite architecture and involve a lot of tuning to find the optimal weights for different models.

The models used are based on n-grams – i.e. strings of one, two, three or n words that appear contiguously in the training data used. SMTs can have difficulty handling discontinuous dependencies, such as that between 'threw' and 'out' in the sentence 'She threw all her old clothes out'. This is due to the relatively limited amount of context used to build models, and the fact that the n-grams are translated largely independently of each other and don't necessarily correspond to any kind of structural unit. SMT systems are also known to perform poorly for agglutinative and highly inflected languages, as they have no principled way of handling grammatical agreement. Other problems include word drop, where a system fails to translate a given source word, and inconsistency, where the same source-language word is translated two different ways, sometimes in the same sentence. These are precisely the kind of errors that human post-editors are employed to fix. The editing environments used by post-editors are often the same as those used by translators, namely the interfaces provided by TM tools.
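The n-gram modelling described above can be sketched in miniature. The snippet below is an illustrative toy, not a component of any real SMT system: the three-sentence 'corpus' and the function name are invented for this example, and real systems train on millions of sentences and apply smoothing to unseen n-grams.

```python
from collections import Counter

# Toy bigram model: P(sentence) as a product of P(word | previous word),
# estimated by counting adjacent word pairs in a tiny monolingual corpus.
corpus = [
    "she threw the ball",
    "she threw all her old clothes out",
    "the old clothes",
]

bigrams, histories = Counter(), Counter()
for sent in corpus:
    toks = ["<s>"] + sent.split() + ["</s>"]
    histories.update(toks[:-1])          # every token that precedes another
    bigrams.update(zip(toks, toks[1:]))  # every adjacent pair

def bigram_prob(sentence):
    """Probability of a sentence under the toy bigram model."""
    toks = ["<s>"] + sentence.split() + ["</s>"]
    p = 1.0
    for a, b in zip(toks, toks[1:]):
        if histories[a] == 0:            # unseen history: no estimate at all
            return 0.0
        p *= bigrams[(a, b)] / histories[a]
    return p

# The model scores sentences by local word pairs only. It never links
# 'threw' to 'out' directly -- that dependency spans more words than a
# bigram window can see, which is exactly the weakness noted above.
print(bigram_prob("she threw the ball"))  # ≈ 0.167
```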
Although they are distinct technologies, the lines between TM and SMT are blurring somewhat, as it is now common for translators to be fed automatic translations directly from an SMT system when their translation memory does not contain a match for the source sentence. TM and SMT are also intimately connected by the fact that the translation memories that translators build up over time can become training data for their very own (or someone else's) SMT engine.

Dominating the field
Despite known problems, SMT systems have come to dominate the field of machine translation, out-performing previously leading systems. In the last two years, however, there has been a new kid on the block: neural machine translation (NMT). Like SMTs, NMT systems learn how to translate from pre-existing source texts and their translations. They have a simpler architecture than SMTs, however, and don't use models based on n-grams. Instead, they use artificial neural networks in which individual nodes that can hold single words, phrases or whole sentences are connected with other nodes in the network. The connections between nodes are strengthened via bilingual training data. When it comes to translating new inputs, the system reads through the source-language sentence one word at a time. Then it starts outputting one target word at a time until it reaches the end of the sentence. NMT systems thus process full sentences (rather than n-grams). They handle morphology,

In a shortened version of her Threlford Memorial Lecture, Dorothy Kenny asks what implications new technology has for the translation profession

Translation without technology is now inconceivable, but the relationship between the two has become somewhat fraught of late. Even as translation activity continues to grow at a dizzying pace worldwide, translators worry about competition from computers, or having to work with poor quality machine output.
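The read-then-emit behaviour described earlier for NMT systems can be caricatured as a greedy decoding loop. Everything in this sketch is a stand-in: the next_word table plays the role of a trained network's next-word predictions (a real system scores the entire vocabulary at every step, conditioned on the encoded source sentence), and the Esperanto example is invented for illustration.

```python
EOS = "</s>"  # end-of-sentence marker

# Hypothetical 'model': maps the previously emitted word to the next one.
# A trained network would compute a probability for every vocabulary item,
# conditioned on the encoded source sentence as well.
next_word = {
    "<s>": "la",
    "la": "lingvo",
    "lingvo": "internacia",
    "internacia": EOS,
}

def greedy_decode(source_tokens, max_len=10):
    """Read the whole source, then emit one target word at a time."""
    # An encoder would compress source_tokens into vectors here; the toy
    # model ignores the source entirely, which a real system never would.
    output, prev = [], "<s>"
    for _ in range(max_len):
        word = next_word.get(prev, EOS)
        if word == EOS:  # stop when the model predicts end-of-sentence
            break
        output.append(word)
        prev = word
    return output

print(greedy_decode("the international language".split()))
# → ['la', 'lingvo', 'internacia']
```

Because each output word conditions the next, the system works with whole sentences rather than independently translated n-gram chunks.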
At the same time, translation teachers are asking themselves what students should be learning now to see them safely through the 'revolutionary upheaval' currently under way in translation. After all, received wisdom is that education is the means by which human labour wins the race against technology.

In coming to terms with current upheavals, there is no doubt that what we need is careful, critical examination of what is actually happening in the contemporary world of translation, one that avoids what Michael Cronin dubs "the dual dangers of terminal pessimism and besotted optimism".1 These positions are all too present in current reflections on translation technology. Cyber-utopian visions of a world without language barriers abound, and even within translation studies, some commentators predict that machine translation will turn most translators into post-editors sometime soon.2

Nor are predictions of wholesale automation limited to translation of the written word, where tools like Google Translate have already made their popular mark. If anything, technology pundits get even more excited about automatic translation of the spoken word – the very stuff of sci-fi fantasy. There are already several systems that use speech recognition to convert speech to written text in the same language, and then use conventional machine translation to translate that written text into another language.3 All that's then needed is a speech synthesis module to speak the target-language text and we have speech-to-speech translation. The first two steps are error-prone and synthesising natural speech is challenging, but developers at one New York start-up are so confident that they can make the technology work that their Babel fish-style earpieces can already be pre-ordered.4

Predicting the future
Predictions about translation technology need careful scrutiny, because what we believe about the future has profound consequences for the decisions we make today.
If it is only a matter of time before technology makes human translators and interpreters obsolete, or before post-editing displaces translation, should we still put effort into training translators and interpreters? And what might a career in post-editing look like anyway? What kind of conditions would post-editors work under? And would they like their jobs?

Before pursuing these questions, I would like to stress that, while I support a critical approach to translation technology, I am not advocating an antagonistic approach. Despite frequent allusions to translators' supposed hostility to 'technology', there is little to suggest that they harbour negative sentiments towards technology per se. In one recent study by the Finnish researchers Kaisa Koskinen and Minna Ruokonen, for example, some 100 participants were invited to write a short love letter or break-up letter to a technological tool or some other aspect of their work. Most chose to write a love letter.5

Koskinen and Ruokonen's study covers all sorts of technologies, from search engines to ergonomic mice, but the technologies that are most associated with translation are undoubtedly translation memory (TM) and machine translation (MT), and in particular, statistical machine translation (SMT).

TM tools have been around since the 1990s. Put very simply, they store sentences from previously translated source texts alongside their human translations. If a source sentence (or something like it) is repeated in a subsequent translation job, the tool simply retrieves the existing translation for that sentence from memory and presents it to the human user, who can choose to accept, reject or edit it for current purposes. The human translator remains in control.
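The retrieve-and-offer workflow just described can be sketched as follows. This is a minimal illustration, not any commercial tool's algorithm: the in-memory dictionary, the entries and the 0.75 threshold are invented for the example, and Python's difflib similarity ratio stands in for the proprietary fuzzy-match scoring that real TM tools implement in their own ways.

```python
import difflib

# A toy translation memory: source sentences paired with their stored
# human translations. Entries are invented for illustration.
memory = {
    "The printer is out of paper.": "La imprimante n'a plus de papier.",
    "Press the green button.": "Appuyez sur le bouton vert.",
}

def lookup(source, threshold=0.75):
    """Return the stored translation of the closest match, or None."""
    best_tgt, best_score = None, 0.0
    for src, tgt in memory.items():
        # Character-level similarity as a stand-in for a fuzzy-match score
        score = difflib.SequenceMatcher(None, source, src).ratio()
        if score > best_score:
            best_tgt, best_score = tgt, score
    # Above the threshold: offer the match for the translator to accept,
    # reject or edit. Below it: fall back to MT or translate from scratch.
    return best_tgt if best_score >= threshold else None

print(lookup("The printer is out of paper."))  # exact match in memory
```

Note that a near-match (say, 'The printer is out of ink.') can still clear the threshold and return the stored translation, which is precisely why the human translator stays in control: the tool only proposes, and the translator edits.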
Contemporary SMT, on the other hand, is fully automatic translation in which a computer program decides on the most probable translation for a given source sentence, based on a probabilistic model of translation that it has learned from pre-existing source texts and their human translations,

The translator and THE MACHINE

lexical selection and word order phenomena (including discontinuous dependencies) better than SMTs, but they take much longer and much more computing power to train. These are problems that large corporations can overcome and, in late September 2016, Google announced that all Chinese-to-English translation delivered by Google Translate for mobile and web apps would henceforth be powered by Google Neural Machine Translation (GNMT).6 However, problems like word drop, mistranslations (especially of rare words) and contextually inappropriate translations can still occur. There may still be work, in other words, for post-editors. To date, however, we have little or no knowledge of what it is like to work as a post-editor of NMT output.

Training implications
But back to our questions: what does all this mean for the training of future translators and interpreters? And what might a career in post-editing look like? To answer the first question it is worth looking to the field of labour economics. It used to be the case that routine work was considered particularly

INSIGHTS Dorothy Kenny at Stationers' Hall (left)

OPINION & COMMENT Email linguist.editor@ciol.org.uk with your views

A sight for sore eyes?
I really enjoy reading The Linguist, in fact it's probably the main reason I keep up my membership. It's therefore the most expensive thing I read, and it annoys me to see some of that money being spent on coloured pages such as those on pages 10-11 (TL55,6), which make it harder to read.
I'm glad to say the content is consistently interesting enough to require no embellishment, though your intelligent use of pictures and layout certainly facilitates reading.
Elspeth Wardrop MCIL

Editor replies: Although there is no additional cost in printing pages with coloured backdrops, readability is very important to us and we have noted this concern.
