The Linguist is a languages magazine for professional linguists, translators, interpreters, language professionals, language teachers, trainers, students and academics with articles on translation, interpreting, business, government, technology
Issue link: https://thelinguist.uberflip.com/i/1513068
10 The Linguist Vol/62 No/4 thelinguist.uberflip.com FEATURES

Should interpreters be worried, or could new technology simply improve their work? asks Claudio Fantinuoli

The realm of artificial intelligence, especially in its application to natural languages, has witnessed significant advancements. These innovations do more than just automate tasks once reserved for humans; they empower language professionals, enhancing the quality of our work, streamlining delivery schedules and fostering collaboration among colleagues.

A prominent application of AI is in assisting simultaneous interpreters, who frequently work in high-profile and critical settings, such as international conferences, court proceedings, live television interviews and political debates. Distinctive features of simultaneous interpreting include its concurrent nature (the translation happens nearly simultaneously with the original speech) and its immediacy (it is consumed instantly, without interruption, and cannot be revised). Both aspects place strict time constraints on the process, making it a cognitively challenging task.

To maintain quality, interpreters often work in pairs or teams, taking turns every 20-30 minutes or so to ensure accuracy and prevent fatigue. While they are not interpreting, they often assist their boothmate with terminology, ambiguous passages and technical issues. Drawing inspiration from this role, computer-assisted interpreting (CAI) tools with a 'boothmate' function have been developed. By analysing the original speech in real time, they identify challenging segments for the interpreter and suggest translations.

Problem triggers

Three prominent challenges arising from the time constraints of simultaneous interpretation – often referred to as 'problem triggers' – are specialised terminology, numbers and proper names. Interpreters employ strategies to address each of them.
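The three problem triggers can be pictured as a simple detector running over the live transcript. The following is a minimal, hypothetical sketch in Python, assuming an interpreter-curated glossary; the actual detection models inside tools such as InterpretBank or SmarTerp are not public and are certainly far more sophisticated than these heuristics.

```python
import re

# Hypothetical interpreter-curated term base: source term -> preferred rendering
GLOSSARY = {
    "quantitative easing": "allentamento quantitativo",
    "supply chain": "catena di approvvigionamento",
}

def detect_triggers(segment: str) -> list[tuple[str, str]]:
    """Return (trigger_type, text) pairs found in one transcript segment."""
    triggers = []
    # Numbers: digits, optionally with decimal/thousands separators or a percent sign.
    for match in re.finditer(r"\b\d[\d.,]*%?\b", segment):
        triggers.append(("number", match.group()))
    # Proper names: naive heuristic - a capitalised word that does not
    # start the segment or follow sentence-ending punctuation.
    for match in re.finditer(r"(?<!^)(?<![.!?]\s)\b[A-Z][a-z]+\b", segment):
        triggers.append(("name", match.group()))
    # Specialised terminology: direct lookup in the glossary.
    lowered = segment.lower()
    for term, translation in GLOSSARY.items():
        if term in lowered:
            triggers.append(("term", f"{term} -> {translation}"))
    return triggers

print(detect_triggers(
    "The ECB spent 1.85 trillion on quantitative easing, said Lagarde."
))
```

On this example sentence the detector flags the number 1.85, the name Lagarde and the glossary term, illustrating why all three categories lend themselves to automatic extraction even when they strain a human working under time pressure.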
Regarding terminology, a common strategy is to sidestep the direct translation of a technical term, opting instead for hyponyms or explanatory phrases. This approach may be communicatively effective, but there are situations where precise terminology is paramount. At corporate events, for example, clients often insist on specific terminology and jargon to foster brand awareness and enhance clear communication. Using accurate and precise terminology at specialised events also boosts the perceived quality of the interpreter, which will become increasingly vital with the advent of machine interpreting. Given the prowess of automated translation systems in terminology precision, user expectations for terminology accuracy are likely to rise, and employing CAI tools may become essential for professionals to meet the evolving demands of the market.

Conceptually, the design of an artificial boothmate is straightforward. An automatic speech recognition (ASR) model transcribes the original speech live, while a language model processes this stream of data in real time, identifying key terms, numbers and other entities. These are then translated, either from a terminological database curated by the interpreter or via machine translation, and displayed on a user interface for the interpreter's reference. The best-known tools at the moment are InterpretBank (www.interpretbank.com/ASR) and SmarTerp (www.smarterp.me). There are variants of this design: for example, direct speech-to-text translation can be used to present a running raw translation to the interpreter, or to select the units of interest. Ideally, the entire process should be executed within a two-second latency so that the interpreter can seamlessly integrate the suggestions into the ongoing rendition.

Help or hindrance?

Such tools have already been introduced, but several pressing questions remain.
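The boothmate pipeline described earlier (ASR, entity extraction, glossary or machine-translation lookup, on-screen display, all within a two-second budget) can be sketched as a single processing step. This is a hypothetical skeleton only: the `transcribe` and `extract_entities` stubs stand in for real ASR and language models, and the latency rule reflects the two-second target mentioned above, not any published specification of InterpretBank or SmarTerp.

```python
import time

LATENCY_BUDGET = 2.0  # seconds from speech to on-screen suggestion

def transcribe(audio_chunk):       # stub: a real tool would call an ASR model here
    return "inflation reached 6.2 percent in the eurozone"

def extract_entities(transcript):  # stub: a language model picks terms/numbers/names
    return ["6.2 percent", "eurozone"]

def translate(entity, term_base):
    # Glossary lookup first; fall back to machine translation (stubbed).
    return term_base.get(entity, f"<MT: {entity}>")

def boothmate_step(audio_chunk, term_base):
    """One pipeline step: audio in, timed list of on-screen suggestions out."""
    start = time.monotonic()
    transcript = transcribe(audio_chunk)
    suggestions = [translate(e, term_base) for e in extract_entities(transcript)]
    if time.monotonic() - start > LATENCY_BUDGET:
        # Too late to help: the interpreter has already rendered the passage.
        return []
    return suggestions

term_base = {"eurozone": "zona euro"}  # interpreter-curated glossary (hypothetical)
print(boothmate_step(b"...", term_base))  # ['<MT: 6.2 percent>', 'zona euro']
```

Discarding suggestions that miss the latency budget mirrors the article's point: a suggestion arriving after the interpreter has moved on is not merely useless but a potential distraction.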
Does the artificial boothmate genuinely improve interpretation quality, especially accuracy? Or does it add to the tasks that interpreters must juggle, potentially overburdening them at the risk of being counterproductive? There is, in fact, concern that CAI might compromise quality by impairing the holistic understanding of the speech.

Several empirical studies have examined the feasibility of human-machine interaction in the simultaneous modality. Although the results stem from small-scale experiments, and caution is advised regarding their broader applicability, they point to a positive impact of real-time support in terms of both improving accuracy and reducing omissions.