The Linguist, Issue 63/1, Spring 2024


Issue link: https://thelinguist.uberflip.com/i/1516687


ARTIFICIAL INTERPRETING

result, humans are superior interpreters, especially where specialised or complex discussions are involved.

Second, humans can adapt in real time to speakers' accents, dialects and speaking styles. The AI interpreter cannot do this because it is essentially using technology based on translation. Yes, AI uses speech-to-text (speech recognition) and text-to-speech (speech synthesis) technology to produce the spoken word, but anyone who has dealt with Alexa, a phone response system based on oral cues, or automatic transcription in an application such as Zoom or YouTube knows that this technology still presents challenges.

Third, humans have emotional intelligence that a computer does not. Human interpreters can perceive the emotion, tone and non-verbal cues of a speaker with a sophistication that AI cannot master. All of these are essential to conveying meaning.

The final issue that ChatGPT underscored was the question of ethics. Human interpreters must abide by a code of ethics which governs their decision-making, and they can be asked to explain their thought processes, thus ensuring a degree of impartiality and confidentiality.

WHAT CHATGPT MISSES

What ChatGPT doesn't mention is that while speech synthesis technology is improving, humans are still able to detect a synthesised voice, and it can be irritating to listen to for long periods of time. We have come a long way from the 'Stephen Hawking'-type voice, but have not yet imbued AI voices with true emotional heft. They come off as monochromatic and monotone in many cases. Even as the technology improves, AI faces the 'uncanny valley' effect,4 where the better the AI gets at mimicking humans, the creepier we find it. That is a definite hurdle for those who think that AI interpreters can replace humans.

ChatGPT also left out the notion of pacing as a human skill that the AI interpreter cannot replicate.
Humans not only adapt to dialect, accent and speaking style, but also to the pace at which the information needs to be delivered. People can read an audience and know when a speaker is flooding them with information at a rate that makes comprehension less likely. The human interpreter can account for this by pacing their speech at a more acceptable rate (within limits, of course).

What ChatGPT doesn't say about ethics is also revelatory. The great irony behind AI and LLMs is that the more training data used to create them, the better the AI is at responding to language-based tasks. But this also has the opposite effect on transparency. The best-trained LLMs are 'black boxes',5 where it is very difficult to know the process by which they make decisions. This is problematic in that it could open these LLMs to bias; it allows for little flexibility, because it makes it hard to fix errors in the algorithms used to produce the results; and it renders these LLMs susceptible to intentional use for malicious purposes.

Finally, ChatGPT doesn't mention liability for errors in interpretation. If AI makes an error, who do you hold responsible? The AI interpreter? The client who chose to use AI instead of a human interpreter? The person who wrote the code to train the AI? If the latter, which part of the code? There are multiple technologies at work. With a human interpreter, it is clear who is at fault. This issue is not something that interpreters can resolve, as it is a legal one, but it is an important implication nonetheless. As AI becomes integrated into our daily lives, we will all have to come to grips with the liability ramifications of its use.

CAN SAFE-AI KEEP AI SAFE?

One of the first organisations in the US to raise public awareness of the repercussions of AI in interpreting is Stakeholders Advocating for Fair and Ethical AI in Interpreting (SAFE-AI).
Their mission is to "establish, disseminate and promote industry-wide guidelines and best practices for the responsible adoption of AI in interpreting, through facilitating dialogue and action among vendors, buyers, qualified practitioners, end users and other stakeholders".6 To find out more about the ethical issues around AI, I asked them about their work.

What are some of the proposed solutions to the ethical dilemmas of using AI in interpreting?

The SAFE-AI Task Force is in the process of gathering insight into this question with its perception surveys. That said, the general nature of the guardrails proposed in significant governmental legislation and guidelines, such as the EU's Artificial Intelligence Act and the US Executive Order on Safe, Secure and Trustworthy Artificial Intelligence, provides a beginning understanding of the key dilemmas and possible solutions, which broadly include:

• Human oversight of AI applications should be the norm
• AI companies will need to share their safety test results and data with governments
• AI cannot be used to create dangerous materials
• Applications need to be transparent and explainable
• Bias needs to be detected and eliminated
• Data privacy and copyright protections must be in place
• Civil rights protections cannot be infringed upon.

When it comes to AI and interpreting, a key difference from translation is the application of the technology in real-time communication and dialogue between humans. The need to capture the unspoken elements of meaning, such as facial expression, tone, sarcasm and other body language, is a challenge not yet solved by AI interpreting technologies. Another difference is that

STAKEHOLDERS Founding members of SAFE-AI include translator and interpreter Professor Winnie Heh (below); and public service interpreter and trainer Eliana Lobo (bottom)
