The Linguist 58,5 - October/November 2019

FEATURES

Luke Barrett outlines the fast-paced work of a live subtitler

When people think of live subtitling, they usually imagine someone behind a desk typing away frantically, trying to keep up with the speaker on screen. Until about 15 years ago, this wasn't too far from the truth. Broadcasters employed stenographers, who were both highly skilled and very expensive. That model worked at the time, but the growing demand for subtitles created a need for a faster, more accurate and more cost-effective service, particularly after the Communications Act 2003 made it a legal requirement for TV channels to provide subtitling. As a result, the industry has shifted away from typing towards 'respeaking' using voice recognition technology – the most commonly used being Dragon NaturallySpeaking.

Before a subtitler goes live on air, they must successfully complete a training period, which can last up to three months. During that time, they begin to build their own voice profile, which initially involves reading excerpts of different texts (ranging from instruction manuals to Charlie and the Chocolate Factory) to enable the software to adapt to how they pronounce words. From there, it is a matter of developing the skill of respeaking. The subtitler listens to the programme audio through headphones and respeaks what is being said through a microphone, adding any punctuation. For example, respeaking the opening sentence of a football match could be: "[cap] Hello [comma] welcome to tonight's game between Manchester United and Liverpool [full stop]". This is input through the voice recognition software into the subtitling software, which transfers it to the screen.

For those who use the service – primarily the deaf and hard of hearing, as well as people learning English as a second language – the introduction of voice recognition software is a welcome improvement. Not only has the quantity of subtitled programming increased, but also the quality. On average, the software enables 95% accuracy, but the live subtitling department at Deluxe, where I work, aims for at least 98%.

With training, it is possible for a wide range of people to become live subtitlers, and both graduates (in subjects such as media) and non-graduates increasingly apply. Nevertheless, it is still often language graduates who find jobs in this field. Not only is it an opportunity for them to showcase their recently honed skills in a working environment, but their skills are also sought after by the subtitling companies. The ability to edit, mentally construct sentences and punctuate appropriately produces a more accurate final product. It is not essential to be a native English speaker – and in our office we have a mixture of nationalities, including Greek and Italian – but you do need a very strong grasp of the English language.

Common challenges

Despite advances in subtitling technology and the skill of the people using it, challenges still arise on a day-to-day basis. By its very nature, live programming is unpredictable, so the subtitler has to maintain absolute concentration and be ready for anything. To ensure they are well prepared, the subtitler is allotted at least half an hour before going on air to research the programme and add any words it might feature to their voice profile's dictionary, training it to make sure the correct word comes out.
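The workflow described above lends itself to a small illustration. The Python sketch below is purely editorial: neither Dragon NaturallySpeaking nor any broadcast subtitling system exposes such an interface, and the names custom_vocabulary, load_word_list and format_respoken are invented for the example. It simply mimics two of the steps described above, loading programme-specific vocabulary before going on air and turning a respoken utterance with voiced punctuation into the text that reaches the screen.

import re

# Stand-in for the pre-air preparation step: programme-specific terms the
# subtitler trains into their voice profile's dictionary (hypothetical).
custom_vocabulary: set[str] = set()

def load_word_list(words: list[str]) -> None:
    """Add programme-specific terms so they are recognised first time on air."""
    custom_vocabulary.update(w.strip() for w in words if w.strip())

# Voiced punctuation tokens and the characters they produce.
PUNCTUATION = {
    "comma": ",",
    "full stop": ".",
    "question mark": "?",
    "exclamation mark": "!",
}

# Matches either a bracketed voiced token, e.g. "[full stop]", or an ordinary word.
TOKEN_RE = re.compile(r"\[([^\]]+)\]|(\S+)")

def format_respoken(utterance: str) -> str:
    """Turn a respoken utterance with voiced tokens into display text."""
    words: list[str] = []
    capitalise_next = False
    for match in TOKEN_RE.finditer(utterance):
        token, word = match.group(1), match.group(2)
        if token is not None:
            token = token.strip().lower()
            if token == "cap":
                capitalise_next = True           # capitalise the next word spoken
            elif token in PUNCTUATION and words:
                words[-1] += PUNCTUATION[token]  # attach punctuation to the previous word
        else:
            if capitalise_next:
                word = word[:1].upper() + word[1:]
                capitalise_next = False
            words.append(word)
    return " ".join(words)

# Example based on the football match opening quoted earlier.
load_word_list(["Manchester United", "Liverpool"])
print(format_respoken(
    "[cap] Hello [comma] welcome to tonight's game "
    "between Manchester United and Liverpool [full stop]"
))
# -> Hello, welcome to tonight's game between Manchester United and Liverpool.

In practice, of course, the recognition and formatting happen inside the commercial software; the sketch only shows the shape of the conversion from what the respeaker says to what the viewer reads.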
When subtitling a cricket match, for example, the subtitler would import word lists containing the teams' players and specific cricket terminology ('mid-on', 'seamer', 'LBW' etc). This makes the job a lot more manageable and provides a more efficient service for the viewer. Without this preparation, the subtitler would have to make numerous corrections while on air, and would most likely miss several sentences as a result.

One of the biggest challenges for a subtitler is when a programme has multiple speakers who talk over one another. This occurs frequently on Sky News's The Pledge, where a panel of at least four people discusses a whole range of topics. Here, the subtitler has to mentally edit and respeak the important parts of the sentences, while trying to give each speaker a voice. As if this task weren't hard enough, the subtitler also has to assign a different colour to each speaker – usually white, yellow, blue and green. These are voiced using 'macro' commands (e.g. 'macro white'), which are added to the profile's dictionary. Inevitably, not everything said will get subtitled, but subtitlers do their best using their skill set.

A live performance
