When I signed up for the webinar on Machine Translation Use in Public Services organised by the Chartered Institute of Linguists (CIoL), I had certain expectations. Based on the first survey of its kind, the preliminary report by the University of Bristol and CIoL was bound to highlight the unauthorised use of AI translation by public organisations and the risks associated with it, especially in healthcare.
However, the report also identified positive findings, pointing out situations where the use of translation tools can be justified. For example, what would you say is wrong with using AI to explain to a relative visiting a patient how to get from reception to the required department in a large hospital? Or to reassure a patient who has been waiting more than half an hour for a scheduled appointment that they will definitely be seen soon? Even when a professional telephone interpreter is used, there are occasions when doctors need to resort to machine translation tools after the interpreting session ends and the patient asks a follow-up question they hadn’t thought of before.
These real-life examples demonstrate that AI can support public sector staff when integrated into hybrid models alongside human interpreters. The way to meet the demand seems to be to recognise all the options available to public service staff and the risks associated with their use. Public services could also identify the standard content, both spoken and written, that they currently ask machine translation tools to help with, and have it translated professionally for use across the board. The report stresses that the implementation of AI needs to be supported by efforts to raise awareness and ensure proper staff training.
Although the CIoL is calling for overarching policies in relation to the use of AI, the UK government doesn’t appear to be working on any comprehensive policies at present, delegating decision-making to existing sector-specific regulators instead.
It’s clear that the trend of using machine translation tools to bridge language gaps will continue. A few weeks ago the BBC reported on a new pocket-sized device helping to improve communication between patients who do not speak English and healthcare staff in Northern Ireland. It works with audio and text in 108 languages and is part of a pilot project.
Amid concerns that professional interpreters are too costly and AI tools lack perfect accuracy, some have proposed language training for healthcare professionals as a solution to the problem.
How you could cover dozens of languages in one workplace, and ensure that overburdened healthcare staff realistically achieve the foreign-language proficiency needed to deliver a diagnosis and explain complicated procedures, is anyone’s guess.
So we return to a technology-based solution overseen by humans, acknowledging that some translators will persist in their resistance to, and criticism of, AI translation tools. The wider context is startling, though, because it’s not just translators and interpreters who are affected by generative AI. Sir Geoffrey Vos, the Head of Civil Justice in England and Wales, recently said that using artificial intelligence in the justice system would leave lawyers and judges with “no choice but to accept the advice or verdict of the machine”. The prediction is that technological advances will rapidly outpace any new laws designed to rein in the machines. Translators should watch their own workspace too and use any positive developments to their advantage. A colleague recently told Business Insider how he is already doing exactly that, and successfully.
[Tip: More translation-related content is published on our Instagram account]