New AI programs like ChatGPT promise speed and therefore lower costs for translators, but beneath the hype, there are significant risks. There’s a saying: if something is free, you’re the product. So why are programs like ChatGPT seen as a solution when they come with hidden dangers?
The tools we trust & those we don't
Our company isn’t against new technology (we promise you, we aren’t Luddites). We embrace it when it’s safe, but we’re careful because our clients expect accuracy and consistency in their final translations. They also trust us with sensitive company information and data privacy, and we take this responsibility seriously.
We use technology every day and aren’t against software that makes us faster and more efficient, but we choose our tools carefully. They need to be safe. Two tools we rely on daily are translation memories (TM) and term bases (TB). These are invaluable, helping us maintain accuracy and consistency while keeping control over the process.
What is a translation memory (TM)? Think of it as a database of past translations. It remembers entire sentences and paragraphs, suggesting previous translations for similar content. This ensures consistency across projects.
What is a term base (TB)? A multilingual glossary. It stores approved translations for specific terms, ensuring key terminology is always correct. These tools are safe, reliable, and controlled by humans – unlike AI, which can go rogue.
Let’s talk risks
One significant problem lies in the foundation of automated translation systems. They are trained on existing translations, which may already contain inaccuracies or errors, and those flaws carry over into the output. More fundamentally, AI translation engines don’t understand language; they predict words based on patterns, which can result in mistranslations and fabricated terminology (known as a “hallucination”). In casual contexts or when researching a topic, AI may be harmless. But in legal, medical and business translations, mistakes can have serious consequences.
Legal risk: A mistranslation or AI hallucination in a contract or court document could lead to disputes, financial losses, or even wrongful convictions. South African lawyers recently got into serious trouble for using ChatGPT in court papers after the program invented non-existent citations.
Medical risk: AI inventing medical terms in patient reports is not just an inconvenience; it’s dangerous. Incorrect terminology could lead to misdiagnosis or improper treatment.
Data privacy risk: Many AI tools process translations on external servers, raising GDPR issues. Business reports, contracts, and confidential medical documents should never be exposed to uncontrolled AI systems.
AI translation errors are common
Even when AI translations appear correct at first glance, errors often lie in the details: terminology, tone and cultural nuance.
AI in marketing & why human translators matter
AI translation tools do not understand branding, audience tone or cultural messaging. This makes them unreliable for translating marketing content where wording is crucial.
AI is not a replacement for humans & needs oversight
At Translations Koll, we don’t reject technology, but we are cautious. AI translations may speed up the process, but the final product often lacks the research, cultural nuance and accountability that a human translator provides. We do offer a service in which we proofread, fact-check and edit AI-generated texts for our clients to ensure accuracy.
And speaking of research, AI doesn’t actually do it. It predicts text based on patterns, not verified facts. A recent example? Mark Zuckerberg quietly scrapped Facebook’s fact-checking programme. If one of the largest tech companies can’t guarantee that the content on its platform is reliable, why should we trust AI with critical translations?
The bottom line
AI is here to stay, but businesses and translators need to be wary of the risks. At Translations Koll, we embrace technology where it makes sense, using safe tools like TM and TB, but we remain sceptical about AI’s ability to handle complex, high-stakes translations. Invented terminology and data privacy concerns may well outweigh the efficiency gains. The key to quality isn’t automation; it’s human expertise.