Machine Translation and the Savvy Translator

Using machine translation is easy; using it critically requires some thought.

Tick tock! As translators, we’re all too familiar with the experience of working under pressure to meet tight deadlines. We may have various tools that can help us to work more quickly, such as translation memory systems, terminology management tools, and online concordancers. Sometimes, we may even find it helpful to run a text segment through a machine translation (MT) system.

There was a time when translators would have been embarrassed to admit “resorting” to MT because these tools often produced laughable rather than passable results. But MT has come a long way since its post-World War II roots. Early rule-based approaches, in which developers tried to program MT systems to process language much as people do (i.e., using grammar rules and bilingual lexicons), have been largely set aside. Around the turn of the millennium, statistics rather than linguistics came into play, and new statistical machine translation (SMT) approaches allowed computers to do what they’re good at: number crunching and pattern matching. With SMT, translation quality got noticeably better, and companies such as Google and Microsoft, among others, released free online versions of their MT tools.

Neural Machine Translation: A game changer

In late 2016, the underlying approach to MT changed again. Now state-of-the-art MT systems use artificial neural networks coupled with a technique known as machine learning. Developers “train” neural machine translation (NMT) systems by feeding them enormous parallel corpora that contain hundreds of thousands of pages of previously translated texts. In a way, this should make translators feel good! Rather than replacing translators, NMT systems depend on having access to very large volumes of high-quality translations in order to function. Without these professionally translated corpora, NMT systems would not be able to “learn” how to translate. Although the precise inner workings of NMT systems remain mysterious, the quality of the output has, for the most part, improved.

It’s not perfect, and no reasonable person would claim that it is better than the work of a professional translator. However, it would be short-sighted of translators to dismiss this technology, which has become more or less ubiquitous.

MT Literacy: Be a savvy MT user

Today, there should be no shame in consulting an MT system. Even if the suggested translation can’t be used “as is,” a translator might be able to fix it up quickly, or might simply be inspired by it on the way to producing a better translation. However, as with any tool, it pays to understand what you are dealing with. It’s always better to be a savvy user than not. Thinking about whether, when, why, and how to use MT is part of what we term “MT literacy.” It basically comes down to being an informed and critical user of this technology, rather than being someone who just copies, pastes and clicks. So what should savvy translators know about using free online MT systems?

— Information entered into a free online MT system doesn’t simply “disappear” once you close the window. Rather, the companies that own the MT system (e.g., Google, Microsoft) might keep the data and use it for other purposes. Don’t enter sensitive or confidential information into an online MT system. For more tips on security and online MT, see Don DePalma’s article in TC World magazine.

— Consider the notion of “fit-for-purpose” when deciding whether an MT system could help. Chris Durban and Alan Melby prepared a guide for the ATA entitled Translation: Buying a non-commodity, in which they note that one of the most important criteria to consider is:

The purpose of the translation: Sometimes all you want is to get (or give) the general idea of a document (rough translation); in other cases, a polished text is essential.

The closer you are to needing a rough translation, the more likely it is that MT can help. As you move closer towards needing a polished translation, MT may still prove useful, but it’s likely that you are going to need to invest more time in improving the output. Regardless, it’s always worth keeping the intended purpose of the text in mind. Just as you wouldn’t want to under-deliver by offering a client a text that doesn’t meet their needs, there’s also no point in over-delivering by offering them a text that exceeds their needs. By over-delivering, you run the risk of doing extra work for free instead of using that time to work on another job or to take a well-earned break!

— Not all MT systems are the same. Each NMT system is trained on different corpora (e.g., different text types, different language pairs, different numbers of texts), which means they could be “learning” different things. If one system doesn’t provide helpful information, another one might. Also, these systems are constantly learning. If one doesn’t meet your needs today, try it again next month and the results could be different. Free online MT systems include Google Translate, Microsoft Bing Translator, and DeepL.

— Check the MT output carefully before deciding to use it. Whereas older MT systems tended to produce text that was recognizably “translationese,” a study involving professional translators that was carried out by Sheila Castilho and colleagues in 2017 found that newer NMT systems often produce text that is more fluent and contains fewer telltale errors such as incorrect word order. But just because the NMT output reads well doesn’t mean that it’s accurate or right for your needs. As a language professional, it’s up to you to be vigilant and to ensure that any MT output that you use is appropriate for and works well as part of your final target text.


Author bio

Lynne Bowker, PhD, is a certified French to English translator with the Association of Translators and Interpreters of Ontario, Canada. She is also a full professor at the School of Translation and Interpretation at the University of Ottawa and 2019 Researcher-in-Residence at Concordia University Library where she is leading a project on Machine Translation Literacy. She has published widely on the subject of translation technologies and is most recently co-author of Machine Translation and Global Research (2019, Emerald).

Why machine translation should have a role in your life. Really!


By Spence Green (@LiltHQ)
Reblogged from The Language of Translation blog with permission from the author (incl. the image)

Guest author Spence Green talks about a heated topic: Machine Translation, Translation Memories and everything in between. Spence Green is a co-founder of Lilt, a provider of interactive translation systems. He has a PhD in computer science from Stanford University and a BS in computer engineering from the University of Virginia.

It is neither new nor interesting to observe that the mention of machine translation (MT) provokes strong opinions in the language services industry. MT is one scapegoat for ever decreasing per-word rates, especially among independent translators. The choice to accept post-editing work is often cast in moral terms (peruse the ProZ forums sometime…). Even those who deliberately avoid MT can find it suddenly before them when unscrupulous clients hire “proof-readers” for MT output. And maybe you have had one of those annoying conversations with a new acquaintance who, upon learning your profession, says, “Oh! How useful. I use Google Translate all the time!”

But MT is a tool, and one that I think is both misunderstood and underutilized by some translators. It is best understood as generalized translation memory (TM), a technology that most translators find indispensable. This post clarifies the relationship between TM and MT, dispels myths about the two technologies, and discusses a few recent developments in translation automation.

Translation Memory

Translation memory (TM) was first proposed publicly by Peter Arthern, a translator, in 1979. The European Commission had been evaluating rule-based MT, and Arthern argued forcefully that raw MT output was an unsuitable substitute for translations produced from scratch. Nonetheless, there were intriguing possibilities for machine assistance. He observed a high degree of repetition in the EC’s texts, so efficiency could be improved if the EC stored “all the texts it produces in [a] system’s memory, together with their translations into however many languages are required.” [1, p.94]. For source segments that had been translated before, high-precision translations could be immediately retrieved for human review.

Improvements upon Arthern’s proposal have included subsegment matching, partial matching (“fuzzies”) with variable thresholds, and even generalization over inflections and free variables like pronouns. But the basic proposal remains the same: Translation memory is a high-precision system for storing and retrieving previously translated segments.
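Arthern’s store-and-retrieve idea, plus the fuzzy matching added later, can be sketched in a few lines. The class below is a toy illustration, not any commercial TM implementation; the 0.75 threshold and the use of `difflib`’s similarity ratio are assumptions made for the sketch.

```python
from difflib import SequenceMatcher

class TranslationMemory:
    """Toy TM: stores source/target segment pairs and retrieves
    exact matches or fuzzy matches above a similarity threshold."""

    def __init__(self, fuzzy_threshold=0.75):
        self.entries = {}                  # source segment -> target segment
        self.fuzzy_threshold = fuzzy_threshold

    def add(self, source, target):
        self.entries[source] = target

    def lookup(self, source):
        # Exact match: the high-precision case Arthern described.
        if source in self.entries:
            return self.entries[source], 1.0
        # Fuzzy match ("fuzzies"): best similarity above the threshold.
        best_score, best_target = 0.0, None
        for stored_source, target in self.entries.items():
            score = SequenceMatcher(None, source, stored_source).ratio()
            if score > best_score:
                best_score, best_target = score, target
        if best_score >= self.fuzzy_threshold:
            return best_target, best_score
        return None, 0.0                   # no usable match for this segment

tm = TranslationMemory()
tm.add("Close the window.", "Fermez la fenêtre.")
print(tm.lookup("Close the window."))    # exact match, score 1.0
print(tm.lookup("Close the windows."))   # fuzzy match, score just below 1.0
```

The key property to notice is precision: the TM either returns a stored human translation (possibly slightly mismatched) or nothing at all.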

Machine Translation

Arthern admitted a weakness in his proposal: the TM could not produce output for unseen segments. Therefore, the TM “could very conveniently be supplemented by ‘genuine’ machine translation, perhaps to translate the missing areas in texts retrieved from the text memory” [1, p.95]. Arthern viewed machine translation as a mechanism for increasing recall, i.e., a backoff in the case of “missing areas” in texts.

Think of MT this way: Machine translation is a high-recall system for translating unseen segments.

Modern MT systems are built on large collections of human translations, so they can of course translate previously seen segments, too. But for computational reasons they typically store only fragments of each sentence pair, so they often fail to produce exact matches. TM is therefore a special case of MT for repeated text: TM offers high precision, and general MT fills in to improve recall.
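Arthern’s backoff idea, consulting the high-precision TM first and falling back to high-recall MT for unseen segments, can be sketched as a simple pipeline. The function and component names below are illustrative stand-ins, not real APIs.

```python
def translate_with_backoff(source, tm_lookup, mt_translate, threshold=0.75):
    """Arthern-style pipeline: consult the high-precision TM first,
    back off to the high-recall MT system for unseen segments."""
    target, score = tm_lookup(source)
    if target is not None and score >= threshold:
        return target, "TM"                # previously translated segment
    return mt_translate(source), "MT"      # unseen segment: use MT

# Toy stand-ins for real TM and MT components.
memory = {"Good morning.": "Bonjour."}

def tm_lookup(source):
    return (memory[source], 1.0) if source in memory else (None, 0.0)

def mt_translate(source):
    return "<MT output for: " + source + ">"

print(translate_with_backoff("Good morning.", tm_lookup, mt_translate))
print(translate_with_backoff("See you tomorrow.", tm_lookup, mt_translate))
```

Whatever the components, the shape is the same: precision where the memory has seen the segment, recall everywhere else.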

Myths and countermyths

By understanding MT and TM as closely related technologies, each with a specific and useful role in the translation process, you can offer informed responses when you hear the following proclamations:

  • TM is “better than” MT – false. MT is best suited to unseen segments, for which TM often produces no output.
  • Post-editing is MT – false. Both TM and MT produce suggestions for input source segments. Partial TM matches are post-edited just like MT. Errors can be present in TM exact matches, too.
  • MT post-editing leads to lower quality translation – false. The translator is always free to ignore the MT just as he or she can disregard TM partial matches. Any effect on quality is probably due to priming, apathy, and/or other behavioral phenomena.
  • MT is only useful if it is trained on my data – neither true nor false. Statistical MT systems are trained on large collections of human-generated parallel text, i.e., large TMs. If you are translating text that is similar to the MT training data, the output can be surprisingly good. This is the justification for the custom MT offered by SDL, Microsoft, and other vendors.
  • TMs improve with use; MT does not – true until recently. Lilt and CasmaCat (see below) are two recent systems that, like TM, learn from feedback.

Tighter MT Integration

Major desktop-based CAT systems such as Trados and memoQ emphasize TM over MT, which is typically accessible only as a plugin or add-on. This is a sensible default since TM has the twin benefits of high precision and domain relevance. But new CAT environments are incorporating MT more directly as in Arthern’s original proposal.

In the November 2015 issue of the ATA Chronicle I wrote about three research CAT systems based on interactive MT, that is, an MT system that responds to and learns from translator feedback. Two of them are now available for production use:

  • CasmaCat – Free, open source, runs locally on Linux or on a Windows virtual machine.
  • Lilt – Free, cloud-based, runs on all major browsers.

The present version of CasmaCat does not include TM, so I’ll briefly describe Lilt, which is based on research by me and others on translator productivity.

Lilt offers the translator an integrated TM / MT environment. TM entries, if present, are always shown before backing off to MT. The MT system is interactive, so it suggests words and full translations as the translator types. Smartphone users will be familiar with this style of predictive typing.

Lilt also learns. Recall that both TM and MT are derived from parallel text. In Lilt, each confirmed translation is immediately added to the TM and MT components. The MT system extracts new words and phrases, which can be offered as future suggestions.
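The feedback loop described above, where each confirmed translation immediately becomes material for future suggestions, can be sketched as follows. This is a toy illustration of the adaptive principle only, not how Lilt’s interactive MT actually works.

```python
class AdaptiveSuggester:
    """Toy adaptive component: confirmed translations are stored and
    offered as completions for whatever target-text prefix the
    translator has typed so far."""

    def __init__(self):
        self.confirmed = []    # confirmed target segments, in order

    def confirm(self, source, target):
        # In a real system this would also update the TM and adapt the
        # MT model; here we simply remember the target segment.
        self.confirmed.append(target)

    def suggest(self, typed_prefix):
        # Offer any previously confirmed segment extending the prefix,
        # in the style of a smartphone's predictive typing.
        return [t for t in self.confirmed if t.startswith(typed_prefix)]

engine = AdaptiveSuggester()
engine.confirm("Sign in", "Se connecter")
engine.confirm("Sign out", "Se déconnecter")
print(engine.suggest("Se c"))    # ['Se connecter']
```

The point is the loop itself: confirmations flow back into the suggestion store, so the system improves with use rather than remaining static.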

Conclusion

New translators should think about how to integrate MT into their workflows as a backoff. Experiment with it in combination with your TM. Measure yourself. In a future post, I’ll offer some tips for working with both conventional and interactive MT systems.

[1] Peter J. Arthern. 1979. Machine translation and computerized terminology systems: A translator’s viewpoint. In Translating and the Computer, B.M. Snell (ed.).