Adrià de Gispert (Cambridge)
Marcus Tomalin (Cambridge)
In recent years, the art of translation has witnessed an unprecedented technological revolution. For many people, websites such as Google Translate are rapidly becoming the primary resource for obtaining a rough-and-ready translation of a given source-language text. If a Hungarian rendering of the first sentence of this paragraph is required, then it can be obtained instantaneously: ‘Az elmúlt években, a művészet fordítás tanúja technológiai forradalmat’. The need for long years of patient tussling with conjugations, declensions, and the mysteries of vowel harmony is (seemingly) eliminated. However, few of the so-called ‘naïve users’ of these online translation systems know how they work. Even if they are dimly aware that some kind of modelling is being deployed, they generally do not know how or why it is applied, or whether a given system is rule-based, example-based, or statistical in nature (Trujillo 2012; Bhattacharyya 2015). Yet in order to evaluate the significance of any such system, it is important to understand how it is trained, what kinds of bilingual corpora are used, and which particular kinds of linguistic patterns are modelled. There are also important distinctions between the kinds of texts translated: machine translation systems struggle with poetry, but cope more successfully with certain kinds of genre-specific technical writing.
This discussion panel will explore different aspects of the impact of recent technology on the art and craft of translation. It will assess the professional contexts in which machine translation systems are used, and it will offer a chance to reflect upon the overarching anxiety that such systems pose a potential threat to human-produced translations.