|6 Feb 2017||2:00pm - 4:00pm||Seminar Room SG1, Alison Richard Building|
Professor Andrew Rothwell (Swansea)
To see Prof Rothwell's presentation, click here
CRASSH is not responsible for the content of external websites
In recent years, the art of translation has witnessed an unprecedented technological revolution. For many people, websites such as Google Translate are rapidly becoming the primary resource for obtaining a rough-and-ready translation of a given source-language text. If a Hungarian rendering of the first sentence of this current paragraph is required, then it can be obtained instantaneously: ‘Az elmúlt években, a művészet fordítás tanúja technológiai forradalmat’. The need for long years of patient tussling with conjugations, declensions, and the mysteries of vowel harmony is (seemingly) eliminated. However, few of the so-called ‘naïve users’ of these online translation systems know how they work. And even if they are dimly aware that some kind of modelling is being deployed, they generally do not know how or why it is applied, or whether a given system is rule-based, example-based, or statistical in nature (Trujillo 2012; Bhattacharyya 2015). Yet in order to evaluate the significance of any such systems, it is important to understand how they are trained, what kinds of bilingual corpora are used, and which particular kinds of linguistic patterns are modelled. There are also important distinctions between the kinds of texts translated. Machine translation systems struggle with poetry, but cope more successfully with certain kinds of genre-specific technical writing.
This workshop will explore the practical implications for translation of these recent developments in machine translation technology. There will be opportunities to address questions such as whether professional human translators use such systems, and, if so, how and why.
Open to all. No registration required.
Part of Cambridge Conversations in Translation Research Group Seminar Series
Administrative assistance: email@example.com