At last year’s ATA conference in Denver, I was still stunned that the Association had yet to catch up with technology and embrace the opportunity of machine translation. This year, I saw something completely different. Mike Dillinger gave a well-attended, entertaining and educational seminar on machine translation. He certainly lived up to his promise of showing “what the translator’s role is in this new business model.”
It became clear that editing for MT is a market segment on the rise, if not during Mike’s seminar, then during Laurie Gerber’s presentation on the specifics of editing machine translation output. She also shared tips on how to educate “over-optimistic clients.” Add to that Jost Zetzsche’s presentation on dealing with the flood of data, and the puzzle pieces start forming a picture of new skills and new jobs.
Jost’s presentation is very much in line with an article by Detlef Reineke and Christian Galinski in eDITion, the publication of the German Terminology Association (DTT), about the flood of terminology in our future (“Vor uns die Terminologieflut” — “Ahead of Us, the Terminology Flood”). To stem the flood, it helps to think of “data,” as Jost did, rather than texts, documents or even segments. He also declared the glossary outdated and announced a bright future for terminology databases. Thinking about texts, documents, segments, concepts and terms as data is helpful in the sense that data paired with solid metadata have a higher reuse value, if you will, than unmanaged translation memories or the final translation product. That has been terminologists’ message for a long time.
I also attended sessions on translation education, one by the University of Illinois at Urbana-Champaign and one by New York University. Since I will be working with the Translation Center of the University of Illinois on a small research project and am currently preparing the online terminology course that will be part of the M.S. program at NYU starting this spring, it was nice to meet my colleagues in person.