Here is an opening for a terminologist at BMO Financial Group.
Posted by Barbara Inge Karsch on October 17, 2016
Posted by Barbara Inge Karsch on November 3, 2012
In the last few months, I have been reading quite a bit about machine translation. I also took the opportunity to attend sessions on MT at the recent LocWorld in Seattle and the ATA conference in San Diego.
In Seattle, TAUS presented several real-world examples of what can be done today with the Moses engine. It was refreshing to hear from experts on statistical MT that terminology matters, since that camp, at least at MS, had largely been ignorant of terminology management in the past. There are a number of worthwhile tutorials on the TAUS site for those who’d like to stay abreast of developments.
At the ATA, the usual suspects, Laurie Gerber, Rubén de la Fuente, and Mike Dillinger, outdid each other once again in debunking the myths around MT. When fears about MT and its effects came up in the audience, I had to think of a little story:
In the mid-90s, five of us German translators at J.D. Edwards were huddled in a conference room for some training. Something or someone was terribly delayed, and while chatting we all started catching up on the translation quota due that day. You know what that involved? It involved finding a string that came up 500 to 800 times. After translating it once, you could continue your chat and hit Enter 500 to 800 times. See the screen print of the translation software to the right and you will realize that the software didn’t allow a human to translate like a human; we translated like machines, only worse, because…oops, the 800 strings are through and you are on to the next source string. Some would call this negligent behavior, and I am glad that today we have better translation software and that machines assist with or do the jobs that we are not good at.
Here is an example of where MT should not have been used. Those of you who know German, check out this text translated by MT and you will recognize very easily that, oops, it doesn’t make sense to have a glossary translated by a machine, least of all if there is no post-editing.
Posted by Barbara Inge Karsch on October 26, 2012
Yesterday, I ran into a fellow graduate from the Monterey Institute. K. has been a freelance translator for many years and shared some interesting insights that I thought others might like to hear.
My students in particular are wondering whether they should spend time managing their terminology. Most of them are planning careers as freelance translators and are unsure, at least at the beginning of our course.
K. and I started talking about work, and her comments were completely unsolicited. I was also glad to see that I didn’t trigger the uncertain feeling that the mere presence of a terminologist sometimes sets off in my freelance friends. Instead, she was frustrated when she mentioned that her “brain” had reached capacity, if you will, and she could no longer remember things she used to remember early on in working for a particular end-client. That term that had always been on the tip of her tongue just wouldn’t be available.
Furthermore, she mentioned that she had worked for one end-client for many years and diligently set up documents with glossaries. Her direct client, an agency, recently shifted her to a new end-client in the same industry. She said she was almost relieved because handling the many glossaries had become rather difficult.
Terminology technology is much more advanced and much more widely available today than it was when K. started her career. It no longer has to be difficult to set up a system that allows us to be fast. You also don’t have to manage huge volumes of data—after all, freelance translators are likely to “only” drive one process with their terminology data, i.e. their translation process. But my advice is to follow terminology best practices, such as concept orientation, term autonomy, and data elementarity.
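For readers who wonder what these best practices look like as data, here is a minimal, hypothetical sketch of a concept-oriented entry: one entry per concept (concept orientation), each term in its own record with its own metadata (term autonomy), and each piece of information in its own field (data elementarity). The field names are illustrative only, not taken from any particular tool:

```python
# A hypothetical concept-oriented entry. The concept owns the definition,
# and every term (synonym, abbreviation, translation) gets its own record
# with its own metadata -- never packed into one comma-separated field.
concept_entry = {
    "subject_field": "computing",
    "definition": "program that manages a computer's hardware and software resources",
    "terms": [
        {"term": "operating system", "language": "en", "part_of_speech": "noun", "usage": "preferred"},
        {"term": "OS",               "language": "en", "part_of_speech": "noun", "usage": "abbreviation"},
        {"term": "Betriebssystem",   "language": "de", "part_of_speech": "noun", "usage": "preferred"},
    ],
}

# Data elementarity means each field can be queried on its own:
german_terms = [t["term"] for t in concept_entry["terms"] if t["language"] == "de"]
print(german_terms)  # ['Betriebssystem']
```

Because every term record carries its own metadata, you can filter, sort, or export by any single field, which is exactly what a flat glossary with merged cells cannot do.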
Whatever you do, no need to look guilty.
Posted by Barbara Inge Karsch on October 24, 2012
Another really good question! Let me address this by answering when it would NOT be worth using available corpora. I would not use them if I didn’t expect work from the same client or in the same subject area again. I would also not do it if I hated the user experience of the concordance tool. If I have to struggle with a tool, I am probably faster and attain a more reliable result by simply researching the concepts from scratch. BUT if the subject matter is clear-cut, you expect more work in that area, and the tool provides a nice interface that allows you to work efficiently, by all means use the existing bilingual corpus as one of your research tools.
And here comes my second qualification: Double-check the target-language equivalents found in bilingual corpora. Your additional research will a) confirm that the target term used in the corpus was correct and b) give you the metadata that you might want to document anyway. I am thinking mostly of context samples. While you could easily use the translated context from your corpus, context written by a native-speaker expert in the target language gives you higher reliability: it shows that the term is correct and in use, and how it is used correctly. The beauty of working with a corpus is that you already have terms that you can check on. Be prepared to discard them, though, if your research does not confirm them. Ultimately, you want your term base entries to be highly reliable: Do the work once, reuse it many times!
Posted by Barbara Inge Karsch on October 8, 2012
Eurocopter (Marignane) is looking for an intern for a 6-month internship.
At the start of 2012, Eurocopter launched a terminology standardisation project to improve the coherence of its deliverables and its customer satisfaction. This standardisation is essentially targeted at technical terms and more specifically the names of parts, titles of drawings and the names of systems, generally called "designations". To improve the quality of its terminology, Eurocopter has implemented designation or terminology rules, a process and a French/English terminology database.
Eurocopter is looking for an intern to support its team of terminologists and assist them in their daily work.
For more information see the job posting.
Posted by Barbara Inge Karsch on October 1, 2012
There are not a lot of trained terminologists around the world and not necessarily a lot of positions either. I often get questions from students and former students on how to find a job. From now on, I will try to provide ways to make that connection and also post job openings. Here is one from IBM.
Job Title: Terminologist
Main Responsibilities include:
The terminology group helps product developers, writers, and translators use the correct terminology in IBM products and materials. The group currently has an open position for a junior terminologist. Regular tasks include creating and updating terminological entries in a multilingual terminology database, and identifying terms from an automatically extracted corpus that are relevant to translators. The position includes the following additional responsibilities:
For more information see the job posting.
Posted by Barbara Inge Karsch on September 28, 2012
Thank you very much for your positive feedback while I was busy with things like Windows 8 terminology, teaching at NYU, attending TKE and the ISO meetings in Madrid, and doing webinars. During one of the webinars, we didn’t get around to all questions. I will be addressing some of these here now.
Answer: Yes, that is a very common scenario, and one that everyone setting up terminology entries faces: We do our best to enter terms and names in canonical form in order to find them again and to avoid creating duplicates. So, we document, say, operating system and not Operating Systems, or we enter purge, and not to purge or purged in the database. But even if we are good about the form of our terms, we might not remember the meaning of all entries created and thus willy-nilly create doublettes in our database. Oftentimes, we create them because we are not aware that one entry is a view onto a concept from one angle and a second entry might present the same concept from another angle, similar to these two pictures of the same flower.
Here are a few thoughts on what might help you avoid duplicate entries:
- Start out by specifying the subject field in your database. It will help you narrow down the concept for which you are about to create an entry. You might do a search on the subject field and see what concepts you defined at an earlier time. Sometimes that helps trigger your memory.
- As you are narrowing down the subject field and take a quick glance through some of the existing definitions, you might identify and recognize an existing concept as the one you are about to work on.
If you set up a doublette anyway—and it is bound to happen—you might find it later in one of the following ways and eradicate it:
- Export your database into a spreadsheet program and do a quick QA on your entries. In a spreadsheet, such as Excel, you can sort each column. If there are true doublettes, you might have started the definitions with the same superordinate, and when you sort the entries, those definitions line up next to each other.
- If you don’t have time for QA, simply wait until you notice a doublette while using your database and take care of it then. The damage in databases with many languages attached to a source-language entry is bigger, but there are usually also more people working in the system, so errors are identified quickly. For the freelance translator, a doublette here and there is not as costly, and it, too, is eliminated quickly once identified.
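The export-and-sort check above can also be sketched in a few lines of Python. The entry fields are invented for illustration, and comparing only the first word of each definition is a simplified stand-in for spotting a shared superordinate:

```python
def find_doublettes(entries):
    """Sort entries by definition, then flag adjacent entries whose
    definitions begin with the same superordinate word -- mimicking
    the spreadsheet column sort described above."""
    ordered = sorted(entries, key=lambda e: e["definition"])
    pairs = []
    for prev, curr in zip(ordered, ordered[1:]):
        if prev["definition"].split()[0] == curr["definition"].split()[0]:
            pairs.append((prev["term"], curr["term"]))
    return pairs

# Invented example entries; a real term base export has many more fields.
entries = [
    {"term": "hard disk",  "definition": "device that stores data magnetically"},
    {"term": "memory",     "definition": "component that holds data temporarily"},
    {"term": "hard drive", "definition": "device that stores data magnetically"},
]

print(find_doublettes(entries))  # [('hard disk', 'hard drive')]
```

As in the spreadsheet, the sort only surfaces candidates; deciding whether two entries really describe the same concept remains the terminologist’s call.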
Developers of terminology management systems might eventually get to a point where maintenance functionality becomes part of the out-of-the-box program. At Microsoft, a colleague worked on an algorithm that helped us identify duplicates. The project was not completed when I left the corporate world, but a first test showed that the noise the program identified was not overwhelming. So, there is hope that with increasing demand for clean terminological and conceptual data such functionality becomes standard in off-the-shelf TMSs. In the meantime, stick with best practices when documenting your terms and names and use the database.
Posted by Barbara Inge Karsch on November 22, 2011
I have traveled quite a bit during the last four weeks and it is high time for an update. Let me start with a review of yet another great conference of the American Translators Association in Boston.
At last year’s ATA conference in Denver, I was still stunned that the Association seemed to be playing catch-up with technology and the opportunity to embrace machine translation. This year, I saw something completely different. Mike Dillinger gave a well-attended, entertaining and educational seminar on machine translation. He certainly lived up to his promise of showing “what the translator’s role is in this new business model.”
It was very clear that editing MT output is a market segment on the rise, if not during Mike’s seminar, then during Laurie Gerber’s presentation on the specifics of editing machine translation output. She also shared tips on how to educate “over-optimistic clients”. Add to that Jost Zetzsche’s presentation on dealing with that flood of data, and the puzzle pieces start forming a picture of new skills and new jobs.
Jost’s presentation is very much in line with an article by Detlef Reineke and Christian Galinski in eDITion, the publication of the German Terminology Association, DTT, about the flood of terminology in our future (“Vor uns die Terminologieflut”). To stem the flood, it helps to think of “data,” as Jost did, rather than texts, documents or even segments. He also declared the glossary outdated and announced a bright future for terminology databases. To think about texts, documents, segments, concepts and terms as data is helpful in the sense that data along with solid corresponding metadata have a higher reuse value, if you will, than unmanaged translation memories or the final translation product. That has been terminologists’ message for a long time.
I also attended sessions on translation education, one by the University of Illinois at Urbana-Champaign and one by New York University. Since I will be working with the Translation Center of the University of Illinois on a small research project and am currently preparing the online terminology course that will be part of the M.S. at NYU starting this spring, it was nice to meet my colleagues in person.
Posted by Barbara Inge Karsch on November 16, 2011
There are not too many conferences that carry terminology in their title and that offer continuing education and fruitful exchanges to terminologists. The TKE conference is one of them. Here is the announcement of the TKE Conference 2012.
New frontiers in the constructive symbiosis of terminology and knowledge engineering
TKE (Terminology and Knowledge Engineering) Conference
In 2012, the conference will take place in Madrid, Spain, from June 19 through 22. This conference will mainly focus on those theoretical, methodological and practical aspects that show the symbiosis of terminology and knowledge engineering by highlighting the recent advances in these related fields. The first call for papers can be accessed here.