A list of questions frequently asked by our users.
General Questions
How to set up language region codes in templates?
Refer to the article Templates to guide you through this process.
Trados
Token is invalid
If you see the error shown on the screen below, follow the suggested steps to resolve the issue.
Check the API key (or create a new one) in console.custom.MT and paste it into the field. For more detailed information, refer to the article Credentials.
Next, verify that the plugin is installed correctly. Make sure you have the correct version of the plugin from the RWS AppStore.
Refer to the article RWS Trados Studio for instructions on installing the plugin.
In addition, before installing the plugin, make sure there are no old plugin files in the following folders:
C:\ProgramData\SDL\SDL Trados Studio\16\Plugins\Packages
C:\ProgramData\SDL\SDL Trados Studio\16\Plugins\Unpacked
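The leftover-file check above can be scripted. The sketch below is illustrative (the helper function name is our own, not part of the plugin); it lists any files remaining in the folders mentioned above so you can delete them before reinstalling:

```python
from pathlib import Path

# The Trados Studio plugin folders named in the article above.
PLUGIN_FOLDERS = [
    r"C:\ProgramData\SDL\SDL Trados Studio\16\Plugins\Packages",
    r"C:\ProgramData\SDL\SDL Trados Studio\16\Plugins\Unpacked",
]

def find_leftover_plugin_files(folders):
    """Return every file found under the given folders (empty if none exist)."""
    leftovers = []
    for folder in folders:
        # Path.glob yields nothing for folders that do not exist.
        leftovers.extend(p for p in Path(folder).glob("**/*") if p.is_file())
    return leftovers

if __name__ == "__main__":
    for path in find_leftover_plugin_files(PLUGIN_FOLDERS):
        print("Old plugin file:", path)
```

If the script prints any paths, delete those files before reinstalling the plugin.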
Disabling the Lookup function in Trados
To prevent excessive character usage when working on pretranslated files, you can disable the Lookup function in Trados. Please note that with the Lookup function disabled, interactive MT search will not be available in pretranslated files for segments with confirmed status, segments whose translation was pulled from the TM, or segments pretranslated with MT output using the settings you have chosen (70% match and higher). You will still receive MT translations for segments that are unconfirmed, have no translation from the TM, or match at 70% and below.
To disable the Lookup function, open Trados and choose File - Options in the Trados menu.
The options highlighted in yellow in the following images should be unchecked. These settings prevent duplicate translation requests.
Editor - Automation section. Uncheck the checkbox Perform automated translation lookup on confirmed segments.
Auto-suggest - Translation Memory and Automated Translation section. Uncheck the checkbox Automated translations.
The most important setting:
Language Pairs - All Language Pairs - Translation Memory and Automated Translation - Search section. Uncheck the checkbox Look up segments in MT even if a TM match has been found.
GPT
How to use your personal API Key from OpenAI
In the Credentials section, click the three dots next to the GPT version and select Edit. Then check the box confirming that you want to use your own API key and paste it in. You can find your OpenAI API key at https://platform.openai.com/account/api-keys
How to set up your OpenAI model?
Add the model as a custom engine in the Credentials section. To do this, click New engine in the Custom Engine section and specify the parameters of your model.
GPT Prompts
The prompt includes a request to translate from the source language to the target language, sets the GPT role, and instructs the model to use the style guide and glossary uploaded by the user. If the user does not fill in these fields or upload a glossary, the conditions are ignored and the translation is performed without them.
You can also adjust the temperature: lower values make the output more deterministic and more likely to follow the given conditions. The recommended value is 0.2.
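As a rough sketch of how such a prompt, role, and temperature might be combined into a request (the wording, role text, and function name below are illustrative, not the console's actual prompt, which is revised periodically):

```python
def build_translation_request(source_text, source_lang, target_lang,
                              style_guide=None, glossary=None,
                              model="gpt-3.5-turbo", temperature=0.2):
    """Assemble an OpenAI chat-completion payload. Conditions the user
    did not provide (style guide, glossary) are simply omitted."""
    instructions = [f"Translate the following text from {source_lang} to {target_lang}."]
    if style_guide:
        instructions.append(f"Follow this style guide: {style_guide}")
    if glossary:
        instructions.append(f"Use this glossary: {glossary}")
    return {
        "model": model,
        "temperature": temperature,  # 0.2 keeps output close to the given conditions
        "messages": [
            {"role": "system", "content": "You are a professional translator."},
            {"role": "user", "content": "\n".join(instructions) + "\n\n" + source_text},
        ],
    }
```

The resulting dictionary matches the shape of the OpenAI chat-completions request body and could be passed to the API client of your choice.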
Note: The text of the prompt is revised periodically, so it is not published here for reference, to avoid giving outdated information.
Why is the glossary so small?
OpenAI limits the size of an incoming request, and the limit differs depending on the GPT version. For this reason, the prompt cannot be arbitrarily large. We distribute the permitted volume among the source text, the style guide, the glossary, and the prompt text, so each of these parts has its own size limit.
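The idea of distributing a fixed request limit can be sketched as follows. The limit, the response reserve, and the share percentages here are purely illustrative assumptions, not the console's real settings:

```python
def allocate_request_budget(request_limit, reserved_for_response, shares=None):
    """Split a model's token limit among the parts of a request.

    The default shares are illustrative; real limits depend on the GPT
    version and the console's internal configuration.
    """
    if shares is None:
        shares = {"prompt": 0.10, "source_text": 0.50,
                  "style_guide": 0.15, "glossary": 0.25}
    available = request_limit - reserved_for_response
    return {part: int(available * share) for part, share in shares.items()}
```

For example, with a hypothetical 4096-token limit and 1024 tokens reserved for the response, the glossary would get only a quarter of the remaining 3072 tokens, which is why it cannot be arbitrarily large.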
How is a large file translated?
Typically, the user has two ways to complete the translation:
a) Segment by segment. As a rule, there are no issues here, except when the source contains long sentences, for example HTML with tags. In this case, it is recommended to use the Tags Encoding filter, which collapses all tags into a short form and protects them from translation.
b) Translation of the entire file. Depending on the connector, the files are divided into batches of 10 - 30 rows, and the batches are sent for translation. To work correctly with GPT, batches are divided into chunks of 10 rows for GPT-3.5 and 5 rows for GPT-4. After translation, the chunks are reassembled into batches and sent back.
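The batch-to-chunk split described above can be sketched like this (the function names are our own; only the chunk sizes come from the text):

```python
# Rows per chunk, as described above.
CHUNK_SIZES = {"gpt-3.5": 10, "gpt-4": 5}

def split_batch_into_chunks(rows, model="gpt-3.5"):
    """Split a batch of rows into model-sized chunks."""
    size = CHUNK_SIZES[model]
    return [rows[i:i + size] for i in range(0, len(rows), size)]

def reassemble_batch(chunks):
    """Flatten translated chunks back into a single batch."""
    return [row for chunk in chunks for row in chunk]
```

A 23-row batch therefore becomes three chunks for GPT-3.5 (10 + 10 + 3 rows) but five chunks for GPT-4 (5 + 5 + 5 + 5 + 3 rows).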
How are costs calculated?
Each request is charged according to GPT rules. GPT counts not only the response but also the request. The request is formed from the source text, prompt, role, style guide, and glossary; the response includes whatever GPT returns as the translation (sometimes with comments). Thus, the token volumes for the request and for the response will differ.
The prompt will be applied every time a translation request is sent.
To reduce costs, minimise the amount of detail in Prompt engineering (the style guide and glossary) and translate the document as a whole rather than line by line. When translating an entire document, the prompt is applied once per chunk (once per 10 lines for GPT-3.5 and once per 5 lines for GPT-4).
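The savings from whole-document translation can be estimated with a small helper. The 200-token prompt size in the example is a made-up figure for illustration:

```python
def request_overhead_tokens(n_lines, prompt_tokens, chunk_size):
    """Total prompt tokens spent translating n_lines.

    Per-line translation repeats the prompt for every line (chunk_size=1);
    whole-document translation repeats it once per chunk.
    """
    chunks = -(-n_lines // chunk_size)  # ceiling division
    return chunks * prompt_tokens
```

For a 100-line document with a hypothetical 200-token prompt, per-line translation spends 20,000 prompt tokens, while whole-document translation with GPT-3.5 (chunks of 10) spends only 2,000.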
What happens if there is an error?
If GPT throws an error (it is unavailable, or returns an empty or incomplete response), the console makes up to 5 attempts to receive the translation. If the issue still persists, it returns empty lines for that chunk and continues working with the next chunk.
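The retry behaviour described above can be sketched as follows. This is an approximation of the logic, not the console's actual code; the `translate` callable stands in for the real GPT request:

```python
def translate_chunk_with_retries(chunk, translate, max_attempts=5):
    """Try to translate a chunk up to max_attempts times; on persistent
    failure, return empty lines so the next chunk can still be processed."""
    for _ in range(max_attempts):
        try:
            result = translate(chunk)
        except Exception:
            continue  # service unavailable: try again
        # Treat an empty or incomplete response as a failure too.
        if result and len(result) == len(chunk):
            return result
    return ["" for _ in chunk]
```

A caller would invoke this once per chunk, so one persistently failing chunk yields empty lines without blocking the rest of the file.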