FAQ
A list of questions frequently asked by users.
- 1 General Questions
- 2 Trados
- 3 GPT
- 3.1 How to use your personal API Key from OpenAI
- 3.2 How to set up your OpenAI model?
- 3.3 GPT Prompts
- 3.4 Why is the glossary so small?
- 3.5 How is a large file translated?
- 3.6 How are costs calculated?
- 3.7 What happens if there is an error?
- 3.8 I am using a Translator subscription. Can I use GPT-4 from OpenAI as a Custom Engine?
General Questions
How to set up language region codes in templates?
Refer to the Templates article for guidance through this process.
Template count limit error
This issue is related to an unpaid subscription. Please check your card balance or bank statement to make sure the transaction was successful.
400 Error code
While testing Custom.MT API translation with Postman, a 400 Bad Request error appears.
Go to the Authorization tab and choose the API Key type from the dropdown menu.
Input your API Key from the Credentials tab in Custom.MT (see Credentials), and select the Add to Header option below.
The value for the Key field: token.
Go to Body.
Use POST: https://console.custom.mt/translation/translate
{
"text": [
"your text"
],
"template_name": "your template name"
}
The issue should now be resolved.
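For reference, the same request can be built outside Postman. The sketch below is a minimal Python example: the endpoint URL and the `token` header name come from the steps above, everything else is illustrative, and the actual sending is left out.

```python
import json

CUSTOM_MT_URL = "https://console.custom.mt/translation/translate"

def build_translate_request(api_key, texts, template_name):
    """Build the headers and JSON body for a POST to the Custom.MT
    translate endpoint, mirroring the Postman setup above: the API key
    goes into a header whose key is 'token'."""
    headers = {"token": api_key, "Content-Type": "application/json"}
    body = json.dumps({"text": list(texts), "template_name": template_name})
    return headers, body

headers, body = build_translate_request(
    "YOUR_API_KEY", ["your text"], "your template name"
)
```

Any HTTP client can then POST `body` with `headers` to `CUSTOM_MT_URL`.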
Error when uploading the DeepL glossary
Please see below an example of the glossary. Make sure the glossary is saved in UTF-8 format and has the following structure:
The first column is the source
The second column is the target
The first row is reserved for the language codes
Google Chrome is recommended for uploading the glossary, so if you face any issues, try switching to it.
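As an illustration, a file with the structure above can be produced programmatically. The Python sketch below assumes a comma-separated UTF-8 file (if your glossary uses another delimiter or a spreadsheet format, adjust accordingly); the language codes `EN`/`DE` and the sample terms are placeholders.

```python
import csv
import io

def make_glossary_csv(src_lang, tgt_lang, pairs):
    """Write a glossary in the layout described above:
    row 1 holds the language codes, then one source/target pair
    per row. Returned as UTF-8 bytes, ready to save and upload."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow([src_lang, tgt_lang])  # first row: language codes
    for source_term, target_term in pairs:
        writer.writerow([source_term, target_term])
    return buf.getvalue().encode("utf-8")

data = make_glossary_csv(
    "EN", "DE",
    [("invoice", "Rechnung"), ("due date", "Fälligkeitsdatum")],
)
```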
Trados
Token is invalid
If you encounter the error shown below, follow the suggested steps to resolve it.
Check the API Key (or create a new one) in console.custom.mt and paste it into the field. For more detailed information, refer to the Credentials article.
Next, verify that the plugin is installed correctly. Make sure you have the correct version of the plugin from the RWS AppStore.
Refer to the RWS Trados Studio article for instructions on installing the plugin.
Additionally, before installing the plugin, make sure there are no old plugin files in the following folders:
C:\ProgramData\SDL\SDL Trados Studio\16\Plugins\Packages
C:\ProgramData\SDL\SDL Trados Studio\16\Plugins\Unpacked
Disabling the Lookup function in Trados
To prevent excessive character usage when working on pretranslated files, you can disable the Lookup function in Trados. Please note that disabling the Lookup function removes interactive MT search in pretranslated files for segments with confirmed status, segments whose translation was pulled from the TM, and segments pretranslated with MT using the settings you have chosen (70% match and higher). You will still receive MT translations for segments that are unconfirmed, have no translation from the TM, or match at 70% and below.
To disable the Lookup function, open Trados and choose File - Options from the Trados menu.
The options highlighted in yellow in the following images should be unchecked. These settings should prevent duplicate translation.
Editor - Automation section. Uncheck the checkbox Perform automated translation lookup on confirmed segments.
Auto-suggest - Translation Memory and Automated Translation section. Uncheck the checkbox Automated translations.
Very important settings:
Language Pairs - All Language Pairs - Translation Memory and Automated Translation - Search section. Uncheck the checkbox Look up segments in MT even if a TM match has been found.
Trados Studio 2022 (SR2)
The 2022 SR1 plugin should be compatible with the SR2 version as well.
GPT
How to use your personal API Key from OpenAI
In the Credentials section, click the three dots next to the GPT version and select Edit. Then check the box confirming that you want to use your own API key and paste in your API Key. You can find your OpenAI API Key at https://platform.openai.com/account/api-keys.
How to set up your OpenAI model?
Add the model as a custom engine in the Credentials section. To do this, click New engine in the Custom Engine section and specify the parameters of your model.
GPT Prompts
The prompt includes a request for translation from the source language to the target language, sets the GPT role, and instructs the model to use the style guide and glossary uploaded by the user. If the user leaves these fields empty or does not upload a glossary, those conditions are ignored and the translation is performed without them.
You can also set the temperature for greater sensitivity to the given conditions. The recommended value is 0.2.
Note: The text of the prompt is revised periodically, so it is not published here, in order not to misinform users.
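Since the production prompt is not published, the sketch below is purely illustrative: it shows the general shape described above (a translation request plus optional style guide and glossary sections), with all wording being our own assumption, not the console's actual prompt.

```python
def build_prompt(source_lang, target_lang, text, style_guide=None, glossary=None):
    """Assemble a translation prompt along the lines described above.
    Empty style guide / glossary inputs are simply omitted, mirroring
    how unfilled fields are ignored."""
    parts = [f"Translate the following text from {source_lang} to {target_lang}."]
    if style_guide:
        parts.append(f"Follow this style guide: {style_guide}")
    if glossary:
        terms = "; ".join(f"{s} -> {t}" for s, t in glossary.items())
        parts.append(f"Use this glossary: {terms}")
    parts.append(text)
    return "\n".join(parts)

# No style guide supplied, so that condition is left out entirely.
prompt = build_prompt("English", "German", "Hello world",
                      glossary={"world": "Welt"})
```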
Why is the glossary so small?
OpenAI limits the size of an incoming request, and the limit depends on the GPT version. For this reason, the prompt cannot be arbitrarily large. We distribute the permitted volume among the source text, style guide, glossary, and prompt text, so each of these parts has a size limit.
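The idea of distributing a fixed request limit can be sketched as follows; the 4096-token limit and the share values are invented for illustration and are not the console's actual proportions.

```python
def split_token_budget(request_limit, shares):
    """Divide a model's request-size limit among the prompt parts
    named above, proportionally to the given shares."""
    total = sum(shares.values())
    return {part: request_limit * share // total
            for part, share in shares.items()}

budget = split_token_budget(
    4096,
    {"source_text": 5, "prompt": 2, "style_guide": 2, "glossary": 1},
)
```

With these invented shares, the glossary receives the smallest slice, which is why it must stay small.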
How is a large file translated?
Typically, the user has two ways to complete the translation:
a) Segment by segment. As a rule, there are no issues here, except when the source contains long sentences, for example HTML with tags. In this case, it is recommended to use the Tags Encoding filter, which collapses all tags into a short form and protects them from translation.
b) Translation of the entire file. Depending on the connector, the file is divided into batches of 10-30 rows, which are sent for translation. To work correctly with GPT, batches are further divided into chunks of 10 rows for GPT-3.5 and 5 rows for GPT-4. After translation, the chunks are reassembled into batches and sent back.
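The chunking step in (b) can be sketched as follows; the chunk sizes (10 rows for GPT-3.5, 5 for GPT-4) come from the text above, while the function itself and the model labels are illustrative.

```python
def chunk_rows(rows, model):
    """Split a batch of rows into the chunk sizes described above:
    10 rows per request for GPT-3.5, 5 rows per request for GPT-4."""
    size = 10 if model == "gpt-3.5" else 5
    return [rows[i:i + size] for i in range(0, len(rows), size)]

batch = [f"row {i}" for i in range(23)]   # a batch of 23 rows
chunks35 = chunk_rows(batch, "gpt-3.5")   # 3 chunks: 10 + 10 + 3
chunks4 = chunk_rows(batch, "gpt-4")      # 5 chunks: 5 + 5 + 5 + 5 + 3
```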
How are costs calculated?
Each request is charged according to GPT rules. GPT counts not only the response but also the request. The request is formed from the source text, prompt, role, style guide, and glossary; the response includes whatever GPT returns as the translation (sometimes with comments). Thus, the token counts for the request and the response will differ.
The prompt will be applied every time a translation request is sent.
To reduce costs, minimise the amount of detail in Prompt engineering (style guide and glossary) and translate the document as a whole rather than line by line. When translating an entire document, the prompt is applied once per chunk (once per 10 lines for GPT-3.5 and once per 5 lines for GPT-4).
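The saving can be made concrete with a rough comparison of prompt overhead in the two modes; the numbers (a 100-row document, a 300-token prompt) are invented purely for illustration.

```python
import math

def prompt_token_overhead(total_rows, rows_per_chunk, prompt_tokens):
    """Estimate the extra tokens spent on re-sending the fixed prompt,
    given that it is applied once per chunk."""
    chunks = math.ceil(total_rows / rows_per_chunk)
    return chunks * prompt_tokens

# Line by line: the prompt is repeated for every single row.
per_segment = prompt_token_overhead(100, 1, 300)   # 30000 tokens
# Whole document with GPT-3.5 (10 rows per chunk): 10x fewer repetitions.
per_chunk = prompt_token_overhead(100, 10, 300)    # 3000 tokens
```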
What happens if there is an error?
If GPT throws an error (it is unavailable, or returns an empty or incomplete response), the console makes up to 5 attempts to receive the translation; if the issue still persists, it returns empty lines and continues with the next chunk.
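The retry behaviour can be sketched like this; `translate_fn` is a stand-in for the real GPT call, and the function is an illustrative assumption based on the description above, not the console's actual code.

```python
def translate_chunk_with_retries(chunk, translate_fn, max_attempts=5):
    """Try up to five times to translate a chunk; on persistent failure,
    fall back to empty lines so processing can move to the next chunk."""
    for _ in range(max_attempts):
        result = translate_fn(chunk)
        # Treat an error (None) or an incomplete response as a failure.
        if result is not None and len(result) == len(chunk):
            return result
    return ["" for _ in chunk]

# A stand-in engine that always fails, to show the empty-line fallback:
always_fails = lambda chunk: None
fallback = translate_chunk_with_retries(["Hello", "World"], always_fails)

# A stand-in engine that succeeds on the first attempt:
ok = translate_chunk_with_retries(["Hi"], lambda c: [s.upper() for s in c])
```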
I am using a Translator subscription. Can I use GPT-4 from OpenAI as a Custom Engine?
The Translator subscription does not include the option to use custom models. The MT character limit included in the subscription price covers only stock versions of MT providers. If you would like to use OpenAI as a Custom Engine, we recommend upgrading your subscription.