What does quality assurance look like for a translation project?
Some clients ask what quality assurance means for a translation project and how it differs from simply “rereading” the translation. Quality assurance is much more than checking text, and we treat it as separate from reviewing the text: the reviewer must be a specialist translator, while the remaining quality assurance activities do not require linguistic skills. Below is how we break down quality assurance:
- Benchmark pre-process expectations
- Measure post-process outcomes
- Run automated QA procedures
- Perform manual QA procedures
- Perform project evaluation
Quality assurance is any measure we take to ensure that an expectation is met. If any measurement we make post-process deviates significantly from our pre-process benchmarks, we question it and adjust either our expectations or the outcome (i.e. we fix the problem) so that the two are aligned. Comparing pre- and post-process measures is split between automated and manual procedures.
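As a minimal sketch of the automated side of that comparison, the Python below separates measures that must match exactly from measures allowed to drift within a predictable range. Every name, parameter, and tolerance here is a hypothetical illustration, not the behavior of any particular tool:

```python
# Hypothetical comparison of pre-process benchmarks to post-process
# outcomes. "Invariant" measures must match exactly; others may drift
# within a tolerance. All names and numbers are illustrative only.
INVARIANTS = ["target_languages", "file_count"]
TOLERANCES = {"word_count": 0.35}  # e.g. some languages expand ~35%

def flag_deviations(pre: dict, post: dict) -> list[str]:
    issues = []
    for key in INVARIANTS:
        if pre[key] != post[key]:
            issues.append(f"{key}: expected {pre[key]!r}, got {post[key]!r}")
    for key, limit in TOLERANCES.items():
        drift = abs(post[key] - pre[key]) / pre[key]
        if drift > limit:
            issues.append(f"{key}: changed by {drift:.0%} (limit {limit:.0%})")
    return issues

# Example: a 42% growth in word count exceeds the 35% tolerance.
print(flag_deviations(
    {"target_languages": ["de-DE"], "file_count": 3, "word_count": 1000},
    {"target_languages": ["de-DE"], "file_count": 3, "word_count": 1420},
))  # -> ['word_count: changed by 42% (limit 35%)']
```

Anything flagged is then investigated by a person, who either updates the expectation or corrects the deliverable.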
Benchmark pre-process expectations
A translation project can have many things to benchmark. Benchmarks come from three sources: client instructions, the client profile, and the files themselves. Most clients think only of the text in the files (i.e. all of it should be translated accurately into the target language), but even this can quickly get complicated if files are not properly prepared beforehand or there are special instructions (translate this, but not that, etc.). Admittedly, the real value of quality assurance comes into play with more complicated projects, which is why quality assurance is free for small or simple projects. The challenge for clients seems to be defining “complicated”: the more languages, files, and pre-production work involved, the more complicated the project. Here are some benchmarks taken pre-process (a sketch of how they might be recorded follows the list):
- Source language
- Target language and region
- Scheduled delivery date
- Estimated margin
- Client special instructions or preferences
- Client corporate terms or style guide
- File count
- Reference files available
- Page count
- Paragraph count
- Word count
- Images, fonts, and other digital assets within each file
- Scheduled process
- Vendor availability and estimated deadlines
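To make this concrete, here is a minimal sketch in Python of what a pre-process benchmark record could look like. The structure and every field name are hypothetical illustrations, not the schema of any actual tool:

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical record of the pre-process benchmarks listed above.
# All field names are illustrative, not taken from any particular tool.
@dataclass
class PreProcessBenchmark:
    source_language: str                   # e.g. "en-US"
    target_languages: list[str]            # e.g. ["de-DE", "ja-JP"]
    delivery_date: date
    estimated_margin: float                # as a fraction, e.g. 0.35
    special_instructions: list[str]
    style_guide: str | None                # client style guide, if provided
    file_count: int
    reference_files: list[str]
    page_count: int
    paragraph_count: int
    word_count: int
    assets_per_file: dict[str, list[str]]  # file -> images, fonts, other assets
```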
Something some clients don’t realize is that translations are sometimes done not within the source files, but within a new file (not a copy of the source) that must be manually recreated to mirror the source as closely as possible. This is an important distinction because images, fonts, and other digital assets need to be lifted from the source and adjusted to accommodate the space requirements of the new language and the conventions of the target culture. As a result, every parameter of every file needs to be accounted for.
Measure post-process outcomes
After benchmarking pre-process expectations, the actual process is executed. After execution, the same benchmark parameters are remeasured to verify the project was a success. Some measures should be identical, in part because permission settings in the technology used lock them in place: target language and region, client special instructions, application of reference material, number of files, usage of some digital assets, and so on. Other measures are expected to change in a predictable way, such as word count, text real estate (due to expansion or contraction of the translated text), and other benchmarks directly related to the translation process.
Many of these measures are collected automatically. The rest are collected manually, and managing all of them is manual work, but this is critical for documenting the quality control process and for addressing any issues later if necessary.
Run automated QA procedures
After collecting post-process outcomes, the next step is to compare them to the pre-process measures; this comparison is the core of quality assurance. Automated quality assurance is powerful but has its limitations. It works well for discrete, countable things: word count, whether every segment (a sentence, title, or other phrase) has been translated, whether numbers and proper nouns were correctly left untranslated, whether every glossary term was applied, and more. In general, the more automated QA tools used the better, as the time they take to assess quality is well worth the value of the report.
That said, automated QA tools typically require a degree of human intervention and validation, so their findings cannot be interpreted without human input.
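For illustration, here are minimal sketches of two of the checks described above: detecting untranslated segments and verifying glossary usage. Real CAT and QA tools are far more sophisticated; the function names and data shapes here are hypothetical.

```python
# Minimal sketches of two automated QA checks. Segments are modeled as
# (source, target) string pairs; all names here are illustrative only.

def check_untranslated(segments: list[tuple[str, str]]) -> list[str]:
    """Flag segments whose target is empty or identical to the source."""
    return [
        src for src, tgt in segments
        if not tgt.strip() or tgt.strip() == src.strip()
    ]

def check_glossary(segments: list[tuple[str, str]],
                   glossary: dict[str, str]) -> list[str]:
    """Flag segments where a source term appears but its approved
    target-language rendering does not."""
    issues = []
    for src, tgt in segments:
        for term, approved in glossary.items():
            if term.lower() in src.lower() and approved.lower() not in tgt.lower():
                issues.append(f"'{term}' should be rendered as '{approved}': {src!r}")
    return issues

# Every flagged segment still needs human review: an identical source
# and target may be correct (e.g. a product name), and glossary matches
# are only string-level approximations.
```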
Perform manual QA procedures
After the automated QA tools have been run, any remaining measures are manually assessed against the pre-process benchmarks. Manual QA is reserved for parameters that are not yet automated, are subjective, or are not currently well-defined enough for technology to evaluate, such as some elements of compliance, completeness, or aesthetics.
The more pages or permutations in the target files (not the source files, since target files can be in more than one language), the longer the manual QA process takes. For large projects, this means “review fatigue” can affect the quality of the deliverable. Two ways to address this are dividing the manual QA process between several people, or spreading it over several days. The choice depends on the complexity of the project: onboarding additional people onto a complex project creates more opportunity for errors to be overlooked, while spreading QA over several days risks delaying the project if not budgeted for during the planning stage. At BURG, since Project Managers are personally responsible for their deliverables, they must personally review everything before delivery, which leads them to favor spreading QA over several days rather than enlisting team members to contribute to the QA process.
Perform project evaluation
Because we are an LSP certified to ISO 9001 and ISO 17100, documentation is a critical part of every project. After a project is complete, it is eligible for a full project evaluation audit. The audit is simply a review of the project’s documentation as logged by the Project Manager and covers the following criteria:
- Response Time: Did the Project Manager respond to the client’s initial project request within 60 minutes?
- Client Instructions: Were client instructions logged and followed?
- Pre-processing: Is there evidence of proper pre-process activity?
- Translator/Editor selection: Were the right vendors chosen for the project?
- Translator/Editor instructions: Were client instructions correctly relayed to the vendors?
- Tracking/Deadlines: Were project deadlines tracked and met?
- PM QA: Is there evidence of both automated and manual quality assurance?
- Post-processing: Are the target files accurate renderings of the source files?
- Project Management Records: Are project emails with the client and vendors correctly archived with the project?
- Invoicing/Delivery: Is the margin within range? Were all deliverables correctly submitted?
The Production Manager reviews each aspect of the project and determines whether the above criteria are met. Some are measured automatically, while others require manual inspection. The value of this activity is to confirm that the team is properly trained and that, should an issue arise during the debrief process or later with the client, all relevant documentation is available and accessible.
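As an illustration, an audit could be recorded along these lines. The fields mirror the criteria above, while the names, the 60-minute threshold applied in code, and the pass logic are a hypothetical sketch:

```python
from dataclasses import dataclass

# Hypothetical audit record mirroring the criteria above. Response time
# and margin can be checked automatically; the boolean criteria are set
# by the Production Manager during manual inspection.
@dataclass
class ProjectAudit:
    response_minutes: int
    margin: float
    margin_range: tuple[float, float]
    instructions_followed: bool
    preprocessing_documented: bool
    vendors_appropriate: bool
    instructions_relayed: bool
    deadlines_met: bool
    qa_documented: bool
    files_accurate: bool
    emails_archived: bool
    deliverables_submitted: bool

    def passes(self) -> bool:
        lo, hi = self.margin_range
        return (
            self.response_minutes <= 60          # responded within the hour
            and lo <= self.margin <= hi          # margin within range
            and all([
                self.instructions_followed, self.preprocessing_documented,
                self.vendors_appropriate, self.instructions_relayed,
                self.deadlines_met, self.qa_documented, self.files_accurate,
                self.emails_archived, self.deliverables_submitted,
            ])
        )
```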
Summary
Quality assurance is a fundamental component of every project. Without it, there would be no confirmation that we performed the service requested of us and no documentation with which to improve our services and capabilities. Technology is used wherever possible to automate quality assurance activities, but some measures ultimately require human intervention and validation. The more important or complex the project, the greater the rigor in ensuring proper quality assurance procedures took place.
If you’d like to learn more about how BURG Translations helps you ensure high-quality translations, contact us today.