PET is a stand-alone, open-source (LGPL) tool written in Java that helps you post-edit and assess machine or human translations while gathering detailed statistics on post-editing time, among other effort indicators.
If you are interested in evaluating translations through post-editing, this is an easy and cheap solution: to set up an experiment, you only need to provide source and translation segments (from one or multiple MT systems; the tool does not depend on any particular MT system). Translators then post-edit the translations, while implicit quality indicators such as post-editing time, keystrokes, edit operations, and edit distance are stored for each segment. Explicit quality assessments can also be collected, and monolingual and bilingual dictionaries can be provided.
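To make the edit-distance indicator concrete, here is a minimal sketch of how such a measure can be computed between an MT output and its post-edited version. This is purely illustrative: the `levenshtein` helper below is not part of PET, and PET's own internal computation may differ.

```python
def levenshtein(a, b):
    """Minimum number of insertions, deletions and substitutions
    needed to turn sequence a into sequence b (works on strings
    or on token lists for word-level distance)."""
    # Classic dynamic-programming approach, keeping one row at a time.
    prev = list(range(len(b) + 1))
    for i, item_a in enumerate(a, 1):
        curr = [i]
        for j, item_b in enumerate(b, 1):
            curr.append(min(
                prev[j] + 1,                     # deletion
                curr[j - 1] + 1,                 # insertion
                prev[j - 1] + (item_a != item_b) # substitution (0 if equal)
            ))
        prev = curr
    return prev[-1]

# Word-level distance between an MT segment and its post-edit,
# normalized by post-edit length (an HTER-style effort score).
mt_output = "the cat sat on mat".split()
post_edit = "the cat sat on the mat".split()
edits = levenshtein(mt_output, post_edit)
score = edits / len(post_edit)  # one insertion over six words
```

A lower normalized score suggests the translator needed fewer edits, which is why edit distance is commonly used as an implicit indicator of post-editing effort.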
The tool also supports monolingual revision, can display reference translations, can render HTML for special markup, and allows constraints to be set on jobs on a per-segment basis (for example, the maximum time or length allowed for a given post-edited segment).