Erin Tan
Abstract
Low-resource languages such as isiZulu and isiXhosa face persistent challenges in machine translation (MT) due to limited parallel corpora and scarce linguistic resources [Haddow et al., 2022; Joshi et al., 2020]. Recent advances in large language models (LLMs) suggest that self-reflection—the ability of a model to critique and revise its own outputs—can enhance reasoning and factual consistency [Madaan et al.]. Building on this idea, we investigate reflective translation, a process in which an LLM internally evaluates and corrects its translations to improve semantic fidelity without multi-round prompting.
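To make the idea of reflective translation concrete, the sketch below shows one possible reading of the abstract: a single prompt asks the model to draft a translation, critique it, and emit a corrected version, so no separate critique round is needed. This is an illustrative assumption, not the paper's actual method; the prompt wording, the reflective_translate function, and the generic llm callable are all hypothetical stand-ins.

from typing import Callable

# Illustrative prompt: draft, self-critique, and revise in one pass,
# so the model corrects itself without multi-round prompting.
REFLECTIVE_PROMPT = """Translate the following English sentence into isiZulu.
First write a draft translation, then critique the draft for meaning errors
and omissions, and finally output a corrected translation.

Sentence: {source}

Respond in the format:
Draft: <draft translation>
Critique: <critique of the draft>
Final: <revised translation>"""


def reflective_translate(source: str, llm: Callable[[str], str]) -> str:
    """Return the model's self-corrected translation of `source`.

    `llm` stands in for any text-in/text-out LLM call (hypothetical).
    """
    response = llm(REFLECTIVE_PROMPT.format(source=source))
    # Keep only the revised translation that follows the "Final:" marker.
    for line in response.splitlines():
        if line.startswith("Final:"):
            return line[len("Final:"):].strip()
    return response.strip()  # Fall back to the raw response if unparseable.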