Speech-to-speech machine translation is one of the most strategically relevant areas for L2F. The state of the art in speech translation depends crucially on the state of the art of several core technologies: speech recognition, machine translation, and text-to-speech synthesis (particularly voice conversion, needed to reproduce the source speaker’s characteristics in the target language voice). The main limitations of current machine translation systems are the lack of semantic interpretation and world knowledge, as well as insufficient coverage of the large proportion of idiosyncratic linguistic phenomena in the lexicon and syntax. The most promising approaches combine improved statistical methods with improved knowledge-driven methods in a variety of clever ways.
Research at L2F started by investing in statistical speech-to-speech machine translation approaches based on weighted finite state transducers (WFSTs) [Picó 2005] [Caseiro 2006], aiming at a tight integration between recognition and translation. WFSTs are especially well suited for combining different types of approaches, whether statistical or knowledge-based. The combination may be advantageous for achieving two different goals: (i) including morpho-syntactic linguistic knowledge in the statistical machine translation paradigm, and (ii) tackling the data sparseness problem in speech translation. The work was carried out within the scope of a national project on “Weighted Finite State Transducers Applied to Spoken Language Processing”.
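The composition operation underlying this kind of integration can be illustrated with a hand-rolled sketch (not the actual L2F implementation): a toy Portuguese word lattice, represented as an acceptor, is composed with a toy word-translation transducer. The arc representation, the symbols, and the weights are all illustrative assumptions, and epsilon transitions, which a real system needs, are omitted.

```python
from collections import defaultdict

# Arcs: (src_state, input_symbol, output_symbol, weight, dst_state).
# Weights are tropical (costs), combined by addition along a path.

def compose(t1, t2):
    """Naive transducer composition (no epsilon handling): match the
    output side of t1 against the input side of t2; composed states
    are pairs of original states."""
    by_input = defaultdict(list)
    for (p, a, b, w, q) in t2:
        by_input[a].append((p, b, w, q))
    composed = []
    for (p1, a, b, w1, q1) in t1:
        for (p2, c, w2, q2) in by_input[b]:
            composed.append(((p1, p2), a, c, w1 + w2, (q1, q2)))
    return composed

# Toy recognition lattice over Portuguese words (acceptor: in == out).
lattice = [
    (0, "bom", "bom", 0.0, 1),
    (1, "dia", "dia", 0.0, 2),
]

# Toy PT->EN word-translation transducer (single looping state).
translation = [
    (0, "bom", "good", 0.1, 0),
    (0, "dia", "day", 0.4, 0),
    (0, "dia", "morning", 0.2, 0),
]

result = compose(lattice, translation)
for arc in result:
    print(arc)
```

The composed machine contains one path per candidate translation of the lattice ("good day" and "good morning" here); in the real architecture a shortest-path search over such a machine yields the best translation directly.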
In 2007, L2F participated in the 4th International Workshop on Spoken Language Translation [Graça 2007], where a standard combination of phrase-based machine translation and translation reranking was used. During reranking, new features based on linguistic information were introduced, with promising results.
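The reranking step can be sketched as a linear model over hypothesis features: each n-best hypothesis is rescored as a weighted sum of its features and the list is re-sorted. The feature names (`lm`, `tm`, `pos_lm`) and weights below are invented for illustration, not the features used in the evaluation.

```python
# Hypothetical n-best list: (hypothesis, feature_values) pairs.
# "pos_lm" stands in for an assumed linguistic feature, e.g. a
# part-of-speech language model score (higher = better for all).
nbest = [
    ("the house green", {"lm": -4.2, "tm": -1.0, "pos_lm": -6.0}),
    ("the green house", {"lm": -3.9, "tm": -1.3, "pos_lm": -2.5}),
]

weights = {"lm": 1.0, "tm": 1.0, "pos_lm": 0.5}

def rerank(nbest, weights):
    """Score each hypothesis as a weighted sum of its features and
    return (score, hypothesis) pairs sorted best-first."""
    scored = [(sum(weights[f] * v for f, v in feats.items()), hyp)
              for hyp, feats in nbest]
    return sorted(scored, reverse=True)

best_score, best_hyp = rerank(nbest, weights)[0]
print(best_hyp)
```

Here the linguistic feature overturns the baseline order: the first-pass best hypothesis has the better base scores, but the reranker prefers the grammatical word order.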
The current focus of research is text-based statistical machine translation, namely word alignments, since these are an important starting point for most state-of-the-art statistical machine translation systems. Accordingly, a new algorithm that achieves state-of-the-art results was developed in cooperation with the University of Pennsylvania [Graça 2007, Ganchev 2008]. In addition, guidelines for building manual alignments between different language pairs were proposed, along with gold alignments for six different European language pairs [Graça 2008]. This is a valuable resource both for evaluating and for tuning word alignment models.
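Gold alignments of this kind are typically used to score a system with the Alignment Error Rate (AER). A minimal sketch with invented toy alignments: links are (source index, target index) pairs, and the gold sure links are a subset of the gold possible links.

```python
def aer(sure, possible, hypothesis):
    """Alignment Error Rate: 1 - (|A & S| + |A & P|) / (|A| + |S|),
    where A is the hypothesis alignment and S (sure) is a subset of
    P (possible) in the gold annotation. Lower is better."""
    a, s, p = set(hypothesis), set(sure), set(possible)
    return 1.0 - (len(a & s) + len(a & p)) / (len(a) + len(s))

# Toy gold annotation and system output (indices are illustrative).
sure = {(0, 0), (1, 2)}
possible = sure | {(2, 1)}
hyp = {(0, 0), (1, 2), (2, 2)}
print(round(aer(sure, possible, hyp), 3))  # → 0.2
```

The metric rewards recovering the sure links while not penalizing links the annotators marked as merely possible, which is why the guidelines' sure/possible distinction matters for evaluation.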
In September 2008, the Machine Translation team will be reinforced with four Master’s students.
A demonstration of tightly integrated speech-to-text translation is available. The translation module is implemented as a single WFST that is used as the language model in the speech recognizer. This architecture produces sentences in the target language directly from source language speech.
A demonstration of large vocabulary translation is also available. The output of the WFST-based speech recognition module is translated by a WFST-based machine translation module trained on the European Parliament domain.