A special version of Automatic Speech Recognition (ASR) is Forced Alignment. With ASR, the software has to determine which of the ≈264K words was spoken. ASR uses acoustic features in combination with a language model to estimate which words were spoken. As input, ASR requires only the audio file.
Forced Alignment works a bit differently, because it presupposes that what was said is already known. The only thing the software has to figure out is the timing information: when did each word start and when did it stop. To do so, the software needs both the audio file and the transcription file.
A more elaborate explanation, with some phonetic examples, can be found at technology/forced-alignment.
WebMAUS
A very useful service to align audio and text is the CLARIN WebMAUS-basic service of the phonetics department of the LMU (München). The service needs two files: the audio file and an ASCII/UTF-8 text file containing the transcription.
For Oral History, the orthographic output is enough.
WebMAUS has more than 36 languages to choose from. In case a specific language is not available, the software works well with a language that is (acoustically) close to the spoken language. For example, we have force-aligned an interview spoken in Russian with Dutch as the selected language.
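For those who prefer to script the upload instead of using the web form, the BAS infrastructure also exposes the service over a REST interface. The sketch below is a minimal, hedged example of such a call in Python: the endpoint URL, the parameter names (SIGNAL, TEXT, LANGUAGE, OUTFORMAT) and the language code are taken from the public BAS web services documentation as we remember it, so check them against the current documentation before relying on them.

```python
# Minimal sketch of calling the WebMAUS-basic service over its REST interface.
# Endpoint, parameter names and language codes are assumptions based on the
# public BAS web services documentation; verify them before use.
import requests

MAUS_URL = "https://clarin.phonetik.uni-muenchen.de/BASWebServices/services/runMAUSBasic"

def run_maus_basic(audio_path: str, text_path: str, language: str = "nld-NL") -> str:
    """Send an audio file and its transcription to WebMAUS-basic.

    Returns the raw XML response, which (per the BAS documentation) contains a
    download link to the resulting TextGrid file.
    """
    with open(audio_path, "rb") as audio, open(text_path, "rb") as text:
        response = requests.post(
            MAUS_URL,
            files={"SIGNAL": audio, "TEXT": text},
            data={"LANGUAGE": language, "OUTFORMAT": "TextGrid"},
        )
    response.raise_for_status()
    return response.text  # XML with the download link of the TextGrid

# Example: align a Russian interview using Dutch as an acoustically close model.
# print(run_maus_basic("interview.wav", "interview.txt", language="nld-NL"))
```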
Problems
There are a couple of issues when starting to align the transcriptions. The main issue is the "abbreviations": the difference between the way a word is written and the way it is pronounced by the majority of native speakers.
Abbreviations
The easy cases are words without a vowel: they are spelled out or "replaced" by the original word.
- NCRV → /E n - s e - E r - v e/ (a Dutch broadcasting organisation)
- Mr → Mister → /M I s - t @ r/
If a word contains one or more vowels, it depends on local habits whether it is spelled out or pronounced as a word.
- NOS → N. O. S. → /E n - o - E s/ (the national Dutch broadcasting organisation). NOS could be pronounced as a word, nos /n O s/, but nobody does so.
- RAI → /r Ai/ (the national Italian broadcasting organisation). The word is perfectly pronounceable in Italian, so nobody uses the spelled-out form.
A particular kind of abbreviation is one whose expansion depends on the context, as the sketch after the examples below illustrates.
- An appointment with dr. Corti → an appointment with doctor Corti
- An appointment on the Corti dr. → an appointment on the Corti drive
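To illustrate how such context-dependent expansion could be handled in a preprocessing step, here is a minimal sketch in Python. The rule (expand "dr." to "doctor" before a capitalised name and to "drive" after one) and the function name expand_dr are illustrative assumptions, not the way WebMAUS or our own tools actually do it.

```python
# Minimal sketch of context-dependent expansion of "dr.": "doctor" when it
# precedes a capitalised name, "drive" otherwise. Purely illustrative.
def expand_dr(tokens: list) -> list:
    out = []
    for i, tok in enumerate(tokens):
        if tok.lower() == "dr.":
            nxt = tokens[i + 1] if i + 1 < len(tokens) else ""
            # "dr. Corti" -> doctor; "Corti dr." -> drive
            out.append("doctor" if nxt[:1].isupper() else "drive")
        else:
            out.append(tok)
    return out

print(" ".join(expand_dr("an appointment with dr. Corti".split())))
# -> an appointment with doctor Corti
print(" ".join(expand_dr("an appointment on the Corti dr.".split())))
# -> an appointment on the Corti drive
```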
Numbers
A special kind of abbreviation is the number. A normal number like 19 sounds as nineteen → /n Ai n - t i n/. But numbers are context-sensitive as well.
- my phone number is 621888146 → 6 2 1 8 8 8 (or 3 times 8) 1 4 6
- the cost in euro of that bridge is 621888146 → six hundred twenty-one million etc.
So, before using the G2P (grapheme-to-phoneme conversion), one needs to preprocess the text in order to determine the most likely pronunciation.
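A minimal sketch of such preprocessing is given below. The heuristic (read a number digit by digit when a cue word like "phone" or "number" precedes it, otherwise as a cardinal) is an illustrative assumption, and num2words is a third-party Python package.

```python
# Sketch of context-dependent number verbalisation before G2P. The cue-word
# heuristic is an illustrative assumption; num2words is a third-party package.
from num2words import num2words

DIGIT_WORDS = "zero one two three four five six seven eight nine".split()
DIGIT_CUES = {"phone", "number", "fax"}

def verbalise_numbers(text: str) -> str:
    tokens = text.split()
    out = []
    for i, tok in enumerate(tokens):
        if tok.isdigit():
            # Look back a few tokens for a cue that suggests digit-by-digit reading.
            context = {t.lower().strip(",.") for t in tokens[max(0, i - 4):i]}
            if context & DIGIT_CUES:
                out.extend(DIGIT_WORDS[int(d)] for d in tok)   # 621888146 -> six two one ...
            else:
                out.append(num2words(int(tok)))                # 19 -> nineteen
        else:
            out.append(tok)
    return " ".join(out)

print(verbalise_numbers("my phone number is 621888146"))
print(verbalise_numbers("the cost in euro of that bridge is 621888146"))
```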
Human transcriptions
When a transcription is made by experienced transcribers, one may expect that these kinds of "problems" are (partly) solved by the way the speech is orthographically transcribed. So if someone says "the doctor Luther King drive", you may hope that it is not transcribed as "the dr. Luther King dr.". In general, it is preferable that transcribers do not use abbreviations or numbers, but for practical reasons (speed, habit) they do.
So, transcribers will probably use numbers, as the transcription example in fig. 3 below shows. In order to align the text with the audio, however, WebMAUS first rewrites abbreviations and numbers into full words (Mr → Mister, 19 → nineteen). This means that the transcription you submit and the resulting TextGrid file differ on abbreviations and numbers.
[Input WebMAUS] "Mr Arjan is 19 years old" → [Output WebMAUS] "Mister Arjan is nineteen years old".
Speaker segmentation
Another issue is the speaker segmentation. In many human-made transcriptions, the speakers are annotated with their name or role, followed by a colon ("John: I was working in London, when....." or "Int: I was working in London, when...").
The words John or Int are not spoken, so they have to be removed before aligning the audio with the text. But this means that all information about who said what is removed as well.
So, the end result of forced alignment by WebMAUS is one string of words, independent of who was speaking and without full stops, question marks, commas and other punctuation.
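Before throwing the speaker labels away, it pays to store them separately, so the who-said-what information can be restored after the alignment. The sketch below shows one minimal way to split speaker-prefixed lines into (speaker, text) pairs; the label pattern (a short name or role followed by a colon) is an assumption about the transcription format.

```python
# Minimal sketch: split speaker-prefixed lines ("John: ...", "Int: ...") into
# (speaker, text) pairs so the labels can be stripped from the WebMAUS input
# without losing the who-said-what information.
import re

TURN_RE = re.compile(r"^\s*([A-Za-z][\w .-]{0,30})\s*:\s*(.+)$")

def split_turns(transcript: str) -> list:
    turns = []
    for line in transcript.splitlines():
        m = TURN_RE.match(line)
        if m:
            turns.append((m.group(1), m.group(2)))   # keep the speaker aside
        elif line.strip() and turns:
            # Continuation line: belongs to the previous speaker.
            speaker, text = turns[-1]
            turns[-1] = (speaker, text + " " + line.strip())
    return turns

turns = split_turns("Int: How did it start?\nJohn: I was working in London, when...")
print(turns)                           # [('Int', 'How did it start?'), ('John', '...')]
print(" ".join(t for _, t in turns))   # text only, ready for cleaning and WebMAUS
```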
Solution
Cleaning the transcription
The original transcription (aka the Word file) contains all the required information: the words, the speakers, the punctuation and the non-verbal utterances, as can be seen in the screenshots of the English (Fig. 3 above) and Italian (Fig. 7 below) transcripts. The required input for WebMAUS, however, is a plain ASCII/UTF-8 text file stripped of everything except the spoken words. So, the first step is to save the MS Word transcription files as plain text in UTF-8. The easiest way to do so is by hand, from within the Word application (Save As…).
Once the UTF-8 files are available, they can be read with Transcription Cleaner, a small Windows/OSX application that lets the user select the transcription text and saves it as an XML file with turns that carry the speaker as an attribute and the <original> and <cleaned> text as two text elements (see Fig. 5 below).
Besides this XML file, the program saves the cleaned text elements as a UTF-8 text file containing just the cleaned text. This cleaned text file can be used for the WebMAUS forced alignment.
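As an illustration of the output format described above, the sketch below writes an XML file with one turn per speaker (the speaker as attribute, <original> and <cleaned> as text elements) and a plain UTF-8 file with only the cleaned text. It is a hypothetical re-implementation for clarity, not the source code of Transcription Cleaner, and the cleaning rules are deliberately simple.

```python
# Sketch of the described output: one <turn> per speaker turn (speaker as
# attribute, <original> and <cleaned> as child elements) plus a plain UTF-8
# file with only the cleaned text. Tag names follow the description of Fig. 5;
# this is not the actual Transcription Cleaner code.
import re
import xml.etree.ElementTree as ET

def clean(text: str) -> str:
    """Keep only the spoken words: drop punctuation and bracketed non-verbal cues."""
    text = re.sub(r"\[[^\]]*\]|\([^)]*\)", " ", text)   # [laughs], (inaudible), ...
    text = re.sub(r"[^\w\s'-]", " ", text)               # punctuation
    return re.sub(r"\s+", " ", text).strip()

def write_cleaned(turns, xml_path: str, txt_path: str) -> None:
    root = ET.Element("transcription")
    cleaned_lines = []
    for speaker, original in turns:
        turn = ET.SubElement(root, "turn", speaker=speaker)
        ET.SubElement(turn, "original").text = original
        cleaned = clean(original)
        ET.SubElement(turn, "cleaned").text = cleaned
        cleaned_lines.append(cleaned)
    ET.ElementTree(root).write(xml_path, encoding="utf-8", xml_declaration=True)
    with open(txt_path, "w", encoding="utf-8") as f:
        f.write("\n".join(cleaned_lines) + "\n")   # input for WebMAUS

write_cleaned([("John", "I was working in London, when... [laughs]")],
              "interview.xml", "interview_cleaned.txt")
```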
Aligning the FA-result with the non-cleaned transcription
The result of the Forced Alignment (FA) by WebMAUS is a TextGrid file (as can be seen in Fig. 1 above) with just the aligned words and their start and end times. In another program these WebMAUS results and the cleaned text are 'aligned', resulting in an alignment that includes the speaker ID. In the next months we will try to glue the WebMAUS results not only to the cleaned text, but to the original text. If this can be done, it will result in the original transcription including the beginning and end time of each transcribed word.
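The 'gluing' itself can be quite simple as long as the cleaned text and the TextGrid contain the same number of word tokens (which is exactly the situation in the Italian example below). Assuming the word tier of the TextGrid has already been parsed into (word, start, end) tuples with an existing TextGrid library, a sketch of attaching the speaker IDs could look like this:

```python
# Sketch of the 'gluing' step: walk the TextGrid word intervals and the cleaned
# turns in parallel and attach a speaker ID to every aligned word. Assumes the
# cleaned text and the TextGrid contain the same word tokens in the same order;
# parsing the TextGrid itself is left to an existing library.
def attach_speakers(word_intervals, cleaned_turns):
    """word_intervals: [(word, start, end), ...] in TextGrid order.
    cleaned_turns:  [(speaker, cleaned_text), ...] in transcript order.
    Returns [(speaker, word, start, end), ...]."""
    result, idx = [], 0
    for speaker, text in cleaned_turns:
        for _ in text.split():                       # one aligned word per cleaned word
            word, start, end = word_intervals[idx]
            result.append((speaker, word, start, end))
            idx += 1
    return result

intervals = [("I", 0.00, 0.12), ("was", 0.12, 0.31), ("working", 0.31, 0.78),
             ("in", 0.78, 0.90), ("London", 0.90, 1.45)]
turns = [("John", "I was working in London")]
for speaker, word, start, end in attach_speakers(intervals, turns):
    print(f"{speaker}\t{word}\t{start:.2f}\t{end:.2f}")
```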
Example
We have tried to do so for some Italian transcripts. We succeeded because the original Italian transcriptions were rather straightforward and did not contain abbreviations or numbers (for an explanation, see below).
In fig. 6 we see the original transcription and in fig. 7 the final result: a player that highlights the spoken word.
Problems
The software works well, but there are some "issues". In the English transcriptions, numbers are often used (for example: "I saw 2 ladies of 36 years").
WebMAUS internally rewrites 2 → two and 36 → thirtysix. This is done because WebMAUS is basically a phonetic aligner and thus needs the numbers to be spelled out in order to do its "phonetic alignment". The same is true for abbreviations: mr. → mister, etc.
But as a result, the application has to align 36 with thirtysix and mr. with mister, which makes the software much more complex. A minimal sketch of such a token mapping is given below.
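To give an idea of what that extra complexity amounts to, the sketch below matches original tokens ("mr.", "36") against the tokens WebMAUS actually aligned ("mister", "thirtysix") with a small expansion table. Both the table (whose entries may map one original token to several aligned tokens) and the greedy matching are illustrative assumptions, not the actual implementation of the alignment application.

```python
# Minimal sketch of matching original tokens against the tokens WebMAUS
# actually aligned. The expansion table and the greedy one-to-many matching
# are illustrative assumptions only.
EXPANSIONS = {"mr.": ["mister"], "2": ["two"], "36": ["thirtysix"],
              "19": ["nineteen"]}

def map_tokens(original_tokens, maus_tokens):
    """Return [(original_token, [maus_tokens it covers]), ...]."""
    mapping, j = [], 0
    for tok in original_tokens:
        expected = EXPANSIONS.get(tok.lower(), [tok.lower()])
        covered = maus_tokens[j:j + len(expected)]
        mapping.append((tok, covered))
        j += len(expected)
    return mapping

orig = "I saw 2 ladies of 36 years".split()
maus = "I saw two ladies of thirtysix years".split()
for tok, covered in map_tokens(orig, maus):
    print(tok, "->", covered)
```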