Provide scripted test scenarios so we can repeat tests and get comparable results
We need more testing, and tests should be reproducible and comparable. Here is a basic test sequence:
- Hans (speaks only German) and Chad (speaks only English) have a conversation
- two interpreters (DE->EN and EN->DE)
- settings:
| Person | Floor mic | Selected language (listening) | Interpreter mic |
|---|---|---|---|
| Hans | on | DE | off |
| Chad | on | EN | off |
| interpreter DE->EN | off | floor | EN |
| interpreter EN->DE | off | floor | DE |
- the client configuration and the exact version under test should be documented here; a sketch of a machine-readable encoding follows
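To make the scenario fully repeatable, the settings could also be stored as data alongside the wiki page. Below is a minimal sketch in Python; the names `ParticipantConfig`, `floor_mic`, `language`, and `interpreter_mic` are invented for illustration and do not correspond to any existing client API:

```python
from dataclasses import dataclass

@dataclass
class ParticipantConfig:
    """One row of the settings table above (illustrative field names)."""
    name: str
    floor_mic: bool       # floor microphone on/off
    language: str         # listening channel: "DE", "EN", or "floor"
    interpreter_mic: str  # interpreter output channel ("EN"/"DE"), or "off"

SETTINGS = [
    ParticipantConfig("Hans", floor_mic=True, language="DE", interpreter_mic="off"),
    ParticipantConfig("Chad", floor_mic=True, language="EN", interpreter_mic="off"),
    ParticipantConfig("interpreter DE->EN", floor_mic=False, language="floor", interpreter_mic="EN"),
    ParticipantConfig("interpreter EN->DE", floor_mic=False, language="floor", interpreter_mic="DE"),
]
```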
- sequence:
| # | Hans says | Chad says | Chad hears | Hans hears |
|---|---|---|---|---|
| 1 | Hallo! Wie geht es dir? | | Hello! How are you? | |
| 2 | Wie geht es dir? | | How are you? | |
| 3 | | Hi! I am fine, thank you. | | Hallo! Danke, mir geht es gut. |
| 4 | | I can even understand you! | | Ich kann dich sogar verstehen! |
| 5 | Das war ja einfach. | | That was easy. | |
| 6 | | Ich kann ja doch Deutsch. | | Ich kann ja doch Deutsch. |
- at step 6 Hans hears the original audio (Chad speaks German, so no interpretation is needed); in every other step both participants hear the interpretation
- instead of having two interpreters, we could use a single interpreter who quickly switches back and forth between the two languages (see the note after the sketch below)
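The sequence could be encoded the same way, so every run uses identical utterances and the result can be checked against the expected column. This is only a sketch under the same assumptions as above; `SEQUENCE` and `print_script` are invented names, and the expected values simply restate the table, including the step-6 case where the original floor audio passes through:

```python
# Each step: (step number, speaker, utterance, listener, expected audio).
# The expected audio is the interpretation, except in step 6, where Chad
# speaks German and Hans hears the original floor audio.
SEQUENCE = [
    (1, "Hans", "Hallo! Wie geht es dir?", "Chad", "Hello! How are you?"),
    (2, "Hans", "Wie geht es dir?", "Chad", "How are you?"),
    (3, "Chad", "Hi! I am fine, thank you.", "Hans", "Hallo! Danke, mir geht es gut."),
    (4, "Chad", "I can even understand you!", "Hans", "Ich kann dich sogar verstehen!"),
    (5, "Hans", "Das war ja einfach.", "Chad", "That was easy."),
    (6, "Chad", "Ich kann ja doch Deutsch.", "Hans", "Ich kann ja doch Deutsch."),
]

def print_script() -> None:
    """Print the sequence as a checklist for a manual test run."""
    for step, speaker, said, listener, expected in SEQUENCE:
        print(f"step {step}: {speaker} says {said!r}; {listener} should hear {expected!r}")

if __name__ == "__main__":
    print_script()
```

For the single-interpreter variant, only the settings change (one interpreter entry whose `interpreter_mic` switches between "EN" and "DE"); the sequence and the expected results stay the same.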
I would like to document these cases in the wiki, but I would like some feedback before I do. What am I missing? Would you contribute your own test cases?