Results:
In the first experiment, I found myself constantly trying to please the algorithm—fixing my posture, speaking at a higher pitch to appear more “energetic” to the machine, sometimes even deliberately censoring myself to sound more uplifting. Though my Zoom visitors seemed generally happy with how the conversations went, I felt distracted and dishonest during the calls.
In the second experiment, the existing biases in the translation pipeline resulted in mistranslations. The speech recognition API, because of the data it is trained on, may be better at recognizing some accents than others. Another notable instance involves pronouns: in spoken Chinese, there is no difference between the pronunciation of “he,” “she,” and “it”—all three are pronounced “tā.” Google Translate renders the transcribed “ta” as “he” by default, no matter what the context is.
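The pronoun collapse described above can be sketched as a toy pipeline. This is purely illustrative (it is not Google Translate’s or the speech recognizer’s actual logic, and the `transcribe` and `translate` functions are hypothetical); it only shows why a speech-to-translation chain loses gender information when three written pronouns share one pronunciation.

```python
# Toy illustration (NOT the actual Google Translate / speech API logic):
# all three Mandarin third-person pronouns share the pronunciation "tā".
pronouns = {
    "他": ("tā", "he"),   # male
    "她": ("tā", "she"),  # female
    "它": ("tā", "it"),   # inanimate
}

def transcribe(pinyin):
    """Hypothetical recognizer: hearing only "tā", it must commit to one
    written character, and the most common one, 他 ("he"), wins."""
    candidates = [hanzi for hanzi, (p, _) in pronouns.items() if p == pinyin]
    return candidates[0]  # defaults to 他

def translate(hanzi):
    """Hypothetical word-level translation lookup."""
    return pronouns[hanzi][1]

# Whatever the speaker meant, the pipeline emits "he".
print(translate(transcribe("tā")))
```

The sketch makes the bias mechanical: the ambiguity is resolved once, at transcription time, before any conversational context could inform the choice.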
The mistranslations, frustrating at times, also had an upside. They brought the conversations to unexpected places, where I was able to learn more about my visitors from unplanned perspectives. And however strange it may seem, I felt safe and comfortable hiding behind the technological mask, talking freely in my first language without worrying about how I sounded.