16 Mar An Interview with ALEX: Prototype 2.0 Feedback
I sent the prototype and the survey to some of my friends, colleagues, and others in the Creative Media Awards cohort. So far I have received a dozen responses, which contain many valuable observations and suggestions. Thanks so much to those who tested it out!
My takeaways from prototype 2.0 user testing:
- For most participants, ALEX’s presence came across as extremely negative. It has been described as “distant, cold, inhuman, robotic, unsupportive, annoying, stilted, rude, aggressive and militaristic.”
- With that said, most people chose to cooperate with ALEX and follow its instructions.
- A few participants said that even though ALEX was unpleasant and robotic, it successfully pressured them into wanting to pass the test, and even managed to hurt their feelings.
- All participants failed the test. I wonder whether I should make it easier to pass, to see if people would feel any differently toward the interview and ALEX if they won the game.
- Many participants were confused about what exactly they would be hired to do. Perhaps I should add a short video introducing the role of a mini gamer before the interview.
- For some, the interview felt too long and almost never-ending. Would adding a progress bar help?
- In order to reach more people, especially those who are skeptical about going through a 15-minute interactive experience, I’m considering making a 1-3 minute video introducing the project from the creator’s point of view. The video could potentially include reactions from first-time participants as well.
Overall, prototype 2.0 succeeded in my purpose of provoking people to think about the role of automation in the hiring process, as well as about how they would react to such a system. Looking at the responses, I find that many who went through the interview did not just analyze ALEX and its mechanism, but reflected on themselves and on how their human brains reacted in such a robotic process. This is exactly what I hoped for. After all, when we talk about artificial intelligence, we are actually talking about humans. All A.I. systems are essentially reflections of existing socio-economic systems. To make better A.I. systems, we have to be better humans.