On treating users with respect.
When things are slow at work, or when I’m feeling bored and listless, I write thank-you notes to users I’ve interviewed recently. My handwriting is both small and sloppy, so I use a black Sharpie, and I keep my notes direct and concise: “Thank you for taking the time to talk with me during my visit last week. I really appreciate your feedback!”
I make a point of this not out of good Texan manners or effortless feminine nicety, but because it’s the only available option. In my current job, I’m doing design research for enterprise software, interviewing the company’s employees during their work hours. When I did consumer-facing user research, I loved feeling like a game show host at the end of an interview, handing over an envelope of cash or a gift card. In my current job, compensating the users isn’t on the table. “If I can’t pay them, I should properly thank them,” I told a coworker. Users, after all, are the most crucial resource here.
Despite the User Experience boom of the past decade, ethics and codes of conduct have yet to see much meaningful discussion in professional circles. How many discussions have I sat through about the Hamburger Icon, versus how many about how we should treat or inform our users during testing? During secondary data collection? About deception or dishonesty?
I try to be very scrupulous when reading an informed consent statement for usability testing or user interviews:
“You can leave at any time, you can stop at any time, you can ask questions at any time, and there is no deception involved.”
I’d wager that for most tech communities, the maddening, no-good-solution quandary of how to treat users and their data is less of a crowd-pleaser than how our work is going to impress our peers, give our customers something new, or make things better for them. If privacy and subjectivity were given the same intense treatment as the Hamburger Icon, we’d be in a much better place in this regard.
This might be a symptom of the larger disease of chronic technological myopia: by emphasizing singular instances of “solving the problem” for users with the design of solutions, or “delighting” the user in interaction, or just pulling off feats of engineering without anything breaking, there’s little room for identifying ground rules or having hard discussions. Like those about how to approach people who have sensitivities or vulnerabilities different from our own. About diversity issues or labor issues or privacy issues in the places we work.
Product teams should use as much data as they can get their hands on; it’s good business practice. Moreover, data scientists have no reason not to work closely with product teams and customer-facing groups. Knowing what else goes on where they work makes everyone more informed and more collaborative. When I facilitate user research sessions with business teams, I like to start with the questions, “What do we know, and how do we know it?”
The Facebook emotional contagion experiments (there have been several) aren’t watershed moments for science: they’re crappy experiments done with crappy rationale and techniques that badly attempt to model humanity vis-à-vis computing. While some might be shocked that these went on in the confines of university labs, this is anything but a surprise to anyone who has spent time in a cash-strapped, revenue-oriented R1 university. Tech companies can and do freely pick and choose among stressed-out assistant professors and underfunded grad students to take their research dollars. Institutional Review Boards will approve animal testing while grilling anyone doing ethnographic interviews, but machine learning barely merits their attention.
How technology companies regard their users is a huge, unsolved, largely unacknowledged problem. The only reasonable thing I can do is point to the problem, argue that it actually is a problem, and try to make a case for addressing it. I have yet to figure out how to start solving it on my own front, aside from using human decency as a metric: I try to build relationships with users when I can, and try to act in the best interest of that relationship when going about my work. It’s the least I can do.