More mundanely, and more readily within reach, let's build systems that tell us how certain a recommendation is. People work well with suggestions that come with a rationale; given the evidence behind a recommendation, they can decide whether it is really for them. By contrast, when no argument is offered there is little to work with, and the recommendation must be taken on blind faith. Even offering a confidence score alongside the recommendation would help.
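As a minimal sketch of this idea (the names and structure here are illustrative, not from any particular system), a recommender's output could carry its confidence and its supporting evidence alongside the suggestion itself, so the user has something to reason with:

```python
from dataclasses import dataclass

@dataclass
class Recommendation:
    item: str
    confidence: float  # model's estimated probability, 0.0-1.0
    rationale: str     # the evidence to show the user

def present(rec: Recommendation) -> str:
    # Surface the confidence and evidence rather than a bare suggestion,
    # so the user can judge whether the recommendation applies to them.
    return (f"Suggested: {rec.item} "
            f"(confidence {rec.confidence:.0%}), "
            f"because {rec.rationale}")

print(present(Recommendation("hiking boots", 0.72,
                             "you recently browsed trail maps")))
```

Whether a percentage is the right presentation is itself a design question; the point is that the certainty and the rationale travel with the suggestion.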
Focus on failure; don't assume success. While designing and building systems, we should consider what is at stake and ask ourselves: What is the price of failure, and what would UNDO look like? We need to ask what it costs to undo the consequences of actions taken on the basis of algorithmic suggestions and recommendations. It is easy to dismiss or ignore a product recommendation; it is far harder to recover from the trauma of being wrongly apprehended by authorities convinced of your guilt, their certainty powered by uncritically accepted and little-understood computation.
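One way to make "what would UNDO look like?" concrete in a design review is to ask, for each system action, whether an inverse exists at all. A rough sketch (all names here are hypothetical, invented for illustration):

```python
from dataclasses import dataclass
from typing import Callable, List, Optional

@dataclass
class Action:
    name: str
    do: Callable[[], None]
    # None means the consequences cannot be taken back by the system.
    undo: Optional[Callable[[], None]]

def irreversible(actions: List[Action]) -> List[str]:
    # Flag the actions whose consequences the system cannot reverse;
    # these are the high-consequence failures to design around.
    return [a.name for a in actions if a.undo is None]

cart: List[str] = []
actions = [
    Action("recommend product",
           do=lambda: cart.append("item"),
           undo=lambda: cart.pop()),
    Action("flag person to authorities",
           do=lambda: None,
           undo=None),  # no clean inverse exists
]
print(irreversible(actions))  # prints ['flag person to authorities']
```

The asymmetry the column describes falls out directly: the product recommendation has a trivial inverse, while the high-stakes action has none, which is exactly why it deserves a different error budget.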
To avoid an escalation in the number of negative unintended consequences, we need to rethink algorithmic recommendation. We need to think about the why, where, and how of algorithmic suggestions and recommendations. We need to be more proactive in exploring the potential for high-consequence versus low-consequence errors. We need to ask: How trustworthy is the information presented? How is the information presented—what is present and what is missing? What is salient? What is the expertise of the person to whom the recommendation or filtered information is presented? We HCI researchers and practitioners have been grappling with these kinds of issues for a long time—perhaps having more influence on the design of recommendation systems would be a good thing.
Originally from the U.K., Elizabeth Churchill has been leading corporate research at top U.S. companies for the past 18 years. Her research interests include social media, distributed collaboration, mediated communication, and ubiquitous and embedded computing applications. email@example.com
Copyright held by authors
The Digital Library is published by the Association for Computing Machinery. Copyright © 2019 ACM, Inc.