Summary: “Nudging” — the strategy of changing users’ behavior based on how apparently free choices are presented to them — has come a long way since the concept was popularized by University of Chicago economist Richard Thaler and Harvard Law School professor Cass Sunstein in 2008. With so much data about individual users, and with the AI to process it, companies are increasingly using algorithms to manage and control individuals — and in particular, employees. This has implications for workers’ privacy and has been deemed by some to be manipulation. The author outlines three ways that companies can take advantage of these strategies while staying within ethical bounds: creating win-win situations, sharing information about data practices, and being transparent about the algorithms themselves.
By Mareike Möhlmann, assistant professor at Bentley University

Companies are increasingly using algorithms to manage and control individuals not by force, but rather by nudging them into desirable behavior — in other words, learning from their personalized data and altering their choices in some subtle way. Since the Cambridge Analytica scandal in 2017, for example, it has been widely known that the flood of targeted advertising and highly personalized content on Facebook may not only nudge users into buying more products, but also coax and manipulate them into voting for particular political parties.
University of Chicago economist Richard Thaler and Harvard Law School professor Cass Sunstein popularized the term “nudge” in 2008, but due to recent advances in AI and machine learning, algorithmic nudging is much more powerful than its non-algorithmic counterpart. With so much data about workers’ behavioral patterns at their fingertips, companies can now develop personalized strategies for changing individuals’ decisions and behaviors at large scale. These algorithms can be adjusted in real-time, making the approach even more effective.
Algorithmic nudging tactics are increasingly employed in work environments, with companies using texts, gamification, and push notifications to influence their workforce...
One way to make these algorithms more transparent is to offer counterfactual explanations. These show what the outcome of a decision-making algorithm would have been for a specific individual if they had had different characteristics or attributes — a simple, non-technical way to show how the algorithm works.
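To make the idea concrete, here is a minimal sketch of a counterfactual explanation for a toy scoring rule. The rule, the attribute names (`rating`, `acceptance_rate`), and the threshold are hypothetical illustrations, not drawn from the article or any real workforce-management system:

```python
# Hedged sketch: a counterfactual explanation for a toy decision rule.
# All weights, attributes, and thresholds below are invented for illustration.

def score(features):
    """Hypothetical scoring rule: a weighted sum of two worker attributes."""
    return 0.6 * features["rating"] + 0.4 * features["acceptance_rate"]

def counterfactual(features, attribute, threshold=0.7, step=0.01):
    """Find the smallest increase in one attribute that flips the decision.

    Returns a modified copy of `features` that passes the threshold,
    or None if no value of the attribute (capped at 1.0) would suffice.
    """
    candidate = dict(features)
    while score(candidate) < threshold and candidate[attribute] <= 1.0:
        candidate[attribute] = round(candidate[attribute] + step, 4)
    return candidate if score(candidate) >= threshold else None

worker = {"rating": 0.8, "acceptance_rate": 0.5}  # currently scores below 0.7
cf = counterfactual(worker, "acceptance_rate")
if cf is not None:
    print(f"Decision would flip if acceptance_rate rose from "
          f"{worker['acceptance_rate']} to {cf['acceptance_rate']}")
```

The output is the kind of statement a counterfactual explanation produces: "you would have received the opposite decision had this attribute been X instead of Y," which is intelligible without exposing the model's internals.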
Source: Harvard Business Review