MIT Sloan Management Review Article on Why ‘Explicit Uncertainty’ Matters for the Future of Ethical Technology
- Mark Nitzberg
- MIT Sloan Management Review
- 2021
The biggest concerns over AI today are not about dystopian visions of robot overlords controlling humanity. Instead, they’re about machines turbocharging bad human behavior. Social media algorithms are among the most prominent examples.
Take YouTube, which over the years has implemented features and recommendation engines geared toward keeping people glued to their screens. As The New York Times reported in 2019, many content creators on the far right learned that they could tweak their offerings to make them more appealing to the algorithm, driving users to watch progressively more extreme content. YouTube has taken action in response, including efforts to remove hate speech. An independently published study in 2019 claimed that YouTube’s algorithm was doing a good job of discouraging viewers from watching “radicalizing or extremist content.” Yet as recently as July 2021, new research found that YouTube was still sowing division and helping to spread harmful disinformation.
About the Author
Mark Nitzberg is executive director of the UC Berkeley Center for Human-Compatible AI and coauthor of The AI Generation: Shaping Our Global Future With Thinking Machines (Pegasus Books, 2021).