MIT Sloan Management Review Article on Nudge Users to Catch Generative AI Errors

  • Arnab D. Chakraborty, Patrick Connolly, Paul Daugherty, Philippe Roussiere, Renée Richardson Gosline, Yunhao Zhang, Haiwen Li
  • MIT Sloan Management Review
  • 2024

Using large language models to generate text can save time but often results in unpredictable errors. Prompting users to review outputs can improve their quality.

OpenAI’s ChatGPT has generated excitement since its release in November 2022, but it has also created new challenges for managers. On the one hand, business leaders understand that they cannot afford to overlook the potential of generative AI built on large language models (LLMs). On the other hand, apprehensions surrounding issues such as bias, inaccuracy, and security breaches loom large, limiting trust in these models.

In such an environment, responsible approaches to using LLMs are critical to the safe adoption of generative AI. Consensus is building that humans must remain in the loop (a scenario in which human oversight and intervention place the algorithm in the role of a learning apprentice) and that responsible AI principles must be codified. Without a proper understanding of AI models and their limitations, users may place too much trust in AI-generated content. Accessible and user-friendly interfaces like ChatGPT, in particular, can present errors with confidence while offering users no transparency, warnings, or acknowledgment of their own limitations. A more effective approach must help users identify the parts of AI-generated content that require affirmative human choice, fact-checking, and scrutiny.

About the Author

Renée Richardson Gosline is head of the Human-First AI Group at MIT’s Initiative on the Digital Economy and a senior lecturer and research scientist at the MIT Sloan School of Management. Yunhao Zhang is a postdoctoral fellow at the Psychology of Technology Institute. Haiwen Li is a doctoral candidate at the MIT Institute for Data, Systems, and Society. Paul Daugherty is chief technology and innovation officer at Accenture. Arnab D. Chakraborty is the global responsible AI lead and a senior managing director at Accenture. Philippe Roussiere is global lead, Paris, for research innovation and AI at Accenture. Patrick Connolly is global responsible AI/generative AI research manager at Accenture Research, Dublin.

