The Ethics of Using Generative AI in Business: A Lesson for Leaders to Exercise Caution
Student Leadership and Ethics Board (SLEB) member Liv Erickson ’24 considers the ethics and impact—not just the productivity—of generative artificial intelligence for business leaders.
By Liv Erickson '24
ChatGPT exploded onto the scene seemingly overnight, surfacing the need for conversation around how this type of technology is built and where it is deployed. ChatGPT is the latest in generative artificial intelligence (AI), a category of AI that generates outputs based on the data it has been trained on. Business leaders are quick to extol its productivity benefits, while skeptics warn of negative impacts related to training bias, copyright abuse, and even environmental harm.
In practice, generative artificial intelligence is, in all likelihood, here to stay. While the technology only recently reached consumers, a small handful of technology firms that own the data centers required to train AI models at the scale of ChatGPT have been working on the problem of "general-purpose artificial intelligence" for decades. The stated goal of OpenAI, the company behind ChatGPT, is to create an AI that "benefits all of humanity."
Today’s business leaders need to understand the ethical implications and tradeoffs of using ChatGPT and other widely available AI tools. Artificial intelligence is developed using machine learning, a way of teaching computing systems to identify patterns based on large-scale training data sets, and companies often keep their training data and models private to maintain a competitive edge. However, without transparency into training sets, ethical challenges related to the sourcing and usage of content are introduced. Algorithmic bias that replicates sexist and racist ideologies can be found in many machine-learning models, including those that power ChatGPT, yet leaders are racing to produce AI-assisted content.
"Today's business leaders need to understand the ethical implications and tradeoffs of using ChatGPT and other widely available AI tools."
Generative AI also poses challenges around the acquisition of training data. Several lawsuits have been filed against generative AI companies alleging that they invoke the United States' fair use doctrine to justify copying copyrighted works into their training sets. In the case of GitHub Copilot, the Microsoft-owned code-generating AI, full sections of code from the training data have appeared verbatim in generated output, raising questions about how much is truly being "transformed" by the algorithm versus lifted directly from the input.
Over the last year, we’ve seen AI go from being a highly specialized tool for corporations to a personalized source of “human knowledge” accessible to anyone with an internet connection. And, in academia, questions of how to handle ChatGPT have prompted some professors to explicitly incorporate the software into their curricula, adopt stricter grading guidelines, or ban it altogether. (In fact, as I sit writing this essay before my Leadership Through Fiction class, several of my classmates are discussing how ChatGPT summarized the assigned readings.) As both a powerful tool for learning and a quick way to generate essays, ChatGPT has forced educators to learn and adapt quickly.
Despite the ambitious goal of general-purpose AI consolidating all human knowledge into a convenient chatbot, it’s important for business leaders to realize that we are far from equitable access. The computational power required to train models like ChatGPT is restricted to those who can afford powerful hardware, cloud access, and the transfer of training data. This creates a challenging dynamic in which firms in positions of power can keep smaller firms from competing effectively; estimates for the cost of running ChatGPT range from $1.5 million to $8 million per month.
Finally, the environmental cost of running and using large-scale generative AI systems should be considered. The graphics-intensive hardware used to store data, train models, and process queries draws significant power, with corresponding CO2 emissions. Firms should evaluate how the use of machine learning and AI in their operations may or may not meet corporate sustainability goals.
In the coming years, leaders across all industries will need to understand and adopt strategies to take advantage of AI’s capabilities while mitigating the risks and impact of doing so. Education about the ethical implications of using AI in business will allow organizations to make enlightened decisions about the tradeoffs and costs associated with the rapid acceleration of machine-generated content.
Liv Erickson is a Director at Mozilla. Prior to joining Mozilla, she worked at AWS and Microsoft, and holds an undergraduate degree in Computer Science from the College of Engineering at Virginia Tech.