
Ethical Uses of AI

Resources for Enhancing Teaching and Learning with Advanced Language Models

Ethical Considerations

The plausibility of ChatGPT's output can give users the illusion of sentience, but generative AI should not be confused with artificial general intelligence, which does not exist.  Humans are prone to anthropomorphization, which can lead users to imbue ChatGPT with abilities and consciousness it does not have.  There are additional dangers and liabilities when generative AI is used as a substitute for human decision-making, human companionship, and human psychological care.

An excellent starting point is Leon Furze's Teaching AI Ethics.


In the modern age of globalization and interconnectedness, many people have become aware of the importance of investigating what the companies they support, in turn, support.  Recently, many have chosen not to patronize companies whose views do not align with their own.

Many of Silicon Valley's power players, including ChatGPT maker OpenAI's CEO Sam Altman, have expressed support for Longtermist philosophies.  "Longtermism," which should not be confused with mere long-term thinking, is a fringe philosophical belief about the future of humanity.


ChatGPT's Terms of Service allow the company to use any and all data entered as prompts to the tool, unless 'incognito mode' is activated.  OpenAI is currently under investigation by the Federal Trade Commission (FTC), in part for potential privacy violations.  Users should avoid entering any personally identifiable or financial information into a ChatGPT prompt.

See 5 Things You Must Not Share With AI Chatbots.


An often-overlooked concern is the environmental impact of generative AI tools.  Generating predictive text or images with AI tools requires a huge amount of computing power, and with it a great deal of energy and water, far more than a traditional search.  In fact, researchers have estimated that just 5 ChatGPT prompts use 16 oz. of water, and that each individual prompt consumes energy equivalent to running a 5-watt LED bulb for more than 1 hour.  This is about 15 times the resource use of a Google search.
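To make these per-prompt figures concrete, the arithmetic below scales them up to a hypothetical class assignment.  This is an illustrative sketch based only on the rough estimates cited above (16 oz. of water per 5 prompts; roughly 5 watt-hours of energy per prompt); the class size and prompt counts are made-up assumptions, not measurements.

```python
# Illustrative arithmetic based on the rough estimates cited above.
WATER_OZ_PER_5_PROMPTS = 16   # ~16 oz. of water per 5 prompts (cited estimate)
BULB_WATTS = 5                # 5-watt LED bulb...
BULB_HOURS = 1                # ...run for about one hour per prompt

water_per_prompt_oz = WATER_OZ_PER_5_PROMPTS / 5   # 3.2 oz. per prompt
energy_per_prompt_wh = BULB_WATTS * BULB_HOURS     # ~5 watt-hours per prompt

# Hypothetical class: 30 students each running 10 prompts.
prompts = 30 * 10
print(f"Water:  {prompts * water_per_prompt_oz:.0f} oz.")  # 960 oz. (~7.5 gallons)
print(f"Energy: {prompts * energy_per_prompt_wh:.0f} Wh")  # 1500 Wh (1.5 kWh)
```

Even at these conservative figures, a single class assignment consumes several gallons of water and over a kilowatt-hour of electricity.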


One of the primary concerns about text generators is that they are known to produce inaccurate information.  Large Language Models (LLMs) like ChatGPT cannot tell fact from fiction, sometimes giving incorrect answers or citing nonexistent sources.  This is sometimes called 'hallucinating,' though some researchers feel the term inappropriately anthropomorphizes AI models, which is a concern in itself.  Often the inaccuracies produced by ChatGPT are harmless, but sometimes they have negative or dangerous real-world consequences.  ChatGPT maker OpenAI is currently under investigation by the Federal Trade Commission, in part because of concerns over harmful inaccuracies produced by the text generator.

Students should be reminded that the onus is always on the end user to verify any and all information that comes from a text generator.


Human-created content is necessary for training, as training LLMs on synthetic media (e.g., text or images created by generative AI tools) leads to model collapse.  Most generative AI tools are trained on massive amounts of copyright-protected text, usually without permission or compensation.  Currently, there are numerous open lawsuits against ChatGPT maker OpenAI and other generative AI companies for copyright infringement, including from prominent authors, publications, and artists.  Companies have argued that LLM training falls under 'fair use,' but given the extant and predicted effects on the marketplace and the commercial nature of OpenAI's business model, this claim is questionable.

Please note that entering copyrighted work into ChatGPT prompts (including your students' work) may infringe on intellectual property rights. 



Even before the advent of ChatGPT, AI tools had well-documented problems with bias.  Generative AI models often perpetuate the biases found in the training data, and training data often includes text from some of the worst corners of the internet.


It's easy to think of generative text tools like ChatGPT as entirely machine-based, but a vast amount of human labor is needed to ensure they work correctly.  The data annotation and content moderation that enable LLMs to function as desired are mostly done by underpaid freelancers in the Global South.  Sometimes these workers must view and label the very worst internet content, including depictions of sexual violence and child sexual abuse material.


A major worry about generative text tools is their potential to be used by bad actors to mislead, deceive, and generate spam and phishing attempts on a massive scale.  ChatGPT and other generative tools are already being used to create and spread propaganda, spam, disinformation, and conspiracy theories.


Efforts to Mitigate Damage

I hear OpenAI employed Kenyan workers to train ChatGPT to filter out harmful content, including violent and sexual material. Is this true?

Yes, this is true. 


Glendale Community College | 1500 North Verdugo Road, Glendale, California 91208 | Tel: 818.240.1000  
GCC Home  © 2024 - Glendale Community College. All Rights Reserved.