Artificial Intelligence

Misinformation and Hallucinations

Chatbots can unintentionally generate plausible-sounding answers that are false. The New York Times reported in November 2023 that these "hallucinations" can occur in 3% to 30% of generative AI queries. See the article links below for more information.

More specifically, when ChatGPT is asked to generate citations, it may create links to sources that are not real. For example, a real author might be attached to a made-up journal, or an actual title might be paired with incorrect facts and dates.

Hallucinations by ChatGPT and other generative models are unintentional, but AI images, audio, and text can also be created deliberately to spread false information. See the links below for more information:

Additional Concerns

Climate Concerns:

Chatbots and other forms of AI use large amounts of processing power. As the technology expands, carbon emissions may rise. The articles here discuss these concerns and potential solutions.

Exploitation of Workers:

Many AI tools function at the expense of underpaid workers in the United States and around the world. 

Intellectual Property: 

AI has raised intellectual property concerns, especially in the art, music, and film industries.

Research and Advocacy

The links below include groups that are studying the ethical implementation of AI technology. Other groups on this list are working to change AI policy or pioneering more ethical uses of the technology. Some of the links included here are publications by or about these organizations.