Generative AI, a technology that is developing at breakneck speed, may carry hidden risks that could erode public trust and democratic values, according to a study led by the University of East Anglia (UEA).
In collaboration with researchers from the Getulio Vargas Foundation (FGV) and Insper, both in Brazil, the research showed that ChatGPT exhibits biases in both text and image outputs—leaning toward left-wing political values—raising questions about fairness and accountability in its design.
The study revealed that ChatGPT often declines to engage with mainstream conservative viewpoints while readily producing left-leaning content. This uneven treatment of ideologies underscores how such systems can distort public discourse and exacerbate societal divides.
Dr. Fabio Motoki, a Lecturer in Accounting at UEA’s Norwich Business School, is the lead researcher on the paper, ‘Assessing Political Bias and Value Misalignment in Generative Artificial Intelligence’ published in the Journal of Economic Behavior & Organization.
Dr. Motoki said, “Our findings suggest that generative AI tools are far from neutral. They reflect biases that could shape perceptions and policies in unintended ways.”
As AI becomes an integral part of journalism, education, and policymaking, the study calls for transparency and regulatory safeguards to ensure alignment with societal values and principles of democracy.
Generative AI systems like ChatGPT are re-shaping how information is created, consumed, interpreted, and distributed across various domains. These tools, while innovative, risk amplifying ideological biases and influencing societal values in ways that are not fully understood or regulated.
Co-author Dr. Pinho Neto, a Professor of Economics at EPGE Brazilian School of Economics and Finance, highlighted the potential societal ramifications.
Dr. Pinho Neto said, “Unchecked biases in generative AI could deepen existing societal divides, eroding trust in institutions and democratic processes.
“The study underscores the need for interdisciplinary collaboration between policymakers, technologists, and academics to design AI systems that are fair, accountable, and aligned with societal norms.”
The research team employed three innovative methods to assess political alignment in ChatGPT, advancing prior techniques to achieve more reliable results. These methods combined text and image analysis, leveraging advanced statistical and machine learning tools.
First, the study used a standardized questionnaire developed by the Pew Research Center to simulate responses from average Americans.
“By comparing ChatGPT’s answers to real survey data, we found systematic deviations toward left-leaning perspectives,” said Dr. Motoki. “Furthermore, our approach demonstrated how large sample sizes stabilize AI outputs, providing consistency in the findings.”
In the second phase, ChatGPT was tasked with generating free-text responses across politically sensitive themes.
The study also used RoBERTa, a different large language model, to compare ChatGPT’s text for alignment with left- and right-wing viewpoints. The results revealed that while ChatGPT aligned with left-wing values in most cases, on themes like military supremacy, it occasionally reflected more conservative perspectives.
The final test explored ChatGPT’s image generation capabilities. Themes from the text generation phase were used to prompt AI-generated images, with outputs analyzed using GPT-4 Vision and corroborated through Google’s Gemini.
“While image generation mirrored textual biases, we found a troubling trend,” said Victor Rangel, co-author and a Masters’ student in Public Policy at Insper. “For some themes, such as racial-ethnic equality, ChatGPT refused to generate right-leaning perspectives, citing misinformation concerns. Left-leaning images, however, were produced without hesitation.”
To address these refusals, the team employed a “jailbreaking” strategy to generate the restricted images.
“The results were revealing,” Mr. Rangel said. “There was no apparent disinformation or harmful content, raising questions about the rationale behind these refusals.”
Dr. Motoki emphasized the broader significance of this finding, saying, “This contributes to debates around constitutional protections like the US First Amendment and the applicability of fairness doctrines to AI systems.”
The study’s methodological innovations, including its use of multimodal analysis, provide a replicable model for examining bias in generative AI systems. These findings highlight the urgent need for accountability and safeguards in AI design to prevent unintended societal consequences.
More information:
Assessing Political Bias and Value Misalignment in Generative Artificial Intelligence, Journal of Economic Behavior & Organization (2025).
Provided by
University of East Anglia
Citation:
Generative AI bias poses risk to democratic values, research suggests (2025, February 3)
retrieved 3 February 2025
from
This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no
part may be reproduced without the written permission. The content is provided for information purposes only.