If you are a journalist using artificial intelligence (AI) tools in your daily work, chances are you have used tools developed by companies that disclose little about their ownership, finances, and other critical data.
This conclusion emerges from a recent report published by the Media and Journalism Research Centre (MJRC), an independent think tank, investigating 100 companies behind the AI tools most commonly used and recommended by journalists.
According to the research, only 33 per cent of the companies are transparent about who owns and finances them, while the remaining 67 per cent fail to adequately disclose this information.
The study, led by Sydney Martin, assessed transparency against 12 criteria, such as company location, funding, and investor details. Only 24 of the 100 companies shared revenue details, and just 43 disclosed total funding.
“In the absence of this data, it is challenging to ascertain how an AI tool company is influenced by investors or stakeholders, its size, or the individuals or entities that can be held accountable for the tool,” says the report’s executive summary.
The report argues that this lack of transparency is a “significant indicator” of potential challenges, not only for the future of journalism but for the future of communications as well.
Bias in facts
The report highlights that consumers may not be aware of the source of AI-generated information that journalists present as factual. The tools producing it are predominantly owned and operated by private-sector entities.
“It is essential to understand who has a stake in these AI tool companies and how AI is being used by the media. This will ensure the protection of consumers, democracy, and truth,” reads the summary of the report.
The report further suggests that while some larger companies, such as Anthropic, the maker of Claude, demonstrated better transparency, questions remain about whether this is driven by genuine accountability or social expectations.
It also takes issue with disparities in AI tool ownership: most companies are based in the Global North, which further complicates the tools’ relevance for journalists in the Global South.
“The way in which AI tools work is often unsuitable culturally or in developmental terms to the Global South and it can obscure some of the truth of the experiences there,” Martin told the Reuters Institute for the Study of Journalism.
Transparency matters
Most companies examined in the study are small businesses or startups, with only 48 linked to publicly listed investors. Prominent investors include Y Combinator, involved in eight companies, and Google, in four.
Martin stresses the importance of transparency in funding, as financial backers can indirectly shape the biases and outputs of AI tools, impacting public perception and truth, particularly in fact-checking and research.
The report marks the start of a broader initiative to promote transparency in AI use within media and journalism, calling for longitudinal studies to track trends in ownership and investment.
Martin advocates for AI companies to voluntarily disclose key information, such as headquarters, investors, and accountability structures, stating that, “Making that information available is important and very simple.”
She also urges newsrooms to embrace a culture of transparency, ensuring journalists critically assess the origins, development, and potential biases of AI tools before integrating them into their work.
“Ask where the information is coming from and whether it is going to influence your journalism. What are the potential biases the AI could proliferate?” she told the Reuters Institute.
[Edited by Brian Maguire | Euractiv’s Advocacy Lab]