ChatGPT and other AI tools will soon be free for all Cal State students, faculty and staff, according to a system-wide email from the Cal State Office of the Chancellor on Monday.
According to the announcement, OpenAI will collaborate with the Cal State campuses to deploy ChatGPT Edu, a version of ChatGPT built on the latest GPT-4o model and marketed as offering enhanced security and controls for educational institutions.
The announcement sparked discourse among faculty, which the California Faculty Association addressed in an email sent on Wednesday.
“Faculty should have the power to decide how and whether to use these tools and should not be subject to repercussions for using AI in responsible ways, nor for refusing to use it,” the email read.
Patrick Lin, director of Cal Poly’s Ethics + Emerging Sciences Group and a philosophy professor, raised concerns about Cal State’s shared governance policies, which require formal faculty consultation ahead of decision-making by the Chancellor’s Office. The faculty was not consulted about the initiative, Lin said.
“At Cal Poly, we’re internationally known for our work in AI and other technology ethics, so it’s a bit mind-boggling why the CSU wouldn’t even consult with their own in-house experts for free,” Lin said.
The faculty association said it hopes to meet with Cal State management to discuss the initiative's potential impacts on faculty, according to its email.
Cal State Spokesperson Hazel Kelly told Mustang News in an email Thursday that the system plans to work closely with faculty in this new initiative.
Foaad Khosmood, research director at Cal Poly’s Institute for Advanced Technology and Public Policy, said he felt blindsided by the announcement.
Leo Horwitz, Computer Science and Artificial Intelligence Club president and a computer science senior, said the initiative is promising for productivity and student success. However, Horwitz believes maintaining academic integrity and privacy should be Cal State’s main priorities.
“We hope that the CSU will help students and staff learn how and when AI tools can be useful while maintaining academic integrity so that they can be used to their fullest extent,” Horwitz said.
Horwitz explained that large language models (LLMs), a type of AI that can understand and generate human-like text, typically exclude user data from training the larger model when deployed for a specific enterprise like this one.
“We hope the CSU guarantees at least that privacy, at minimum, for its students,” he added.
Khosmood also voiced concerns over data privacy.
“Half a million CSU individuals, faculty and students can be providing lots of data to these companies,” Khosmood said. “We want to be sure that there are at least good agreements that safeguard the privacy and the ethics of the situation.”
Ethical concerns stem from the potential misuse of student data, Lin explained.
“For all the talk about the importance of securing data at Cal Poly and every other modern-day organization, using ChatGPT and other LLMs run[s] entirely counter to that,” Lin said.
He went on to say that Cal State using AI vendors is risky, as Cal State might not know these companies’ true motives.
“It’s not just philanthropy since they’re for-profit companies,” Lin said. “It could involve building profiles of students that can be monetized or otherwise exploited later.”
Accompanying this program is a newly established CSU AI Workforce Acceleration Board, which will identify uses for AI in the workplace, according to the press release. Board members include representatives from Amazon, LinkedIn and other tech companies involved in the development.
However, Lin expressed concern that AI vendors are the only industry voices on the board.
“This is a clear conflict of interest when it comes to a highly contentious domain like LLMs in education,” Lin said.
This is a developing story; Mustang News will update it as more information becomes available.