The nonprofit Opportunity Labs issued a step-by-step framework last month to help ed-tech funders determine which tools based on generative artificial intelligence are worth the investment.
The 15-page document offers nine baseline standards for student safety and potential usefulness, intended to guide not only investment decisions but also product development and school procurement, according to Andrew Buher, founder and managing director of Opportunity Labs. He added that the goal is to move funding, development and demand toward AI tools that safeguard students while still allowing them to explore and perhaps benefit from emerging technology.
“I think we are always looking for middle ground that is practical and pragmatic but responsible and doesn’t put kids at risk, either from a well-being or learning perspective,” he said.
The framework was developed by a team of more than 50 educators, policymakers, developers and researchers during four roundtable discussions, according to Buher, who is the former chief operating officer for the New York City Department of Education and a lecturer in the Princeton School of Public and International Affairs.
“We heard again and again in these roundtables we convened that the product needs to solve a well-defined problem that kids or educators actually face, and then that product or intervention needs to be created with input from the people that will use it,” Buher said. “And that just doesn’t happen as often as it should.”
Those common discussion threads became the first two standards, or “strategic investment principles,” of the framework. The next two focus on whether the tool is based on a proven theory of learning, and whether the data used to train the AI is reliable, relevant and free of errors and bias.
“It’s not only about asking where the data is coming from, but it’s asking about whether your team has the capacity and expertise and necessary infrastructure to collect, if you’re collecting it, but more importantly to evaluate the data required to actually train the AI,” Buher said. “We don’t actually know that those teams exist consistently and are doing that with a proven theory of learning guiding how they’re training the model, for instance. And so I think that’s where you start to see some of the fractures in the development of these tools.”
The remaining principles in the framework call for investments in AI tools for education that allow for human oversight, are accessible to all users, protect student data privacy, come with quality professional support during and after implementation, and are backed by evidence that they will do what they intend to do.
The nine investment principles are:
- The tool is designed to solve a well-defined problem for students or educators.
- Potential users had input during development.
- It’s based on a theory of learning supported by research.
- It uses data that has been evaluated to ensure reliable results.
- It includes opportunities for human oversight.
- There is clear evidence to suggest it will achieve intended outcomes.
- All students could benefit from it.
- It addresses and reduces risks to safety and privacy.
- It comes with support to make implementation successful.
For the standard on evidence, the document states that “a detailed theory of change, small-scale pilots or case studies that provide preliminary positive outcomes will suffice until more rigorous evaluations are feasible.”
“I think the real problem is that the ecosystem of these products is so nascent that it is very hard to assess efficacy,” Buher said. “I think these principles are the precursors to get to the place where we’re able to evaluate for efficacy.”
To apply the framework, investors assign points for how well an AI tool adheres to each principle, using specific benchmarks and examples, then sum the scores to see whether the tool meets a set threshold for investment.
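As a rough illustration of that tallying process, the sketch below scores a hypothetical tool against the nine principles and checks the total against a cutoff. The framework itself does not prescribe a point scale or threshold; the 0-3 scale, the threshold value and the example scores here are assumptions for demonstration only.

```python
# Illustrative sketch only: assign a score per principle, sum the scores,
# and compare the total to an investment threshold. The 0-3 scale and the
# threshold of 18 are hypothetical, not taken from the framework document.

PRINCIPLES = [
    "Solves a well-defined problem for students or educators",
    "Potential users had input during development",
    "Based on a research-supported theory of learning",
    "Training data evaluated for reliable results",
    "Includes opportunities for human oversight",
    "Clear evidence it will achieve intended outcomes",
    "All students could benefit from it",
    "Addresses and reduces safety and privacy risks",
    "Comes with support for successful implementation",
]

def meets_threshold(scores: dict[str, int], threshold: int = 18) -> bool:
    """Sum per-principle scores (assumed 0-3 each) and compare to the threshold."""
    missing = [p for p in PRINCIPLES if p not in scores]
    if missing:
        raise ValueError(f"Missing scores for: {missing}")
    total = sum(scores[p] for p in PRINCIPLES)
    return total >= threshold

# Example: a tool rated mid-range (2 of 3) on every principle totals 18,
# which just clears the hypothetical threshold.
example_scores = {p: 2 for p in PRINCIPLES}
print(meets_threshold(example_scores))  # True
```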
The framework complements a document Opportunity Labs issued last summer, Buher said, called Procurement Benchmarks for AI in K-12 Education.