Leveraging the Power of AI to Broker Evidence Use and Challenge Compliance-Driven Decision-Making

Amy, a curriculum director in a mid-sized district, stared at her computer screen in frustration. Her team had identified the perfect evidence-based intervention (EBI) for their struggling mathematics students, backed by rigorous research and aligned with their student demographics. But it wasn’t on the What Works Clearinghouse list. “We can’t risk non-compliance,” her supervisor said. “Stick to the approved list.” Sound familiar? If you’re a school improvement support provider working in today’s districts, you’ve probably encountered this scenario: rich research evidence that could transform student outcomes, but trapped behind a compliance-driven culture enabled by the state and local policies created to enforce the Every Student Succeeds Act. 

Here’s how we partnered with districts like Amy’s to use AI as a knowledge brokering tool to break through barriers that keep high-quality, relevant evidence from improving students’ lives. 

The Challenge: Compliance-Driven Stifling of Evidence Use

Through deep partnership with local school districts, our research team discovered a troubling pattern. The Every Student Succeeds Act (ESSA), designed to increase evidence use by district and school leaders, has created an unintended consequence: a compliance-driven culture limiting the use of research evidence in school improvement planning. Under ESSA, districts must comply with state policies to develop school improvement plans that identify EBIs and strategies to address needs for Targeted Support and Improvement (TSI) and Comprehensive Support and Improvement (CSI) schools. CSI schools are those identified as lowest-performing overall, while TSI schools are those with consistently low-performing student subgroups. 

During our project, we found that districts have different understandings of what the state considers acceptable EBIs under ESSA. As a result, districts often play it safe and limit their selection of EBIs to the What Works Clearinghouse (WWC) and Evidence for ESSA repositories. Doing so excludes hundreds of well-researched EBIs and leaves no flexibility to identify EBIs closely aligned to the local context and demographics. Some districts were aware that they could identify EBIs beyond those two repositories but would need documentation to support their selection. Both the legwork of identifying novel EBIs and the work of interpreting related research to defend those selections are time-consuming drains on resources when districts are operating under budget cuts and tight deadlines. 

We challenged this compliance-driven culture by using two specific AI tools to identify and translate research, increasing and customizing the use of EBIs in school improvement planning. To do this, we engaged deeply with participating school districts to build an understanding of their school improvement planning process and the challenges within it that AI-enabled tools could address. Identified challenges included a limited body of EBIs to choose from, based on the district’s understanding of state policy, and a concern that those EBIs were not the best fit for their student populations. Furthermore, recent cuts in funding for education research raised additional questions about central evidence repositories, like the WWC, and whether those resources would remain available for district and school use. 

AI to Advance Knowledge Brokering

After extensive collaboration with district partners, we developed a practical workflow using two key AI tools. Many similar AI tools exist; we selected these two after piloting them in a graduate-level course for aspiring principals, in which we randomly assigned students to use the tools as “research assistants.” The aspiring principals reported that the tools were easy to use and useful for their future roles as education leaders. The first tool is Consensus, a web-based, AI-powered application that can intuitively search for and synthesize research in practitioner-friendly ways. The second is Humata, which extracts information from uploaded files and accepts far more files than generic AI tools like ChatGPT. Unlike tools that blend uploaded content with their pretraining knowledge, Humata answers only from the uploaded files, meaning, in our project, only from the uploaded research articles. 

With these tools, we developed a three-step workflow:

  • Identify: Use Consensus to find EBIs beyond traditional clearinghouses
  • Analyze: Upload relevant research to Humata for targeted questioning
  • Document: Generate an evidence trail to support EBI selection for compliance

Through this workflow, the AI tools increase efficiency for knowledge brokers by identifying and absorbing a larger body of research in less time. The knowledge brokers, that is, the district and school leaders who engage with research to inform policy and practice, can pull a much larger pool of candidate EBIs. Rather than combing through each one to assess alignment with their schools’ needs and populations, they can ask Humata very specific questions and receive answers quoted directly from the articles, along with page-numbered references for returning to the source for further understanding. With this more expansive body of research and synthesis, the knowledge broker should be better equipped to make critically informed decisions.

The Value of Human Judgment & Other Ethical Considerations

“We still need that human touch in everything. We don’t want anyone to think it’s a replacement for what they’re doing, especially at the administrative level.” – District Supervisor of IT

In our trainings and discussions, AI versus human judgment was a common theme. To be clear, these AI tools will never replace the human judgment needed to evaluate an EBI and decide which research will advance educational goals. Rather, the process helps humans make research-informed decisions and, we hope, produce stronger plans, informed by additional knowledge and thinking about closing inequities in education. The research that Consensus identifies still needs a human to review it and to upload only the relevant articles to Humata. A strong human touch is also needed to review Humata’s outputs and to decide which EBIs might be most impactful for a school, based on individual knowledge of the school community. 

As the experts on their schools, school and district leaders must be part of designing and implementing AI-enabled tools in research use. Their insights and understandings should ultimately inform decisions, with the AI tools acting as resources to support that process. This requires more than training on a particular tool: it means building AI literacy, designing guardrails that support the ethical use of AI-enabled tools, and iterating on their use in practice, all of which depends on investing in long-term partnerships between researchers and school districts. Additionally, researchers are only beginning to learn about the effects of AI on the environment; before scaling AI use, leaders should weigh the benefits against drawbacks such as environmental damage and labor market effects. 

The Shift from Compliance to Novel Research Use

By making knowledge more easily accessible, the AI tools push against the compliance-driven culture created by those who enforce ESSA, encouraging engagement with a more expansive body of research and the application of that knowledge to practices that center equity. We begin to move from checking the box on EBIs to thinking more critically about what is possible, because that knowledge is now at our fingertips: it no longer requires deep time investments, extensive research training, or more than minimal cost. However, this project has shown that it is not the AI tools alone that transform compliance culture; human engagement matters just as much. Through this project, we brought together state DOE and district leadership to walk through our process of using AI-enabled tools to identify EBIs. This was pivotal because it moved the conversation beyond compliance to a deeper understanding of the constraints and possibilities of research use in policy contexts. The project workshops also created space for district leaders to challenge each other’s thinking with new ideas and learnings, and in doing so, mindsets began to shift away from compliance culture and toward novel ways of applying research to practice. 

Moving Forward

This work emphasizes that knowledge brokers must navigate complex tensions between innovation and compliance, and between efficiency and human judgment. Our experience with AI tools to expand access to evidence offers one pathway, but it requires careful attention to relationships, capacity building, and ethical implementation. 

As knowledge brokering continues to develop as a recognized practice, documenting these experiments—both successes and challenges—becomes increasingly important. The field benefits when practitioners share their approaches to overcoming systemic barriers that hinder evidence use, especially those impacting historically marginalized communities.

The tools we’ve described are merely starting points. The real challenge lies in developing the partnerships, processes, and professional capacity that ensure technology enhances rather than replaces the relational aspects of knowledge brokering that foster lasting change.

About the Authors

  • Elizabeth Davis is a Postdoctoral Fellow at George Mason University. She uses mixed methods and participatory approaches to research to examine issues of equity, particularly around educational opportunity for immigrant-origin students. Her research has been published in numerous peer-reviewed journals and funded by the Spencer Foundation, William & Flora Hewlett Foundation, and the American Educational Research Association. Dr. Davis can be reached at edavis32@gmu.edu.

  • Seth B. Hunter is an Associate Professor of Education at George Mason University. He applies rigorous computational and mixed methods to examine educator and organizational effectiveness, human-machine partnerships, and research use by practitioners and policymakers. Dr. Hunter’s work is regularly published in top-tier education journals, and the Hewlett Foundation, EBSCO, and the Gates Foundation have funded his research. Dr. Hunter can be reached at shunte@gmu.edu.