Reimagining Grant Applications in the Age of GenAI

Apr 1, 2026

Who Asks the Question Determines the Answer 

Last month, Jean Westrick and Gozi Egbuonu of TAG and John Mohr, CIO of the MacArthur Foundation and Co-Director of the Philanthropy Data Commons (PDC) initiative, led a 60-minute interactive session at the 26th Nonprofit Technology Conference (NTC) called “Reimagining Grant Applications in the Age of GenAI.” That title was both a provocation and an invitation to consider the deeper roles data governance and AI play in how we envision solutions for common problems in philanthropy.

Participants described the familiar gauntlet of grant applications: multiple application cycles, rejections paired with vague feedback, relationships that had to be cultivated over years before a funder would seriously look at their work. Just as we learned during the London design studio last fall hosted by TAG, CAST (Centre for the Acceleration of Social Technology), and the Institute for Voluntary Action Research, the current system extracts enormous effort from the organizations least equipped to give it. Centering the experience of under-resourced nonprofits sets the ethical stakes for designing new processes in the era of AI. Simply adding AI on top of a system that is already too onerous and inequitable will never truly solve the core challenges.

[Photo: the three session presenters standing beside a presentation screen in front of three tables.]

Form-based applications are labor-intensive and tend to reward well-resourced organizations with dedicated grant writers, established relationships, and the administrative bandwidth to track deadlines and compliance requirements. With 92% of nonprofits operating on budgets under $1M, that leaves many organizations effectively excluded from grant opportunities. These applications generate enormous amounts of duplicative work, and they produce outputs that often tell funders less about an organization’s real impact than a single conversation might. AI-generated applications have the potential to remove some of the application burden; however, they may also introduce new issues, such as speculative applications and a reduction in open grant opportunities.

New AI tools are already showing up in all phases of the grant lifecycle. For example, Temelio, a GMS platform, uses AI to streamline the application process by helping nonprofit organizations draft and refine their submissions. The tool also reduces burdens for program officers by using GenAI to create recommendation summaries. Other tools hold clear benefits for funders: Grant Guardian, an AI tool developed by the McGovern Foundation and currently used by over 400 funders, improves financial due diligence by evaluating nonprofit financial data and generating a custom report. Finally, John Mohr shared how the PDC would enable secure, consent-based, bi-directional data sharing to reduce redundancies, improve transparency, and strengthen collaboration across philanthropy.

Transforming the Grantmaking Process

The question posed to the room afterward was “How might solutions like these transform your own processes?” The answers, brainstormed in small groups, ranged from automating repetitive data entry to using AI to surface patterns in past grant outcomes that could inform future strategy. However, the proposed solutions don’t resolve quite as neatly as we might like, especially when it comes to ethical concerns such as privacy, bias, transparency, trust, responsibility, and acceptable use.

Jean Westrick’s framing crystallized the central tension: Can AI disrupt grantmaking for the better, or does it risk reinforcing privilege? The honest answer is that it can do both, depending on how it’s deployed. An AI system trained on historical grant data will encode historical patterns — including patterns of who has been funded and who hasn’t. Without deliberate design choices and ongoing audit, efficiency gains for funders could compound disadvantages for applicants from communities that were already underrepresented in the data. 

Co-Designing the Future of Grant Applications 

What emerged from this discussion was an observation about power and design: whoever defines the question gets the biggest benefit. When foundations are the ones deciding what data to collect, what outcomes to measure, and what an “ideal” applicant looks like, AI systems will optimize for their priorities, not necessarily for the communities those grants are meant to serve. Therefore, if foundations want AI-assisted grantmaking to be genuinely equitable, they need to co-design the solutions with their nonprofit partners so that the benefits can be shared, and the new processes that emerge are fair, transparent and transformative. 

That leads directly to the session’s second animating principle: trust requires transparency. Funders asking nonprofits to share data through AI-enabled systems are asking for significant, detailed information about organizations, staff, communities, and outcomes that could be used in ways applicants never anticipated. John Mohr offered a practical bright line to hold funders accountable: if a funder cannot tell a grantee how their data is collected, where it is stored, and what it will be used for, they shouldn’t be collecting it. 

This session made the case that technology choices in grantmaking are equity choices: they can widen access or make processes less fair. Thoughtfully designed tools, deployed with transparency and accountability, could meaningfully reduce the barriers that keep well-led, community-embedded organizations from reaching the resources they need.

Problem definition is power. Three principles from the session are worth carrying forward as a practical test for any foundation considering AI tools: 

What challenge are you trying to solve (and who is informing the problem)? If you are not including your grantee partners, the system will optimize for the funder’s convenience and may inadvertently disadvantage applicants. Genuine partnership means involving grantees in shaping what gets asked, measured, and valued.

Are we being transparent enough to earn trust? Funders using AI-assisted processes need to be honest about how those tools are used and what role they play in decision-making.

Are we being good stewards of data we collect? If a funder can’t clearly explain how data is gathered, where it lives, and what it will be used for, that’s a signal to pause and rethink. 

About the Technology Association of Grantmakers

TAG is a 501(c)(3) non-profit membership organization that promotes the strategic, innovative, and equitable use of technology in philanthropy to solve problems and improve lives. With over 2000 members in 300 foundations throughout North America and beyond, TAG is the voice of technology in the philanthropic sector, providing technology professionals, tech funders, and “accidental techies” with knowledge, networks, mentoring, and educational opportunities.

Since 2008, the Technology Association of Grantmakers (TAG) has built a global community, conducted groundbreaking research, and become an advocate for investment in tech infrastructure throughout the charitable sector. For more information, visit tagtech.org.