HRF is seeking proposals from researchers, engineers, and organizations building AI tools that expand human rights and put individuals — not corporations or governments — in control of machine intelligence.
For decades, HRF has supported freedom technology — from Bitcoin wallets to censorship-resistant communications. Now we're expanding to fund AI tools built for dissidents, activists, and individuals who need sovereign, private, and powerful intelligence without depending on governments or corporate platforms. Around the world, dictators increasingly weaponize AI. We want AI in the hands of those fighting back.
We encourage applications of all sizes. Don't let scale stop you from applying — if your idea needs more, tell us.
For individuals, small teams, and early-stage projects. Ideal for proof-of-concept work, research, and tools for specific activist communities.
For established teams with a track record. Ideal for scaling proven tools, infrastructure projects, and multi-country rollouts.
For transformative ideas. We can't fund everything at this scale, but we want to hear your vision — and if we can't fund it, we may help you find someone who will.
We're looking for projects that push the frontier of sovereign, open, and private AI — especially for people who cannot afford to be surveilled, censored, or locked out.
Research and tooling enabling powerful open-source AI models to run fully on-device — on smartphones, laptops, and low-cost hardware — without cloud connectivity, censorship, or data leakage. A dissident in Tehran with an iPhone should be able to run AI that rivals frontier corporate platforms.
For users who must rely on cloud AI, we fund research into privacy-preserving inference — secure enclaves, trusted execution environments (TEEs), homomorphic encryption, and other techniques that let people use powerful models without exposing their queries, identities, or data.
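To make the homomorphic-encryption idea concrete (this is an illustration of the general technique, not any specific grantee's stack), here is a toy Paillier cryptosystem: because Paillier is additively homomorphic, a server can add two encrypted values without ever seeing the plaintexts. The primes are demo-sized for readability; real privacy-preserving inference uses audited libraries and far larger keys, never hand-rolled crypto.

```python
# Toy Paillier cryptosystem: additively homomorphic encryption.
# Demo-sized primes for readability only -- real systems use >=2048-bit
# keys and vetted libraries.
import math
import secrets

def keygen(p=1_000_003, q=1_000_033):      # p, q: small demo primes
    n = p * q
    lam = math.lcm(p - 1, q - 1)           # Carmichael function of n
    mu = pow(lam, -1, n)                   # valid because g = n + 1 below
    return (n, n + 1), (lam, mu)           # public (n, g), private (lam, mu)

def encrypt(pub, m):
    n, g = pub
    n2 = n * n
    while True:                            # random blinding factor, coprime to n
        r = secrets.randbelow(n - 1) + 1
        if math.gcd(r, n) == 1:
            break
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(pub, priv, c):
    n, _ = pub
    lam, mu = priv
    # L(x) = (x - 1) // n, applied to c^lam mod n^2
    return ((pow(c, lam, n * n) - 1) // n * mu) % n

pub, priv = keygen()
c1, c2 = encrypt(pub, 42), encrypt(pub, 58)
# Multiplying ciphertexts adds the plaintexts: the server computes
# on data it cannot read.
assert decrypt(pub, priv, (c1 * c2) % (pub[0] ** 2)) == 100
```

The same principle — computing on data the operator cannot read — underlies the enclave and TEE approaches above, which achieve it with hardware isolation rather than pure cryptography.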
Many at-risk activists lack access to affordable, trustworthy compute. We fund community-owned AI servers — for example, a $50K grant to buy GPU hardware shared among civil society groups across a region — and projects that dramatically reduce the cost of sovereign AI tools.
Agent frameworks that help activists automate research, translation, communication, legal documentation, and digital security tasks — without routing sensitive information through corporate surveillance infrastructure. Priority goes to tools that work offline-first or over censorship-resistant networks.
Bitcoin, nostr, e-cash (Cashu/Fedimint), and BitChat are natural complements to sovereign AI. We fund projects integrating AI with these censorship-resistant, permissionless protocols — enabling pseudonymous AI access, AI-powered nostr clients, AI-assisted Bitcoin transactions, and more.
Autocrats already use AI for facial recognition, predictive policing, and mass surveillance. We fund tools that detect, counter, or help dissidents evade AI-powered repression — including adversarial AI defenses, detection of AI-generated disinformation, and digital security guidance.
Human rights defenders who don't master AI tools will be left behind. We fund programs and platforms that aggressively train activists to use AI for their work — from vibe coding to agent workflows to privacy-preserving research. Special focus on high-risk environments with limited connectivity.
Rigorous, independent research exposing how authoritarian regimes — CCP, Russia, Iran, Saudi Arabia — use AI to surveil, censor, predict dissent, and oppress minorities. Includes investigative journalism, technical audits of surveillance systems, and policy work countering authoritarian AI exports.
Today, even the best open-source setup still typically routes through a corporate LLM. We believe that's a transitional state, not a permanent one. Within 12–18 months, a human rights defender should be able to run a world-class AI entirely on their own hardware — no corporate intermediary, no surveillance risk.
HRF is committed to funding the engineers, researchers, and builders who can get us there.
A sample of projects HRF's AI for Individual Rights program has supported. Grantees may keep their status private for safety.
A fully open-source agentic coding platform. Unlike proprietary agents, OpenCode can be run entirely locally — allowing users to inspect code, avoid surveillance, and build software without routing sensitive work through corporate infrastructure. Best-in-class for dissidents locked out of corporate tools. opencode.ai →

An open-source, end-to-end encrypted AI assistant built by OpenSecret. Maple uses secure enclaves and confidential computing so that activists in authoritarian environments can use LLMs without risking sensitive data being scanned, stored, or handed to governments. Zero data retention. trymaple.ai →

A decentralized LLM routing marketplace built on the Nostr protocol. Routstr enables pseudonymous, uncensorable access to AI systems — bypassing government and corporate blocks. Pay with Bitcoin or e-cash. No account required. Built for the people most at risk from surveillance. routstr.com →

An education platform launching an AI development course specifically for beginners in repressive environments. The course focuses on deploying open-source, privacy-preserving tools — enabling frontline activists without technical backgrounds to build their own AI-powered tools. plebdevs.com →

The Centre for Applied Nonviolent Action and Strategies is developing GENE (Global Education for Nonviolent Engagement) — an AI platform trained on decades of frontline organizing data. GENE helps activists plan campaigns and respond to crises using the world's largest database of nonviolent resistance. canvasopedia.org →

Conducts rigorous research on how the Chinese regime weaponizes AI for digital dictatorship and exports surveillance tools globally. Also identifies open-source alternatives that help Chinese dissidents and civil society groups operate securely under heavy surveillance. citizenpowerinitiatives.org →

We review applications on a rolling basis. Strong applications are specific, technically grounded, and clearly connected to the sovereignty and safety of real individuals in closed societies.
Complete the online form with project summary, budget, team background, and expected impact. Rolling intake — no fixed deadline.
HRF's AI for Individual Rights team reviews submissions within 4–6 weeks for mission alignment and technical feasibility.
Shortlisted applicants may be invited for a call. We review technical approach, team credibility, and open-source commitments in depth.
Awards can be denominated in USD or Bitcoin (sats). Milestone-based disbursement is standard for larger grants. Grantees may keep status private for safety.
If you're building AI tools that give individuals — especially those under tyranny — more freedom, privacy, and control, we want to hear from you.
Rolling applications · Grants in USD or Bitcoin · Questions: ai@hrf.org