Request for Proposals · 2026

AI for Individual Rights Fund

HRF is seeking proposals from researchers, engineers, and organizations building AI tools that expand human rights and put individuals — not corporations or governments — in control of the intelligence they rely on.

Grant range: $10K–$1M+
Focus areas: 8
Countries reached: 170+
Applications: Rolling

For decades, HRF has supported freedom technology — from Bitcoin wallets to censorship-resistant communications. Now we're expanding to fund AI tools built for dissidents, activists, and individuals who need sovereign, private, and powerful intelligence without depending on governments or corporate platforms. Around the world, dictators increasingly weaponize AI. We want AI in the hands of those fighting back.

Pick Your Funding Tier

We encourage applications of all sizes. Don't let scale stop you from applying — if your idea needs more, tell us.

Starter Grant
$10K – $50K

For individuals, small teams, and early-stage projects. Ideal for proof-of-concept work, research, and tools for specific activist communities.

  • Individual researchers or developers
  • Early-stage open-source tools
  • Education programs or curricula
  • Localized activist tooling

Think Big Grant
$200K – $1M+

For transformative ideas. We can't fund everything at this scale, but we want to hear your vision — and if we can't fund it, we may help you find someone who will.

  • Foundational research projects
  • Open-source model development
  • Global sovereign AI infrastructure
  • Breakthrough privacy technology

8 Areas of Focus

We're looking for projects that push the frontier of sovereign, open, and private AI — especially for people who cannot afford to be surveilled, censored, or locked out.

01
Sovereign Edge AI
Local Models & On-Device AI

Research and tooling enabling powerful open-source AI models to run fully on-device — on smartphones, laptops, and low-cost hardware — without cloud connectivity, censorship, or data leakage. A dissident in Tehran with an iPhone should be able to run AI that rivals frontier corporate platforms.

On-device LLMs · Edge inference · Quantization · Mobile AI
02
Private Cloud AI
Secure Inference & Privacy Tech

For users who must rely on cloud AI, we fund research into privacy-preserving inference — secure enclaves, trusted execution environments (TEEs), homomorphic encryption, and other techniques that let people use powerful models without exposing their queries, identities, or data.

TEEs / Enclaves · Private inference · Zero-knowledge ML
03
Access & Compute
Distributed Compute for Activists

Many at-risk activists lack access to affordable, trustworthy compute. We fund community-owned AI servers — for example, a $50K grant to buy GPU hardware shared among civil society groups across a region — and projects that dramatically reduce the cost of sovereign AI tools.

Community servers · P2P compute · Africa / MENA / Asia
04
Open-Source Agents
AI Agents for Human Rights Work

Agent frameworks that help activists automate research, translation, communication, legal documentation, and digital security tasks — without routing sensitive information through corporate surveillance infrastructure. Priority: offline-first or censorship-resistant networks.

Open agents · Translation · Research automation
05
Freedom Tech Integration
AI × Bitcoin, Nostr & E-Cash

Bitcoin, nostr, e-cash (Cashu/Fedimint), and BitChat are natural complements to sovereign AI. We fund projects integrating AI with these censorship-resistant, permissionless protocols — enabling pseudonymous AI access, AI-powered nostr clients, AI-assisted Bitcoin transactions, and more.

Bitcoin + AI · Nostr AI clients · E-cash payments
06
Counter-Surveillance
AI Safety for Dissidents

Autocrats already use AI for facial recognition, predictive policing, and mass surveillance. We fund tools that detect, counter, or help dissidents evade AI-powered repression — including adversarial AI defenses, detection of AI-generated disinformation, and digital security guidance.

Surveillance evasion · Deepfake detection · Adversarial ML
07
Education
Capacity Building for Defenders

Human rights defenders who don't master AI tools will be left behind. We fund programs and platforms that aggressively train activists to use AI for their work — from vibe coding to agent workflows to privacy-preserving research. Special focus on high-risk environments with limited connectivity.

Training programs · Curricula · Offline-capable
08
Research
Documenting AI Repression

Rigorous, independent research exposing how authoritarian regimes — CCP, Russia, Iran, Saudi Arabia — use AI to surveil, censor, predict dissent, and oppress minorities. Includes investigative journalism, technical audits of surveillance systems, and policy work countering authoritarian AI exports.

China AI exports · Surveillance audits · Policy advocacy

Our North Star:
Fully Sovereign AI

Today, even the best open-source setup still typically routes through a corporate LLM. We believe that's a transitional state, not a permanent one. Within 12–18 months, a human rights defender should be able to run a world-class AI entirely on their own hardware — no corporate intermediary, no surveillance risk.

HRF is committed to funding the engineers, researchers, and builders who can get us there.

Featured Grantees

A sample of projects HRF's AI for Individual Rights program has supported. Grantees may keep their status private for safety.

Open-Source Agentic Coding
OpenCode

A fully open-source agentic coding platform. Unlike proprietary agents, OpenCode can be run entirely locally — allowing users to inspect code, avoid surveillance, and build software without routing sensitive work through corporate infrastructure. Best-in-class for dissidents locked out of corporate tools.

opencode.ai →
Private AI Assistant
Maple AI

An open-source, end-to-end encrypted AI assistant built by OpenSecret. Maple uses secure enclaves and confidential computing so that activists in authoritarian environments can use LLMs without risking sensitive data being scanned, stored, or handed to governments. Zero data retention.

trymaple.ai →
Decentralized AI Access
Routstr

A decentralized LLM routing marketplace built on the Nostr protocol. Routstr enables pseudonymous, uncensorable access to AI systems — bypassing government or corporate blocks. Pay with Bitcoin or e-cash. No account required. Built for the people most at risk from surveillance.

routstr.com →
Activist Education
PlebDevs

An education platform launching an AI development course specifically for beginners in repressive environments. Focuses on deploying open-source, privacy-preserving tools — enabling frontline activists to build their own AI-powered tools without technical backgrounds.

plebdevs.com →
AI + Nonviolent Strategy
CANVAS / GENE

The Centre for Applied Nonviolent Action and Strategies is developing GENE (Global Education for Nonviolent Engagement) — an AI platform trained on decades of frontline organizing data. Helps activists plan campaigns and respond to crises using the world's largest database of nonviolent resistance.

canvasopedia.org →
Research
Citizen Power Initiatives for China

Conducts rigorous research on how the Chinese regime weaponizes AI for digital dictatorship and exports surveillance tools globally. Also identifies open-source alternatives to help Chinese dissidents and civil society groups operate securely in a highly surveilled environment.

citizenpowerinitiatives.org →

Eligibility & Criteria

We Encourage Applications From

  • Independent researchers and engineers building open-source AI tools
  • Nonprofit organizations supporting human rights defenders
  • Small teams or individuals with a focused, concrete project scope
  • Projects that are open-source or commit to open-source release
  • Teams based in the Global South, MENA, Southeast Asia, and other high-impact regions
  • Projects integrating AI with freedom tech ecosystems (Bitcoin, nostr, e-cash)
  • Prior HRF grantees with strong track records

Out of Scope for This Fund

  • Proprietary, closed-source AI products with no path to openness
  • Projects primarily serving corporate interests or commercial surveillance
  • General AI ethics work with no direct connection to authoritarian repression
  • Projects requiring all user data to route through a single corporate cloud
  • AI safety research not connected to individual rights or authoritarianism
  • Large institutional grants where established Western universities are the primary beneficiary

Application Process

We review applications on a rolling basis. Strong applications are specific, technically grounded, and clearly connected to the sovereignty and safety of real individuals in closed societies.

01
Submit Proposal

Complete the online form with project summary, budget, team background, and expected impact. Rolling intake — no fixed deadline.

02
Initial Review

HRF's AI for Individual Rights team reviews submissions within 4–6 weeks for mission alignment and technical feasibility.

03
Due Diligence

Shortlisted applicants may be invited for a call. We review technical approach, team credibility, and open-source commitments in depth.

04
Grant Award

Awards can be denominated in USD or Bitcoin (sats). Milestone-based disbursement is standard for larger grants. Grantees may keep status private for safety.

Ready to Apply?

If you're building AI tools that give individuals — especially those under tyranny — more freedom, privacy, and control, we want to hear from you.

Apply for a Grant →
Learn About the Program →

Rolling applications · Grants in USD or Bitcoin · Questions: ai@hrf.org

Join the AI for Individual Rights Newsletter