AI Alignment Fellow

Washington, DC
Temporary
Entry Level

About National Fair Housing Alliance 

The National Fair Housing Alliance (NFHA) leads the fair housing movement and is the nation's only national organization exclusively dedicated to eliminating all forms of housing discrimination and ensuring equitable housing opportunities for all people and communities. We have a diverse, experienced, mission-driven, and impactful team that has developed equity-based policies at the federal, state, and local levels to expand fair housing opportunities; brought precedent-setting litigation to eliminate some of the most heinous forms of housing discrimination; conducted groundbreaking research to promote equitable solutions; and invested millions of dollars in underserved communities. We have solid relationships, built on trust, with national, regional, and local organizations, and we effectively draw upon these connections to reach vital goals. We are game changers that millions of people rely upon to advance fair housing. 

Where you live matters. It affects every aspect of your life and determines whether you have access to the options and opportunities we all need to thrive. Yet despite important existing federal laws, more than 4 million acts of housing discrimination occur in the U.S. each year, and housing inequality remains stubbornly entrenched. That is why—through its education and outreach, member services, public policy, advocacy, housing and community development, responsible AI, enforcement, and consulting and compliance programs—NFHA is dismantling longstanding barriers to equity, rooting out bias, and building diverse, inclusive, well-resourced communities.

Position Summary

The AI Alignment Fellow will advance original mathematical and computational research at the intersection of algorithmic fairness, civil rights law, and AI governance. This fellowship directly addresses a foundational challenge in responsible machine learning: the structural tension between individual fairness and group fairness in automated decision-making systems. Situated within NFHA's Responsible AI Lab, the Fellow will contribute to the investigation of a unified theoretical framework — grounded in Lipschitz continuity constraints and distributional parity conditions — that characterizes when individualized least discriminatory alternatives (iLDA) imply or are implied by group-level fairness guarantees, and derives tight bounds on the mappings between these two regimes. 
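
For orientation, the two regimes named above are standardly formalized along the following lines (a textbook-style sketch; the symbols are illustrative and do not represent the Lab's own notation):

```latex
% Individual fairness (Lipschitz condition on a mechanism M with task metric d,
% where D is a distance on output distributions):
\[
  D\bigl(M(x), M(y)\bigr) \;\le\; L \cdot d(x, y)
  \qquad \text{for all individuals } x, y.
\]
% Group fairness (demographic parity up to tolerance \varepsilon,
% for protected groups A and B):
\[
  \bigl|\, \Pr[M(x) = 1 \mid x \in A] \;-\; \Pr[M(x) = 1 \mid x \in B] \,\bigr|
  \;\le\; \varepsilon .
\]
```

The research question then asks, roughly, under which metrics, distributions, and constants a Lipschitz guarantee forces a small parity tolerance, and vice versa.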

The Fellow will leverage large language models and AI-assisted research tools to accelerate formal mathematical inquiry, conduct comparative legal and policy analysis, and translate technical findings into accessible policy recommendations for civil rights practitioners, regulators, and technology developers. This is an intellectually ambitious role for a researcher who combines rigorous quantitative training with a commitment to computational justice. The Fellow will work in close collaboration with other teams, including Legal and Public Policy teams, and will be expected to contribute to peer-reviewed scholarship, public-facing technical reports, and stakeholder engagement activities that advance NFHA’s mission of eliminating housing discrimination through responsible AI oversight. 

The Fellow will report to the Chief AI Officer at NFHA and work full-time for a period of eight (8) weeks. The Fellow is expected to work in our DC office on Pennsylvania Avenue on Mondays and Thursdays and may work remotely the remaining days of each week.

Essential Job Functions 

Consultation

  • Collaborate with NFHA program staff, legal counsel, and external civil rights partners to identify high-priority algorithmic fairness problems in housing, lending, and related domains where the individual/group fairness tension has material legal or policy implications. 

  • Advise internal teams and external stakeholders on the technical implications of competing fairness definitions, translating mathematical distinctions — such as the difference between demographic parity and equalized odds — into operationally relevant guidance for non-technical audiences. 

  • Engage with peer researchers, policy advocates, and regulatory bodies to represent the Responsible AI Lab’s research agenda, including participation in working groups, expert panels, and interagency consultations on AI accountability standards. 

  • Support the Chief AI Officer in providing technical input to organizational partners deploying or auditing algorithmic systems in high-stakes civil rights contexts, including reviewing fairness audits and offering structured recommendations grounded in the Lab’s research findings. 

AI-Driven Mathematical Research  

  • Employ AI-assisted research environments — including large language models, automated proof assistants, and symbolic computation tools — to investigate the conditions under which the set of individually fair mechanisms is a subset of, equivalent to, or disjoint from the set of group-fair mechanisms, and to derive or verify formal proofs of these inclusion relationships. 

  • Develop and analyze tight functional bounds for the mappings that translate between individual fairness Lipschitz parameters and group fairness tolerance parameters across a range of metric space geometries, distributional assumptions, and subgroup family structures. 

  • Investigate extensions of the core research problem to multi-attribute protected classes, intersectional subgroup families, and randomized mechanism settings, using AI tools to explore the combinatorial complexity of these configurations and identify tractable boundary conditions. 
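
As a minimal illustration of the individual/group tension these research tasks target (a hypothetical sketch only; the scorer, data, and constants below are invented for exposition): a scoring rule can satisfy a Lipschitz individual-fairness condition exactly while still exhibiting a nonzero demographic parity gap whenever the groups' feature distributions differ.

```python
# Illustrative sketch: checking individual fairness (Lipschitz) and
# demographic parity (group fairness) for a toy linear scoring rule.
# All data and parameters here are hypothetical.

def score(x):
    """Toy score: a 0.5-Lipschitz function of a single feature."""
    return 0.5 * x  # |score(x) - score(y)| = 0.5 * |x - y|

def is_individually_fair(pairs, L):
    """Check |score(x) - score(y)| <= L * |x - y| on sampled pairs."""
    return all(abs(score(x) - score(y)) <= L * abs(x - y) for x, y in pairs)

def demographic_parity_gap(group_a, group_b):
    """Absolute difference in mean score between two groups."""
    def mean_score(xs):
        return sum(score(x) for x in xs) / len(xs)
    return abs(mean_score(group_a) - mean_score(group_b))

# Synthetic applicants from two groups with shifted feature distributions.
group_a = [1.0, 2.0, 3.0]
group_b = [2.0, 3.0, 4.0]
pairs = [(x, y) for x in group_a for y in group_b]

print(is_individually_fair(pairs, L=0.5))        # True: 0.5-Lipschitz holds
print(demographic_parity_gap(group_a, group_b))  # 0.5: nonzero parity gap
```

The sketch makes the tension concrete: an exactly individually fair scorer can still produce a group-level gap, so bounding one regime in terms of the other requires assumptions on the metric and the feature distributions.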

AI-Driven Law and Policy Research

  • Use AI-assisted legal research tools to systematically analyze how U.S. anti-discrimination law — including Fair Housing Act disparate impact doctrine, Equal Credit Opportunity Act standards, and Title VII jurisprudence — maps onto individual versus group fairness paradigms, identifying doctrinal gaps where current legal frameworks fail to account for the mathematical tension addressed in the research problem. 

  • Conduct comparative policy analysis across domestic and international AI governance frameworks — including the EU AI Act, CFPB guidance on algorithmic lending, and emerging federal AI risk management standards — to assess how fairness definition choices are operationalized in regulatory compliance requirements and what implications formal mathematical findings carry for those frameworks. 

  • Develop policy translation documents that render formal research findings in terms directly usable by civil rights enforcement agencies, fair lending compliance officers, and legislative staff engaged in algorithmic accountability rulemaking. 

  • Monitor and synthesize emerging litigation, regulatory enforcement actions, and legislative developments related to algorithmic discrimination, using AI tools to maintain a structured knowledge base that informs both the Lab’s research agenda and NFHA’s advocacy and enforcement activities. 

Documentation & Communication

  • Co-author a peer-reviewed paper, a technical report, and a white paper documenting research findings on the structural relationship between individual and group fairness, ensuring that all written products meet the standards of the algorithmic fairness, machine learning, and legal scholarship literatures as appropriate. 

  • Produce accessible summaries, policy briefs, and public-facing communications that translate core mathematical findings for civil rights advocates, journalists, and policymakers, ensuring NFHA’s research is legible and actionable for audiences without formal mathematical training. 

  • Maintain rigorous documentation of AI-assisted research workflows — including prompting strategies, tool configurations, verification protocols, and reproducibility procedures — to support research integrity standards and to contribute to the Lab’s developing best practices for responsible use of AI in formal mathematical inquiry. 

  • Present research findings at academic conferences, practitioner convenings, and policy forums, representing the Responsible AI Lab’s work to diverse audiences and contributing to the public discourse on computational justice, fair lending, and algorithmic accountability. 

Qualifications and Competencies 

  • Doctoral degree in progress or conferred in Physics, Mathematics, Statistics, Computer Science, Operations Research, or a closely related quantitative discipline; candidates with a master’s degree and demonstrated research experience in algorithmic fairness or formal machine learning theory will be considered. 

  • Demonstrated familiarity with algorithmic fairness literature, including working knowledge of group fairness criteria (demographic parity, equalized odds, calibration) and individual fairness formulations grounded in metric spaces and Lipschitz continuity. 

  • Prior exposure to or coursework in U.S. civil rights law, anti-discrimination frameworks, or AI governance policy is strongly preferred; experience working in or with civil rights organizations, regulatory bodies, or public interest technology contexts is a significant asset. 

  • Evidence of research productivity appropriate to career stage, such as peer-reviewed publications, conference presentations, thesis work, or technical reports demonstrating the ability to produce original scholarship at the intersection of formal mathematical analysis and applied sociotechnical problems. 

  • Proficiency in formal mathematical reasoning and proof construction, including measure theory, probability theory, metric space topology, and optimization, with the capacity to formalize and rigorously analyze fairness constraints of the type specified in the research problem statement. 

  • Practical experience with AI-assisted research tools, including large language models used for literature synthesis, proof exploration, and code generation, as well as familiarity with symbolic computation environments such as Mathematica, SageMath, or equivalent platforms. 

  • Programming proficiency in Python and/or R for statistical analysis, simulation, and implementation of algorithmic fairness methods; familiarity with fairness toolkits such as Fairlearn, AIF360, or equivalent libraries is preferred. 

  • Ability to use AI-driven legal and policy research tools to systematically analyze regulatory texts, case law, and governance frameworks, and to synthesize findings across legal and technical literatures in an intellectually rigorous and citable manner. 

  • Exceptional written and oral communication skills, including the demonstrated ability to translate highly technical mathematical content into clear, accurate, and compelling language for legal, policy, and advocacy audiences without sacrificing analytical precision. 

  • Strong collaborative orientation and interpersonal effectiveness in cross-disciplinary environments, with the capacity to work productively alongside legal staff, civil rights advocates, data scientists, and senior organizational leadership in pursuit of shared research and mission objectives. 

  • Intellectual humility and rigorous epistemic standards, including a commitment to acknowledging the limits of AI-assisted research outputs, verifying formally derived results through independent analysis, and maintaining transparent documentation of methods and assumptions. 

  • Deep personal commitment to equity, civil rights, and computational justice, with the professional maturity to engage responsibly with research that has direct implications for the protection of legally and historically marginalized communities in automated decision-making contexts. 

Compensation and Benefits 

The compensation for this role is $10,000 for the duration of the 8-week fellowship.

This fellowship does not include any additional benefits or leave accrual.

How to Apply 

Interested applicants must submit a resume and cover letter. Applications will be accepted until the position is filled. Please, no phone calls. Incomplete applications will not be considered. 

The earliest start date for this position is Monday, July 6, 2026.

Affirmative Action / Equal Employment Opportunity Statement 

NFHA values and encourages diversity in its workforce. NFHA supports affirmative action and is dedicated to promoting equal employment opportunities. NFHA does not discriminate on the basis of race, color, religion, national origin, ancestry, citizenship, sex, age, marital status, personal appearance, sexual orientation, family responsibilities, disability, matriculation, political affiliation, or any other category or characteristic protected by the laws of the United States or the District of Columbia.
