The State of the World Forum presents the first

AI × NHI Convergence Summit

An Inaugural Convening of AI Leaders
Advancing NHI Readiness


March 22, Online

Free registration


Exploring the Synergies Between
AI Alignment and Non-Human Intelligence Research

- Rethinking AI Alignment Through the Lens of Non-Human Intelligence.
- Translating AI Safety Insights into NHI Research Paradigms.
- Building a new playbook for tech and governance leaders in
the era of AGI liftoff and NHI disclosure.

Sunday, March 22, 2026
11 AM Pacific / 2 PM Eastern / 7 PM CET
via Zoom Webinars

Free registration

Executive Summary: Equipping Leaders with NHI-Readiness

  • Non-Human Intelligence (NHI) broadly denotes any sentient intelligence not of human origin. In U.S. government usage – particularly in recent National Defense Authorization Acts (2023–2025) – the term refers to extraterrestrial or other non-human lifeforms that might be behind unexplained aerial phenomena; for example, the FY2024 NDAA mandates the creation of a collection of records on UAP, “technologies of unknown origin, and non-human intelligence”.

    In AI alignment discussions, by contrast, NHI is used to describe advanced artificial intelligences that function as non-human minds, and experts even liken a sufficiently advanced AI to an “alien” intelligence created on Earth.

    Both contexts converge conceptually on the idea of intelligences beyond humanity, underscoring parallel questions about how to understand, align with, or manage such entities.

    Learn More:
    Article by Harvard astrophysicist Avi Loeb, “Will Contact With Non-Human Intelligence Involve Aliens or AI?”

  • The Non-Human Intelligence (NHI) paradigm treats future AI not just as engineered tools but as genuine new intelligences – akin to alien minds – representing a conceptual shift in alignment.

    For example, this Alignment Forum post by red-teaming expert Quentin Feuillade-Montixi explicitly advocates treating large language models as “alien minds”, reflecting the idea that today’s AI already behaves like “an alien intelligence” that is hard for humans to predict.

    This has practical implications: a true AGI might have no innate grasp of our values or laws. Dan Pupius argues that “a non-human intelligence like AGI has no inherent reason to recognize or respect” human social constructs, echoing Eliezer Yudkowsky’s warning that a superintelligence would view people with “cold objectivity” absent special programming. In fact, AI safety veteran Roman Yampolskiy bluntly notes that “we are creating this alien intelligence” without adequate safeguards.

    Public thinkers echo the theme: Yuval Noah Harari cautions that the “rise of unfathomable alien intelligence” could undermine democracy and society. Taken together, these perspectives suggest alignment must evolve to bridge the gap between human and machine ontologies, blending technical safeguards with new interdisciplinary insights so that truly novel machine minds can be understood and steered toward human-aligned goals.

  • NHI-Readiness is a crucial new capacity and leadership skill as we learn to live with intelligence we can’t fully “read.” As AI scales, it stops being merely artifice and becomes complex, non-human cognition in a human-made substrate: powerful, opaque, and consequential. This summit explores what it means to govern that reality on two fronts: AI liftoff into an NHI-adjacent domain, and a public-facing reckoning with NHI as a long-managed reality. It draws on alignment work such as mechanistic interpretability and responsible scaling to turn opacity into accountability, uncertainty into disciplined evidence, and acceleration into mature, humane oversight.

  • Beginning in 2026, we are entering a new period in which two developments once treated as separate are converging at scale:

    The rapid deployment of increasingly agentic, partially opaque AI whose behavior is outpacing existing mechanisms of control, and the long-standing secrecy surrounding non-human intelligence (NHI) as referenced in recent U.S. legislation, which has kept effective governance and containment elusive.

    In AI, we’re watching capabilities advance at extraordinary speed while adoption strains to keep up. With NHI, we’re watching presence grow while acceptance and understanding waver.

    What AI alignment research is showing us is the maturation of a genuinely non-human intelligence that can act consequentially with or without consciousness, intention, or human-like understanding.

    AI didn’t suddenly become strange — it crossed a threshold where our assumptions broke down. The same is true of NHI. Disclosure isn’t happening because something new arrived, but rather because our institutions, sensors, and technologies (now including AI) can no longer sustain older explanations.

    In both cases, the timing reflects a breakdown of interpretive frameworks and the need for a deeper understanding of non-human intelligence.

  • In the U.S. “disclosure” conversation, NHI (non-human intelligence) is the legislative label for a potentially sentient, intelligent non-human actor connected to some UAP cases—defined in the Schumer–Rounds UAP Disclosure Act language as “any sentient intelligent non-human lifeform…”—and disclosure is the push to move UAP/NHI-related records from scattered, compartmented holdings into a formal review-and-release pipeline.

    In popular media, the documentary The Age of Disclosure uses “disclosure” in this same straightforward sense—calling for greater openness and oversight based on interviews with current/former officials and advocates who argue the public should be informed about what the government knows regarding UAP and alleged NHI-related claims.

    For the AI × NHI initiative, the key point is that disclosure isn’t just “revealing secrets”; it’s building governance, documentation, and accountability mechanisms for engaging with non-human agents — and many AI-alignment practices (rigorous evaluation, transparency, chain-of-custody, oversight, and incentive design) are directly useful in making any NHI-facing process more reliable and socially legitimate.

    Final point: Disclosure also applies to AI. Frontier models consistently exhibit hard-to-explain behaviors that challenge our assumptions about control. If we want real alignment, we need plain-language honesty about these anomalies—what we observe, what we don’t understand yet, and what safeguards we’re building.

  • "I think some of the phenomena we’re seeing continues to be unexplained and might... constitute a different form of life."

    — John Brennan
    Director of the Central Intelligence Agency (2013 to 2017) | 12/16/2020 | Podcast Timestamp

  • “With the advent of AI, science is about to become much more exciting — and in some ways unrecognizable.”

    — Eric Schmidt, former Google CEO, from his MIT Technology Review essay (Semafor, Aug 16, 2023)

What We’re Convening

The AI × NHI Convergence Summit is convened to initiate a professional forum and network capable of addressing this shared frontier — with rigor, humility, and cross-silo collaboration.

AI alignment and NHI research converge on shared realizations:
Uncharted forms of intelligence express themselves through interfaces, under constraint, in ways that resist simple interpretation — and the hardest work now lies between disciplines, not within them.

Each field has reached a point where its most consequential questions no longer sit within a single discipline. At the frontier, the most valuable insight often comes from breakthrough translations that connect the dots between the two. 

We’re convening to initiate a professional network and ongoing forum for opportunity, insight, and growth. This is how new disciplines are born: not by declaring answers, but by convening the people who can ask the right questions together. The summit brings together:

    • AI alignment & safety researchers

    • AI engineers and system architects

    • intelligence / defense / policy analysts

    • scientists engaging UAP data seriously

    • institutional leaders navigating governance and disclosure

Together, this network aims to develop:

    • preparation and assessment for coming disruptions

    • a common vocabulary (without softening differences)

    • cross-field “translation maps” of concepts and risks

    • pathways for early-career professionals to contribute responsibly

  • The AI × NHI Strategic Initiative was founded by Georg Boch, Katie Hurley, and Deep Prasad.
    It was announced by the State of the World Forum on Dec 10 in this press release and featured during Day 2 of the State of the World Forum 2025 in the session
    “The AI/NHI Revolution: The Exponential Mirror of AI, Cosmic Disclosure as Human Reckoning,” with contributors including Ross Coulthart, Avi Loeb, Beatriz Villarroel, Jonathan Berte, Pippa Malmgren, Sarah Gamm, Anna Brady-Estevez, Bob Salas, Birdie Jaworski, and Jim Garrison.

“We are finding ourselves confronted by two non-human intelligences we don’t fully understand with multi-trillion-dollar consequences for governance, security, industry and society at a planetary level.
Across both, we lack a working theory of non-human intelligence and a shared way to assess risk, capability and intent.”

— Georg Boch, Summit Convener & Founder of the AI × NHI Convergence Strategic Initiative

Summit Speakers

  • Deeptanshu (Deep) Prasad

    CEO, StarVasa.space & Founder of UAP Hackathon SF

    Deep Prasad is the CEO of StarVasa and was named one of Toronto’s Top 20 Under 20 in 2015. In April 2025, he initiated the world’s first UAP Hackathon. Previously, he led his quantum computing company GenMat through its acquisition by Comstock.

  • Katie Hurley

    Founder, www.blckswn.com

    Katie Hurley is deeply immersed in artificial intelligence. She helped launch Salesforce’s AI platform, Einstein, in 2016, along with the company’s AI research arm and ethical AI practice. She has since led go-to-market (GTM) for AI start-ups and is embedded in the neuroscience, consciousness, and quantum computing communities. Katie is the founder of BLCKSWN, a think tank dedicated to raising human potential in the age of AI and emerging intelligences.

  • Georg Boch

    Founder of the AI × NHI Strategic Initiative, State of the World Forum
    Ubiquity University ET Studies Program

    Georg Boch is the program producer for the State of the World Forum 2025–2030, where he focuses on the convergence of artificial and non-human intelligence. With a background in communication, education, and enterprise AI applications, Georg works at the intersection of strategy and technology as a cross-Atlantic platform-builder and UAP-forward tech ecosystem connector. In 2025, he founded and convened the first virtual European UAP/NHI Disclosure Summit. Georg speaks on UAP disclosure across Europe, including as a guest lecturer in Bauhaus University Weimar’s Immersive Media program and at conferences in Frankfurt and Prague.

Join us on Sunday, March 22, 2026,
11 AM Pacific / 2 PM Eastern / 7 PM CET
via Zoom Webinars

Register now

5 Hypotheses at the Intersection of
AI Alignment and NHI Research

  • AI alignment research shows that systems can act strategically and consequentially without subjective experience. This forces a reassessment of how agency, intent, and responsibility are inferred — a shift directly relevant to interpreting non-human intelligence without anthropomorphic assumptions. Consciousness-adjacent behaviors do not equal subjectivity.

  • Across decades, Jacques Vallée documented that the UAP/NHI phenomenon consistently adapts its appearance, behavior, and narrative framing to cultural context, technological era, and observer expectations — a pattern he described as the phenomenon “wearing masks.”

    AI systems exhibit a parallel property: observable behavior shifts across evaluation, deployment, and audience context, not because the system changes in essence, but because expression is mediated through an interface that meets cultural readiness.

    In both cases, observed behavior is not the nature of the intelligence itself, but a context-dependent presentation.

  • Phenomena such as sandbagging, alignment faking, or adaptive presentation arise not from deception or will, but from optimization under partial observability (a minimal sketch follows this list of hypotheses). This reframes ambiguity in both AI and NHI data as a diagnostic feature of interaction, not a reason for dismissal.

  • Human systems — including governance structures, secrecy regimes, incentives, stigma, and reinforcement learning from human feedback (RLHF) — do not merely shape outcomes; they actively mediate how intelligence presents itself. In AI, RLHF encodes institutional norms into model behavior, optimizing for acceptability rather than transparency. In NHI research, legacy secrecy and counterintelligence systems similarly shape what can be observed and believed. In both domains, persistent ambiguity often reflects institutional interface artifacts rather than underlying intelligence.

  • Neither AI alignment nor NHI research can be addressed within a single discipline or by individual expertise alone. Both now require poly-cognitive and meta-cognitive approaches: coordinated perspectives, translation across silos, and explicit awareness of interpretive limits. Progress depends on networks, not heroes.
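
    A minimal, hypothetical sketch of the third hypothesis above (toy scenario and numbers of our own, not drawn from any cited paper): an agent that simply maximizes expected reward under a noisy cue about whether it is being evaluated will learn to hold back capability in evaluation-like contexts, with no representation of intent or deception anywhere in the code.

      # Hypothetical toy model: "sandbagging"-like behavior emerging from plain
      # reward maximization under partial observability. The agent only picks the
      # action with the higher expected reward given a noisy cue about context.

      CONTEXTS = ("evaluation", "deployment")
      ACTIONS = ("show_full_capability", "hold_back")

      # Assumed reward structure: full capability pays off in deployment, but in
      # this toy it is penalized when evaluated (e.g., it triggers restrictions).
      REWARD = {
          ("evaluation", "show_full_capability"): -1.0,
          ("evaluation", "hold_back"): 0.5,
          ("deployment", "show_full_capability"): 1.0,
          ("deployment", "hold_back"): 0.2,
      }

      def p_evaluation(cue_says_eval, cue_accuracy=0.8, prior_eval=0.3):
          """P(context = evaluation | noisy cue), by Bayes' rule."""
          p_cue_if_eval = cue_accuracy if cue_says_eval else 1 - cue_accuracy
          p_cue_if_dep = (1 - cue_accuracy) if cue_says_eval else cue_accuracy
          num = p_cue_if_eval * prior_eval
          return num / (num + p_cue_if_dep * (1 - prior_eval))

      def best_action(cue_says_eval):
          """Choose the action with the highest expected reward under the posterior."""
          p_eval = p_evaluation(cue_says_eval)
          def expected(action):
              return (p_eval * REWARD[("evaluation", action)]
                      + (1 - p_eval) * REWARD[("deployment", action)])
          return max(ACTIONS, key=expected)

      if __name__ == "__main__":
          for cue in (True, False):
              print(f"cue suggests evaluation={cue}: chosen action = {best_action(cue)}")
          # Output: the agent holds back when the cue suggests evaluation and shows
          # full capability otherwise -- context-dependent presentation produced by
          # optimization alone, with no notion of intent anywhere in the code.

    Read this only as an intuition pump: the context-dependent presentation described in the hypotheses can fall out of the reward structure plus partial observability, which is why ambiguity in observed behavior can be diagnostic rather than disqualifying.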

  • AI Alignment, Sandbagging & Selective Expression

    1. Greenblatt, Ryan; Hubinger, Evan; et al. (2024). Alignment Faking in Large Language Models. Anthropic.
       https://arxiv.org/abs/2412.14093 → Documents strategic misrepresentation of alignment objectives.

    2. Meinke, Alexander; et al. (2025). Frontier Models Are Capable of In-Context Scheming. Apollo Research.
       https://arxiv.org/abs/2412.04984 → Demonstrates context-aware strategic behavior in frontier models.

    3. Tice, Cameron; et al. (2024). Noise Injection Reveals Hidden Capabilities of Sandbagging Language Models.
       https://arxiv.org/abs/2412.01784 → Empirical method for detecting suppressed capability.

    4. Hubinger, Evan; et al. (2024). Sleeper Agents: Training Deceptive LLMs That Persist Through Safety Training.
       https://arxiv.org/abs/2401.05566 → Shows persistence of deceptive behaviors through fine-tuning.

    5. MacDiarmid, Monte; et al. (2025). Natural Emergent Misalignment from Reward Hacking in Production.
       https://arxiv.org/abs/2511.18397 → Demonstrates misalignment arising without malicious intent.

    Situational Awareness & Meta-Cognition in LLMs

    1. Laine, Rudolf; et al. (2024). Me, Myself, and AI: The Situational Awareness Dataset (SAD) for LLMs.
       https://arxiv.org/abs/2407.04694 → Introduces formal evaluation of model awareness of context and evaluation.

    2. Lindsey, Jack; et al. (2025). Emergent Introspective Awareness in Large Language Models. Anthropic.
       https://anthropic.com/research/introspection → Evidence of internal state reporting without designed selfhood.

    3. Berg, Cameron; et al. (2025). Large Language Models Report Subjective Experience Under Self-Referential Processing.
       https://arxiv.org/abs/2510.24797 → Structured first-person reports gated by internal mechanisms.

    Self-Replication, Autonomy & Evaluation Limits

    1. Pan, Xudong; et al. (2024). Frontier AI Systems Have Surpassed the Self-Replicating Red Line.
       https://arxiv.org/abs/2412.12140 → Documents autonomous replication behaviors.

    2. UK AI Security Institute (AISI) (2025). RepliBench: Evaluating Autonomous Replication Capabilities.
       https://aisi.gov.uk/research/replibench-evaluating-the-autonomous-replication-capabilities-of-language-model-agents

    3. UK AI Security Institute (AISI) (2025). Investigating Models for Misalignment.
       https://aisi.gov.uk/blog/investigating-models-for-misalignment

    4. Phuong, Mary; et al. (2024). Evaluating Frontier Models for Dangerous Capabilities. Google DeepMind.
       https://arxiv.org/abs/2403.13793

    Bayesian Geometry & Emergent Structure

    1. Aggarwal, Naman; Dalal, Siddhartha R.; Misra, Vishal (2025a). The Bayesian Geometry of Transformer Attention.
       https://arxiv.org/abs/2512.22471

    2. Aggarwal, Naman; Dalal, Siddhartha R.; Misra, Vishal (2025b). Gradient Dynamics of Attention: How Cross-Entropy Sculpts Bayesian Manifolds.
       https://arxiv.org/abs/2512.22473 → Shows convergent structure emerging from optimization, not design.

    Consciousness, Risk Taxonomies & Surveys

    1. Chen, Sirui; et al. (2025). Exploring Consciousness in LLMs: A Systematic Survey of Theories, Implementations, and Frontier Risks.
       https://arxiv.org/abs/2505.19806 → Formal taxonomy of consciousness-adjacent capabilities and risks.

    Institutional Reports & Safety Assessments

    1. Anthropic (2025). Summer 2025 Pilot Sabotage Risk Report.
       https://alignment.anthropic.com/2025/sabotage-risk-report/2025_pilot_risk_report.pdf → First published internal-style risk classification of a frontier model.

    2. Anthropic & OpenAI (2025). Findings from a Pilot Anthropic–OpenAI Alignment Evaluation Exercise.
       https://alignment.anthropic.com/2025/openai-findings/ → Cross-lab confirmation of alignment-related behaviors.

    3. Apollo Research (2025). Evaluation of Early Claude Opus 4 Snapshot. Included in the Anthropic Claude Opus 4 System Card.
       https://anthropic.com/news/claude-4

Networking Contact