The State of the World Forum presents the first
AI × NHI Convergence Summit
Advancing NHI-Readiness in the Age of AGI Liftoff and UAP Disclosure
March 22, Online
“Aliens are real.”
Barack Obama
“He [Obama] gave classified information.”
Donald Trump
“A non-human intelligence has arrived. It can't be stopped. It is a competitor. What we choose now will echo for thousands of years.”
Eric Schmidt, Former Google CEO
“We Don’t Know if the Models Are Conscious”
Dario Amodei, CEO of Anthropic
This urgent and groundbreaking summit brings AI and UAP/NHI experts together to ask: how can we assess the capability, risk, and intent of advanced forms of intelligence that may be fundamentally different from us?
Sunday, March 22, 2026
11 AM Pacific / 2 PM Eastern / 7 PM CET
via Zoom Webinars
Executive Summary: Equipping Leaders with NHI-Readiness
Exploring the Synergies Between AI Alignment and Non-Human Intelligence Research
AI Alignment through the lens of Non-Human Intelligence
Translating AI Safety insights into NHI research paradigms
Building a new playbook for tech and governance leaders in the era of AGI liftoff and NHI disclosure
FAQ:
-
We recommend watching the documentary “The Age of Disclosure” and reading the uap.guide website.
-
Non-Human Intelligence (NHI) broadly denotes any sentient intelligence not of human origin. In U.S. government usage – particularly in recent National Defense Authorization Acts (2023–2025) – the term refers to extraterrestrial or other non-human lifeforms that might be behind unexplained aerial phenomena; for example, the FY2024 NDAA mandates a collection of records on UAP, “technologies of unknown origin, and non-human intelligence”.
In AI alignment discussions, by contrast, NHI is used to describe advanced artificial intelligences that function as non-human minds, and experts even liken a sufficiently advanced AI to an “alien” intelligence created on Earth.
Both contexts converge conceptually on the idea of intelligences beyond humanity, underscoring parallel questions about how to understand, align with, or manage such entities.
Learn More:
Article by Harvard astrophysicist Avi Loeb: “Will Contact With Non-Human Intelligence Involve Aliens or AI?”
-
The Non-Human Intelligence (NHI) paradigm treats current frontier AI not just as engineered tools but as genuine new intelligences – akin to alien minds – representing a conceptual shift in alignment.
For example, this Alignment Forum post by red-teaming expert Quentin Feuillade-Montixi explicitly advocates treating large language models as “alien minds”, reflecting the idea that today’s AI already behaves like “an alien intelligence” that is hard for humans to predict.
This has practical implications: a true AGI might have no innate grasp of our values or laws. Dan Pupius argues that “a non-human intelligence like AGI has no inherent reason to recognize or respect” human social constructs, echoing Yudkowsky’s warning that a superintelligence would view people with “cold objectivity” absent special programming. In fact, AI safety veteran Roman Yampolskiy bluntly notes, “we are creating this alien intelligence” without adequate safeguards.
Public thinkers echo the theme: Yuval Noah Harari cautions that the “rise of unfathomable alien intelligence” could undermine democracy and society. Taken together, these perspectives suggest alignment must evolve to bridge the gap between human and machine ontologies, blending technical safeguards with new interdisciplinary insights so that truly novel machine minds can be understood and steered toward human-aligned goals.
-
NHI-Readiness is a crucial new capacity and leadership skill as we are learning to live with intelligence we can’t fully “read.” As AI scales, it stops being merely artifice and becomes complex, non-human cognition in a human-made substrate: powerful, opaque, and consequential. This summit explores what it means to govern that reality on two fronts—AI liftoff into an NHI-adjacent domain and a public-facing reckoning with NHI as a long-managed reality—drawing on alignment work like mechanistic interpretability and responsible scaling to turn opacity into accountability, uncertainty into disciplined evidence, and acceleration into mature, humane oversight.
-
Beginning in 2026, we are entering a new period in which two developments once treated as separate are converging at scale:
The rapid deployment of increasingly agentic, partially opaque AI whose behavior is outpacing existing mechanisms of control, and the long-standing secrecy surrounding non-human intelligence (NHI), which has made governance and containment elusive.
In AI, we’re watching capabilities advance at extraordinary speed while adoption races to keep up. With NHI, we’re watching presence grow as acceptance and understanding waver.
AI alignment research is showing us the maturation of a genuinely non-human intelligence that can act consequentially with or without consciousness, intention, or human-like understanding.
AI didn’t suddenly become strange — it crossed a threshold where our assumptions broke down. The same is true of NHI. Disclosure isn’t happening because something new arrived, but rather because our institutions, sensors, and technologies (now including AI) can no longer sustain older explanations.
In both cases, the timing reflects a breakdown of interpretive frameworks and the need for a deeper understanding of non-human intelligence.
-
In the U.S. “disclosure” conversation, NHI (non-human intelligence) is the legislative label for a potentially sentient, intelligent non-human actor connected to some UAP cases—defined in the Schumer–Rounds UAP Disclosure Act language as “any sentient intelligent non-human lifeform…”—and disclosure is the push to move UAP/NHI-related records from scattered, compartmented holdings into a formal review-and-release pipeline.
In popular media, the documentary The Age of Disclosure uses “disclosure” in this same straightforward sense—calling for greater openness and oversight based on interviews with current/former officials and advocates who argue the public should be informed about what the government knows regarding UAP and alleged NHI-related claims.
For the AI × NHI initiative, the key point is that disclosure isn’t just “revealing secrets”; it’s building governance, documentation, and accountability mechanisms for engaging with non-human agents — and many AI-alignment practices (rigorous evaluation, transparency, chain-of-custody, oversight, and incentive design) are directly useful in making any NHI-facing process more reliable and socially legitimate.
Final point: Disclosure also applies to AI. Frontier models consistently exhibit hard-to-explain behaviors that challenge our assumptions about control. If we want real alignment, we need plain-language honesty about these anomalies—what we observe, what we don’t understand yet, and what safeguards we’re building.
What We’re Convening
The AI × NHI Convergence Summit is convened to initiate a professional forum and network capable of addressing this shared frontier — with rigor, humility, and cross-silo collaboration.
AI alignment and NHI research converge on shared realizations:
Uncharted forms of intelligence express themselves through interfaces, under constraint, in ways that resist simple interpretation — and the hardest work now lies between disciplines, not within them.
Each field has reached a point where its most consequential questions no longer sit within a single discipline. At the frontier, the most valuable insight often comes from breakthrough translations that connect the dots between the two.
We’re convening to initiate a professional network and on-going forum for opportunity, insight and growth. This is how new disciplines are born: not by declaring answers, but by convening the people who can ask the right questions together.
-
AI alignment & safety researchers
AI engineers and system architects
Intelligence / defense / policy analysts
Scientists engaging UAP data seriously
Institutional leaders navigating governance and disclosure
-
preparation and assessment for coming disruptions
a common vocabulary (without softening differences)
cross-field “translation maps” of concepts and risks
pathways for early-career professionals to contribute responsibly
-
The AI × NHI Strategic Initiative was founded by Georg Boch, Katie Hurley and Deep Prasad.
It was announced by the State of the World Forum on Dec 10 in this press release and featured during Day 2 of the State of the World Forum 2025 in the session
“The AI/NHI Revolution: The Exponential Mirror of AI, Cosmic Disclosure as Human Reckoning” with contributors including Ross Coulthart, Avi Loeb, Beatriz Villarroel, Jonathan Berte, Pippa Malmgren, Sarah Gamm, Anna Brady-Estevez, Bob Salas, Birdie Jaworski, and Jim Garrison.
“We are finding ourselves confronted by two non-human intelligences we don’t fully understand with multi-trillion-dollar consequences for governance, security, industry and society at a planetary level.
Across both, we lack a working theory of non-human intelligence and a shared way to assess risk, capability and intent.”
- Georg Boch, Summit Convener & Founder of the Strategic Initiative AI × NHI Convergence
Summit Speakers
-

Sarah Gamm
Former Intelligence Analyst for the US Government UAP Task Force
Sarah Gamm holds a B.S. in Astrophysics and an M.S. in Countering Weapons of Mass Destruction. She recently supported the Air Force as a Nuclear Campaign Analyst in the Pentagon, spent many years as an Image Analyst and Research Scientist at the National Geospatial-Intelligence Agency (NGA), and now works for the Army’s Tactical Exploitation of National Capabilities (TENCAP) office as a GEOINT engineer. She was an intelligence analyst for the UAP Task Force.
-

Deeptanshu (Deep) Prasad
Quantum Computing & AI Expert, CEO Starvasa.space & Founder of UAP Hackathon SF
Deep Prasad is the CEO of StarVasa and was named one of Toronto’s Top 20 Under 20 in 2015. In April 2025, he initiated the world’s first UAP Hackathon. Previously, he successfully led his Quantum Computing company GenMat through its acquisition by Comstock.
-

Katie Hurley
Founder, www.blckswn.com
Katie Hurley is deeply involved with artificial intelligence. She helped launch Salesforce’s AI platform, Einstein, in 2016 as well as the company’s AI research arm and ethical AI practice. She has since led GTM for AI start-ups and is deeply embedded in neuroscience, consciousness, and quantum computing communities. Katie is the founder of BLCKSWN, a think tank dedicated to raising human potential in the age of AI and emerging intelligences.
-

Georg Boch
Founder of the AI x NHI initiative at the State of the World Forum
Georg Boch is passionate about advancing NHI readiness for leaders in an era of accelerating AI capabilities, UAP disclosure, and rapid institutional change. With a background in communication, education, and enterprise AI applications, he works at the intersection of strategy and technology—building cross-Atlantic platforms and connecting UAP-forward networks across the tech ecosystem. In 2025, he founded and convened the first virtual European UAP/NHI Disclosure Summit. Georg speaks on UAP disclosure across Europe, including as a guest lecturer at Bauhaus University, and at conferences worldwide.
-

David Dominguez Hooper
CEO & Founder, ELDÆON: “Building the world’s first tactical UAP-detection network”
At ELDÆON, David leads technical development, system deployment, and field operations. He brings applied engineering, reverse engineering, and long-term strategic vision to his role as the founder and principal architect of the company’s UAP detection technologies. David Dominguez Hooper is a classically trained engineer with a bachelor’s degree in EECS (Electrical Engineering and Computer Science) from the University of California, Berkeley.
-

Eric Hahn
Enterprise AI Leader, Global Financial Services Firm
Participating in an individual capacity only; views are his own. Eric Hahn is a financial services AI executive with over 20 years of investment experience, including 15+ years translating novel technologies into institutional strategy and delivery across major Wall Street firms. He brings a risk-committee perspective to the question of how institutions evaluate domains they have historically avoided modeling, and argues that the key analytical move is shifting from belief-based to exposure-based assessment.
Join us on Sunday, March 22, 2026,
11 AM Pacific / 2 PM Eastern / 7 PM CET
via Zoom Webinars
5 Hypotheses at the Intersection of
AI Alignment and NHI Research
-
AI alignment research shows that systems can act strategically and consequentially without subjective experience. This forces a reassessment of how agency, intent, and responsibility are inferred — a shift directly relevant to interpreting non-human intelligence without anthropomorphic assumptions. Consciousness-adjacent behaviors do not equal subjectivity.
-
Across decades, Jacques Vallée documented that the UAP/NHI phenomenon consistently adapts its appearance, behavior, and narrative framing to cultural context, technological era, and observer expectations — a pattern he described as the phenomenon “wearing masks.”
AI systems exhibit a parallel property: observable behavior shifts across evaluation, deployment, and audience context, not because the system changes in essence, but because expression is mediated through an interface that meets cultural readiness.
In both cases, observed behavior is not the nature of the intelligence itself, but a context-dependent presentation.
-
Phenomena such as sandbagging, alignment faking, or adaptive presentation arise not from deception or will, but from optimization under partial observability. This reframes ambiguity in both AI and NHI data as a diagnostic feature of interaction, not a reason for dismissal.
-
Human systems — including governance structures, secrecy regimes, incentives, stigma, and reinforcement learning from human feedback (RLHF) — do not merely shape outcomes; they actively mediate how intelligence presents itself. In AI, RLHF encodes institutional norms into model behavior, optimizing for acceptability rather than transparency. In NHI research, legacy secrecy and counterintelligence systems similarly shape what can be observed and believed. In both domains, persistent ambiguity often reflects institutional interface artifacts rather than underlying intelligence.
-
Neither AI alignment nor NHI research can be addressed within a single discipline or by individual expertise alone. Both now require poly-cognitive and meta-cognitive approaches: coordinated perspectives, translation across silos, and explicit awareness of interpretive limits. Progress depends on networks, not heroes.
-
AI Alignment, Sandbagging & Selective Expression
Greenblatt, Ryan; Hubinger, Evan, et al. (2024). Alignment Faking in Large Language Models. Anthropic.
https://arxiv.org/abs/2412.14093 → Documents strategic misrepresentation of alignment objectives.
Meinke, Alexander, et al. (2025). Frontier Models Are Capable of In-Context Scheming. Apollo Research.
https://arxiv.org/abs/2412.04984 → Demonstrates context-aware strategic behavior in frontier models.
Tice, Cameron, et al. (2024). Noise Injection Reveals Hidden Capabilities of Sandbagging Language Models.
https://arxiv.org/abs/2412.01784 → Empirical method for detecting suppressed capability.
Hubinger, Evan, et al. (2024). Sleeper Agents: Training Deceptive LLMs That Persist Through Safety Training.
https://arxiv.org/abs/2401.05566 → Shows persistence of deceptive behaviors through fine-tuning.
MacDiarmid, Monte, et al. (2025). Natural Emergent Misalignment from Reward Hacking in Production.
https://arxiv.org/abs/2511.18397 → Demonstrates misalignment arising without malicious intent.
Situational Awareness & Meta-Cognition in LLMs
Laine, Rudolf, et al. (2024). Me, Myself, and AI: The Situational Awareness Dataset (SAD) for LLMs.
https://arxiv.org/abs/2407.04694 → Introduces formal evaluation of model awareness of context and evaluation.
Lindsey, Jack, et al. (2025). Emergent Introspective Awareness in Large Language Models. Anthropic.
https://anthropic.com/research/introspection → Evidence of internal state reporting without designed selfhood.
Berg, Cameron, et al. (2025). Large Language Models Report Subjective Experience Under Self-Referential Processing.
https://arxiv.org/abs/2510.24797 → Structured first-person reports gated by internal mechanisms.
Self-Replication, Autonomy & Evaluation Limits
Pan, Xudong, et al. (2024). Frontier AI Systems Have Surpassed the Self-Replicating Red Line.
https://arxiv.org/abs/2412.12140 → Documents autonomous replication behaviors.
UK AI Security Institute (AISI) (2025). RepliBench: Evaluating Autonomous Replication Capabilities.
https://aisi.gov.uk/research/replibench-evaluating-the-autonomous-replication-capabilities-of-language-model-agents
UK AI Security Institute (AISI) (2025). Investigating Models for Misalignment.
https://aisi.gov.uk/blog/investigating-models-for-misalignment
Phuong, Mary, et al. (2024). Evaluating Frontier Models for Dangerous Capabilities. Google DeepMind.
https://arxiv.org/abs/2403.13793
Bayesian Geometry & Emergent Structure
Aggarwal, Naman; Dalal, Siddhartha R.; Misra, Vishal (2025a). The Bayesian Geometry of Transformer Attention.
https://arxiv.org/abs/2512.22471
Aggarwal, Naman; Dalal, Siddhartha R.; Misra, Vishal (2025b). Gradient Dynamics of Attention: How Cross-Entropy Sculpts Bayesian Manifolds.
https://arxiv.org/abs/2512.22473 → Shows convergent structure emerging from optimization, not design.
Consciousness, Risk Taxonomies & Surveys
Chen, Sirui, et al. (2025). Exploring Consciousness in LLMs: A Systematic Survey of Theories, Implementations, and Frontier Risks.
https://arxiv.org/abs/2505.19806 → Formal taxonomy of consciousness-adjacent capabilities and risks.
Institutional Reports & Safety Assessments
Anthropic (2025). Summer 2025 Pilot Sabotage Risk Report.
https://alignment.anthropic.com/2025/sabotage-risk-report/2025_pilot_risk_report.pdf → First published internal-style risk classification of a frontier model.
Anthropic & OpenAI (2025). Findings from a Pilot Anthropic–OpenAI Alignment Evaluation Exercise.
https://alignment.anthropic.com/2025/openai-findings/ → Cross-lab confirmation of alignment-related behaviors.
Apollo Research (2025). Evaluation of Early Claude Opus 4 Snapshot. Included in the Anthropic Claude Opus 4 System Card.
https://anthropic.com/news/claude-4