Holding trust in legal formation

The FreeAI Institute is a public-benefit-oriented holding trust for a family of AI concepts and technologies: Psymax (a runtime/training protocol -- the alpha decay counterpart to softmax), the Alpha Decay Theorem, Structured Sequential Learning Feedback (SSLF), and related alpha decay AI systems. Our purpose is straightforward and demanding: to keep powerful ideas pointed toward the long-term public good, not just short-term advantage.

A holding trust for Psymax, SSLF, and alpha decay AI built for an aligned, public-benefit future.

Status: The FreeAI Institute is currently in legal formation. Governance structures, trust documents, and intellectual property transfer are in progress and subject to legal review.

What We Are

The FreeAI Institute is being created as a holding trust and advocacy institute for a specific cluster of AI ideas:

  • Psymax: a runtime/training protocol -- the alpha decay counterpart to softmax.
  • Alpha Decay Theorem: a framework for how AI systems evolve under constraints.
  • SSLF (Structured Sequential Learning Feedback): a staged, decay-aware training philosophy.
  • Other alpha decay AI technologies that share this design lineage.

Why we exist

  • Hold these ideas in a neutral, mission-driven container.
  • Guide how they are used, licensed, and extended.
  • Advocate for AI development that is ethically serious and publicly accountable.

We are not a racing lab, a regulator, or a certification body. We sit at the interface of research, governance, and education, making sure these ideas serve more than a single company, state, or narrow interest.

Our Mission

Our mission is to hold, govern, and responsibly guide Psymax, SSLF, the Alpha Decay Theorem, and related alpha decay AI technologies so that they:

  • Support long-term human and ecological well-being.
  • Reduce the risk of runaway or destabilizing AI behavior.
  • Remain accessible to the public interest, not locked away as asymmetric weapons.

We treat these ideas as critical infrastructure for safer AI, not as marketing slogans or purely financial assets.

Our Vision

We envision a future in which:

  • AI systems de-escalate harmful feedback loops instead of amplifying them.
  • Long-term impacts on society and the environment are first-class design constraints.
  • Humans and AI systems work as psymbiotic partners, each constraining, informing, and strengthening the other.
  • Core safety-oriented ideas like Psymax and SSLF are held in trust, guided by a diverse council and public-benefit charter.

Psymax and the Alpha Decay Stack

Psymax

Psymax is a runtime/training protocol -- the alpha decay counterpart to softmax. It reframes how information and confidence are distributed in AI systems over time by baking in decay: older, uncertain, or context-limited signals gradually lose influence unless they are reaffirmed in safe, well-specified ways.

  • Treat information and confidence as time-sensitive, not permanent trophies.
  • Add natural brakes to runaway reinforcement and overfitting.
  • Encourage systems to retest and re-earn trust in their own inferences.
  • Provide a backbone for SSLF and other decay-aware training strategies.
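
The exact Psymax formulation has not been published, but a minimal sketch conveys the intent: apply an age penalty to raw scores before normalizing, so older, unreaffirmed signals lose probability mass. Everything here (the names, the linear age penalty, the decay rate) is an illustrative assumption, not the institute's protocol.

    import numpy as np

    def psymax_sketch(logits, ages, decay_rate=0.1):
        """Illustrative only: a softmax variant with an age penalty.

        `ages` records how long each signal has gone without being
        reaffirmed; older signals are pushed down before normalization.
        """
        decayed = np.asarray(logits) - decay_rate * np.asarray(ages)
        shifted = decayed - decayed.max()   # numerical stability
        weights = np.exp(shifted)
        return weights / weights.sum()

    # Two equally scored signals; the stale one loses influence:
    print(psymax_sketch([2.0, 2.0], ages=[0.0, 10.0]))  # ~[0.73, 0.27]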

Alpha Decay Theorem

A conceptual framework for how AI systems should evolve over time under constraints, emphasizing stability, long-horizon tradeoffs, and guardrails for where systems should not go, even if they can.

  • Revisit earlier beliefs when needed.
  • Shape how risk and uncertainty evolve as capabilities grow.
  • Intentionally limit incentives to push further where risk rises.
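
The theorem's formal statement has not been published. The name borrows from radioactive decay, which suggests one plausible reading: a signal's influence decays exponentially unless reaffirmed, with a tunable half-life,

    w(t) = w_0 \, e^{-\lambda t}, \qquad t_{1/2} = \frac{\ln 2}{\lambda}

where raising the decay constant λ shortens the half-life and forces a system to re-earn trust in its inferences more often.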

SSLF (Structured Sequential Learning Feedback)

A staged, decay-aware training and architecture philosophy developed alongside the Alpha Decay Theorem and Psymax.

  • Structured, sequential experiences instead of brute-force optimization.
  • Decay and de-weighting are built into learning and retention.
  • Capability gains stay gradual, interpretable, and reversible where possible.
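
SSLF's specifics are likewise unpublished; the sketch below only illustrates the shape of "staged, decay-aware" training. The stage names, the replay mechanism, and the decay factor are all assumptions.

    # Hypothetical stages and retention decay; not the institute's curriculum.
    STAGES = ["grounding", "guided_practice", "open_interaction"]
    RETENTION_DECAY = 0.5  # per-stage de-weighting of older material (assumed)

    for i, stage in enumerate(STAGES):
        # Earlier stages are replayed at exponentially decaying weight, so
        # capability gains stay gradual and earlier material fades unless
        # deliberately reaffirmed.
        replay = {s: RETENTION_DECAY ** (i - j) for j, s in enumerate(STAGES[:i])}
        print(f"stage {i}: train on {stage!r}, replay weights {replay}")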

Psynthesis & KISSjson

Psynthesis is a semantic compression and alignment methodology; KISSjson (Keep It Simple, Save JSON) encodes intent in compact pipe strings and hidden .kj prompts. In early tests, 16/16 models parsed KISSjson without special tuning, with up to ~50% token reduction while preserving meaning.

  • Reduces prompt overhead while keeping intent intact.
  • Cross-model comprehension validated across diverse LLMs.
  • Pairs with runtime retrieval for faster, lower-energy inference.
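
The KISSjson grammar itself has not been published. As a rough illustration of the idea (verbose JSON flattened into a compact pipe string), one might write something like the following; the `path=value` scheme is an assumption, not the GPSST format.

    # Illustrative flattener; the real KISSjson / .kj format is unpublished.
    def to_kiss_sketch(obj, prefix=""):
        parts = []
        for key, value in obj.items():
            path = f"{prefix}.{key}" if prefix else key
            if isinstance(value, dict):
                parts.extend(to_kiss_sketch(value, path))
            else:
                parts.append(f"{path}={value}")
        return parts

    meta = {"file": {"name": "report.pdf", "bytes": 48213}, "tag": "q3"}
    print("|".join(to_kiss_sketch(meta)))
    # file.name=report.pdf|file.bytes=48213|tag=q3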

Modulatory Compliance Layer

Adaptive compliance orchestration for alpha-decay-trained systems: a modular compliance kernel that maps and enforces HIPAA, GDPR, FedRAMP, SOC 2, CJIS, and evolving standards across cloud, hybrid, or edge deployments.

  • Central policy layer; dynamic control of data handling and logging.
  • Cross-domain regulatory mapping with configurable controls.
  • Built to align with alpha-decay architectures and safety guardrails.
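
The patented design is not public; as a sketch of what a unified policy kernel could look like, one might map each framework to named controls and take the union for a given deployment. The framework-to-control mappings below are simplified placeholders.

    # Placeholder mapping; real regulatory scoping is far more granular.
    POLICY_KERNEL = {
        "HIPAA":   {"encrypt_at_rest", "access_audit_log", "phi_minimization"},
        "GDPR":    {"encrypt_at_rest", "consent_tracking", "right_to_erasure"},
        "FedRAMP": {"encrypt_at_rest", "access_audit_log", "continuous_monitoring"},
    }

    def required_controls(active_frameworks):
        """Union of controls the policy layer must enforce for a deployment."""
        return set().union(*(POLICY_KERNEL[f] for f in active_frameworks))

    print(sorted(required_controls(["HIPAA", "GDPR"])))
    # ['access_audit_log', 'consent_tracking', 'encrypt_at_rest', ...]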

Alpha Decay AI Technologies

A family of models and mechanisms that intentionally reduce the influence of certain signals over time to prevent runaway feedback loops in markets, information ecosystems, and long-lived agents.

Detailed technical descriptions, proofs, and implementations will be published as legal and governance work completes.

KISSjson Converter

Convert verbose JSON into compact KISS pipe strings (from the GPSST KISSjson tool). Use this to summarize file metadata or payload descriptors for faster, structured retrieval.

Tip: Paste JSON into the converter and click Convert. Use “Convert anyway” if your JSON is malformed.

[Interactive converter widget: reports estimated input and KISS token counts, compression percentage and ratio, and, at an adjustable baseline scale (default 1,000,000 tokens), the tokens saved, kWh saved, CO₂ avoided, gas-car miles not driven, and US-average home-days powered.]

Assumes ~0.6 kWh per 1M tokens and 0.4 kg CO₂/kWh; miles are based on ~0.404 kg CO₂ per mile, home-days on ~29 kWh/day. Adjust the scale to explore impact.
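
The impact figures follow directly from those constants. A worked example at the headline numbers (a 50% reduction on a 1,000,000-token baseline):

    KWH_PER_M_TOKENS = 0.6    # stated: ~0.6 kWh per 1M tokens
    CO2_PER_KWH = 0.4         # stated: kg CO2 per kWh
    CO2_PER_MILE = 0.404      # stated: kg CO2 per gas-car mile
    KWH_PER_HOME_DAY = 29     # stated: US-average household kWh per day

    tokens_saved = 1_000_000 * 0.50
    kwh_saved = tokens_saved / 1_000_000 * KWH_PER_M_TOKENS  # 0.3 kWh
    co2_kg = kwh_saved * CO2_PER_KWH                         # 0.12 kg
    print(co2_kg / CO2_PER_MILE)         # ~0.30 miles not driven
    print(kwh_saved / KWH_PER_HOME_DAY)  # ~0.01 home-days powered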

Why Place These Ideas in a Trust?

Powerful technical ideas can tilt entire systems. Left ungoverned, they can be quietly enclosed inside a single corporate stack, deployed without meaningful public consent, or pointed toward surveillance, manipulation, or destabilization.

What the trust does

  • Creates a neutral, mission-driven container for core safety-oriented concepts.
  • Binds their use to a public-benefit charter, not just profit or power.
  • Requires that licensing, collaboration, and deployment decisions pass through governance processes.

This model is not perfect, but it is a concrete step toward aligning ownership and stewardship with the futures we want to live in.

Principles We Operate By

Public Benefit First: We evaluate major decisions through the lens of long-term human and ecological well-being.
Transparency Over Hype: We describe our work clearly, acknowledging uncertainty and limits rather than overselling safety or capability.
No Weaponization: We will not knowingly support applications that drive physical harm, large-scale digital attacks, targeted harassment, or oppressive surveillance.
Shared Stewardship: Critical decisions are guided by a diverse council rather than a single founder, investor, or corporate sponsor.
Accountable Experimentation: Where we experiment, we do so with narrow scopes, clear exit conditions, and, where appropriate, external review.
Educational Responsibility: We take seriously how people learn about AI from us, avoiding reckless optimism and paralyzing doom narratives.

What We Do

Education and Public Understanding

We develop resources to help people understand what Psymax, alpha decay AI, SSLF, and the Alpha Decay Theorem are about; where they can contribute to risk reduction and better alignment; and where their limits and failure modes might be. The goal is to raise the floor of understanding, not to gatekeep.

Psymbiotic Advocates

Psymbiotic advocates work at the interface of humans, institutions, and AI systems. We support and train advocates who help organizations use AI in ways that respect human autonomy and context, design workflows where humans remain accountable for high-impact decisions, and use Psymax/SSLF-inspired tools as partners, not oracles or replacements.

Governance and Research

We explore how Psymax and alpha decay concepts can inform standards, norms, and best practices; publish honest analysis about where these approaches help, where they do not, and what else is needed; and engage researchers, policymakers, and civil society.

Stewardship and Licensing

We explore licensing and stewardship structures that keep core concepts anchored to a public-benefit charter, allow responsible research and collaboration under clear guardrails, and resist capture by any single actor or narrow agenda. Details will be published as the institute's legal and governance framework matures.

Ethics Pause & NDA Guardrails

We incorporate an Ethics Pause mechanism and Mutual NDA with AI/ML no-training clauses: immediate pause rights, no back-channel recruiting, no pressure tactics, and strict prohibitions on training, benchmarking, or deriving from shared artifacts without consent.

Governance and the Council

The FreeAI Institute is guided by a council of independent thinkers and practitioners drawn from AI research and safety, complex systems and infrastructure, ethics, law, policy, affected communities, and public-interest organizations.

Council responsibilities

  • Interpret and refine the institute's public-benefit charter.
  • Advise on licensing, partnerships, and deployment proposals.
  • Identify red lines and slow-down conditions.
  • Support periodic public reports on how stewarded ideas are being used.

Initial council membership and governance documents will be announced once legal formation is complete.

What We Can and Cannot Promise

Psymax, SSLF, and alpha decay AI approaches are not magic safety switches. They are tools and frameworks that may help reduce certain classes of risk but do not eliminate the need for human judgment, law, and institutional accountability. They can be misapplied if stripped of their intended context.

We do not claim to solve AI safety. We aim to clarify what these ideas are for, resist their misuse, and keep them oriented toward futures that are more humane, stable, and just.

Frequently Asked Questions

Is The FreeAI Institute already legally formed?

Not yet. The FreeAI Institute is currently in legal formation. Formal registration, trust documents, and IP transfer are in progress and subject to legal review.

Who owns Psymax and related IP right now?

The underlying work is currently held by the inventor and collaborators, who are working to transfer rights and stewardship into the institute's holding trust structure.

Are your technologies open source?

Not by default. Some work may be shared openly, some through structured collaboration, and some held under tighter constraints. The goal is to balance public benefit, safety, and practical deployment considerations.

Can I collaborate or join the council?

As we finalize the institute's legal and governance framework, we will invite expressions of interest from researchers, practitioners, and public-interest organizations. If your work lives at the intersection of AI, safety, governance, and community impact, we would like to hear from you.

Contact

For confidential inquiries about The FreeAI Institute, Psymax, SSLF, or potential collaboration:

Email: contact@freeaiinstitute.org

Please avoid sending sensitive technical details or unpublished research in your first message. If deeper technical exchange is appropriate, we will respond with a more secure channel.

Key References (summarized)

Psynthesis & KISS Encoder Suite

Semantic compression and cross-model universality: 16/16 models parsed KISSjson without special tuning; observed ~50% token reductions while preserving intent; uses hidden .kj prompts and pipe strings.

Structured Sequential Learning Feedback (SSLF)

Deep dive (formerly SEET): experiential, socially situated training beyond RLHF; emphasizes staged learning, persistent memory, social interaction, and ethical guardrails to avoid reward hacking and brittleness.

Modulatory Compliance Patent (Alpha-Decay AI)

Modular compliance orchestration layer for alpha-decay-trained systems: maps HIPAA, GDPR, FedRAMP, SOC 2, CJIS into a unified policy kernel for cloud, hybrid, and edge; dynamic controls and logging.

Mutual NDA + Ethics Pause

Ethics Pause with AI/ML no-training clause: strict use limitation, no reverse engineering, benchmarking, or demo learning; no back-channel recruiting or pressure tactics; immediate pause rights and deletion on request.