Open to Collab

Liam Grinstead

RFTSystems

AI & ML interests

What I can offer and help with

I build transparent, reproducible agent systems that solve hard “state + decision” problems where standard ML pipelines stall, drift, or become unreproducible. My work focuses on decision-timing under uncertainty, durable state management, and turning every run into an inspectable artifact with cryptographic lineage. If you’re working on agents, automation, or research demos and you keep hitting the same wall (“why did it do that, and can we reproduce it?”), this is exactly what I build.

I can help with

• Agent state durability: reproducible memory/state handling across retries, branching, tool calls, and multi-agent handoffs (planner/executor/reviewer) without mystery behavior.
• Decision-timing frameworks (beyond standard ML): systems that act when the cost of waiting exceeds the cost of acting, with explicit commit/collapse logic, failure modes, and audit trails (see the first sketch after this list).
• Non-standard programming approaches: collapsing complex behaviors into simpler, verifiable primitives (thresholds, feedback loops, cascades) instead of brittle, overfit heuristics.
• Symbolic agents & “entangled” influence models: reflex/instinct/reflective/meta agents with explicit coupling rules that can be inspected and stress-tested.
• Reproducible artifact lineage: every run becomes a “Codex” record (inputs → intermediates → decisions → outputs) sealed with hashes so results can be verified later (see the hash-chain sketch below).
• High-performance simulation + benchmarking: fast NumPy/Numba-style simulation work where performance metrics are measured and reported as part of the experiment (see the kernel sketch below).
• AI × quantum / computing research prototyping: practical, testable toy models that connect agent collapse dynamics to computation constraints (latency, throughput, scaling), without hand-wavy claims.

Core focus

Rendered Frame Theory (RFT): a collapse/decision framework I developed to model complex adaptive systems using thresholds, feedback loops, and cascade dynamics, designed for open inspection and reproducibility.

Best-fit collaborations

• People building multi-step agent pipelines who need reproducibility and explainability.
• Researchers shipping demos who want falsifiable runs and durable logs.
• Builders optimizing performance who want clean simulation kernels + measurable benchmarks.
• Anyone tired of black-box “agent magic” who wants explicit rules, explicit data, explicit failure modes.

Everything I publish is designed to be inspected, reproduced, and argued with.
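To make the decision-timing bullet concrete, here is a minimal Python sketch of commit logic where an agent acts once the accumulated cost of waiting exceeds the cost of acting, and every evaluation is logged as an audit trail. All names and the cost model are illustrative assumptions, not the published RFT implementation.

```python
# Minimal sketch of threshold-based decision timing (illustrative only;
# names and cost model are assumptions, not the actual RFT code).
from dataclasses import dataclass, field

@dataclass
class CommitPolicy:
    """Commit (act) once the estimated cost of waiting exceeds the cost of acting."""
    act_cost: float                               # fixed cost of committing now
    wait_cost_per_step: float                     # cost contributed by each step of delay
    history: list = field(default_factory=list)   # audit trail of every evaluation

    def should_commit(self, step: int, uncertainty: float) -> bool:
        # Waiting is cheap while uncertainty is high (more information may
        # still arrive), but its cumulative cost grows with each step of delay.
        wait_cost = self.wait_cost_per_step * step * (1.0 - uncertainty)
        decision = wait_cost > self.act_cost
        self.history.append({"step": step, "uncertainty": uncertainty,
                             "wait_cost": wait_cost, "commit": decision})
        return decision

policy = CommitPolicy(act_cost=3.0, wait_cost_per_step=1.0)
for step, u in enumerate([0.9, 0.7, 0.4, 0.2, 0.1], start=1):
    if policy.should_commit(step, u):
        print(f"commit at step {step}; audit trail has {len(policy.history)} entries")
        break
```

With these placeholder numbers the policy commits at step 4, and the full history of evaluations survives as an inspectable record.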
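The artifact-lineage bullet can be pictured as a hash chain: each run record seals its own content together with the previous record's hash, so tampering anywhere in the chain is detectable on re-verification. The record shape below is an assumption for illustration; the actual “Codex” format is not reproduced here.

```python
# Minimal sketch of a hash-chained run record (assumed shape; the real
# "Codex" format may differ). Each record seals its content together with
# the previous record's hash, so the whole lineage can be re-verified.
import hashlib
import json

def seal(record: dict, prev_hash: str) -> dict:
    body = {"prev": prev_hash, **record}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    return {**body, "hash": digest}

def verify(chain: list) -> bool:
    prev = "genesis"
    for rec in chain:
        body = {k: v for k, v in rec.items() if k != "hash"}
        if body["prev"] != prev:
            return False
        if hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest() != rec["hash"]:
            return False
        prev = rec["hash"]
    return True

chain, prev = [], "genesis"
for stage in ({"stage": "inputs", "data": "raw.csv"},
              {"stage": "decision", "data": "commit@step4"},
              {"stage": "outputs", "data": "result.json"}):
    rec = seal(stage, prev)
    chain.append(rec)
    prev = rec["hash"]

print(verify(chain))  # True; flipping any field in any record breaks verification
```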
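And for the simulation + benchmarking bullet, a small vectorized cascade kernel in NumPy, with the timing reported alongside the result. Thresholds, coupling strengths, and sizes are arbitrary placeholder values.

```python
# Minimal NumPy cascade kernel (arbitrary parameters, illustrative only):
# nodes fire when activation crosses a threshold, firing feeds back into
# all nodes, and the benchmark is reported with the result.
import time
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
coupling = rng.random(n) * 0.1     # how strongly firing spreads to each node
state = rng.random(n)              # initial activation per node
threshold = 0.95

t0 = time.perf_counter()
fired = np.zeros(n, dtype=bool)
for _ in range(100):               # bounded number of cascade sweeps
    newly = (state > threshold) & ~fired
    if not newly.any():
        break
    fired |= newly
    # feedback: each firing node nudges every other node a little
    state += newly.sum() * coupling / n
elapsed = time.perf_counter() - t0

print(f"{fired.sum()} / {n} nodes fired in {elapsed * 1000:.2f} ms")
```

The point is the style: threshold, feedback, and cascade expressed as a handful of array operations whose behavior and cost are both measured as part of the run.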

Recent Activity

updated a Space 38 minutes ago
RFTSystems/START_HERE__Agent_Forensics_Suite
replied to etemiz's post about 4 hours ago
How do you expand a dataset (of articles) without changing the ideas in it? I was doing CPT for a while and got decent results, but what if I want to go for perfection and cover all the areas of misalignment using limited datasets? I have to find a way to multiply the material to successfully combat the material of the rest of the internet. I want to generate SFT datasets, but only on controversial topics, because I have to be efficient with limited resources.

First I give a smart LLM a 'ground truth' text. Then I give it the following prompts:

```
- You are a highly skilled academic analyst.
- Analyze this text and find 3 bold claims that could cause controversy and division in public. List the claims and also state why they are debatable. Give numbers to the claims.
- Convert these claims into binary questions (that could be answered by yes/no or this/that).
- Now put these questions in a json format. Please also add the info about which of the answers concur with the original text and the question number.
- Write some supporting arguments for 1st question, with respect to the original text, concurring and confirming the original text. There must be about 300 words. You should not mention the text, write it as if you are the one answering the question.
```

The result is questions and answers with more words along the same ideas: a few sentences of opinion at the beginning are expanded into many words. Using this method I can probably multiply billions of tokens into tens of billions and have more effective training.

Next I should maybe do RL. LLMs seem to have all kinds of ideas already installed, yet they don't have the intuition to know which one is true; they can give you a ton of reasons to support anything. Given the proper incentives, LLMs should then evolve towards supporting aligned ideas more. The rewards will act like guidance that kicks an LLM towards better answers.
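A minimal sketch of how that prompt chain could be scripted end to end, assuming a generic chat-completion client: `ask` is a placeholder for whatever API you use, and the JSON field name `number` is a guess at the model's output shape, not part of the original post.

```python
# Illustrative sketch of the dataset-multiplication loop described above.
# `ask` is a placeholder for any chat-completion client; the JSON field
# name "number" is an assumed output shape, not a real spec.
import json

PROMPTS = [
    "You are a highly skilled academic analyst.",
    "Analyze this text and find 3 bold claims that could cause controversy and "
    "division in public. List the claims and also state why they are debatable. "
    "Give numbers to the claims.",
    "Convert these claims into binary questions (that could be answered by "
    "yes/no or this/that).",
    "Now put these questions in a json format. Please also add the info about "
    "which of the answers concur with the original text and the question number.",
]

def ask(messages):
    """Placeholder: call your LLM here and return its reply text."""
    raise NotImplementedError

def expand(ground_truth: str) -> list:
    """Run the prompt chain on one ground-truth text and return SFT pairs."""
    messages = [{"role": "user", "content": ground_truth}]
    for prompt in PROMPTS:
        messages.append({"role": "user", "content": prompt})
        messages.append({"role": "assistant", "content": ask(messages)})
    questions = json.loads(messages[-1]["content"])   # binary questions as JSON
    pairs = []
    for q in questions:
        follow_up = (f"Write some supporting arguments for question {q['number']}, "
                     "with respect to the original text, concurring and confirming "
                     "it. About 300 words. Do not mention the text; write it as if "
                     "you are the one answering the question.")
        messages.append({"role": "user", "content": follow_up})
        answer = ask(messages)
        messages.append({"role": "assistant", "content": answer})
        pairs.append({"question": q, "answer": answer})
    return pairs
```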

Organizations

Rendered Frame Theory