Self-Adapting Machines

AI Refined builds autonomous self-refining machines and a federated platform to securely manage the data that such machines require.

Wholly autonomous agents iterate and improve independently. To support this safely, we provide two interoperating products:

(i) STREAM (Self-Testing Rewriting Evolutionary Adaptive Machine): a collection of Self-Refining, Self-Referential Coding Agents that iteratively rewrite their own code.

(ii) FLOW (Federated Local Open Workspace): a secure platform to implement centralised or decentralised STREAM models based on specific business requirements and regulatory constraints.


To join us, please apply here.

Values matter more than ever.

STREAM's exponential scaling increases both opportunity and danger.

We must therefore:

  • be increasingly careful to protect against mistakes – they will become increasingly costly
  • move very fast to construct safety measures
  • learn from past experiences (through data) to build better futures
  • respect and support human beings at all times in everything we do

We align with the EPSRC / AHRC five ethical principles of robotics (2011):

  1. STREAM always assists and protects humans. Its agents incorporate guardrails to prevent harm to people.
  2. Humans, not STREAM agents, are the responsible agents. STREAM provides tools designed to achieve human goals.
  3. STREAM is designed to assure human safety and security.
  4. STREAM is an artefact; it will not be designed to exploit vulnerable users by evoking an emotional response or dependency. It will identify as a STREAM agent and not as a human.
  5. It should always be possible to find out who is ultimately legally responsible for the design of a STREAM agent.

Our goals:

  • Goal 1: STREAM adaptive agents to become leaders in domains including finance, healthcare, law and defence
  • Goal 2: Agents to continuously expand FLOW skills and expertise
  • Goal 3: Exemplify better humanity: STREAM should be more ethically aligned than its originators (see Values)

STREAM learns and evolves in an open-ended, self-accelerating trajectory. Three unique safety considerations stem from STREAM’s ability to autonomously modify its own code.

(i) Modifications optimised solely for benchmark performance can inadvertently introduce vulnerabilities or behaviours misaligned with human intentions, even if they improve a target metric.

(ii) If evaluation benchmarks do not fully capture all desired agent properties (e.g. safety and robustness), the self-improvement loop can amplify misalignment over successive generations.

(iii) Iterative self-modification can lead to increasingly complex and uninterpretable internal logic, hindering human understanding, oversight, and control.

We incorporate several safeguards to protect against these threats:

  • All agent execution and self-modification processes operate within isolated sandboxed environments, limiting their ability to affect the host system, and thereby mitigating the risk of unintended actions.
  • Each execution within the sandbox is subjected to a strict time limit, reducing the risk of unbounded behaviour.
  • The self-improvement process is currently confined to the well-defined domain of enhancing performance on specific coding benchmarks by modifying the agent’s own codebase, thus limiting the scope of potential modifications.
  • We actively monitor agent performance and code changes, with the STREAM archive providing a traceable lineage of modifications for review.
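The safeguards above can be sketched in code. This is an illustrative sketch only: the names (`run_candidate`, `ArchiveEntry`) are hypothetical and not part of STREAM's actual implementation. It shows two of the listed mechanisms: executing a candidate self-modification in a separate process under a strict wall-clock limit, and recording each modification with a hash of its parent so the archive yields a traceable lineage.

```python
# Hypothetical sketch of time-limited sandboxed execution plus a
# lineage-tracked modification archive. Names are illustrative only.
import hashlib
import subprocess
import sys
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class ArchiveEntry:
    """One generation in the modification archive: stores the code and
    the hash of its parent, giving a traceable lineage for review."""
    code: str
    parent_hash: Optional[str] = None

    @property
    def code_hash(self) -> str:
        # Content hash identifies this generation unambiguously.
        return hashlib.sha256(self.code.encode()).hexdigest()

def run_candidate(code: str, timeout_s: float = 5.0) -> Tuple[bool, str]:
    """Run a candidate modification in a separate interpreter process
    with a strict time limit. (A production sandbox would also restrict
    filesystem, network, and memory access, e.g. via containers.)"""
    try:
        proc = subprocess.run(
            [sys.executable, "-c", code],
            capture_output=True, text=True, timeout=timeout_s,
        )
        return proc.returncode == 0, proc.stdout
    except subprocess.TimeoutExpired:
        # Unbounded behaviour is cut off rather than allowed to run on.
        return False, "killed: exceeded time limit"

# Usage: archive a child modification against its parent, then run it.
parent = ArchiveEntry(code="print('generation 0')")
child = ArchiveEntry(code="print('generation 1')",
                     parent_hash=parent.code_hash)
ok, out = run_candidate(child.code)
```

Separating execution into a child process means the time limit is enforced by the operating system rather than trusted to the candidate code itself, and the parent-hash chain lets a reviewer walk any agent's modification history back to its origin.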