This workshop introduces AgentLoom, a dual-helix governance framework for reliable agentic AI in geospatial programming. Participants will leverage Persistent Knowledge and Enforceable Behavioral Constraints to stabilize LLM outputs, ensuring scientifically rigorous and reproducible software development across complex geospatial workflows.
Prerequisite Knowledge & Materials:
● Intermediate understanding of programming
● Laptop with a modern web browser
● Ideally, access to an LLM API key (details on how to obtain one will be provided); a free-tier option will also be available
This workshop requires registration - click here to register.

Abstract

The transition from passive, chat-based interfaces to autonomous Agentic AI has revealed a critical reliability gap in scientific software production (e.g., application development or programming-based data analysis). While Large Language Models (LLMs) demonstrate remarkable proficiency in generating localized code snippets, they consistently struggle with the structural requirements of software development: they frequently fail to maintain architectural coherence across long-context development cycles, lack the "memory" to preserve scientific constraints across multiple sessions, and exhibit stochastic variability that undermines the reproducibility of complex geospatial code.
This workshop introduces a dual-helix governance framework designed to move beyond "prompt engineering" toward executable protocols for reliable agentic AI in geospatial contexts. The framework stabilizes agentic execution by decoupling the LLM’s reasoning capabilities from its volatile internal state along two orthogonal axes: Persistent Knowledge Externalization (auditable, domain-specific memory) and Enforceable Behavioral Constraints (machine-executable protocols rather than suggestive instructions). The framework is implemented as AgentLoom, an open-source tool built on a 3-track architecture (Knowledge, Behavior, and Skills). This architecture serves as the structural foundation for a project-specific Knowledge Graph: a persistent, version-controlled, and auditable repository of domain facts, architectural protocols, and validated workflows that keeps the agent’s reasoning grounded and scientifically rigorous across extended development and interaction cycles.
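To make the three tracks concrete, the sketch below shows what an externalized, version-controllable knowledge artifact might look like. All keys, values, and the serialization format here are illustrative assumptions for the workshop discussion, not AgentLoom's actual schema or API.

```python
# Hypothetical sketch of a project-specific 3-track artifact
# (Knowledge / Behavior / Skills). Structure and key names are
# assumptions, not AgentLoom's real schema.
import json

# Knowledge track: domain facts the agent must treat as ground truth.
knowledge = {
    "project_crs": "EPSG:4326",   # canonical coordinate reference system
    "raster_nodata": -9999,       # agreed sentinel for missing cells
}

# Behavior track: machine-checkable protocols, not free-text suggestions.
behavior = {
    "require_crs_declaration": True,  # every layer must declare a CRS
    "forbid_inplace_mutation": True,  # keep transformations reproducible
}

# Skills track: names of validated workflows the agent may invoke.
skills = ["reproject_layer", "zonal_statistics"]

def serialize_tracks() -> str:
    """Serialize all three tracks into one artifact that can be
    committed to version control and audited like any other file."""
    return json.dumps(
        {"knowledge": knowledge, "behavior": behavior, "skills": skills},
        indent=2,
        sort_keys=True,
    )
```

Because the artifact is plain text under version control, changes to domain facts or protocols leave an auditable history, which is what makes the agent's "memory" persistent across sessions.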
Participants will explore this framework and its open-source implementation that addresses five fundamental LLM limitations:
1. Long-context Fragmentation: Managing codebases that exceed the effective attention window of modern transformers.
2. Cross-session Forgetting: Maintaining knowledge over multi-day development cycles.
3. Output Stochasticity: Standardizing architectural patterns to ensure predictable, reproducible outputs.
4. Instruction Following Failure: Enforcing strict protocols (e.g., geospatial standards or accessibility features).
5. Adaptation Rigidity: Facilitating the transparent evolution of a domain knowledge graph without the need for expensive model fine-tuning.
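The distinction between a suggestive instruction and an enforceable constraint (limitation 4 above) can be sketched as an executable gate that rejects non-compliant agent output rather than merely asking the model to comply. The function name, metadata keys, and the specific CRS rule below are hypothetical examples, not part of AgentLoom's published interface.

```python
# Illustrative sketch of an enforceable behavioral constraint: a
# machine-executable check applied to agent-produced geospatial output.
# Names and the protocol itself are assumptions for illustration.

REQUIRED_CRS = "EPSG:4326"  # assumed project-wide standard

def enforce_output(layer_metadata: dict) -> dict:
    """Accept an agent-produced layer only if it satisfies the protocol;
    raise instead of warning, so violations cannot pass silently."""
    crs = layer_metadata.get("crs")
    if crs != REQUIRED_CRS:
        raise ValueError(
            f"protocol violation: expected {REQUIRED_CRS}, got {crs!r}"
        )
    if layer_metadata.get("nodata") is None:
        raise ValueError("protocol violation: nodata value must be declared")
    return layer_metadata
```

Running such checks after every agent step turns the behavioral protocol into something closer to a test suite: stochastic model outputs that violate the standard are caught deterministically.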
Learning Outcomes:
● Set up AgentLoom’s Knowledge/Behavior/Skills tracks for a geospatial project.
● Identify common reliability failures in LLM-assisted geospatial coding
● Externalize key domain knowledge and project rules into auditable, version-controlled artifacts
● Apply enforceable protocols (checks/tests/constraints) to make agentic outputs more consistent and scientifically valid