Layered Intent Control Architecture in Practice

In my last blog post, which was kind of a "hey, surprise! I'm alive" bit, I reflected on the software control architecture I've settled on and have been working toward. It's half running on hardware, half still in design.

I’ve been working on three (five-ish) separate robots over the last couple of years, and every time I switch to working on another one I have to remember what kind of code and strategies I was using. Well, enough of that. I now have code which is flexible enough to run on four different robots. Now when I work on some code, it’s for four robots… PHEW! That takes more effort, it seems, and a better, more flexible structure. Anyhow, back to the meat and potatoes. I started with the end in mind, building from the bottom up through the first three and a half layers, and then decided it was time to switch ends before I overloaded the more semantic layers. I have done a lot of reading and spent a lot of time on YouTube watching older applied-robotics videos (applied robotics seems like it’s no longer hot; well, it is for me).

Here is my ChatGPT-ified version of my layered system, which is based on a couple of different types of layered systems published from roughly 1987 through the mid-2000s.

Mission Layer

  • Declares which competition (SBC3, SBC5, PopCan, RoboMagellan, etc.)
  • Supplies parameters:
    • distances
    • zones
    • object classes
    • time limits
  • Feeds the Strategy Selector

✅ Mission defines context and success, not behavior.
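As a concrete illustration, the Mission layer can be little more than a data record. This is a minimal sketch under my own assumed field names (the real parameter set would depend on the competition):

```python
from dataclasses import dataclass

# Hypothetical Mission record: it declares context (which competition)
# and the parameters of success, but carries no behavior of its own.
@dataclass
class Mission:
    competition: str        # e.g. "RoboMagellan", "SBC3", "PopCan"
    distances_m: dict       # named distances, e.g. {"start_to_cone": 45.0}
    zones: list             # semantic zone names
    object_classes: list    # what perception should care about
    time_limit_s: float     # hard competition time limit

robomagellan = Mission(
    competition="RoboMagellan",
    distances_m={"start_to_cone": 45.0},
    zones=["start_zone", "cone_zone"],
    object_classes=["orange_cone"],
    time_limit_s=900.0,
)
```

The point of keeping it this dumb is that nothing downstream can mistake a mission for a behavior.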


Strategy Selector

  • Largely determined by Mission
  • May be modulated by:
    • battery
    • confidence
    • previous failures
    • time remaining
  • Produces bias, not commands

Examples:

  • conservative vs aggressive
  • bonus-seeking vs completion-first
  • manipulation-cautious vs speed-focused

✅ Strategy shapes how hard and how risky, never what.
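A strategy selector in this style can be a small heuristic that maps system state to a bias profile. The thresholds and weight names below are illustrative assumptions, not my tuned values:

```python
def select_strategy(battery_pct, failures, time_remaining_s, time_limit_s):
    """Return a bias profile (weights in 0..1), never commands.

    Hypothetical heuristic: go conservative when battery or remaining
    time is low, or after repeated failures; otherwise bias toward
    bonus-seeking, higher-risk play.
    """
    time_frac = time_remaining_s / time_limit_s
    if battery_pct < 20 or failures >= 2 or time_frac < 0.25:
        return {"risk": 0.2, "speed": 0.4, "bonus_seeking": 0.1}
    return {"risk": 0.7, "speed": 0.8, "bonus_seeking": 0.6}
```

Downstream, these weights only scale rule votes; they never pick a goal directly.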


Rule Engine

  • Large set (30–40 is reasonable)
  • Most rules are always enabled
  • Each rule:
    • observes context
    • votes or biases
    • constrains feasibility
  • Strategy biases rule weights

✅ Rules are the physics of intent — they constrain, veto, or soften options.
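A minimal sketch of what "observe, vote, constrain" can look like. The two rules and their weights are invented for illustration; a real set would be the 30–40 mentioned above:

```python
# Each rule observes context and either vetoes a candidate goal
# (constrains feasibility) or adds a weighted vote.
def rule_low_battery(ctx, votes, vetoes):
    if ctx["battery_pct"] < 15:
        vetoes.add("DELIVER_OBJECT")  # infeasible: conserve power
        votes["RETURN_TO_ZONE"] = votes.get("RETURN_TO_ZONE", 0.0) + 2.0

def rule_object_seen(ctx, votes, vetoes):
    if ctx["object_visible"]:
        votes["DELIVER_OBJECT"] = votes.get("DELIVER_OBJECT", 0.0) + 1.5

RULES = [rule_low_battery, rule_object_seen]

def run_rules(ctx, strategy_bias=1.0):
    votes, vetoes = {}, set()
    for rule in RULES:
        rule(ctx, votes, vetoes)
    # Strategy biases rule weights; vetoes remove options outright.
    return {g: w * strategy_bias for g, w in votes.items() if g not in vetoes}
```

Note that a veto wins over any number of votes, which is exactly the "physics of intent" framing: no amount of enthusiasm makes an infeasible goal feasible.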


Goal Selection

  • Outcome of rule voting
  • Produces a single active Goal (or none)
  • Goal is:
    • high-level
    • semantic
    • competition-relevant
    • contextual

Examples:

  • REACH_LOCATION
  • DELIVER_OBJECT
  • RETURN_TO_ZONE

✅ This is the pivot point of the system.
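The pivot itself can be tiny: take the weighted votes, pick one winner or none. The threshold value here is an assumption:

```python
def select_goal(weighted_votes, threshold=1.0):
    """Pick the single highest-voted goal, or None if nothing clears the bar."""
    if not weighted_votes:
        return None
    goal, weight = max(weighted_votes.items(), key=lambda kv: kv[1])
    return goal if weight >= threshold else None
```

Producing "no goal" is a legitimate outcome, not an error; it just means the robot has nothing worth committing to yet.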


Context Goal Layer (Goal Interpretation / Decomposition)

This is a really important layer.

Responsibilities:

  • Interpret Goal semantics
  • Expand into:
    • planning intent
    • waypoint sequences
    • perception needs
    • interaction requirements
  • Decide what kind of planning is needed, not how to move

This layer answers:

“Given this goal, what must be true before I can claim success?”

✅ This is where “open fridge”, “hallway traversal”, etc. emerge — not because the goal says so, but because reality demands it.
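One way to sketch goal interpretation, with made-up precondition and intent names; the shape matters more than the specifics:

```python
def decompose(goal):
    """Hypothetical goal interpreter: expand a semantic goal into the
    preconditions and needs reality demands, not into motion commands."""
    if goal == "DELIVER_OBJECT":
        return {
            "preconditions": ["object_grasped", "dropoff_zone_known"],
            "perception": ["track_dropoff_marker"],
            "planning_intent": "waypoint_route_to_zone",
        }
    if goal == "RETURN_TO_ZONE":
        return {
            "preconditions": ["home_zone_known"],
            "perception": ["localize"],
            "planning_intent": "waypoint_route_to_zone",
        }
    return {"preconditions": [], "perception": [], "planning_intent": None}
```

The "what must be true before I can claim success" question shows up as the precondition list: sub-tasks emerge to satisfy whichever preconditions are currently false.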


(Optional / Transparent) Global Map Layer

  • Sometimes active, sometimes bypassed
  • Holds:
    • partial maps
    • known waypoints
    • semantic zones
  • Informs feasibility and routing

Important:
It does not own intent — it only informs it.

✅ Transparency here is a feature, not a flaw.
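Transparency can be implemented literally: the map answers when it can and returns nothing when it can't, and the layers below proceed either way. The map contents here are invented:

```python
# Sketch of a transparent map layer: it informs routing but never
# commands. An unknown waypoint simply means "bypass me".
GLOBAL_MAP = {
    "waypoints": {"home": (0.0, 0.0), "cone": (12.0, 3.0)},
    "zones": {"start_zone": ["home"]},
}

def inform_route(goal_waypoint, map_data=GLOBAL_MAP):
    """Return a known coordinate for routing, or None to signal bypass."""
    return map_data["waypoints"].get(goal_waypoint)
```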


Local Goal Generator

  • Final distillation
  • Converts abstract intent into:
    • immediate objectives
    • local sub-goals
    • headings, distances, approach modes
  • Operates in real-time
  • Hands off to motion authority

✅ This is where deliberative intent becomes executable.
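The distillation step can be as simple as turning a waypoint into a heading error and a distance. A sketch, assuming a 2D pose and made-up approach modes:

```python
import math

def local_goal(pose_xy_theta, waypoint_xy):
    """Distill a waypoint into an immediate heading/distance objective.

    pose_xy_theta: (x, y, heading_rad); waypoint_xy: (x, y).
    """
    x, y, theta = pose_xy_theta
    wx, wy = waypoint_xy
    dx, dy = wx - x, wy - y
    distance = math.hypot(dx, dy)
    heading_error = math.atan2(dy, dx) - theta
    # Normalize to (-pi, pi] so the motion layer turns the short way.
    heading_error = math.atan2(math.sin(heading_error), math.cos(heading_error))
    return {"distance_m": distance,
            "heading_error_rad": heading_error,
            "approach_mode": "slow" if distance < 0.5 else "cruise"}
```

Whatever comes out of here is the last thing that still knows about intent; everything below just executes it.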


Motion Authority

  • Consumes intent
  • Obeys Sentinel and safety overrides
  • Does not question why

✅ Correctly dumb, correctly constrained.


The bottom three layers are lumped together under the term Motion Authority in the description above, and my Sentinel layer isn’t spelled out at all; but if you’ve seen one reactive layer, you’ve seen them all, and that was my comfort zone.

So far my favorite part has been testing the heuristic layer (Strategy → Voting → Goal), which I wrote as a test in Node-RED for fun, but I’m thinking that’s probably where it’ll live now, traversing my MQTT → CAN bridge. More on the juicy hardware details some other time; for now, it’s a flexible multi-node (five-node) system on a CAN network with ODrives (CAN) and a Pi Zero W, which is there mostly for logging and dashboarding.

Layered Intent Control Architecture

LICA (Layered Intent Control Architecture) is a robotics control architecture designed to unify autonomy, safety, and human control while remaining observable, debuggable, and explainable in real time. LICA is particularly suited to mobile robots operating in uncertain environments, where layered autonomy must coexist with reactive safety systems and human override. The core principle of LICA is simple: no subsystem commands motors directly except a single authority resolver. All other subsystems express intent, not actuation. This single rule eliminates conflicting control, hidden priority inversions, and “ghost motion” caused by multiple layers fighting over outputs.

LICA is composed of independent intent-producing layers and a central arbitration point. Sensors feed into a Sentinel layer responsible for reactive safety intent, a Mission layer responsible for goal-directed autonomy, and an RC or Supervisor layer responsible for human or external control. All of these feed into a single Motion Authority Resolver, which is the only component permitted to drive the hardware backend. Each layer observes system state and publishes desired motion intent but never directly controls motors.

An intent in LICA is a structured description of desired motion rather than a command. Typical intent fields include linear velocity, angular velocity, and optional modifiers such as confidence, constraints, or urgency. Intents may be continuous, such as wall following; discrete, such as stop or one-shot actions; or reactive, such as obstacle avoidance.
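A minimal sketch of such an intent structure, with field names assumed for illustration:

```python
from dataclasses import dataclass
from typing import Optional

# An intent is a description of desired motion, never a motor command.
@dataclass
class Intent:
    linear_mps: float             # desired linear velocity
    angular_rps: float            # desired angular velocity
    confidence: float = 1.0       # 0..1, how sure the publishing layer is
    urgency: float = 0.0          # 0..1, e.g. 1.0 for a safety stop
    reason: Optional[str] = None  # human-readable reason code

# A discrete "stop" intent with maximum urgency and an explicit reason.
STOP = Intent(linear_mps=0.0, angular_rps=0.0, urgency=1.0, reason="ESTOP")
```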

The Sentinel layer exists to prevent damage or unsafe behavior. It runs continuously, is fast and reactive, does not plan, and does not know mission goals. Typical Sentinel triggers include bumper activation, time-of-flight hard stops, soft distance limits, sensor loss or staleness, and confidence collapse such as losing a wall during wall following. The Sentinel publishes both intent and explicit reason codes explaining why it intervened.
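The Sentinel's triggers can be sketched as a chain of fast checks, each paired with the reason code it would publish. Thresholds here are placeholders, not the real tuning:

```python
import time

def sentinel(bumper_hit, tof_range_m, last_sensor_ts, now=None, stale_s=0.25):
    """Reactive safety check: returns (stop_intent, reason) or (None, None).

    Trigger conditions and thresholds are illustrative assumptions.
    """
    now = time.monotonic() if now is None else now
    stop = {"linear": 0.0, "angular": 0.0}
    if bumper_hit:
        return stop, "BUMPER"
    if tof_range_m is not None and tof_range_m < 0.10:
        return stop, "TOF_HARD_STOP"
    if now - last_sensor_ts > stale_s:
        return stop, "SENSOR_STALE"
    return None, None
```

Publishing the reason alongside the intent is what makes a post-run log readable: you never see a bare stop, only a stop with a cause.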

The Mission layer exists to execute structured behaviors over time. It is step-based, time-aware, and progress-aware, and may explicitly allow or disallow Sentinel override on a per-step basis. Typical mission actions include driving a distance, turning to a heading, wall following, waiting, or performing signaling actions such as buzzers. The Mission layer publishes intent along with explicit step lifecycle events including START and END, and always reports a completion or failure reason.

The RC or Supervisor layer exists to allow immediate human or external control. It has the highest priority when active, requires explicit arming, and is stateless with respect to mission execution. Typical examples include gamepads, teleoperation interfaces, or external supervisory controllers.

The Motion Authority Resolver is the keystone of LICA. There is exactly one authority resolver. It evaluates which layers are eligible to command motion, applies strict priority rules, and selects one and only one active authority. A typical priority ordering is RC when armed and fresh, then Sentinel when fresh and allowed, then Mission when active, and finally NONE when idle. Authority changes are edge-triggered and logged. Every transition is observable and includes context such as mission state and Sentinel reasoning, making behavior auditable after the fact.
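The priority ordering described above can be sketched as a single pure function. The dict-based layer representation and the freshness window are assumptions for the sketch:

```python
def resolve_authority(rc, sentinel, mission, now, freshness_s=0.2):
    """Pick exactly one active authority by strict priority.

    Each input is None or a dict like {"ts": float, ...}. The ordering
    mirrors the text: RC > Sentinel > Mission > NONE.
    """
    def fresh(layer):
        return layer is not None and (now - layer["ts"]) <= freshness_s

    if fresh(rc) and rc.get("armed"):
        return "RC"
    if fresh(sentinel) and (mission is None or mission.get("allow_override", True)):
        return "SENTINEL"
    if mission is not None and mission.get("active"):
        return "MISSION"
    return "NONE"
```

Because this is the only function whose output is allowed to reach the motors, logging its return value on every change gives the edge-triggered, auditable transition record the text describes.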

LICA enforces explicit lifecycle transparency. Every mission step must emit a START event, must run for at least one control cycle, and must emit an END event with a reason. This guarantees that there are no zero-length steps, no invisible transitions, and that full post-run forensic reconstruction is possible from logs alone.
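The lifecycle contract can be enforced with a tiny wrapper that makes the START and END events structurally impossible to skip. The event tuple shape is an assumption:

```python
# Every step logs START, runs, and logs END with a reason --
# even when the step body raises.
def run_step(step_name, step_fn, log):
    log.append(("START", step_name))
    try:
        reason = step_fn() or "COMPLETE"
    except Exception as exc:  # fail safe, but never omit the END event
        reason = f"FAILED:{exc}"
    log.append(("END", step_name, reason))
    return reason

log = []
run_step("drive_1m", lambda: None, log)
```

With this shape, the log alone is enough to reconstruct which step was active at any moment and why it ended, which is the forensic property the paragraph above is after.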

LICA exists to address common failure modes found in subsumption architectures, ad-hoc autonomy stacks, and monolithic planners. It avoids hidden suppression, implicit priority, poor fault tolerance, and non-explainable behavior. In return, LICA provides deterministic control, explicit authority, rich introspection, graceful degradation, and human-readable logs.

LICA systems obey strict design invariants: there is a single motor writer, layers express intent rather than actuation, authority changes are edge-triggered, all interventions are reasoned and logged, all lifecycles are observable, and the system fails safe to STOP. Violating any of these invariants is considered a design bug.

LICA is not a planner, not a behavior tree, and not subsumption. It is a control contract. Many layers may think, but only one may act.

Books & Articles Related to the Ideas Behind LICA

Rodney A. Brooks (1986)

A Robust Layered Control System for a Mobile Robot

IEEE Journal of Robotics and Automation

→ Introduced subsumption and layered control in mobile robots

Rodney A. Brooks (1991)

Intelligence Without Representation

Artificial Intelligence Journal

→ Philosophical foundation for behavior-based, layered robotics

Ronald C. Arkin (1998)

Behavior-Based Robotics

MIT Press

→ Comprehensive treatment of behavior arbitration and layered robot control

Sebastian Thrun, Wolfram Burgard, Dieter Fox (2005)

Probabilistic Robotics

MIT Press

→ Covers perception, decision-making, and layered control in real robots

Hadas Kress-Gazit, Georgios Fainekos, George Pappas (2007)

Where’s Waldo? Sensor-Based Temporal Logic Motion Planning

IEEE ICRA

→ Formal methods for combining high-level intent with reactive constraints

Choset et al. (2005)

Principles of Robot Motion: Theory, Algorithms, and Implementations

MIT Press

→ Hybrid control and planning for mobile robots

R. Alami et al. (1998)

A Multi-Layer Architecture for Autonomous Robot Navigation

IEEE ICRA

→ Explicit three-layer architecture (deliberative, executive, reactive)

Ramadge & Wonham (1987)

Supervisory Control of a Class of Discrete Event Processes

SIAM Journal on Control and Optimization

→ Theoretical basis for safety supervision and override logic

Colledanchise & Ögren (2018)

Behavior Trees in Robotics and AI

CRC Press

→ Modern formalization of behavior selection and priority