Designing how an AI thinks

A behavioural specification for an adaptive AI companion — defining decision logic, state management, and communication rules across 6 user types and a 9-stage lifecycle

AI Systems Design · Behavioural Specification · Clinical Design · State Management · Engineering Handoff

Role

Lead AI Systems Designer

Solo ownership of a computable behavioural specification — from initial architecture through engineering handoff.

Platform

Pura Health — UAE consumer health platform

An AI companion guiding users through nutrition, diagnostics, care plans, and genomic insights.

Deliverable

Behavioural spec — not a prototype

A decision architecture expressed as typed fields, defined enums, and boolean flags an engineering team could implement directly.

Scope

6 user types — 9-stage lifecycle

AI tone, clinical override logic, lifecycle adaptation, and explicit conflict resolution rules for every edge case.

Completed

January 2026

XLSX specification + engineering handoff doc. Collaborators: clinical advisory, product, engineering.

The AI companion at the centre of Pura Health speaks to users constantly. It interprets results, delivers recommendations, and responds to health events in real time. The problem: it had no coherent model for how it should behave.

Tone was inconsistent. Clinical sensitivity was handled ad hoc. There was no logic governing when the system should be reassuring versus urgent, plain-spoken versus technical, empathetic versus data-forward.

My brief was to build that logic — not as a brand guidelines document, but as a computable specification an engineering team could implement directly. The deliverable wasn’t a prototype. It was a decision architecture.

Why Personas Weren’t Enough

The existing approach relied on static user personas. The assumption: if you knew who the user was, you knew how to speak to them.

The problem is that personas flatten context. A user classified as a ‘Health Beginner’ might be anxious today and confident tomorrow. An elderly user might have high clinical literacy but poor technical fluency. A caregiver managing a child’s condition needs completely different framing than someone managing their own.

Personas are useful for team empathy and design storytelling. They are not useful as runtime system inputs.

The core design decision was to replace persona-as-input with a layered attribute model: instead of asking “who is this user?” the system asks “what signals are we detecting right now?”

Before → After

Static personas → Dynamic attributes
Customer type → Decomposed dimensions (literacy, accessibility, role, lifecycle stage)
Reactive response → Rule-governed adaptation

The Architecture

The system evaluates every AI message through six layers, top to bottom. Higher layers override lower ones.

The sequence is not arbitrary — it reflects clinical priority, consistency requirements, and user agency in that order.

1. Clinical Safety (OVERRIDE): detects severity of health events; locks AI tone when clinical risk is present. Key rule: bypasses all other layers when triggered. Non-negotiable.

2. Tone Lock (CONSISTENCY): maintains communication register across a thread once established. Key rule: set after 2–3 exchanges; resets only on explicit request or sustained register shift.

3. User Profile (IDENTITY): persona + health literacy + technical fluency → base tone and vocabulary level. Key rule: literacy controls vocabulary; persona sets default tone; both are stable attributes.

4. Session Context (SITUATION): behavioural mode + lifecycle stage + message mirroring → adjusts base tone. Key rule: mirroring active only during establishment phase; disabled once tone lock engages.

5. UI Overrides (PREFERENCE): explicit user-controlled settings (expert mode, accessibility, language). Key rule: user agency layer; respects declared preferences above inferred ones.

6. Output Modifiers (POLISH): confidence thresholds trigger transparency language; safe defaults apply when uncertain. Key rule: max one transparency phrase per thread; never applied during clinical alerts.
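The precedence model can be sketched as an ordered chain of layer functions, where the first layer to claim the message wins. This is a minimal illustration, not the production logic; all function names, context keys, and tone values here are hypothetical, and the sketch collapses layers 3–5 into a single base-tone lookup for brevity:

```python
def layer_clinical(ctx):
    """Layer 1: lock the register whenever severity reaches 3 or above."""
    return "clinical" if ctx.get("harm_level", 1) >= 3 else None

def layer_tone_lock(ctx):
    """Layer 2: hold the register established earlier in the thread."""
    return ctx.get("locked_register") if ctx.get("tone_locked") else None

def layer_base(ctx):
    """Layers 3-5 (collapsed here): profile, session context, preferences."""
    return ctx.get("base_tone")

# Highest-priority layer first; the first non-None result wins.
LAYERS = [layer_clinical, layer_tone_lock, layer_base]

def resolve_tone(ctx, default="neutral"):
    for layer in LAYERS:
        tone = layer(ctx)
        if tone is not None:
            return tone
    return default
```

The ordering encodes the override rule directly: a Level 4 clinical event returns "clinical" even when a tone lock and user preferences are active, because lower layers are never consulted.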

Clinical Safety as Architecture

The most consequential decision in the system was treating clinical safety not as a feature but as an architectural layer — one that cannot be overridden by user preferences, lifecycle stage, or persona defaults.

The system operates a 5-level severity classification. Levels 1 and 2 are routine — the adaptive layers below handle these normally. At Level 3 and above, the system enters a protected state: tone is locked to a specific clinical register, transparency language is suppressed, and all lower-layer adaptation is bypassed until the event resolves.

This was not a UX decision. It was a clinical governance decision expressed as system architecture. Designing it required close collaboration with Pura’s clinical team to define what ‘concerning’, ‘critical’, and ‘emergency’ mean in the context of each condition the platform monitors.
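The severity gate described above might be encoded roughly as follows. The level labels and field names are illustrative only; the actual spec defines what 'concerning', 'critical', and 'emergency' mean per monitored condition:

```python
# Hypothetical labels for the 5-level severity classification.
SEVERITY_LABELS = {1: "routine", 2: "routine", 3: "concerning",
                   4: "critical", 5: "emergency"}

def clinical_gate(level):
    """Levels 1-2 pass through to the adaptive layers; 3+ enter a
    protected state that lower layers cannot override."""
    protected = level >= 3
    return {
        "label": SEVERITY_LABELS[level],
        "register": "clinical" if protected else None,  # None: adaptive layers decide
        "suppress_transparency": protected,             # no hedging during alerts
        "bypass_adaptation": protected,                 # lower layers skipped
    }
```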

“Good UX — gentle, reassuring, personalised — is the wrong call when someone is experiencing a health emergency.”

David Quill

State Management

The Tone Lock Mechanism

One of the more technically nuanced problems was consistency — how the AI maintains a coherent register across a multi-turn conversation without either rigidly locking into an initial tone or constantly recalibrating in ways the user would experience as erratic.

The solution was a Tone Lock mechanism: a state that activates after a short establishment phase (typically 2–3 exchanges), persists for the duration of a thread, and resets under two specific conditions — an explicit user request to change register, or three or more consecutive messages in a dramatically different style.

This mirrors how skilled communicators actually behave. You adapt early, find the right register, then hold it — adjusting for content, not recalibrating the whole relationship with every message.
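As a small state machine, the mechanism might look like this. Thresholds and method names are assumptions for illustration (the spec states 2–3 establishment exchanges and a three-message drift threshold; this sketch uses three for both):

```python
class ToneLock:
    """Locks register after an establishment phase; resets on explicit
    user request or a sustained shift in the user's register."""
    ESTABLISH_AFTER = 3   # exchanges before the lock engages
    DRIFT_THRESHOLD = 3   # consecutive off-register messages to reset

    def __init__(self):
        self.exchanges = 0
        self.locked = None
        self.drift = 0

    def observe(self, register, explicit_request=None):
        if explicit_request:              # explicit request always wins
            self.locked = explicit_request
            self.drift = 0
            return self.locked
        self.exchanges += 1
        if self.locked is None:           # establishment phase: mirror freely
            if self.exchanges >= self.ESTABLISH_AFTER:
                self.locked = register
            return register
        if register != self.locked:       # count a sustained register shift
            self.drift += 1
            if self.drift >= self.DRIFT_THRESHOLD:
                self.locked = register
                self.drift = 0
        else:
            self.drift = 0
        return self.locked
```

A single off-register message leaves the lock in place; only three in a row, or an explicit request, moves it.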

Explicit Conflict Resolution

A layered system always produces edge cases where signals conflict. Rather than leaving these to be resolved at implementation time (where the wrong call gets made silently), the specification includes an explicit conflict resolution table.

A representative example: a user who writes in simple, plain language but frames questions with formal grammar. A naive system might classify them as low-literacy and simplify vocabulary accordingly. The correct behaviour is to separate the two signals — mirroring controls formality, health literacy controls vocabulary — and treat them independently. A retired cardiologist who texts casually should receive clinical vocabulary, not plain language.

Signal Conflict → Resolution Rule

Casual style × high clinical literacy → Mirroring controls formality; health literacy controls vocabulary. Separate the signals.
Detection confidence < 50% → Default to mid-ground tone. Never guess at persona.
Detection confidence < 30% → Include transparency phrase. Escalate toward human support.
Celebratory lifecycle stage + distress signals → Celebratory tone is never used when distress signals are present, regardless of lifecycle stage.
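Rules like these are simple to encode once made explicit. A sketch under assumed names and values (none of these identifiers come from the actual spec):

```python
def resolve_signals(style, literacy, confidence):
    """Separate the signals, then degrade gracefully as detection
    confidence falls instead of guessing at persona."""
    decision = {
        "formality": style,        # mirroring controls formality
        "vocabulary": literacy,    # health literacy controls vocabulary
        "transparency_phrase": False,
        "escalate_to_human": False,
    }
    if confidence < 0.5:           # mid-ground tone, never guess
        decision["formality"] = "mid-ground"
    if confidence < 0.3:           # surface uncertainty, route to support
        decision["transparency_phrase"] = True
        decision["escalate_to_human"] = True
    return decision

def allow_celebratory(celebratory_stage, distress_detected):
    """Distress signals veto celebratory tone regardless of lifecycle stage."""
    return celebratory_stage and not distress_detected
```

The casually texting cardiologist falls out directly: formality follows their style, vocabulary follows their literacy, and neither signal contaminates the other.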

Engineering Handoff

A behavioural specification is only useful if an engineering team can implement it. The final deliverable included a complete metadata tag schema: every system state, signal, and output variable expressed as typed fields with defined enums, boolean flags, and float confidence scores.

Engineers didn’t need to interpret design intent — they had a typed contract. The schema covered all six layers: harm level as an integer, tone lock as a boolean with a reset trigger string, health literacy as a three-value enum with a confidence float, lifecycle stage as an integer 1–9. Any given AI message should be traceable back to a specific combination of layer states.
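A fragment of what such a typed contract could look like, using the field types the spec describes. Field and enum names here are illustrative, not the actual XLSX schema:

```python
from dataclasses import dataclass
from enum import Enum

class HealthLiteracy(Enum):
    """Three-value enum, paired with a detection-confidence float."""
    LOW = "low"
    MODERATE = "moderate"
    HIGH = "high"

@dataclass
class MessageTags:
    """Layer states attached to a single AI message, making any output
    traceable to a specific combination of states."""
    harm_level: int                  # 1-5 clinical severity
    tone_lock: bool
    tone_lock_reset_trigger: str     # e.g. "explicit_request", "register_shift"
    health_literacy: HealthLiteracy
    literacy_confidence: float       # 0.0-1.0 detection confidence
    lifecycle_stage: int             # 1-9

    def validate(self):
        assert 1 <= self.harm_level <= 5
        assert 0.0 <= self.literacy_confidence <= 1.0
        assert 1 <= self.lifecycle_stage <= 9
        return True
```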

“Designing AI behaviour means writing the rules that determine what the AI does when no one is watching.”

David Quill

Reflection

The most important shift in this project was learning to think in terms of precedence and state rather than scenarios and flows.

Traditional UX design asks: what happens in this situation? Systems design asks: what are the rules that determine what happens in any situation?

Writing a conflict resolution table is a different discipline from writing user journey annotations. You’re not describing a specific path — you’re writing the grammar the system uses to generate any path. Getting that right requires a degree of logical rigour that’s closer to engineering than to conventional design practice.

It also requires accepting that the right output isn’t always the most comforting one. The clinical safety layer exists precisely because good UX — gentle, reassuring, personalised — is the wrong call when someone is experiencing a health emergency.