mod prompt_ir#
Prompt Intermediate Representation (IR) types for the ACG system.
The Prompt IR decomposes LLM conversations into addressable blocks with structural metadata for cache analysis and prompt rewriting. This is deliberately different from the message-oriented `AnnotatedLlmRequest` in core – the IR flattens the hierarchy into a sequence of blocks, each carrying provenance, sensitivity, and stability metadata.
Enums
- enum BlockContentType#
Content type discriminant for a prompt block.
- Text#
Plain text content.
- ToolSchema#
Tool/function definition schema (JSON Schema).
- ToolResult#
Tool/function call result.
- StructuredOutput#
Structured output (e.g., JSON output).
- Image#
Image content (base64 or URL reference).
- enum PromptRole#
Role of a prompt block within the conversation.
- System#
System prompt.
- User#
User message.
- Assistant#
Assistant (model) message.
- Tool#
Tool / function call result.
- enum ProvenanceLabel#
Origin label for a prompt block.
Tracks where the content came from so downstream phases can apply provenance-specific caching and sharing rules.
- System#
System-level instructions (e.g., system prompt).
- Developer#
Developer-authored prompt templates.
- User#
End-user input.
- Tool#
Tool output / function call results.
- Retrieval#
Retrieved context (RAG).
- Memory#
Agent memory / conversation history.
- enum SensitivityLabel#
Sensitivity classification for a prompt block.
Gates downstream sharing decisions. Defaults to `Public` – promotion to `Private` or `Restricted` requires explicit assignment (T-04-02).
- Public#
Content may be shared freely.
- Private#
Content contains private information.
- Restricted#
Content is restricted and must not leave its originating scope.
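The label enums above can be sketched as plain Rust enums. This is a minimal illustrative re-creation, not the module's actual definitions: the `#[default]` attribute reflects the documented `Public` default, while deriving `Ord` to get a Public < Private < Restricted ordering is an assumption added for the example.

```rust
// Illustrative sketch of SensitivityLabel; the Ord derive (for picking the
// most restrictive label) is an assumption, not part of the documented API.
#[derive(Debug, Clone, Copy, PartialEq, Eq, PartialOrd, Ord, Default)]
enum SensitivityLabel {
    #[default]
    Public,     // may be shared freely
    Private,    // contains private information
    Restricted, // must not leave its originating scope
}

fn main() {
    // Promotion requires explicit assignment; the default is Public (T-04-02).
    assert_eq!(SensitivityLabel::default(), SensitivityLabel::Public);

    // With the derived ordering, the most restrictive label in a group of
    // blocks can be found with `max`.
    let labels = [SensitivityLabel::Public, SensitivityLabel::Restricted];
    let effective = labels.iter().copied().max().unwrap();
    assert_eq!(effective, SensitivityLabel::Restricted);
    println!("effective sensitivity: {:?}", effective);
}
```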
Structs and Unions
- struct PromptBlock#
A single addressable block within the Prompt IR.
Each block carries provenance, sensitivity, and content type metadata along with an optional token count. Blocks are sequenced by `sequence_index` within the parent `PromptIR`.
- sequence_index: u32#
Zero-based index in the prompt sequence.
- role: PromptRole#
Conversation role of this block.
- content: String#
Raw content of the block.
- content_type: BlockContentType#
Content type discriminant.
- provenance: ProvenanceLabel#
Origin of the content.
- sensitivity: SensitivityLabel#
Sensitivity classification (defaults to `Public`).
- token_metadata: Option<TokenizationMetadata>#
Optional tokenization metadata.
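Constructing a block looks roughly as follows. The type definitions here are a hypothetical re-creation of the documented fields (with `#[allow(dead_code)]` to keep the sketch compiling standalone); the real definitions live in this module.

```rust
// Hypothetical stand-ins for the documented enums and structs.
#[allow(dead_code)]
#[derive(Debug, PartialEq)]
enum PromptRole { System, User, Assistant, Tool }
#[allow(dead_code)]
#[derive(Debug, PartialEq)]
enum BlockContentType { Text, ToolSchema, ToolResult, StructuredOutput, Image }
#[allow(dead_code)]
#[derive(Debug, PartialEq)]
enum ProvenanceLabel { System, Developer, User, Tool, Retrieval, Memory }
#[allow(dead_code)]
#[derive(Debug, PartialEq)]
enum SensitivityLabel { Public, Private, Restricted }

#[allow(dead_code)]
#[derive(Debug)]
struct TokenizationMetadata { model_family: String, token_count: u32 }

#[allow(dead_code)]
#[derive(Debug)]
struct PromptBlock {
    sequence_index: u32,
    role: PromptRole,
    content: String,
    content_type: BlockContentType,
    provenance: ProvenanceLabel,
    sensitivity: SensitivityLabel,
    token_metadata: Option<TokenizationMetadata>,
}

fn main() {
    // A system-prompt block at position 0, with the documented Public default
    // spelled out explicitly and no tokenization metadata yet.
    let block = PromptBlock {
        sequence_index: 0,
        role: PromptRole::System,
        content: "You are a helpful assistant.".to_string(),
        content_type: BlockContentType::Text,
        provenance: ProvenanceLabel::System,
        sensitivity: SensitivityLabel::Public,
        token_metadata: None,
    };
    assert_eq!(block.sequence_index, 0);
    assert_eq!(block.sensitivity, SensitivityLabel::Public);
    println!("block role: {:?}", block.role);
}
```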
- struct PromptIR#
Prompt Intermediate Representation – the full decomposed prompt.
A `PromptIR` is produced by the IR construction phase (Phase 6) from an `AnnotatedLlmRequest`. It flattens the message hierarchy into an ordered sequence of `PromptBlock`s, each carrying structural metadata for cache analysis and rewriting.
- ir_id: Uuid#
Unique identifier for this IR instance.
- blocks: Vec<PromptBlock>#
Ordered sequence of prompt blocks.
- tool_schema_hashes: Option<Vec<ToolSchemaHash>>#
Hashes of tool schemas active at IR creation time.
- structured_output_schema_id: Option<String>#
Identifier of the structured output schema, if any.
- source_request_hash: Option<String>#
Optional hash of the source `AnnotatedLlmRequest` for traceability.
- created_at: DateTime<Utc>#
When this IR was created.
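The flattening step can be sketched as assigning each block its position via enumeration. This is an illustrative reduction only: `PromptBlockLite` is a hypothetical stand-in (string content instead of the full block, no `Uuid`/`DateTime<Utc>` fields), not the actual Phase 6 implementation.

```rust
// Hypothetical reduced block type for illustrating sequence_index assignment.
struct PromptBlockLite { sequence_index: u32, content: String }

// Flatten an ordered list of message contents into sequenced blocks.
fn flatten(messages: &[&str]) -> Vec<PromptBlockLite> {
    messages
        .iter()
        .enumerate()
        .map(|(i, m)| PromptBlockLite { sequence_index: i as u32, content: m.to_string() })
        .collect()
}

fn main() {
    let blocks = flatten(&["system prompt", "user question"]);
    assert_eq!(blocks.len(), 2);
    assert_eq!(blocks[1].sequence_index, 1);
    assert_eq!(blocks[1].content, "user question");
    println!("flattened {} blocks", blocks.len());
}
```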
- struct SpanId(String)#
Stable span identifier for addressable prompt blocks.
A newtype wrapper around `String` that provides `Hash` and `Eq` so `SpanId` values can be used as keys in `HashMap`/`HashSet`.
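A minimal sketch of the newtype pattern described above: deriving `Hash`, `PartialEq`, and `Eq` is all that is needed to use the wrapper as a `HashMap` key. The span id string used here is a made-up example value.

```rust
use std::collections::HashMap;

// Newtype over String; the derives make it usable as a HashMap/HashSet key.
#[derive(Debug, Clone, Hash, PartialEq, Eq)]
struct SpanId(String);

fn main() {
    let mut token_counts: HashMap<SpanId, u32> = HashMap::new();
    token_counts.insert(SpanId("blk-0".to_string()), 42); // hypothetical span id

    // Lookup works by value equality on the wrapped String.
    assert_eq!(token_counts.get(&SpanId("blk-0".to_string())), Some(&42));
    println!("spans tracked: {}", token_counts.len());
}
```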
- struct TokenizationMetadata#
Token count metadata for a prompt block.
Records the model family and token count so that downstream phases can compute cache-aware token budgets.
- model_family: String#
Model family used for tokenization (e.g., “claude”, “gpt”).
- token_count: u32#
Number of tokens in the block content.
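One downstream use of this metadata is summing per-block token counts into a budget. The `total_tokens` helper below is an illustrative assumption, not part of the module; it treats blocks whose `token_metadata` is `None` as contributing zero.

```rust
// Hypothetical stand-in mirroring the documented fields.
#[allow(dead_code)]
struct TokenizationMetadata { model_family: String, token_count: u32 }

// Sum token counts across blocks; missing metadata contributes nothing.
fn total_tokens(metas: &[Option<TokenizationMetadata>]) -> u32 {
    metas.iter().flatten().map(|m| m.token_count).sum()
}

fn main() {
    let metas = vec![
        Some(TokenizationMetadata { model_family: "claude".into(), token_count: 12 }),
        None, // block not yet tokenized
        Some(TokenizationMetadata { model_family: "claude".into(), token_count: 30 }),
    ];
    assert_eq!(total_tokens(&metas), 42);
    println!("token budget: {}", total_tokens(&metas));
}
```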