[*AIURM]

AIURM Protocol v0.1 (Experimental Draft)

Artificial Intelligence Universal Reference Marker

AIURM is an experimental proposal that establishes a lightweight and universal layer to structure and organize interactions with artificial intelligences. It functions as a system of semantic anchors, transforming scattered inputs into cognitive workflows that are reproducible, auditable, and collaborative, applicable in any context while preserving adaptive flexibility.

Core Concept

AIURM defines a set of instructions that enable structured interaction between humans, AIs, and systems through a system of markers.

It is a fundamentally simple concept that, when combined with the power of current and future Large Language Models (LLMs), can unfold into different levels of complexity and possibilities still being explored.

More than just an actionable marker, it is a convention that brings structure, traceability, and governance to AI outputs, applicable to any interaction and capable of referencing any content in any context: APIs, agents, integrations, and chats.

AIURM can complement existing orchestration and automation solutions by offering an additional layer of structured control over interactions with AI and adding traceability, emerging governance, and semantic meaning to the inputs and outputs generated by the models.

In essence, AIURM bridges the gap between the fluidity of natural language and the need for control and structure in digital processes.

Automatic and Custom Markers

Markers themselves are not new. What AIURM proposes is to systematize their application: they cease to be simple labels and become a structured, semantic convention used together with the AI to organize, track, and reuse interactions in a conscious and deliberate way.

In each response, the AI automatically generates a marker [ *n ], which acts as an immediate and sequential reference to the produced content.
Although not essential to AIURM’s core concept, this marker facilitates tracking the evolution of responses and allows for quick linking between blocks, without requiring user-defined markers.

For context entries or flows that require greater control and precision, users can define custom markers, such as [ *custommarker ]. These are the structural foundation of AIURM interaction, allowing data blocks, logic, or results to be clearly named, reused, and combined in a multidimensional way.

Both automatic and custom markers function as direct references to the generated content, and can be used for multiple purposes: cross-referencing, comparison, combination, analysis, transformation, summarization, export, or even the creation of new markers.

To reduce ambiguity and inference load for both humans and the AI, AIURM defines clear and distinct syntaxes for the two moments of marker interaction: assignment (when a marker is created) and usage (when it is later referenced).

Automatic marker assignment syntax

Always added by the AI at the end of the response, without user intervention:

Question: What was the Big Bang?
Answer: The Big Bang was the rapid expansion that gave rise to the universe. [*2]

Custom marker assignment syntax

For inputs, depending on the context, add the custom marker at the end of the block, after the content.
This type of marker is essential for building complex structured flows, allowing you to create clear semantic references and give meaning to any block of information, whether it contains data, logic, or results.

Any text… [*text_x]

{ "json": true } [*data_x]
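As a rough illustration of how a client or integration might detect assignment markers, the sketch below uses a simple regular expression. The function name and the allowed marker characters are assumptions of this sketch, not part of the protocol.

```python
import re

# Matches an assignment marker such as "[*data_x]" (or "[ *text_x ]") at the
# end of a block. The allowed name characters (letters, digits, underscores)
# are an assumption of this sketch; the protocol does not fix a character set.
ASSIGNMENT_RE = re.compile(r"\[\s*\*\s*([A-Za-z0-9_]+)\s*\]\s*$")

def extract_assignment(block: str):
    """Return (content, marker_name); marker_name is None when no marker is present."""
    match = ASSIGNMENT_RE.search(block)
    if not match:
        return block, None
    return block[: match.start()].rstrip(), match.group(1)
```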

Reference and Reuse

Instead of repeating information or trying to explain previous context, you can reference the original marker. This avoids ambiguity and contextual confusion.
The AI’s behavior when consulting a marker depends on the type of content referenced: whether it is a static value, an instruction/logic, or a result.

Marker reference syntax

This syntax allows you to perform actions on already referenced content or generate new references in a clear and traceable way, always in the natural order of action followed by the marker.

Correct *text_x

Show *10

Analyze *data_x

Summarize *data_x into [*summary_data_x]
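The distinction between usage (*name) and assignment ([*name]) also makes these instructions easy to parse mechanically. The sketch below, with illustrative names, splits an instruction into the markers it references and the new marker it assigns, if any.

```python
import re

USAGE_RE = re.compile(r"\*([A-Za-z0-9_]+)")                  # bare references: *data_x
NEW_MARKER_RE = re.compile(r"\[\s*\*\s*([A-Za-z0-9_]+)\s*\]")  # assignments: [*summary_data_x]

def parse_instruction(text: str):
    """Return (referenced_markers, new_marker_or_None) for an AIURM-style instruction."""
    new = NEW_MARKER_RE.search(text)
    new_marker = new.group(1) if new else None
    # Strip the assignment first so its name is not counted as a reference.
    body = NEW_MARKER_RE.sub("", text)
    return USAGE_RE.findall(body), new_marker
```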

Intention Suffixes

In addition to the concept of markers, AIURM also defines a set of suffixes for predefined interactions with the AI, called Intention Suffixes in the context of the protocol, distinct from the common use of hashtags.

These suffixes determine the granularity of the response generated by the AI. By adding a suffix such as #0, #1, #2, or #3 at the end of an input, it is possible to define the level of detail and type of output desired. This way, it is possible to explicitly control when and to what extent the information will be provided in each response.

Intention Suffixes syntax

Add the suffix to control the intended response level.

#0: Responds ONLY with “Done *n [*m]”, suppressing the result.
#1: Short/concise response.
#2: Intermediate response.
#3: Most detailed response possible.
No #n: AI’s default response for the context.

What was the Big Bang #1
Response: (short, direct answer) [*3]

Show *analyzex #3
Response: (most detailed possible) [*4]

The #0 suffix only suppresses the display of the result. This is useful for silent operations or intermediate steps, and it confirms that the result has been associated with the marker.

Any text… [*text_x] #0
Response: Done *text_x [*5]

{ "json": true } [*data_x] #0
Response: Done *data_x [*6]

*3 #3
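A trailing intention suffix can likewise be detected mechanically. The following sketch (names are illustrative, not part of the protocol) separates a #0 to #3 suffix from the rest of the input.

```python
import re

# A trailing "#0".."#3" is an intention suffix; anything else is plain text.
SUFFIX_RE = re.compile(r"#([0-3])\s*$")

def split_suffix(text: str):
    """Return (text_without_suffix, level) where level is None if no suffix is present."""
    match = SUFFIX_RE.search(text)
    if not match:
        return text, None
    return text[: match.start()].rstrip(), int(match.group(1))
```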

DLR (Data, Logic, Result)

The DLR methodology encourages organizing information under markers in a structured way: each part of a process receives a clear marker, covering the original information (Data), the instruction on what to do with it (Logic), and the final output (Result).

Raw or processed information [*data_x] #0
Logical instructions or algorithms the AI should follow [*logic_x] #0
The output of an action or data processing [*result_x]
Apply *logic_x to *data_x and generate [*result_x]
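To make the DLR pattern concrete, the sketch below simulates it client-side with a plain dictionary. In AIURM itself the model keeps these associations in its context; here an ordinary Python callable stands in for the logic the model would apply, and all names are illustrative.

```python
# Minimal client-side simulation of DLR: data, logic, and results are all
# stored under named markers in one registry.
markers = {}

def assign(name, value):
    """Assignment: bind a value (data or logic) to a marker name."""
    markers[name] = value
    return name

def apply_logic(logic_name, data_name, result_name):
    """Usage: apply the callable under logic_name to the data marker, store the result."""
    result = markers[logic_name](markers[data_name])
    markers[result_name] = result
    return result

assign("data_x", [3, 1, 2])
assign("logic_x", sorted)  # the "logic" here is just a Python function
apply_logic("logic_x", "data_x", "result_x")
```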

AIURM HR Analytics Workflow
An example of a workflow applying the AIURM concept.

The following examples illustrate workflows typically designed for API integration with the LLM, but they can also be perfectly simulated via prompt for the purpose of understanding the concept. The practical result can be seen by following the steps on the Onboarding Page.

Load employee data:
employee data... [*employees] #0

Define performance analysis logic:
performance criteria... [*performance_logic] #0

Define retention logic:
retention criteria... [*retention_logic] #0

Apply the logics:
Apply *performance_logic to *employees [*performance_analysis] #0
Apply *retention_logic to *employees [*retention_analysis] #0

Generate a departmental summary:
Consolidate *performance_analysis and *retention_analysis [*department_summary]

Create a strategic action plan:
Based on *department_summary, generate [*hr_action_plan]

Build an executive dashboard:
Combine *performance_analysis, *retention_analysis, and *hr_action_plan [*executive_dashboard]

Test advanced scenarios:
conservative criteria... [*conservative_criteria] #0
aggressive criteria... [*aggressive_criteria] #0
Apply *conservative_criteria to *employees [*conservative_promotions]
Apply *aggressive_criteria to *employees [*aggressive_promotions]
Compare *conservative_promotions with *aggressive_promotions [*scenario_comparison]

Audit any marker:
show full *aggressive_promotions dependency tree 

Export to JSON:
gen json *aggressive_promotions

AIURM Investment Portfolio Analysis Workflow

This is an example workflow applying the AIURM concept to investment portfolio analysis and management, focusing on the logical layer and auditability.

Load portfolio data:
portfolio data... [*portfolio] #0

Define investment decision logic:
complex investment criteria... [*investment_logic] #0

Apply investment logic:
Apply *investment_logic to *portfolio [*investment_recommendations]

Test advanced scenarios:
conservative criteria... [*conservative_criteria] #0
aggressive criteria... [*aggressive_criteria] #0
Apply *investment_logic to *portfolio using *conservative_criteria [*conservative_results]
Apply *investment_logic to *portfolio using *aggressive_criteria [*aggressive_results]
Compare *conservative_results with *aggressive_results [*scenario_comparison]

Audit and dependencies:
explain reasoning for each result in *investment_recommendations
show full *scenario_comparison dependency tree 

Export:
*investment_recommendations as json

The ability to chain operations, converting the result of one logic into new data for the next, enables the construction of sophisticated and structured flows, driven by the AI itself.

While it is possible to send the entire context as a single large block in one input, this approach is not recommended: it can dilute the model’s attention and makes control and traceability harder.
Ideally, context formation, including data, logic, results, and instructions, should occur in incremental and balanced steps, allowing the AI to infer more efficiently.

Current LLMs are stateless: the context must be resent with each new interaction and the model’s attention kept balanced, so using AIURM requires active management.
The intention suffix #0 should be applied in moderation, and it is recommended to create checkpoints (inputs without #0) whenever the resulting marker is chained across multiple levels in subsequent interactions.

Explicit References

In scenarios that require greater control and traceability, explicit references to markers and their attributes can be used to define comparisons, conditions, or joins in a clear and controlled way. Although modern LLMs can already resolve these relationships in many cases, this practice ensures additional precision when needed.

compare *data_1.id = *data_2.id
filter *data_1.value > 1000 [*filtered_data_1]
calculate *risk_portfolio for *assets where *assets.volatility > 20% [*risk_result]
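As an illustration of how an explicit attribute reference such as filter *data_1.value > 1000 could be interpreted, the sketch below treats a marker as a list of records and *marker.field as a key lookup. The data and names are illustrative assumptions, not part of the protocol.

```python
# Markers held as lists of plain dict records (illustrative data).
markers = {
    "data_1": [
        {"id": 1, "value": 500},
        {"id": 2, "value": 1500},
    ]
}

def filter_marker(source, field, predicate, target):
    """Interpret 'filter *source.field <cond> [*target]': keep rows whose field satisfies predicate."""
    markers[target] = [row for row in markers[source] if predicate(row[field])]
    return markers[target]

# filter *data_1.value > 1000 [*filtered_data_1]
filter_marker("data_1", "value", lambda v: v > 1000, "filtered_data_1")
```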

Potential and Possible Applications

Context Reuse

Markers allow data, logic, and results to be easily queried, analyzed, combined, reused, and assigned to new markers in other contexts.

Clarity and Reduced Ambiguity

By explicitly referencing information through markers, repetition, ambiguity, and the effort of inferring which information is being used are avoided. In essence, AIURM transforms interactions with AI into a structured and traceable record, in which each marker can be queried, analyzed, and reused. In this way, the context window, beyond the linear view, begins to offer a multidimensional perspective.

Governance and Auditing

With a single input, the AI reconstructs the logical path leading to the marker, using the information processed during the interaction.
Commands like show dependency tree *marker reveal the hierarchy and connections between markers.

generate full marker dependency tree in Graphviz/DOT format
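A sketch of what fulfilling that command could look like mechanically: emitting a Graphviz/DOT digraph from a recorded dependency map. The dependency data below is illustrative; in AIURM the AI reconstructs it from the markers used during the interaction.

```python
# Illustrative dependency map: target marker -> markers it was derived from.
dependencies = {
    "performance_analysis": ["performance_logic", "employees"],
    "retention_analysis": ["retention_logic", "employees"],
    "department_summary": ["performance_analysis", "retention_analysis"],
}

def to_dot(deps):
    """Render a dependency map as a Graphviz/DOT digraph (source -> target edges)."""
    lines = ["digraph markers {"]
    for target, sources in deps.items():
        for source in sources:
            lines.append(f'  "{source}" -> "{target}";')
    lines.append("}")
    return "\n".join(lines)

print(to_dot(dependencies))
```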

Why AI Explainability Matters

The ability to explain is directly linked to trust, clarity, and traceability in AI interactions.
Understanding how a result was generated helps interpret, audit, and evolve decisions in a structured way.

Through a reference marker architecture, each piece of data, rule, transformation, and result can be identified and traced.

explain reasoning for *marker
trace source of *result_x
compare *option_a *option_b

Complex reasoning chains can be reused, ensuring consistency across queries.
Analytical layers such as dependency tree, logic trace, and scenario reasoning reveal the full structure of the applied reasoning.
An environment is created where explanation emerges naturally from the organization of the interaction.

Current Context

AIURM, in its experimental draft stage, operates within the capabilities of currently available AI technologies.

The protocol’s instructions are transmitted to the AI through prompts or APIs, requiring the model to interpret and follow them with each new interaction. Current models understand the concept and are able to apply it at different levels, enabling testing, research, and the continuous evolution of the proposal.

AIURM markers are not maintained persistently, which makes it necessary to resend them depending on the context.
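One way a client could manage this resending is to keep markers in a local store and replay the blocks a new instruction references, as in the illustrative sketch below (all names are assumptions of this sketch).

```python
import re

# Local marker store kept by the client across stateless API calls (illustrative).
markers = {
    "data_x": '{ "json": true }',
    "logic_x": "sort ascending by value",
}

def build_context(instruction, store):
    """Prefix an instruction with the stored blocks for every marker it references."""
    referenced = re.findall(r"\*([A-Za-z0-9_]+)", instruction)
    # Replay only markers we actually hold; new assignments (e.g. result_x) are skipped.
    blocks = [f"{store[name]} [*{name}]" for name in referenced if name in store]
    return "\n".join(blocks + [instruction])

prompt = build_context("Apply *logic_x to *data_x [*result_x]", markers)
```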

Conceived as an evolving paradigm, AIURM already anticipates how structured interaction could function in future stateful environments.

Why AIURM as a Minimal Semantic Protocol?

The fundamental premise is that Large Language Models (LLMs) are inherently capable of processing and understanding a wide range of data structures, languages, algorithms, pseudocode, and protocols.

For this reason, considering the cognitive environment, instead of focusing on complex syntaxes (the domain of a DSL), AIURM focuses on structural semantics (the domain of a protocol).
It acts as an elementary logical layer that helps the AI organize itself and communicate in a structured way with humans, agents, and systems.

AIURM transforms the linear context window into a multidimensional referential space.

AIURM symbols:
* # [ ]
Simple and efficient.

Experience AIURM in practice:

Use the step-by-step onboarding for a hands-on introduction.
See the Onboarding page for detailed instructions.