Shared State

AG Grid released version 34.3.0 yesterday, introducing their new AiToolkitModule. This module lets you extract a JSON schema from your column definitions, making it suitable for passing to Hashbrown or any other AI SDK to build natural language controls for your grid. We wrote a guide on how to use it in your apps with Hashbrown. Essentially, the schema it creates describes the grid’s state, and you can let the LLM generate entirely new grid states and apply them to the grid. In my experience, this kind of (state in) → (state out) approach is a common pattern when building conversational experiences on top of Hashbrown.
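
Here’s a minimal sketch of that loop. Everything named here is hypothetical: getGridStateSchema, getGridState, and applyGridState stand in for whatever your grid exposes, and llm.generateObject stands in for any SDK call that returns structured output matching a schema.

async function applyNaturalLanguageRequest(request: string) {
  const schema = getGridStateSchema(); // JSON schema describing the grid's state
  const stateIn = getGridState();      // (state in)

  // Ask the model for an entirely new grid state that satisfies the request
  const stateOut = await llm.generateObject({
    schema,
    messages: [
      { role: 'system', content: 'Produce a new grid state that satisfies the request.' },
      { role: 'user', content: JSON.stringify({ currentState: stateIn, request }) },
    ],
  });

  applyGridState(stateOut);            // (state out)
}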

It’s so common that I’ve been thinking: should Hashbrown come with a set of primitives for connecting state to agents? There’s some precedent for this: AI agent frameworks like LangGraph have built-in state primitives that allow LLMs to read state on demand as part of their workflows. The earliest alphas of Hashbrown had a similar concept, and it was ultimately pulled from our v0.1 release to give us more time to think through the design.

I want to share our most recent designs for sharing state with LLMs ahead of their upcoming release in Hashbrown.

Our Sketch for Sharing State in Angular

In modern Angular, all state is modeled with signals, which can be broadly categorized into two buckets: WritableSignal and its read-only counterpart, Signal. You can create writable signals using signal(…), linkedSignal(…), and model(…), whereas a read-only signal comes from computed(…), input(…), or someSignal.asReadonly().
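
As a quick refresher, here’s how those two buckets look with standard Angular APIs:

import { signal, linkedSignal, computed } from '@angular/core';

const books = signal(['Dune']);                   // WritableSignal<string[]>
const draft = linkedSignal(() => books());        // WritableSignal<string[]>, resets when books changes
const readonlyBooks = books.asReadonly();         // Signal<string[]>
const bookCount = computed(() => books().length); // Signal<number>
// model(…) and input(…) create signals on a component or directive.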

We’ve not settled on a name for this function yet, but imagine an API called something like exposeSignal that wraps the creation of an Angular signal with enough metadata for an LLM to understand what kind of state the signal contains:

// `signal` comes from '@angular/core' and `s` (Hashbrown's schema builder)
// comes from '@hashbrownai/core'; `exposeSignal` itself is the API proposed here.
const exposedBooks = exposeSignal({
  name: 'books',
  description: 'List of favorite book titles',
  schema: s.streaming.array('the list of book titles',
    s.streaming.string('the title of the book'),
  ),
  source: signal([
    'Station Eleven',
    'Dune',
    'Project Hail Mary',
  ]),
});

Let’s break this down:

  1. Name and description give the LLM context about the state contained in the exposed signal.

  2. Schema tells the LLM the structure of the contained state and is essential for letting the LLM write to the signal.

  3. Source is the actual signal we want to expose. Depending on the type of signal we provide here, the LLM can either read the signal or both read and write to it.

What we get back is an exposed signal, which will have the same API surface as whatever kind of signal you pass in.
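
For example, if the API ships as sketched here, the exposedBooks from earlier would read and write just like the WritableSignal it wraps (a sketch, not the final API):

exposedBooks();                       // read: ['Station Eleven', 'Dune', 'Project Hail Mary']
exposedBooks.set(['Dune']);           // write, just like set() on a WritableSignal
exposedBooks.update((books) => [...books, 'Project Hail Mary']); // or update()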

The next part of the flow is connecting our state to our LLM. Let’s say we are building a book research assistant, and we want the LLM to have access to this state in chat. Perhaps uiChatResource lets you pass in a list of exposed signals:

class Chat {
  // Assumes the `exposedBooks` sketch from above is defined as a field on this class
  readonly exposedBooks = exposeSignal({ /* ...same options as above... */ });

  readonly chat = uiChatResource({
    model: 'gpt-5',
    system: `...`,
    components: [...],
    state: [this.exposedBooks],
  });
}

Since exposedBooks wraps a WritableSignal, this will add two new tools under the hood (sketched after the list):

  • readState - takes a union of all state names as an argument, reads the associated signal for that piece of state, and returns its value to the LLM.

  • writeState - takes a union of all state names, each paired with its schema, letting the LLM write new state back to the corresponding signal.
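
Here’s a rough sketch of what those two tools could look like conceptually; this is hypothetical and not Hashbrown’s actual implementation:

// The union of all exposed state names; in our example there's only one.
type StateName = 'books';

function readState(args: { name: StateName }) {
  // Look up the exposed signal by name and return its current value to the LLM
  return exposedBooks();
}

function writeState(args: { name: StateName; value: string[] }) {
  // `value` is validated against the exposed signal's schema before being applied
  exposedBooks.set(args.value);
}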

The LLM can now choose to read our book state on demand, and it can write back to our signal if something in the conversation warrants a state change. The best part? We’ll honor the streaming keywords in the signal’s schema, letting the signal propagate intermediate state changes throughout your application.
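
For instance, assuming the exposed signal emits partial values while the response streams, a plain Angular effect could react to each intermediate update:

// Inside a component or other injection context
effect(() => {
  // Runs on every intermediate update as the book list streams in
  console.log('books so far:', exposedBooks());
});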

There’s still so much in this space to think through, and we’d love your help. Please let us know if you’ve encountered similar situations where you’ve wanted to share app state with an LLM and how you went about solving it. We’d love to collect as many use cases as possible while we continue to work through this design.

Hashbrown v0.4

We’re gearing up to release Hashbrown v0.4. It’s going to be a nice release, with support for Anthropic, OpenAI’s Responses API, deepSignal for Angular, and the first building blocks for our audio support. We’re also entering Portland’s rainy season, which means less conference travel and more commits to the repo. Stay tuned!
