Adding technical report: Resolver #289


Draft · wants to merge 8 commits into base: main
Conversation

mikekamminga (Contributor)

This commit adds the Resolver specification working document which was originally created by the team at Hyma/Tokens Studio as a contribution to the DTCG working group and the evolution of Design Tokens standards and ecosystem.

All open/unresolved comments from the source Google doc — where earlier collaboration took place — have been included in this version as ISSUE blocks throughout the document; they need to be discussed and resolved.

Changes

Introduction of resolver specification

How to Review

Early PR for early internal review


netlify bot commented Jul 30, 2025

Deploy Preview for designtokensorg ready!

🔨 Latest commit: a51e049
🔍 Latest deploy log: https://app.netlify.com/projects/designtokensorg/deploys/68a26fe5ec08d5000814f264
😎 Deploy Preview: https://deploy-preview-289--designtokensorg.netlify.app

* chore: format resolver module into markdown

* chore: run Prettier on CHANGELOG.md
```diff
@@ -72,7 +72,7 @@ <h1>Modules</h1>
<a href="./format/">Format</a>
</li>
<li><a href="./color/">Color</a></li>
<li>Animations (coming soon)</li>
```
Contributor

🤔 I have no idea what happens to animations but in the future I’d love to resume this exploration (I’m no Val Head but I’m interested and have some knowledge here)


drwpow commented Aug 4, 2025

I have 2 PRs here that I’d like to break out into individual reviews, and go section-by-section:

1. #291
2. #292

In each PR I have made suggestions to improve the first 2–3 sections of this module, along with reasons behind each change. Would love thoughts/reviews on each individually to keep this PR less noisy 🙏

- **description** (optional): A description of the resolver's purpose.
- **sets** (required): An array of token sets to be used as the base for resolution.
- **modifiers** (optional): An array of modifiers that can alter or override tokens from the base sets.
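For illustration only — file names are hypothetical and the exact shape of modifier entries is still being debated in this thread — a minimal resolver document combining the three properties might look like:

```jsonc
{
  "description": "Resolves core tokens, with color themes applied as a modifier",
  "sets": [
    // base sets, merged in order
    { "values": ["primitives.tokens.json", "semantic.tokens.json"] }
  ],
  "modifiers": [
    // a modifier that can override tokens from the base sets
    { "name": "color-scheme", "values": ["themes/light.tokens.json"] }
  ]
}
```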


I feel that a version property could also be useful. Better have it and never change it than relive the same experience of unversioned changes in the DTCG format.
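As a sketch of that suggestion (the property name, placement, and semver value are assumptions, not spec text):

```jsonc
{
  "version": "1.0.0",
  "sets": [{ "values": ["core.tokens.json"] }]
}
```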

Contributor

That’s really smart. Will take this to a vote but I suspect others would be in favor

Contributor

I also agree. Let's add a version property.

Contributor

Hm should we just start out at 1.0.0 for now? I need to look back at #265 but IIRC most folks were generally in favor of plain old semver

Contributor

I'm actually a huge believer in calver for things like this; happy to bike shed it out. I think it takes a lot of stress off of deciding whether a release is major/minor/etc and worrying about violating some user expectation in that regard.

Comment on lines 7 to 8
- **sets** (required): An array of token sets to be used as the base for resolution.
- **modifiers** (optional): An array of modifiers that can alter or override tokens from the base sets.
@Sidnioulz commented Aug 5, 2025

I'm of the opinion that there should not be two distinct properties here. When I was first introduced to the resolver spec, coming from a multi-product, multi-platform context, I did not understand at all that I was looking at resolving tokens for a single platform/product pair, and started modelling my different groups using sets.

At the end of the day, sets are ordered chains of files/named file groups that get loaded and merged together. Modifiers are the exact same thing, except they only get loaded when a contextual condition has a specific value.

I've experimented with using a format that intentionally conflates both non-modal sets and modal sets:

```jsonc
{
  "name": "Flat tokenset chaining example",
  "data": [
    // Flat syntax works
    "primitive/brand.json",
    "primitive/functional.json",

    // Arbitrary groupings with a name work
    {
      "name": "semantic",
      "values": [ // NOTE: I would prefer naming this sources/sets/tokens
        "semantic/spacing.json",
        "semantic/typography.json"
      ]
    },

    // Modes can be treated with the same object syntax as set groups
    {
      "name": "color",

      // Good opportunity to document what type of data modes will override
      // since sets and modes use the same syntax
      "values": ["semantic/color.json"],

      // And finally, we include conditional sets that depend on the named
      // mode above having a specific value
      "modes": {
        "light": ["themes/light.json"],
        "light-high-contrast": ["themes/light.json", "themes/light-hc.json"],
        "dark": ["themes/dark.json"],
        "dark-high-contrast": ["themes/dark.json", "themes/dark-hc.json"],
        "dark-dimmed": ["themes/dark.json", "themes/dark-dimmed.json"]
      }
    },

    // Modes get chained. If two modes modify the same token for some reason,
    // the last one is retained.
    {
      "name": "size",
      "modes": {
        "mobile": ["size-modes/small.json"],
        "tablet": ["size-modes/medium.json"],
        "point-of-sale": ["size-modes/medium.json", "size-modes/pos-adjustments.json"],
        "desktop": ["size-modes/large.json"]
      }
    },

    // Cannot be accidentally overridden by modes: placed last in the chain
    "legal/tokens-for-some-random-law-that-requires-specific-banners-or-logos.json"
  ]
}
```

My personal, individual opinion is that:

- This is easier to learn, as it does not require attributing a specific meaning to "sets"
- It treats "modes" as metadata on a group of sets, which I feel will better align with potential future extensions: any form of contextual information on a tokenset is stored at the same level, and modes are no different from other such information that emerges in the future
- It affords more possibilities for prioritising: I can add a non-modal token set at the end of my resolver chain that, e.g.:
  - ensures specific brand primitives are not overridden
  - or hotfixes an issue present across multiple modes that would require more thorough investigation to push to production

There are plenty of variations that can be made on the syntax for modes. What I really want to discuss first is not the exact above syntax, but rather, does it make more sense to separate sets and modes or to group them into a unified chaining syntax? I would greatly appreciate hearing everyone's opinion on this aspect first.
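As a non-normative sketch of the chaining semantics described in this proposal (ordered entries merged left to right, modal entries contributing only the files of the selected mode, last write wins) — assuming a `load_file` callback that reads and parses a token file; all names here are illustrative, not spec:

```python
def resolve_chain(data, selected_modes, load_file):
    """Merge an ordered chain of token sets; later entries win on conflict."""
    resolved = {}
    for entry in data:
        if isinstance(entry, str):
            files = [entry]                        # flat syntax: a bare file path
        else:
            files = list(entry.get("values", []))  # named group of files
            modes = entry.get("modes")
            if modes is not None:
                # Only the files of the currently selected mode are loaded.
                chosen = selected_modes.get(entry["name"])
                files += modes.get(chosen, [])
        for path in files:
            resolved.update(load_file(path))       # last write wins
    return resolved

# Hypothetical in-memory "files" standing in for token file contents:
fake_fs = {
    "primitive/brand.json": {"brand.primary": "#f00"},
    "themes/dark.json": {"color.bg": "#000"},
    "themes/dark-dimmed.json": {"color.bg": "#111"},
}
tokens = resolve_chain(
    [
        "primitive/brand.json",
        {"name": "color", "values": [], "modes": {
            "dark-dimmed": ["themes/dark.json", "themes/dark-dimmed.json"],
        }},
    ],
    {"color": "dark-dimmed"},
    fake_fs.__getitem__,
)
# dark-dimmed chains dark.json then dark-dimmed.json; the last value is retained
```
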

Contributor

All good points. Yeah I’ll raise all this to an internal discussion to see if we want to reflect this.

Or, even if not, writing something, somewhere (even if it’s not a part of the spec) that explains why we didn’t go down this path etc

Each token set in the **sets** array is an object with the following properties:

- **name** (optional): An identifier for the set.
- **values** (required): An array of references to token files or inline token definitions. A reference MUST be a string containing a path to a token file. An inline token definition MUST be a JSON object containing a valid design token structure.
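Per these rules, a single set could mix both forms; a hypothetical example (file name invented, inline token borrowed from the example later in this spec):

```jsonc
{
  "name": "core",
  "values": [
    "primitives/color.tokens.json",                  // reference: path to a token file
    { "inline-token": { "$value": "some-value" } }   // inline token definition
  ]
}
```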


Generally speaking, I feel the `values` property would be more aptly named `sources` or `tokens`. I would not suggest using `files`, as one could imagine extending this spec to URI schemes, but I feel that `values` doesn't really convey what is being fetched.

Using different words from DTCG properties could slightly reduce cognitive load for folks working with tokens or implementing token tools; whenever the concept of value appears, it would be individual token stuff, whereas different vocabulary would hint at tokenset management stuff.

Contributor

Will also be raising this topic for internal review this week & next 🙂. I went through a full edit pass, and `name` and `value` are almost meaningless because they are used in 4+ contexts; I was struggling to explain specific parts of the spec in unique terms. I think we can adjust the syntax so we can get more precise language here.

Comment on lines 30 to 32
```json
{
  "inline-token": { "$value": "some-value" }
}
```


nit: Depending on what other future sources of tokens are imagined, it could be useful to request a more predictable top-level syntax for inline token data, e.g.:

```json
{
  "type": "inline",
  "data": { "inline-token": "..." }
}
```

Considering that inline token data would be an arbitrary data dictionary, it would be impossible to reserve keywords later down the line without introducing breaking changes.

This point can be safely ignored if the spec ends up being versioned though.
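To illustrate why the discriminated form helps — a sketch of the proposal above, not spec text — a loader can dispatch on the `type` tag and reject unknown tags, leaving room for future source kinds without ambiguity:

```python
def load_source(source):
    """Dispatch a resolver source entry on its discriminator (hypothetical)."""
    if isinstance(source, str):
        return ("file", source)            # today: a plain path string
    kind = source.get("type")
    if kind == "inline":
        return ("inline", source["data"])  # inline token dictionary
    # Unknown tags fail loudly instead of being mistaken for token data.
    raise ValueError(f"unknown source type: {kind!r}")
```
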

Contributor

I think there are some related points here around a string, in some cases, pointing to a filepath. And in others it’s an arbitrary string. I also want to improve this concept as well—really clear, really extensible type discrimination. I’m not a fan of datatypes being handled in radically different ways just because of where it appears in a structure (which could change!). Instead I really like explicit “boundaries” exactly like you’re proposing.

This is also a “raise discussion with group” thing we’ll want to get on the same page internally first, and then bring it back to this GitHub for another public review next month


<aside class="issue">

It is recommended to use `.tokens.json` as the file extension for token files to align with the Design Tokens Format Specification naming conventions. This helps reinforce that these files should contain valid DTCG token structures rather than arbitrary JSON data.


Question: should resolver files and token files use the same extension? Wouldn't `.resolver.json` or a variation thereof help clarify which file is the resolver and which ones are the data?

It might also be worth going over the examples and using `.tokens.json` to help give this convention more visibility?

Contributor

Good idea!

Contributor

Yup, will add that line. I also think `.tokens.json` specifically should NOT be used for resolver files, because resolvers have incompatible syntax.

Comment on lines 55 to 57
```json
"meta": {
  "alias": "spacing"
}
```


Nit: other examples?

Something bothers me about this that I can't quite express. I certainly would not expect any form of automatic namespacing, but I'm also not sure if this is the best example for namespacing with an opt-in alias.

In the provided example, it feels to me that aliasing would require token file authors to be aware, when producing tokens, of how they would eventually be aliased. Ultimately, I wonder if doing that wouldn't complicate tokenset composability in a white-label context. I would love to see more examples that don't necessarily impact authoring practices.


Tracking aliases throughout token lifecycles

One use case I've had in the past was to prefix an old tokenset with a legacy key so it could still be used in design tools as a new tokenset was being deployed. But I needed my token transformation tools to be aware of this legacy namespace and to try and resolve it (with a custom SD parser) so that code could continue to use old tokens without breaking changes.

In that context, a namespacing feature could possibly have been useful but only if tools could be aware of what has been namespaced to what, and could revert the process when relevant (yes, that means potential conflicts).
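As one reading of the quoted `meta.alias` example (my interpretation, not normative), namespacing would simply re-root a modifier's tokens under the alias key:

```python
def apply_alias(tokens, alias):
    """Re-root a token dictionary under an alias namespace (hypothetical)."""
    return {alias: dict(tokens)}

namespaced = apply_alias({"sm": {"$value": "4px"}, "lg": {"$value": "16px"}}, "spacing")
# Tokens are now addressed as spacing.sm and spacing.lg.
```
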

Contributor

Yeah, this behavior specifically contradicts some other examples, and is one of the older concepts from the spec. We’ll double-check whether we want to keep the namespacing behavior.

Contributor

Looking at the examples here, I'm landing in the anti-namespacing camp too. I think that having the resolver manage naming (specifically the prefix) creates a duplication of responsibility with naming at the token level.


Resulting in tokens accessible via `spacing.sm` and `spacing.lg`.
@Sidnioulz commented Aug 5, 2025

Consider the following examples.

Say I define "responsive" or "device" or "screen" or "size" modes. I may want to affect my `spacing.layout.*` tokens but not my `spacing.component.*` ones. I may legitimately have a distinction between these types of spacing tokens because some are affected by end users' default font size (an accessibility setting) and others are not.

Or I may use this "responsive" mode to affect my `typography.*.fontSize` tokens, but not the font families, etc. In parallel, I may want my `typography.*.fontFamily` tokens to be modified by an accessibility feature for people with dyslexia, or my font features (tabular numerals) for people with dyspraxia.

A mandatory auto-namespacing system built on the assumption that a mode affects a whole top-level tree of tokens, and that no two modes have legitimate reasons to affect subtrees within the same top-level tree, would be a painful constraint. It would make the spec less expressive and break legitimate use cases.


<aside class="issue">

**Namespace vs Alias Terminology:** Should the `alias` property be renamed to `namespace` to avoid confusion with token aliases (references to other tokens)?
Contributor

Big +1 on using `namespace`, because it is clearer and avoids the naming conflict.


**Namespace vs Alias Terminology:** Should the `alias` property be renamed to `namespace` to avoid confusion with token aliases (references to other tokens)?

**Redundancy with Modifier Names:** The `meta.alias` property may be redundant since modifiers already have a `name` property that could serve the same namespacing purpose.
Contributor

I can see an author wanting to name a modifier but namespace it differently; `name` might be useful in authoring contexts, whereas `namespace` might be used in a parsing/resolving context.


<aside class="ednote">

We need to decide if the resolver spec also follows the `$name`, `$values`, etc. naming convention.</aside>
Contributor

In the token files, we expect reserved properties to get a `$` prefix if the object is 'open', i.e. it can contain arbitrary elements. While I think we should keep the dollar-sign prefix throughout, the current spec allows un-prefixed keys in 'closed' objects, i.e. objects with a strictly enforced set of properties — so this isn't necessarily inconsistent ... just potentially confusing :)
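A hypothetical side-by-side (not a real file, just contrasting the two object kinds):

```jsonc
{
  // "Open" object: arbitrary token names live beside reserved keys,
  // so reserved keys carry the $ prefix to avoid collisions.
  "tokenGroup": {
    "$description": "a group",
    "any-token-name": { "$value": "#ff0000" }
  },

  // "Closed" object (e.g. a resolver set): only a fixed set of keys is
  // allowed, so un-prefixed names cannot collide with anything.
  "resolverSet": { "name": "core", "values": ["core.tokens.json"] }
}
```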

7 participants