# Identity-First Model: Preserving Lazy Semantics Without Custom Events
In build and configuration systems, the same tension appears again and again:
* you want an **observable model** that consumers can subscribe to using something like `all(...)`
* you do **not** want that subscription to destroy laziness
* and you would prefer to avoid introducing a custom event bus, event ordering, and a separate lifecycle model
A practical way to solve this is to **separate identity from computed state**.
## Core idea
The idea is simple:
* one object is responsible only for **identity** and minimal selection metadata
* the actual aggregate content is obtained through **separate API calls**
* those API calls may be lazy, expensive, cached, provider-based, or computed on demand
So instead of one “fat” object, we get two layers.
### 1. Identity layer
Lightweight objects that:
* are cheap to create
* are effectively immutable
* are safe to observe eagerly
* work well as keys and subscription points
### 2. State / aggregate access layer
Separate APIs that:
* resolve or compute content from identity
* do heavy work only when needed
* preserve lazy semantics where that actually matters
This is especially useful when a collection must be replayable, but should not drag expensive materialization along with it.
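As a minimal sketch in plain Java (the names here are illustrative, not part of any real API), the two layers might look like this: identities are plain values, and content is registered as deferred computation behind a separate lookup.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Supplier;

// Layer 1: identity only -- cheap, immutable, safe to observe eagerly.
record VariantId(String name) {}

// Layer 2: aggregate content resolved on demand through a separate API.
final class VariantContents {
    private final Map<VariantId, Supplier<String>> sources = new ConcurrentHashMap<>();

    void register(VariantId id, Supplier<String> expensive) {
        sources.put(id, expensive); // nothing is computed yet
    }

    // Heavy computation runs only when a consumer actually asks.
    String resolve(VariantId id) {
        return sources.get(id).get();
    }
}
```

Observers can safely enumerate every `VariantId` without ever touching `resolve`, which is the whole point of the split.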
---
## The problem this solves
Consider a typical Gradle-like scenario.
An adapter wants to subscribe to a collection:
```java
projections.getProjections().all(projection -> {
    // ...
});
```
If `projection` is a heavy object that already contains:
* computed bindings
* providers of source sets
* derived state
* partially materialized objects
then `all(...)` starts forcing things that were supposed to stay lazy.
Typical symptoms:
* lazy semantics are weakened or broken
* coupling increases
* plugin application order starts to matter
* custom events begin to look tempting: `onCreated`, `onResolved`, `onMaterialized`
The problem is not that `all(...)` is bad.
The problem is that **too much meaning has been packed into the observable object**.
---
## The solution: observe identity, not state
If the observable collection contains only identity objects, the picture changes.
For example:
```java
public interface SourceSetProjection extends Named {
}
```
This object contains only identity:
* `name`
* perhaps a small amount of selection metadata
* but not heavy aggregate state
Now this subscription:
```java
projections.getProjections().all(projection -> {
    // ...
});
```
is no longer dangerous.
It eagerly materializes only cheap keys, not the full aggregate graph.
The actual content is requested separately:
```java
Set<VariantLayerBinding> bindings =
        projections.getBindings(projection.getName());
NamedDomainObjectProvider<GenericSourceSet> sourceSet =
        materializer.getSourceSet(projection.getName());
```
Those calls may remain:
* lazy
* computed
* cached
* tied to runtime lifecycle
* delegated to a dedicated materializer
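A dedicated materializer with memoization can be sketched in plain Java as follows. This is a minimal stand-in, assuming a name-keyed factory; in the real model, Gradle's `NamedDomainObjectProvider` would play the role of the deferred handle.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Hypothetical materializer sketch: heavy objects are built at most once
// per name, and only when a consumer first asks for them.
final class LazyMaterializer<T> {
    private final Map<String, T> cache = new ConcurrentHashMap<>();
    private final Function<String, T> factory;

    LazyMaterializer(Function<String, T> factory) {
        this.factory = factory;
    }

    T get(String name) {
        // computeIfAbsent defers the expensive work until first request
        // and caches the result for every later caller.
        return cache.computeIfAbsent(name, factory);
    }
}
```

Keeping this behind a single owner means laziness is a property of the materializer, not something every observer has to preserve by discipline.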
---
## Example
### A problematic design
Suppose we define projection like this:
```java
public interface SourceSetProjection extends Named {
    Set<VariantLayerBinding> getBindings();
    NamedDomainObjectProvider<GenericSourceSet> getSourceSet();
}
```
This is problematic because `SourceSetProjection` is no longer just identity. It is already close to an aggregate.
It mixes:
* symbolic identity
* relation data
* runtime references into a foreign domain
Subscribing via `all(...)` now risks pulling in much more than intended.
The type says “projection”, but internally it already carries half the system.
---
### A cleaner design
Split responsibilities instead:
```java
public interface SourceSetProjection extends Named {
}
```
```java
public interface SourceSetProjections {
    NamedDomainObjectCollection<SourceSetProjection> getProjections();
    Set<VariantLayerBinding> getBindings(String sourceSetName);
}
```
```java
public interface SourceSetMaterializer {
    NamedDomainObjectProvider<GenericSourceSet> getSourceSet(String sourceSetName);
}
```
Now the adapter flow looks like this:
```java
projections.getProjections().all(projection -> {
    Set<VariantLayerBinding> bindings =
            projections.getBindings(projection.getName());
    NamedDomainObjectProvider<GenericSourceSet> sourceSet =
            materializer.getSourceSet(projection.getName());
    // apply adapter-specific policy
});
```
What changed:
* replayable subscription is preserved
* eager observation is acceptable because `SourceSetProjection` is cheap
* expensive and computed state has moved to separate APIs
* materialization remains under the control of a single owner
---
## Why this is often better than events
When identity and state are mixed together, people quickly start inventing events:
* `projectionCreated`
* `projectionResolved`
* `sourceSetAvailable`
* `sourceSetMaterialized`
That usually happens because it becomes important to know **when exactly** an object is “ready enough”.
If the observable object contains only identity, and heavy state is obtained separately, then many of those events become unnecessary.
The architecture becomes calmer:
* an **identity registry**
* a **lookup API** for relations
* a **lazy materialization API** for heavy objects
Instead of saying:
> “When this object becomes sufficiently ready, I will react.”
you can say:
> “I can observe identity immediately, and ask for the expensive state only when I actually need it.”
This is easier to reason about, easier to test, and usually easier to evolve.
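The replayable part of this picture needs no event bus at all. A minimal sketch of an identity registry (a hypothetical stand-in for Gradle's `DomainObjectCollection.all(...)` semantics, single-threaded for brevity) shows why: new subscribers first replay existing identities, then observe future ones, so "created" events never need to exist as a separate concept.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Sketch of a replayable identity registry: subscribing via all(...)
// sees every past identity immediately and every future one on add.
final class IdentityRegistry<T> {
    private final List<T> elements = new ArrayList<>();
    private final List<Consumer<T>> subscribers = new ArrayList<>();

    void all(Consumer<T> action) {
        subscribers.add(action);
        elements.forEach(action); // replay what already exists
    }

    void add(T identity) {
        elements.add(identity);
        subscribers.forEach(s -> s.accept(identity)); // notify going forward
    }
}
```

Because the elements are cheap identities, this eager replay is harmless; the expensive state stays behind the lookup and materialization APIs.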
---
## What should live inside an identity object
An identity object does not have to be completely empty.
It may carry **selection metadata**, as long as that metadata is:
* cheap
* stable
* not expensive to initialize
* not turning the object into an aggregate
Typical examples:
* `id`
* `name`
* `kind`
* `type`
* domain key
What should usually stay out:
* computed aggregate content
* runtime references to foreign domains
* lazy providers of heavy objects
* derived state that can trigger premature materialization
A useful rule of thumb:
**An identity object contains selection metadata; aggregate content is obtained separately.**
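In plain Java, such an identity object is naturally a small value type. The sketch below uses a record with illustrative fields (the names are hypothetical, not from any real API); note that validation can be eager precisely because everything here is cheap.

```java
// Hypothetical identity object: a name plus cheap selection metadata.
// No providers, no bindings, no references into foreign domains.
record ProjectionId(String name, String kind) {
    ProjectionId {
        // Eager validation is fine here: identity data is cheap and stable.
        if (name == null || name.isBlank()) {
            throw new IllegalArgumentException("name must be non-empty");
        }
    }
}
```

Value semantics make such objects safe to use as map keys and subscription points, which is exactly what the observable collection needs.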
---
## Why this works well with `all(...)`
`all(...)` weakens laziness only if the observed objects are themselves heavy or stateful.
If the observed objects are:
* cheap
* identity-only
* effectively immutable
then eager observation is usually acceptable.
So the real principle is:
**Eager observation of identity is often harmless.
Eager observation of computed state is not.**
That is why `all(...)` can be perfectly fine for collections of:
* `Variant`
* `Layer`
* `Role`
* `SourceSetProjection`
as long as those objects stay on the identity side of the boundary.
---
## Where this principle is especially useful
This approach is particularly effective when:
* there is replayable observation via `all(...)`
* identity objects are cheap and stable
* aggregate content may be expensive
* symbolic model and runtime model should remain separate
* you want to avoid building a custom event system
For Gradle-like models, this is often a very natural fit.
---
## When it is unnecessary
If an object is:
* small
* cheap
* and already fully represents its useful content
then splitting identity and state may be overengineering.
So this is not a universal rule. It is a tool to use when there is real tension between:
* key
* state
* computation
* runtime reference
---
## Practical conclusion
The principle can be summarized like this:
1. **Identity objects** hold only identity and cheap selection metadata.
2. **Aggregate content** is not stored inside them, but retrieved through separate API calls.
3. Those API calls may perform:
* lazy resolution
* caching
* heavy computation
* materialization on demand
4. This makes it possible to use replayable mechanisms such as `all(...)` without destroying laziness where laziness actually matters.
This is how you can combine:
* simple observation
* a clean model
* no custom event bus
* lazy materialization of heavy state
---
# Short design note version
A concise version of the same principle:
> Use identity objects as cheap, observable keys.
> Keep expensive or computed aggregate content out of them.
> Resolve that content through separate APIs on demand.
> This allows replayable observation (`all(...)`) without forcing premature materialization, and often removes the need for a custom event model.