This may seem like an odd question. The whole point of the effort that led to the current "deconstructible classes" idea was to bring the goodies of records to regular Java classes: inferred equals, hashCode, and toString; inferred constructor and field accessors; and, of course, pattern-matching.
From its inception, there was a certain fascination with pattern-matching being the "dual" of object-creation. For a time, it looked like this might lead to a future where custom patterns could be defined symmetrically to their object-creation counterparts, and subject to the same degrees of freedom: static patterns, instance patterns, abstract patterns, overloading, overriding, generics, varargs...
This design direction turned out to be a false lead. Brian Goetz cited high complexity for low gain: In the Java team's experimentation, they found no examples of classes wanting more than a single deconstructor pattern.
This realization clearly fed into the "carrier classes" proposal. Classes would be allowed to include a "state description" - a list of component types and names - in the class header.
class Point(int x, int y) {
    ...
}
Like a knockoff interface, the state description would require the class to implement a corresponding accessor method for each component listed. In return, the state description would double as the class's single deconstructor pattern, and the language would infer implementations of equals, hashCode, and toString based on the component accessors. In addition, it would infer accessors for suitable fields tagged with a "component" keyword, and infer a compact constructor that assigns to those fields.
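For context, records already provide exactly this kind of inference from a state description alone - it is what the proposal was trying to generalize. A minimal illustration (plain standard Java, nothing proposal-specific):

```java
public class RecordInferenceDemo {
    // The record header is a complete state description; the canonical
    // constructor, component accessors, equals, hashCode, and toString
    // are all inferred from it.
    public record Point(int x, int y) {}

    public static void main(String[] args) {
        Point p = new Point(1, 2);
        System.out.println(p);                         // prints "Point[x=1, y=2]"
        System.out.println(p.equals(new Point(1, 2))); // prints "true"
        System.out.println(p.x() + ", " + p.y());      // prints "1, 2"
    }
}
```

The carrier-classes proposal wanted the same inference for arbitrary classes, where the header can no longer be assumed to describe all of the state.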
This proposal was shortly disavowed by its authors. Though the state description still seemed valuable for inferring a deconstructor pattern from component accessors, the commitment to inferring implementations for Object methods posed issues. If equals, hashCode, and toString were not declared, did they not exist, or were they inferred? If they were declared, but depended in part on non-component fields, how could that be expressed without falling back to a completely hand-written implementation? These issues betrayed some wishful thinking: that the state description could be a complete description of the class's state, even though that property is only guaranteed for records, and this design was specifically intended for arbitrary classes.
And this is how "deconstructible classes" came about. They look just like "carrier classes" did, minus the inferred Object methods (and leaving the "component" tag / inferred accessors in limbo).
Let's look at what deconstructible classes buy us:
class Point(int x, int y) {
    private final int x, y;
    private final int min, max; // Derived data

    public Point(int x, int y) {
        this.x = x;
        this.y = y;
        this.min = Math.min(x, y);
        this.max = Math.max(x, y);
    }

    public int x() { return x; }
    public int y() { return y; }
    public int min() { return min; }
    public int max() { return max; }

    public boolean equals(Object o) { return o instanceof Point p && x == p.x && y == p.y; }
    public int hashCode() { return Objects.hash(x, y); }
    public String toString() { return "Point[x=" + x + ", y=" + y + "]"; }
}
Point p = new Point(1, 2);
// Deconstruction (syntax from "JEP draft: Enhanced Local Variable Declarations")
Point(int a, int b) = p;
// Reconstruction (syntax from "Updates to Derived Record Creation")
Point q = p.new(x:y+1);
Now compare to an approach we can already write today:
class Point {
    private final int x, y;
    private final int min, max; // Derived data

    public Point(int x, int y) {
        this.x = x;
        this.y = y;
        this.min = Math.min(x, y);
        this.max = Math.max(x, y);
    }

    public int x() { return x; }
    public int y() { return y; }
    public int min() { return min; }
    public int max() { return max; }

    public record Parts(int x, int y) {
        public Point build() { return new Point(x, y); } // projects Parts -> Point
    }
    public Parts parts() { return new Parts(x, y); } // projects Point -> Parts

    public boolean equals(Object o) { return o instanceof Point p && parts().equals(p.parts()); }
    public int hashCode() { return parts().hashCode(); }
    public String toString() { return "Point[" + componentsString(parts(), "=", ", ") + "]"; }
}
Point p = new Point(1, 2);
// Deconstruction (syntax from "JEP draft: Enhanced Local Variable Declarations")
Point.Parts(int a, int b) = p.parts();
// Reconstruction (syntax from "Updates to Derived Record Creation")
Point q = p.parts().new(x:y+1).build();
// A working implementation - ideally would be given "special treatment" to implement efficiently for all records.
// (Requires java.lang.reflect.InvocationTargetException.)
static String componentsString(Record r, String associator, String delimiter) {
    StringBuilder s = new StringBuilder();
    String delim = "";
    for (var rc : r.getClass().getRecordComponents()) {
        try {
            s.append(delim).append(rc.getName()).append(associator).append(rc.getAccessor().invoke(r));
            delim = delimiter;
        }
        catch (IllegalAccessException | InvocationTargetException e) {
            throw new RuntimeException(e);
        }
    }
    return s.toString();
}
But wait! If only records (directly) support reconstruction, and a class can project into different record types, there is no canonical way to reconstruct a class. This actually poses a problem for marshaling. But if we go back to how marshaling was originally presented... how about we just designate a canonical projection?
@Marshaller public Parts parts() { return new Parts(x, y); }
The annotation tells our serialization tooling which (no-args, record-returning) method to use to destructure the class. To reconstruct the class, the tooling can use the same trick that deconstructible classes were considering: search for a constructor on the class that has the same signature as the record.
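A sketch of what such tooling could do with plain reflection. The @Marshaller annotation, the lookup rules, and the method names here are all hypothetical - this is one way the idea could work, not an existing API:

```java
import java.lang.annotation.*;
import java.lang.reflect.*;
import java.util.Arrays;

// Hypothetical annotation marking the canonical record projection.
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
@interface Marshaller {}

public class MarshalDemo {
    public static class Point {
        private final int x, y;
        public Point(int x, int y) { this.x = x; this.y = y; }
        public record Parts(int x, int y) {}
        @Marshaller public Parts parts() { return new Parts(x, y); }
        @Override public String toString() { return "Point(" + x + ", " + y + ")"; }
    }

    // Destructure: find the no-args, record-returning method tagged @Marshaller.
    public static Record destructure(Object o) throws ReflectiveOperationException {
        for (Method m : o.getClass().getMethods()) {
            if (m.isAnnotationPresent(Marshaller.class)
                    && m.getParameterCount() == 0
                    && m.getReturnType().isRecord()) {
                return (Record) m.invoke(o);
            }
        }
        throw new IllegalArgumentException("no @Marshaller method on " + o.getClass());
    }

    // Reconstruct: find a constructor on the class whose signature matches
    // the record's component types, and feed it the component values.
    public static <T> T reconstruct(Class<T> type, Record parts) throws ReflectiveOperationException {
        RecordComponent[] rcs = parts.getClass().getRecordComponents();
        Class<?>[] sig = Arrays.stream(rcs).map(RecordComponent::getType).toArray(Class<?>[]::new);
        Object[] args = new Object[rcs.length];
        for (int i = 0; i < rcs.length; i++) {
            args[i] = rcs[i].getAccessor().invoke(parts);
        }
        return type.getConstructor(sig).newInstance(args);
    }

    public static void main(String[] args) throws Exception {
        Record parts = destructure(new Point(1, 2));
        Point back = reconstruct(Point.class, parts);
        System.out.println(back); // prints "Point(1, 2)"
    }
}
```

The constructor search in reconstruct is the same trick the deconstructible-classes design was considering for matching a deconstructor to a constructor.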
There is one serious drawback to using records as the medium for destructuring classes. The indirection required to project to the record brutalizes nested pattern matching.
switch (triangle) {
    // Deconstructible classes:
    // (Note this is also an unconditional pattern - see "JEP draft: Enhanced Local Variable Declarations")
    case Triangle(Point(var x1, var y1), Point(var x2, var y2), Point(var x3, var y3)) -> { ... }

    // Record indirection:
    // (On top of the noise, the 'when'-clause needed to invoke the projection makes this pattern no longer unconditional)
    case Triangle(Point p1, Point p2, Point p3)
            when p1.parts() instanceof Point.Parts(var x1, var y1)
              && p2.parts() instanceof Point.Parts(var x2, var y2)
              && p3.parts() instanceof Point.Parts(var x3, var y3) -> { ... }
}
This could be viewed as a side-effect of there no longer being a language-known canonical deconstructor for the class. Upgrading our extra-lingual @Marshaller annotation to some more language-approved keyword might be enough to let us write the second case exactly like the first (i.e., spelling "Point" rather than "Point.Parts"). Taken to its logical conclusion, this would close the loop on the other differences in usage from "deconstructible classes" - we've essentially just moved the deconstructor from "a special signature in the class header" to "an ordained method in the class body", with the added bonus that the components are now bundled up in a record that we can leverage to implement the Object methods.