In my previous post on consistent interfaces, I ended by saying that an important part of what makes an interface I useful is some equivalence relation such that for any input x and any conforming implementations I₁, I₂, we have I₁(x) ≍ I₂(x). This lets the implementation of an interface vary while the input stays fixed, and we still expect to get something “very similar” out. This is not your typical definition of an interface, because most languages with type-classes/interfaces don’t let you implement the same interface for the same type multiple times. I think it is worth considering a language that supports this in a principled fashion, because too much application logic is tied to algorithmic implementation details.
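To make that concrete, here is a minimal Haskell sketch. Haskell won’t let us write two instances of one class for the same type, so I encode the interface as an explicit dictionary; the names SortImpl, impl1, impl2, and agree are made up for illustration, and the equivalence relation here is simply equality on outputs.

```haskell
import Data.List (sort)

-- The "interface" as an explicit dictionary, sidestepping the
-- one-instance-per-type restriction of type classes.
newtype SortImpl a = SortImpl { runSort :: [a] -> [a] }

impl1 :: Ord a => SortImpl a
impl1 = SortImpl sort               -- the library's merge sort

impl2 :: Ord a => SortImpl a
impl2 = SortImpl (foldr insert [])  -- a naive insertion sort
  where
    insert x ys = takeWhile (< x) ys ++ [x] ++ dropWhile (< x) ys

-- The claim I₁(x) ≍ I₂(x): for any input, both implementations land
-- in the same equivalence class (here, ≍ is just ==).
agree :: Ord a => [a] -> Bool
agree xs = runSort impl1 xs == runSort impl2 xs
```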

From another angle, we can look at the behavior of I when we vary the input within the same concrete type. For x, y :: T, what can (or should) we say about I₁(x) □ I₁(y)? How about I₁(x) □ I₂(y)? Both of these look pretty familiar – they are ordinary type judgments on the return value of the interface. If we assume refinement types and the like, we can loosen “type judgment” to “predicate on the return value”.
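A small sketch of this view, with a made-up interface Summarize: whatever values x and y take, the judgments on the results are judgments about Int, and a refinement-style predicate can be approximated in plain Haskell with a smart constructor.

```haskell
-- Hypothetical interface, for illustration only.
class Summarize a where
  summarize :: a -> Int

-- For x, y :: T, both summarize x and summarize y are judged :: Int,
-- so any relation □ we pick is a relation on Int.
bothSummaries :: Summarize a => a -> a -> (Int, Int)
bothSummaries x y = (summarize x, summarize y)

-- Without real refinement types, a newtype plus smart constructor
-- stands in for "a predicate on the return value".
newtype NonNegative = NonNegative Int
  deriving Show

refine :: Int -> Maybe NonNegative
refine n
  | n >= 0    = Just (NonNegative n)
  | otherwise = Nothing
```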

Lastly, we can look at the behavior of I when we have two different concrete types. For x :: T₁, p :: T₂, what can (or should) we say about I₁(x) □ I₁(p)? How about I₁(x) □ I₂(p)? This is trickier, and it really claws at what it means to be an interface. Why do we ever have some operation I for which we are okay squinting a little and considering two things “very similar”? The benefit is pretty clear – we get to write other functions in terms of these generalized operations, and they work on a whole class of things – but what makes a class okay?
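For a concrete (if humble) illustration using a real class: once x and p come from different types, the only way to relate the results is through whatever the shared interface exposes.

```haskell
-- x :: T₁ and p :: T₂ can only be compared through the interface's
-- common output; for Show, that output is String.
relateViaShow :: (Show a, Show b) => a -> b -> (String, String)
relateViaShow x p = (show x, show p)

-- ghci> relateViaShow (42 :: Int) True
-- ("42","True")
```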

Interfaces bridge two types by generalizing the similarities across a class of things. We can go all the way up the lattice (loosely speaking, some sort of semantic lattice, though a type lattice suffices for illustration) and say that any two things are equivalent – this is not very useful. And staying at the very bottom of the lattice isn’t useful either, because then we don’t have interfaces at all – taken to the extreme, you would have to define addition separately for each and every integer, madness! Somewhere in the middle, we seek commonalities that capture some notion of “useful similarity”.
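The two useless extremes are easy to write down in Haskell; Anything, addOneAndOne, and double are all hypothetical names for illustration.

```haskell
{-# LANGUAGE FlexibleInstances #-}

-- Top of the lattice: every type qualifies, so the interface can
-- promise nothing at all.
class Anything a
instance Anything a

-- Bottom of the lattice: no generalization whatsoever; one
-- definition per case, an "addition for each and every integer".
addOneAndOne :: Int
addOneAndOne = 1 + 1

-- Somewhere in the middle: Num names a similarity worth having.
double :: Num a => a -> a
double x = x + x
```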

The similarities you see depend on your point of view and assumptions. Similarities can also be fragile things – one request from a new client might break your assumptions – and this is all too common when building software for the real world (business rules are full of exceptions, and similarities don’t mix well with exceptions). Can we measure how far your assumptions can be stretched before your whole model breaks? Can we measure how many things are in your equivalence class to begin with (an indicator of how many more places you can apply your assumptions)? Can we build an “automated refactorer” that, given some breaking change to your similarities, derives a new pair (or set) of interfaces that accommodates the change? I ask these both in a philosophical mode of “let’s keep this in mind and think it over while we fall asleep” and a mathematical one of “no really, what would that measure be?”. I think these would be useful things to know, because they might tell us when to give up the ghost and stop chasing the “ultimate abstraction” that programmers are so fond of.
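I don’t have those measures, but the cheapest empirical stand-in I know of is property-based testing: state the claimed similarity and let generated inputs hunt for the place where it breaks. A sketch using QuickCheck, reusing impl1 and impl2 from the first snippet:

```haskell
import Test.QuickCheck (quickCheck)

-- Hammer the claimed equivalence with generated inputs; the first
-- counterexample is precisely a point where the assumption snapped.
prop_implsAgree :: [Int] -> Bool
prop_implsAgree xs = runSort impl1 xs == runSort impl2 xs

-- ghci> quickCheck prop_implsAgree
-- +++ OK, passed 100 tests.
```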

One last thing on my mind, which might become another post: while abstractions can be powerful, they also require people to learn them, and the higher into the abstraction clouds you go, the easier it is to get lost. Given a goal of using programs to aid human understanding, can we reasonably always seek a higher abstraction that gives us more bang for the buck but makes the program harder to reason about? I think computers could be of great help here: we should be able to view the same program from many points on the abstraction ladder (in an automated fashion, of course) – a choose-your-own-adventure for programs (the same goes for syntax, word choice, etc.).
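As a tiny taste of what that ladder could look like, here is the same computation written at three rungs; a tool that moves between such renderings automatically is the thing I am wishing for.

```haskell
-- Rung 1: explicit recursion, everything spelled out.
total1 :: [Int] -> Int
total1 []     = 0
total1 (x:xs) = x + total1 xs

-- Rung 2: the recursion scheme named and reused.
total2 :: [Int] -> Int
total2 = foldr (+) 0

-- Rung 3: the structure abstracted away entirely.
total3 :: [Int] -> Int
total3 = sum
```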