Covariance and contravariance in your types
23 Feb 2016

When creating parameterised types, you have control over how those types can be passed around. These nuances are referred to as variance, and Scala allows you to explicitly nominate how this works in your own classes.
An excellent explanation of these terms can be found here; I've reproduced the three main points for this article:
That is, if `A` and `B` are types, `f` is a type transformation, and `≤` the subtype relation (i.e. `A ≤ B` means that `A` is a subtype of `B`), we have:

- `f` is covariant if `A ≤ B` implies that `f(A) ≤ f(B)`
- `f` is contravariant if `A ≤ B` implies that `f(B) ≤ f(A)`
- `f` is invariant if neither of the above holds
Invariant
Invariant parameter types ensure that you can only pass a `MyContainer[Int]` to `def fn(x: MyContainer[Int])`. The guarantee is that the contained type, whenever it's accessed, is always exactly that type.
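A minimal sketch of such a container (the name `MyInvariant` follows the source; the members are illustrative):

```scala
// An invariant container: no + or - on T, so MyInvariant[Int]
// is unrelated to MyInvariant[Any].
class MyInvariant[T](private var value: T) {
  def get: T = value                    // T in output position
  def set(v: T): Unit = { value = v }   // T in input position
}
```

Because `T` appears in both an input position (`set`) and an output position (`get`), the compiler would reject a `+` or `-` annotation here; invariance is the only option.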
This guarantees the type of `T` whenever we go to work on it.
You can see here that a good case for invariance is mutable data.
To show the error case, we define a `show` function specialising to `MyInvariant[Any]` and then try to use it:
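A sketch of that failure; the error in the comment is the compiler's usual type-mismatch complaint for invariant parameters:

```scala
// Illustrative invariant container (no variance annotation on T).
class MyInvariant[T](var value: T)

// show is specialised to MyInvariant[Any].
def show(c: MyInvariant[Any]): String = c.value.toString

val ints = new MyInvariant(5)  // inferred as MyInvariant[Int]

// show(ints)
//   error: type mismatch;
//     found:    MyInvariant[Int]
//     required: MyInvariant[Any]
```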
Covariant
A covariant parameter type lets you pass something more specific than what's asked for. You pass these sorts of types to functions that generalise their inner type access. You declare covariance by decorating the type parameter with a `+`.
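A sketch of a covariant container (the name `MyCovariant` is illustrative):

```scala
// A covariant container: the + means MyCovariant[Int] is a
// subtype of MyCovariant[Any]. T may only appear in output
// (return) positions.
class MyCovariant[+T](val value: T) {
  def get: T = value
}
```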
Then your function to generalise over this type:
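Such a function might look like this (again assuming the illustrative `MyCovariant`):

```scala
class MyCovariant[+T](val value: T)

// Generalises over the contained type: because of covariance,
// any MyCovariant[T] is acceptable where MyCovariant[Any] is expected.
def show(c: MyCovariant[Any]): String = c.value.toString

show(new MyCovariant(5))        // accepts MyCovariant[Int]
show(new MyCovariant("hello"))  // accepts MyCovariant[String]
```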
Covariance is a good case for read-only scenarios.
Contravariant
Contravariance is declared by decorating the type parameter with a `-`. It's useful in write-only situations.
We write functions specialised to a type, but keep the type parameter in write-only positions:
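A sketch of a contravariant writer (`MyWriter` and `writeInts` are illustrative names):

```scala
// A contravariant writer: the - means MyWriter[Any] is a subtype
// of MyWriter[Int]. T may only appear in input (parameter) positions.
class MyWriter[-T] {
  def write(value: T): Unit = println(value)
}

// Expects a writer of Ints, but a more general MyWriter[Any]
// can be substituted thanks to contravariance.
def writeInts(w: MyWriter[Int]): Unit = w.write(42)

writeInts(new MyWriter[Any])  // prints 42
```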
Rules
When designing types, the following rules are very important when dealing with type parameterisation.
- Mutable containers should be invariant
- Immutable containers should be covariant
- Transformation inputs should be contravariant
- Transformation outputs should be covariant
Modeling a function call
Armed with this information, we can generalise function execution into the following type:
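Following the rules above, the input is contravariant and the output covariant, which is the same shape as Scala's built-in `Function1[-T1, +R]`. A sketch, with `MyFunction` as an illustrative name:

```scala
// Input type I is contravariant, output type O is covariant,
// matching the "inputs contravariant, outputs covariant" rules above.
trait MyFunction[-I, +O] {
  def apply(input: I): O
}
```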
Defining this trait allows us to generalise the computation of an input into an output, like the following:
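For example (the `describe` computation is illustrative):

```scala
trait MyFunction[-I, +O] {
  def apply(input: I): O
}

// A concrete computation from Int to String.
val describe = new MyFunction[Int, String] {
  def apply(input: Int): String = s"value: $input"
}

describe(10)  // "value: 10"
```

Because `I` is contravariant and `O` covariant, a `MyFunction[Any, String]` can stand in wherever a `MyFunction[Int, Any]` is expected, which is exactly what makes function types composable.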