Lately, I've been reading about and implementing bidirectional type checking and inference with higher-rank polymorphism from Dunfield and Krishnaswami. It uses a very clever notion of an ordered context to represent the current scope alongside all of the known and unknown (temporary) type variables. This mixes unification and type analysis in a way that is both quite simple and conducive to good error messages: at the point an error arises, you've basically got all of the information you need at hand already. The technique is credited to Gundry, McBride, and McKinna and appears to be a trick McBride developed while working on Epigram.
What exactly is the big deal here?
One thing that can really wreck obvious-looking type inference relations is when they're nastily non-functional and assume an oracle. The most standard issue arises when inferring a binder where we have to guess a type for the bound term variable (or a definition for the bound type variable) in order to check the body with that knowledge in context. Since ascribing the types of bound variables defeats half the purpose of type inference, this really leads to trouble.
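To make the problem concrete, here's a minimal sketch (the names and datatypes are my own, not from the paper) showing where a naive synthesizer gets stuck: to type the body of a lambda, we'd have to already know the argument's type.

```haskell
-- A toy expression language and a naive type synthesizer.
data Type = TInt | TArr Type Type deriving (Eq, Show)
data Expr = Lit Int | Var String | Lam String Expr | App Expr Expr

-- Synthesize a type from an environment of known variable types.
infer :: [(String, Type)] -> Expr -> Maybe Type
infer _   (Lit _)   = Just TInt
infer env (Var x)   = lookup x env
infer env (App f a) = do
  TArr dom cod <- infer env f
  argTy        <- infer env a
  if argTy == dom then Just cod else Nothing
infer _   (Lam _ _) = Nothing
  -- Stuck: we would need an oracle to guess the binder's type
  -- before we can analyze the body. This is exactly the guessing
  -- problem described above.
```

Requiring an annotation like `\(x :: Int) -> x` would unstick the `Lam` case, but ascribing every binder is what inference is supposed to spare us.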
Normally, type inference proceeds by collecting a bag of constraints (equalities or subtype/instantiation relations encountered during checking) and then solving them all at the end. The program is inferable exactly when this unification problem succeeds (is decidable and has a solution).
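The "solve them all at the end" phase can be sketched as first-order unification over collected constraints. This is my own simplified illustration (it omits the occurs check for brevity, so it is not production-safe):

```haskell
import Data.Maybe (fromMaybe)

-- Types with metavariables standing in for unknowns.
data Ty = Meta Int | IntTy | Arr Ty Ty deriving (Eq, Show)

type Constraint = (Ty, Ty)   -- an equality demanded during checking
type Subst = [(Int, Ty)]     -- solutions for metavariables

-- Apply a substitution to a type.
apply :: Subst -> Ty -> Ty
apply s (Meta n)  = fromMaybe (Meta n) (lookup n s)
apply s (Arr a b) = Arr (apply s a) (apply s b)
apply _ IntTy     = IntTy

-- Solve the whole bag of constraints at once.
unify :: [Constraint] -> Maybe Subst
unify [] = Just []
unify ((a, b) : rest)
  | a == b = unify rest
unify ((Meta n, t) : rest) = extend n t rest
unify ((t, Meta n) : rest) = extend n t rest
unify ((Arr a b, Arr c d) : rest) = unify ((a, c) : (b, d) : rest)
unify _ = Nothing  -- e.g. IntTy against Arr: a type error, detected
                   -- here, far from the program point that caused it

-- Record a solution and push it through the remaining constraints.
extend :: Int -> Ty -> [Constraint] -> Maybe Subst
extend n t rest = do
  s <- unify [ (apply [(n, t)] l, apply [(n, t)] r) | (l, r) <- rest ]
  Just ((n, apply s t) : s)
```

Note that by the time `unify` reports the mismatch, the term that generated the offending constraint is long gone, which is precisely the error-message problem the next paragraph describes.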
Unfortunately, this means that inference happens in phases of analysis and unification. Unification is where most type errors are detected, but at this point we've lost the context of the analysis problem.
The novelty of ordered contexts is to arrange declarations in the typing context so that each new declaration is nested in the scope of the prior ones, and to augment the context not only with variable declarations but also with variable equations and unknowns. As analysis proceeds, whenever a point is reached where an oracle is needed we just create a new, fresh variable to stand in for the answer and leave it as an unknown in the context. Continuing the analysis, we will eventually solve for that unknown and can then use the resulting equation to immediately substitute away the placeholder variable, as if we had always known the right answer. Better, it so happens that you can arrange things to solve these variables without backtracking.
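A rough sketch of the idea (invented names, heavily simplified from the paper's formalism): the context is an ordered list whose entries are term-variable declarations, unsolved existentials, and solved existentials, and solving an unknown is an in-place update that later gets applied as a substitution.

```haskell
-- Types may mention existential (unknown) variables.
data Ty = UnitTy | ExVar Int | ArrTy Ty Ty deriving (Eq, Show)

-- Entries in the ordered context; each entry scopes over later ones.
data Entry
  = VarDecl String Ty   -- a term variable with its known type
  | Unsolved Int        -- an existential awaiting its answer
  | Solved Int Ty       -- an existential with its discovered answer
  deriving (Eq, Show)

type Context = [Entry]

-- Solve an unsolved existential in place, preserving the order.
solve :: Int -> Ty -> Context -> Context
solve n t = map step
  where
    step (Unsolved m) | m == n = Solved n t
    step e                     = e

-- Applying the context as a substitution replaces every solved
-- existential with its solution, as if we had always known it.
applyCtx :: Context -> Ty -> Ty
applyCtx ctx (ExVar n) =
  case [t | Solved m t <- ctx, m == n] of
    (t : _) -> applyCtx ctx t
    []      -> ExVar n
applyCtx ctx (ArrTy a b) = ArrTy (applyCtx ctx a) (applyCtx ctx b)
applyCtx _   UnitTy      = UnitTy
```

The ordering is what keeps solutions well-scoped: a solved existential may only mention entries that appear before it, which is part of why the real algorithm never needs to backtrack.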
Ultimately, it's a clever trick which gels very nicely with bidirectional type checking. When failures occur in this kind of system there is an enormous amount of information describing the exact context for the failure. This makes it feel very easy to both write nicer error messages and also to support a rich type hole design.
I'm working my way toward understanding how the MLF type system works. The bidirectional system of Dunfield and Krishnaswami lives in a particular sweet spot of power without sacrificing the ability to obviously interpret the inferred types within System F. However, it falls short of full impredicativity, something that interests me. MLF provides most of these benefits while capturing full impredicativity (complete first-class polymorphism). The only downside is that MLF types are not System F types but instead an extension of them.
But that's to come later.
Dunfield, J., & Krishnaswami, N. R. (2013). Complete and easy bidirectional typechecking for higher-rank polymorphism. In Proceedings of the 18th ACM SIGPLAN international conference on Functional programming - ICFP ’13 (p. 429). New York, New York, USA: ACM Press. http://doi.org/10.1145/2500365.2500582
Gundry, A., McBride, C., & McKinna, J. (2010). Type inference in context. In Proceedings of the third ACM SIGPLAN workshop on Mathematically structured functional programming - MSFP ’10 (p. 43). New York, New York, USA: ACM Press. http://doi.org/10.1145/1863597.1863608