A new programming language called Polarity offers a novel solution to a classic problem in software design: how to build systems that are both correct and easy to extend over time. The language, developed by researchers from Delft University of Technology, University of Kent, and University of Tübingen, treats two fundamental programming paradigms—functional and object-oriented styles—symmetrically, allowing developers to switch between them as needed. This approach addresses what's known as the expression problem, a trade-off between extending a type with new operations versus extending it with new constructors, which has traditionally forced programmers to choose one style over the other. Polarity's design enables moving between representations algorithmically through global program transformations called defunctionalization and refunctionalization, providing flexibility previously unavailable in dependently typed languages.
The core innovation lies in Polarity's symmetric handling of data types (used in functional programming) and codata types (used in object-oriented programming). For example, a set of natural numbers can be implemented as a data type with constructors like Nil and Cons, where adding a new operation like checking whether the set is empty is straightforward through pattern matching. Alternatively, it can be implemented as a codata type with destructors like .insert and .contains, where adding a new constructor like Union is easier. The paper demonstrates that Polarity allows free movement between these representations, so developers can work with whichever is more convenient for the task at hand. This symmetry is maintained through unique labels on comatches (the introduction forms for codata values) and matches, ensuring that judgmental equality is preserved during transformations.
To make Polarity usable in practice, the researchers extended it with implicit arguments, a feature common in state-of-the-art dependently typed languages. Without implicit arguments, programs can become verbose, as seen in the paper's example where a set type specialized to natural numbers requires providing type parameters to each constructor. By marking arguments as implicit, developers can omit them at use sites, and the system infers them automatically using a unification algorithm. The paper provides a complete algorithmic description of the type system backing Polarity, including rules for reduction semantics, conversion checking, and a unification algorithm that covers arbitrary inductive and coinductive types. This algorithm is essential for handling dependent pattern matching and solving metavariables introduced for implicit arguments.
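To illustrate the role of metavariables, here is a minimal first-order unification sketch in Python. It is an assumption-laden toy, not the paper's algorithm (which handles dependent types and arbitrary codata types): `Meta` stands for an omitted implicit argument, and `unify` tries to solve it against a concrete type.

```python
# Toy first-order unification for solving metavariables, in the spirit
# of inferring omitted implicit arguments. Hypothetical term
# representation; the paper's algorithm is considerably richer.
from dataclasses import dataclass

@dataclass(frozen=True)
class Meta:            # a metavariable, e.g. an omitted implicit argument
    name: str

@dataclass(frozen=True)
class Ctor:            # an applied constructor, e.g. List(Nat)
    name: str
    args: tuple

def walk(t, subst):
    # Follow the substitution until we hit an unsolved meta or a Ctor.
    while isinstance(t, Meta) and t in subst:
        t = subst[t]
    return t

def occurs(m, t, subst):
    # Occurs check: refuse cyclic solutions like A := F(A).
    t = walk(t, subst)
    if isinstance(t, Meta):
        return t == m
    return any(occurs(m, a, subst) for a in t.args)

def unify(a, b, subst):
    # Returns an extended substitution on success, None on failure.
    a, b = walk(a, subst), walk(b, subst)
    if a == b:
        return subst
    if isinstance(a, Meta):
        if occurs(a, b, subst):
            return None
        return {**subst, a: b}
    if isinstance(b, Meta):
        return unify(b, a, subst)
    if a.name != b.name or len(a.args) != len(b.args):
        return None                      # constructor clash
    for x, y in zip(a.args, b.args):
        subst = unify(x, y, subst)
        if subst is None:
            return None
    return subst
```

For example, unifying `List(?A)` with `List(Nat)` solves the metavariable `?A` to `Nat`, which is the essence of how an omitted implicit type argument gets filled in.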
The paper's evaluations show that Polarity's type inference algorithm successfully checks programs with user-defined dependent data and codata types, supporting local pattern matches with motives as well as local copattern matches. The unification algorithm, described comprehensively for the first time for arbitrary codata types, respects the language's symmetry and avoids known sources of non-termination, such as eta-equality for codata types. The paper references a work-in-progress implementation available at polarity-lang.github.io, indicating practical feasibility. The system handles examples like proving properties through self parameters in codata types, where a destructor's return type can refer to the object itself, as demonstrated with the insert_non_empty property for sets.
This work has significant implications for software maintenance and extensibility, key concerns in real-world programming. By bridging the functional and object-oriented paradigms, Polarity could influence the design of future programming languages, especially those aiming for formal verification through dependent types. The paper's detailed account of the unification algorithm and its design decisions can serve as a blueprint for other dependently typed languages that support inductive and coinductive types symmetrically. For everyday developers, this means more flexible tools for building robust systems that adapt to changing requirements without sacrificing correctness.
However, the paper acknowledges several limitations. The language does not yet have a suitable termination measure for definitions or a positivity criterion for types, and no meta-theoretical properties are proven about the calculus. The researchers avoid known sources of non-termination and conjecture that type-checking will terminate given a terminating input program, but this remains hypothetical. Additionally, the system uses the Type-in-Type axiom, which is known to be inconsistent, though this is not a concern given the lack of termination checks. The unification algorithm may not always find a most general unifier, as higher-order unification is undecidable in general, and the implementation does not yet include features like lowering or eta-rules for codata types. Future work is needed to combine metavariable unification with de- and refunctionalization transformations and to prove the soundness of the type inference algorithms.
About the Author
Guilherme A.
Former dentist (MD) from Brazil, 41 years old, husband, and AI enthusiast. In 2020, he transitioned from a decade-long career in dentistry to pursue his passion for technology, entrepreneurship, and helping others grow.