There seems to be a persistent misunderstanding of what type systems are, even among people who make otherwise excellent arguments about ways in which mainstream type systems are deficient.
Specifically, I often see “types are bad; I want X instead” where X is a form of typing.
If someone says “We don’t need types, we just need automatic static analyses integrated with the language”, I don’t know how to convince them that what they’re describing *is* in fact a type system, and that their notion of “type” is far narrower than what types actually encompass.
It’s like asking “Why monads? Why not just mark functions as pure/impure?” Well, you totally can! That’s a simple form of effect typing.
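For example, Haskell already works this way: a function’s type says whether it can perform effects. A minimal sketch (the names here are just illustrative):

```haskell
greet :: String -> String      -- no IO in the type: guaranteed effect-free
greet name = "Hello, " ++ name

main :: IO ()                  -- IO in the type: may perform side effects
main = putStrLn (greet "world")

-- greet getLine   -- rejected: an IO String is not a String
```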
But once you *have* that, you end up wanting user-defined and first-class effects, effect polymorphism, and integration with other type system features…
First, monads give you that, although you can certainly get those things in other ways (see: algebraic effects à la Koka).
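Here’s a rough sketch of that in mtl-style Haskell, assuming the mtl package; `MonadLogger` is a made-up class for illustration, not a library API. The point is that a user-defined effect is just a type class, and code can be polymorphic over whichever monad provides it:

```haskell
{-# LANGUAGE FlexibleContexts #-}
import Control.Monad.State   -- from the mtl package

-- A user-defined effect, expressed as an ordinary type class:
class Monad m => MonadLogger m where
  logMsg :: String -> m ()

-- Effect-polymorphic code: runs in *any* monad offering both effects.
tick :: (MonadState Int m, MonadLogger m) => m Int
tick = do
  n <- get
  logMsg ("count = " ++ show n)
  put (n + 1)
  pure n
```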
Second and more importantly, if you integrate effects & types under a common logic, either by interoperation or encoding, they’re not meaningfully distinct.
Any analysis, once deeply integrated with the rest of the language, is part of the type system. The type system is the meta-logic of the language, not just the rules about value representation that people first think of when they hear “type”; that narrow association is exactly where the misconception comes from.
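As a hedged illustration, here’s a tiny “resource state analysis” written directly as types, using a phantom type parameter; the handle API is invented for the example, not a real library:

```haskell
{-# LANGUAGE DataKinds, KindSignatures #-}

-- The "analysis" (is this handle still open?) lives in the type.
data HandleState = Open | Closed

newtype Handle (s :: HandleState) = Handle FilePath

open :: FilePath -> Handle 'Open
open = Handle

close :: Handle 'Open -> Handle 'Closed
close (Handle path) = Handle path

readLine :: Handle 'Open -> IO String
readLine _ = pure "stub line"    -- placeholder body; real I/O omitted

-- readLine (close (open "log.txt"))  -- rejected: a closed handle can't be read
```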
This might sound uselessly general. What is a type system, really, if it encompasses all of this? Static & dynamic, intrinsic & extrinsic, representational & abstract, monomorphic & polymorphic, values, side effects, memory, security, proofs, and and and…?
To which I say: now you get it! Type theory is BIG. It’s a whole field. There’s an incredible amount of depth to it, and we’ve barely scratched the surface.
Thinking that types are *only* about values & representations is like thinking math is only about calculating & numbers.
If someone says “most of my errors aren’t type errors”, that tells me they’re thinking of types as purely representational—distinguishing int, float, pointer. Helpful, but not *that* helpful—especially if you already have memory safety, so a mixup won’t cause nasal demons anyway.
The real value of types is in incorporating more & more nonrepresentational things that look increasingly like “analyses”, encoding intent about data & code to help make more errors *into* type errors. You can do it statically, dynamically, or both! They needn’t be warring camps.
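A small sketch of what “encoding intent” can mean, with invented names: two values with the same representation but different meanings get different types, so mixing them up becomes a compile-time error:

```haskell
-- Same machine representation (an Int), different intent.
newtype UserId  = UserId  Int deriving (Eq, Show)
newtype OrderId = OrderId Int deriving (Eq, Show)

lookupUser :: UserId -> Maybe String
lookupUser (UserId 1) = Just "ada"
lookupUser _          = Nothing

-- lookupUser (OrderId 1)   -- rejected at compile time: the intent mismatch
--                          -- is now a type error, not a silent bug.
```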
And types can do so much more than tell you you’re wrong. They can write code for you!
Tools like generics and boilerplate deriving (like DerivingVia in Haskell) can save you from writing all kinds of tedious mechanical code—and the computer won’t make mistakes doing it.
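A hedged sketch of DerivingVia, assuming GHC 8.6 or later; `Score` is an invented example type. The compiler writes the `Semigroup`/`Monoid` instances by reusing the ones for `Sum Int`:

```haskell
{-# LANGUAGE DerivingVia #-}
import Data.Monoid (Sum)

-- No hand-written instances: summing behavior is borrowed from Sum Int.
newtype Score = Score Int
  deriving (Semigroup, Monoid) via Sum Int

total :: [Score] -> Score
total = mconcat
```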
Since types provide so much more value by writing code *for* me than by just checking *my* code, I don’t even want global type inference, where the compiler deduces types from completely unannotated terms—I want *code* inference, and I don’t mind writing some types to get it.