The discussion I’ve seen about the objectivity of 2+2=4 misses what I think is the most interesting part. What interests me is that there are many ways we could interpret 2+2=4, and we made a decision to treat one as default and the others as special cases that need to be noted.
This choice doesn’t really, in the long run, affect what math is possible, because we can always introduce ways to talk about other perspectives as needed. But it does influence how we interpret results, and which perspectives seem more natural as opposed to more contrived.
Unsurprisingly, with something as ubiquitous as ordinary addition, the choice we made is pretty compelling, but I want to imagine having other default choices. I don’t think these are particularly plausible as alternate timelines or realistic human cultures.
They’re more like things space aliens might do, or that might be suited to a culture in a speculative fiction story with a sufficiently different perspective on the world.
First, what if modular arithmetic were the norm? I don’t find a world where mod 3 addition is the norm especially plausible, but what about a world where the default setting of arithmetic was “Z/nZ for some pretty large n”?
This is, more or less, the natural setting in most computer programming languages: an int type holds something that behaves mostly like an integer, unless it overflows the register.
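To see this concretely, here’s a toy Python sketch (the names are mine, and we mask by hand, since Python’s own ints never overflow) of a 32-bit unsigned register behaving like Z/2^32 Z:

```python
# Simulating a 32-bit unsigned register: every result is reduced
# mod 2**32, which is exactly arithmetic in Z/(2**32)Z.
REGISTER = 2**32

def add32(x, y):
    return (x + y) % REGISTER

print(add32(2, 2))             # 4, as expected
print(add32(2**31, 2**31))     # 0: the sum overflowed the register
print(add32(REGISTER - 1, 1))  # 0: "n-1 + 1 = 0" in Z/nZ
```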
So I’m imagining a world where that’s the normal view: when talking about arithmetic, unless otherwise specified, the assumption is that there’s some number n beyond which calculations would overflow.
In the version I imagine, there isn’t a fixed value of n that they work in. The idea is that you’re always working in Z/nZ where n is some unspecified big number. And, once you’re done with a whole calculation, you can always check that n was big enough for what you wanted to do.
But you don’t want to bind yourself to a single choice of n in advance, and you also don’t want to forget that you will eventually be bound to a choice of n, so you have to keep checking whether you’ve overflowed it.
This sort of “dynamic” setting sounds odd if you’re not used to it, but calculations with phrases like “choose ε small enough, to be determined later” are pretty common in some parts of math, and you do get used to them.
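A minimal sketch of that discipline in Python (the setup and names are my own invention): do every step in Z/nZ, keep a flag recording whether any step wrapped around, and if one did, redo the whole calculation with a bigger n.

```python
def add_mod(x, y, n):
    """Addition in Z/nZ, plus a flag recording whether the sum wrapped."""
    s = x + y
    return s % n, s >= n

def sum_of_squares(n):
    """Sum k^2 for k = 1..100, with every step done in Z/nZ."""
    total, overflowed = 0, False
    for k in range(1, 101):
        total, wrapped = add_mod(total, k * k, n)
        overflowed = overflowed or wrapped
    return total, overflowed

n = 2**16
total, overflowed = sum_of_squares(n)
while overflowed:       # n wasn't big enough after all: choose a bigger one
    n *= 2
    total, overflowed = sum_of_squares(n)

print(n, total)         # 524288 338350: this n turned out to be "big enough"
```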
So for this culture, it’s the norm to say “unless that overflows” after a lot of basic mathematical facts, like the way we often say “unless x is 0” when we divide by x. For instance, “for y>0, x+y>x” isn’t true in this world; you say “for y>0, x+y>x, unless it overflows”.
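In an 8-bit version of this world, the counterexample is concrete:

```python
# Arithmetic in Z/256Z (an 8-bit register): "for y > 0, x + y > x" fails.
x, y = 200, 100
print((x + y) % 256)   # 44, which is smaller than x = 200
```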
And, like “unless the denominator is 0”, once you’re used to it, you don’t say it every single time. Exams begin with a boilerplate “Assume no calculations overflow”. Smartass 10-year-olds interrupt their teacher to point out they forgot to add a requisite “unless it overflows”.
And of course, they can talk about the integers, it’s just not their default. “In this paper we work in the integers. We might analogize the integers to working with an infinite register size: unlike ordinary numbers, calculations involving the integers never overflow.”
Like all perspectives, this suggests different priorities than ours. It lives very naturally with the idea that all calculations are ultimately meant to be implemented on a device.
The sense that the universe has some built-in “ultimate n” above which any calculation might overflow could feel normal, while the integers would feel like a slightly artificial abstraction.
Combinatorial and computational math is very natural in this setting. Analysis ultimately develops in the same way, but the descriptions are superficially different; continuity is in terms of “indistinguishables” (numbers whose difference is too small to fit in a register).
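One way to make “indistinguishable” precise (this is just my gloss, borrowing the flavor of nonstandard analysis): fix the enormous register bound N, and call j and k indistinguishable when their difference is negligible at every ordinary scale.

```latex
j \approx k \;\iff\; \frac{|j-k|}{N} < \frac{1}{m} \ \text{for every standard } m,
\qquad
f \text{ continuous} \;\iff\; \bigl( j \approx k \implies f(j) \approx f(k) \bigr).
```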
You might recognize that I’m really describing something fairly close to Z/NZ where N is a nonstandard natural number.
A second idea: a world where the default is that numbers should have units. The answer to “what is 2+2” is “2 whats?” You can still talk about the integers, of course: every school child knows that “2 units + 2 units = 4 units”. It’s just that if you want to do that, you have to say so.
This has some pedagogical advantages at certain ages. It’s often a struggle to convince students to slow down and think about what they’re calculating, and in this culture, it’s a little bit more friction-y to do purely abstract calculations.
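As a toy illustration of that friction (entirely my own sketch, with hypothetical class and unit names), here’s what arithmetic might feel like when every number carries a unit and the unitless case is the one you have to opt into:

```python
class Quantity:
    """A number that refuses to forget what it counts."""
    def __init__(self, value, unit):
        self.value, self.unit = value, unit

    def __add__(self, other):
        if self.unit != other.unit:
            raise TypeError(f"can't add {self.unit} to {other.unit}")
        return Quantity(self.value + other.value, self.unit)

    def __repr__(self):
        return f"{self.value} {self.unit}"

print(Quantity(2, "apples") + Quantity(2, "apples"))  # 4 apples
# The "abstract" case is just the special unit called units:
print(Quantity(2, "units") + Quantity(2, "units"))    # 4 units
try:
    Quantity(2, "apples") + Quantity(3, "oranges")
except TypeError as e:
    print(e)                                          # can't add apples to oranges
```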
I played an RPG once which had a culture like that. They had an excuse (it involved being ruled by a broken star which was trying to wipe out abstraction in the world); the illustrating line was when one of them joined the adventuring party and was taught math by our shadow mage:
“So 2 thingees plus 3 thingees is always 5 thingees?…But what about 2 slightly bent thingees plus 3 slightly bent thingees?”
That level of anti-abstraction is obviously implausible in a real society. None of these perspectives really change what math we can actually do, because any society with one of these perspectives will develop language to talk about the others.
But they do change which perspectives seem more or less natural, and so perhaps which areas get developed faster. When we’re talking about 2+2=4, these alternate views are pretty speculative.
Choosing the natural numbers as the unmarked case seems pretty canonical, and as far as I know, that’s been universal across human societies. But I’m interested in the idea because I’m interested in some alternate ways more recent math might have developed.
People often talk about intuitionistic logic as a restriction of classical logic (“it’s like classical logic except that it doesn’t prove ‘P or not P’”). I prefer to think of it as an expressive expansion. (I learned about this perspective from a talk by Joan Moschovakis.)
What if the constructivists had won, and we had agreed that, in order to prove “P or Q”, you have to actually either prove P or prove Q? We could still introduce a non-constructive or (“not (not P and not Q)”) in order to talk about classical mathematics; it just wouldn’t be what the English “or” means.
Of course, it’s not hard to imagine the English “or” being split into multiple formal notions, because we already had to decide that “inclusive or” is the ordinary reading and “exclusive or” is the less common reading that has to be distinguished.
What makes intuitionistic logic an expressive expansion is that it can express both the constructive and the classical or, whereas classical logic has no way to express the constructive or. So everything we call classical mathematics would still work; it would just be the marked case.
By default, mathematics would be constructive, and then when people want to prove something that’s really non-constructive, they’d have to say that. Probably some subfields would still decide that it made sense to mostly be non-constructive.
Maybe people would adopt conventions to make it quick to talk about non-constructive or. “As usual, we write P orr Q to mean that P and Q cannot both be false.”
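This is easy to see in a proof assistant whose ambient logic is constructive; here’s a short Lean 4 sketch (using the hypothetical “orr” from above as the name of the classical or). The last example is exactly the step that needs a classical axiom.

```lean
variable (P Q : Prop)

-- The hypothetical classical "orr": P and Q cannot both be false.
def orr (P Q : Prop) : Prop := ¬(¬P ∧ ¬Q)

-- Constructively, proving P ∨ Q means actually proving one of the sides.
example (hP : P) : P ∨ Q := Or.inl hP

-- The constructive or implies the classical one, with no classical axioms.
example : P ∨ Q → orr P Q := fun h hn => h.elim hn.1 hn.2

-- The converse direction genuinely needs excluded middle.
example : orr P Q → P ∨ Q := fun h =>
  (Classical.em P).elim Or.inl fun hnP =>
    (Classical.em Q).elim Or.inr fun hnQ => (h ⟨hnP, hnQ⟩).elim
```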
Here’s another one: what if we rejected the idea that it’s reasonable to work with all the subsets of the natural numbers as a single, collected object? This is an idea that’s been around in various forms (especially in some of Feferman’s writing). Here’s how I imagine that working.
We’d talk about the power set of an infinite set the way we already talk about the complement of a set: it always has to be done in a context. You’d never really have all the subsets of a set, but you could have all those subsets which are elements of some universe of sets.
Sometimes, when the context is clear, you’d just say things like “the set of reals”, but only when it’s clear you really mean “the set of reals available in the current context”, just like now we talk about complements of sets when we’ve fixed a universe of elements.
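The complement analogy is easy to render as a toy sketch (again, entirely my own illustration): just as a complement only makes sense relative to a declared universe of elements, a power set in this world only collects the subsets the current context happens to contain.

```python
# Complements already work this way: they only make sense relative to
# a declared universe of elements.
universe = set(range(10))
evens = {0, 2, 4, 6, 8}
print(universe - evens)    # {1, 3, 5, 7, 9}: meaningless without `universe`

# In this imagined convention, "the power set" is the same kind of thing:
# the subsets of s that the current context of sets happens to contain.
context = [frozenset(), frozenset({1}), frozenset({1, 2}), frozenset({7})]

def power_set_in_context(s, context):
    return [t for t in context if t <= s]

print(power_set_in_context(frozenset({1, 2, 3}), context))
# [frozenset(), frozenset({1}), frozenset({1, 2})], and never "all" subsets
```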
Nonetheless, a lot of math, including a lot of math that talks about the set of real numbers, can develop basically unchanged: most (all?) of math doesn’t really need to be sure it has collected up every last real number; having a set of reals with some closure properties is just fine.
Even set theory still works. In fact, this perspective lines up very naturally with a view that’s become popular in set theory: that there is no single universal model of set theory we’re doing math in; rather, there are many universes of set theory.
Indeed, if this had been the perspective from early on, the invention of forcing would have been a vindication: just like everyone always knew, there’s no completed set of real numbers, because we can always extend the universe to add another.
One thing that changes is cardinality. It becomes obvious that the non-existence of a bijection is a subjective thing: two sets have different cardinality in a particular context if that context is missing a bijection between them.
But we wouldn’t talk about infinite sets having absolutely different cardinalities because another context might have a bijection. That, in turn, changes the way we interpret some results. I’ll close with an example of a pair of results that I think make more sense this way.
One of the equivalent definitions of stability is that a theory is stable if there is an infinite cardinal kappa such that there are at most kappa types over A whenever |A|<=kappa. There’s a result about NIP theories that clearly wants to be parallel:
when a countable theory is NIP, the number of types over a set of size kappa is bounded by (ded kappa)^(aleph_0), where ded kappa bounds the number of cuts in a linear order of size kappa. This isn’t a cardinality bound, because it’s consistent that (ded kappa)^(aleph_0) is equal to 2^kappa for all kappa.
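Side by side, writing S(A) for the set of complete types over A (this is my paraphrase of the two statements):

```latex
\text{stable:}\quad \exists\,\kappa \ge \aleph_0 \;\; \forall A\,
  \bigl( |A| \le \kappa \implies |S(A)| \le \kappa \bigr)
\qquad
\text{NIP ($T$ countable):}\quad |A| = \kappa \implies
  |S(A)| \le (\operatorname{ded}\kappa)^{\aleph_0}
```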
Indeed, it’s trickier to state if your background concept is cardinality, because in a model where (ded kappa)^(aleph_0)=2^kappa for all kappa, it’s not actually clear that the result tells you anything at all.
If you think the natural thing to talk about is injections, the parallel is much clearer: for stable and NIP theories, there is a natural injection from the set of types over A into some naturally described set. For stable theories, that’s a set of definitions over A; for NIP theories, it’s a set of (countable sequences of) cuts in a linear ordering of A.
From that perspective, the existence of injections is the purely combinatorial fact, and the cardinality bounds are a related fact that mixes the combinatorics with some proper set theory.
But to make that formal, you have to start talking about what you mean by a natural injection, and then you have to really look at it to see that it’s a natural result, and not an artifact of the class of injections you chose.
If specifying families of injections and bijections were the normal thing to do, this would look like the natural result it is. When I say that our choice of perspective influences which results seem natural and which directions we develop, this is what I mean:
our perspective has a lot of influence over which results are easy to state and explain, and that influences which directions get more attention, at least over the medium term.