I’m struggling to understand something more about functional programming, but so far I have seen nothing to persuade me that programs should be considered mathematical “functions”. We don’t have mathematical objects like integers, reals, and functions in programming; we have bit patterns that *represent* mathematical objects (and other kinds of objects), and we have *algorithms that implement* functions — and usually those algorithms only approximate the functions. Pretending otherwise creates complexity for no reason. Programs are mechanisms for producing or modifying bit patterns. It’s obviously useful to be able to treat, for example, a variable of unsigned integer type as if it were an integer, but it’s important to remember that it’s a bit pattern representing an integer, not an actual integer.
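A minimal Python sketch of that distinction (`u8_add` is a hypothetical helper modeling what 8-bit hardware addition actually does, not any standard API):

```python
# A machine "unsigned 8-bit integer" is a bit pattern; the hardware's
# addition is addition modulo 2**8, which only approximates the
# mathematical function it is meant to implement.
def u8_add(a, b):
    # mask to 8 bits, the way fixed-width hardware wraps on overflow
    return (a + b) & 0xFF

print(u8_add(200, 100))  # prints 44, not the mathematical sum 300

# Likewise, floating-point values are bit patterns approximating reals:
print(0.1 + 0.2 == 0.3)  # prints False under IEEE 754 doubles
```

The point is not that these behaviors are bugs; they are exactly what the representations promise, which is weaker than what the mathematical objects promise.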

It’s also worth noting that the notion of types in working mathematics is much more fluid and “informal” than the type systems that theoretical computer scientists prefer. Even if types can be organized into some sort of hierarchy, there is no reason to believe that doing so produces any illumination at all.

Russell introduced what is often viewed as his most original contribution to mathematical logic: his theory of types—which in essence tries to distinguish between sets, sets of sets, etc. by considering them to be of different “types”, and then restricts how they can be combined. I must say that I consider types to be something of a hack. And indeed I have always felt that the related idea of “data types” has very much served to hold up the long-term development of programming languages. (Mathematica, for example, gets great flexibility precisely from avoiding the use of types—even if internally it does use something like them to achieve various practical efficiencies.) – Wolfram.