March 9th, 2015, 12:30 PM

Redesigning Mathematics
I asked this question on a math and philosophy forum a few years ago but didn't
get much interest, so I thought I'd give it a crack on a programming forum.
I have a few problems with mathematics in its current form. As a developer one of my
main gripes is the naming of variables. Programmers learn early on in their career that
single letter variable names cause pain. I think a lot of people would agree with that,
including mathematicians; the practice has persisted so long mostly for practical
reasons, like having to write long variable names on a blackboard. No problem;
hopefully it will change in the future as computers become more standard in classrooms.
Some other issues, as I see them, are inconsistencies in the way things are done.
Sometimes you represent functions using f(x, y) and sometimes there are other
representations. Complex numbers can be written in the form 2i + 6, or they can be
represented as matrices, or as polar coordinates.
Another thing is the minus symbol. When you say a number is negative it can mean
different things. In physics it is sometimes a direction; other times it means
something completely different. That is because plus or minus is really just a binary
value preceding a number. If you were "cleaning up" math you might want to
get rid of negatives altogether and replace them with a data structure that has a
binary field and a number field.
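To sketch what I mean in C (the names here are hypothetical, just for illustration): a "negative number" becomes an ordinary data structure with the sign stored as its own explicit field, rather than a bare number with a minus sign glued on.

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical sketch: a "signed" value as an explicit data structure,
   with the sign as a separate binary field instead of notation. */
typedef struct {
    bool is_negative;    /* the binary field */
    unsigned magnitude;  /* the number field */
} Signed;

/* Convert to an ordinary int, just to show the two fields carry
   the same information as a conventional signed number. */
int to_int(Signed s) {
    return s.is_negative ? -(int)s.magnitude : (int)s.magnitude;
}
```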
Of course all that is controversial and isn't too important to me, because I have
a larger goal in mind. My central claim is that math is in fact simply a form of
processing, just like computer programs. The reason math has so many weird
rules and takes you places where you think "when would you bother using this", is
precisely because it is processing and nothing else. Over the years math has built
up all different kinds of algorithms, from proof processing to geometry processing. Yet
those fields are just different ways you can manipulate data.
I have left out a lot of details and I know there are subtleties I am skimming over
like the fact that math is about the simplest form of processing, not just any kind
of processing. Math is also used to understand, to prove things (rather than just
process data), to show consistency, etc. I don't really want to make this post too
long by discussing my claims about those things, so I might just leave it at that for now
and see if anyone is interested in the idea.
March 10th, 2015, 03:31 AM

Sorry, although you have said a lot about your view on maths, I can't really spot your idea in that text.
March 10th, 2015, 04:55 AM

Sorry, I realize it was quite long-winded. It's really just the second-last paragraph. The claim is that math is purely processing and is basically a subset of computing.
March 10th, 2015, 11:26 AM

I think you should switch your claim around, especially when taking a closer look at this sentence:
My central claim is that math is in fact simply a form of processing, just like computer programs
Computer processing is based on mathematical operations, especially arithmetic logic:
Originally Posted by http://en.wikipedia.org/wiki/Arithmetic_logic_unit
In digital electronics, an arithmetic logic unit (ALU) is a digital circuit that performs arithmetic and bitwise logical operations on integer binary numbers. It is a fundamental building block of the central processing unit (CPU) found in many computers
March 10th, 2015, 11:45 PM

I don't think that shows anything except that we currently think of math as fundamental and
computing is built on top of that. I'm arguing that math is a subset of processing and
that processing is the "fundamental" thing here. So it's true that any definition of computing
will be at odds with that, because we currently define things the other way around.
If you give me any math operation, such as addition, I can represent it purely as
a (physical) processing machine: it's just an adder circuit. So if math had never been
"invented" we could still get by just fine using a processing point of view, at least
in the case of arithmetic.
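To make the adder-circuit point concrete, here is a rough C sketch (my own illustration, not anyone's official design) of addition built from nothing but logic gates: a full adder made of AND/OR/XOR, chained into an 8-bit ripple-carry adder. There's no "math" here beyond the gates themselves.

```c
#include <assert.h>

/* One full adder: three input bits in, sum and carry bits out,
   expressed purely as gate operations. */
typedef struct { unsigned sum, carry; } FullAdder;

static FullAdder full_add(unsigned a, unsigned b, unsigned cin) {
    FullAdder r;
    r.sum   = a ^ b ^ cin;               /* XOR gates */
    r.carry = (a & b) | (cin & (a ^ b)); /* AND/OR gates */
    return r;
}

/* Chain eight full adders: addition modulo 256 as pure processing. */
unsigned ripple_add(unsigned x, unsigned y) {
    unsigned result = 0, carry = 0;
    for (int i = 0; i < 8; i++) {
        FullAdder fa = full_add((x >> i) & 1, (y >> i) & 1, carry);
        result |= fa.sum << i;
        carry = fa.carry;
    }
    return result & 0xFF; /* 8-bit result; overflow wraps around */
}
```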
One of the main reasons people resist this idea is because of the view that math
is something that is external to physical reality. It's easy to think math sits in a
special realm that is kind of separate from physics because it is such a fundamental
thing. But that is not the case. When you do math on a blackboard, each symbol
is physically written on a blackboard, it is essentially a mechanical process. Math
only exists as physical manifestations in books, people's brains and blackboards (etc.).
March 11th, 2015, 06:56 AM

Something, for me, is being 'lost in translation' ...
So, instead of using the mathematical process we call addition to see what happens when you (for want of a better word) add two numbers together you want to ... use a process to add them together?
The moon on the one hand, the dawn on the other:
The moon is my sister, the dawn is my brother.
The moon on my left and the dawn on my right.
My brother, good morning: my sister, good night.
 Hilaire Belloc
March 11th, 2015, 08:51 AM

The goal would really be to replace the current language of math with a new language
that is the most "elegant" language possible. So for argument's sake, imagine we chose
the C language for that role. Then addition would be:
int a = b + 6;
So to answer your question, yes, you are basically replacing normal math (a = b + 6) with
a language that is explicitly mechanical. That is, anything you can do is simply a
deterministic machine or process that is governed by rules.
For this example they are basically the same. But for other things like functions in linear
algebra the syntax may be different. And of course C would be a poor choice, I would
imagine the language to be as elegant and concise as possible. Whether there is an
'ultimate' language is controversial although it kind of overlaps with computer language
design which is probably of more interest to me (I'm currently trying to design a graphical
programming language).
There are many things that make this hard to discuss and there are also subtle issues
that can be easily overlooked. In the addition example there is a fairly obvious issue:
the equals symbol in programming is actually a "move" command that stores the value
of the right-hand expression at a memory location, whereas in math the equals
symbol is more of a statement that both sides are always equal
(or must be equal).
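C itself actually keeps those two meanings separate, which illustrates the distinction (a minimal sketch of my own):

```c
#include <assert.h>

/* In C, '=' is the "move" command: it evaluates the right-hand side
   and copies the result into a memory location. The mathematical
   "both sides are equal" claim is the separate operator '=='. */
int demo(void) {
    int b = 4;
    int a = b + 6;       /* move: evaluate b + 6, store 10 in a */
    assert(a == b + 6);  /* equality: a checkable statement, true right now */
    b = 100;             /* a does NOT follow b; the move already happened */
    return a;            /* still 10: '=' was a one-time action, not a law */
}
```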
I'm happy to explore any of those areas and also happy to admit if it's a silly idea due
to those issues. To be honest I haven't tried building such a language even for
simple things like addition. It's more something that seemed to flow from the
philosophical belief that math is only ever something that is instantiated in physical
reality, it is not a separate 'realm' as I touched on above.
In summary I think the syntax and grammar of math has lots of holes in it, and I think
just patching those holes is not good enough. There needs to be a rewrite and a search
for an "ultimate" language. I feel that language would be purely "processing" in nature,
just like computer languages. In math there is often implication of processing but it is
not explicitly stated, it is kind of hidden under the covers and not talked about. I think
I can give examples of that.
March 11th, 2015, 03:46 PM

a = b + 6 <- how is that not elegant? It is concise and exact (assuming you have defined what b is).
March 12th, 2015, 04:52 PM

Originally Posted by SimonJM
a = b + 6 <- how is that not elegant? It is concise and exact (assuming you have defined what b is).
Actually, for a mathematician or a formal-language lawyer, the above is not exact. As far as they are concerned, it should really be
a <- b + 6
or even
b + 6 -> a
i.e. it says evaluate b + 6 and assign the result to a. I know the <- operator is used in R, which some of our statistics boffins use at work here. APL and OCaml also do this (and there may be other languages as well). I think -> is also valid in R.
Another variant of the above is:
a := b + 6
where := is the assignment operator. This form is seen in programming languages like Algol and Pascal (and its descendants, such as Delphi). Compare Backus-Naur Form, which uses ::=, but not for assignment.
Other languages use specific keywords to indicate assignment. E.g.
For all you LISPers and Schemers
===========================
(setq a (+ b 6))
(set! a (+ b 6))
For COBOL Masochists
===================
MOVE B TO A
ADD 6 TO A
(or)
ADD 6 TO B GIVING A
Sinclair BASIC (and other dialects of BASIC available on early home computers)
===========
LET A = B + 6
Assembly language
================
MOV AX, BX (or a variant of this)
As you can see, all of these clearly denote that there is an assignment operation happening and the value of an expression is getting assigned somewhere.
As far as mathematicians and language formalism fans are concerned:
a = b + 6
is an equation where the = denotes that the expressions on either side are equal. It is not an assignment expression but an equality expression. This was the convention for centuries, until FORTRAN was invented and repurposed = as an assignment operator.
Last edited by Scorpions4ever; March 12th, 2015 at 05:04 PM.
Up the Irons
What Would Jimi Do? Smash amps. Burn guitar. Take the groupies home.
"Death Before Dishonour, my Friends!!"  Bruce Dickinson, Iron Maiden Aug 20, 2005 @ OzzFest
Down with Sharon Osbourne
"I wouldn't hire a butcher to fix my car. I also wouldn't hire a marketing firm to build my website."  Nilpo
March 16th, 2015, 03:27 AM

Let me try explaining my position from a different angle.
Take the example of negative numbers (integers for now). For simple, positive-only
numbers you would have an 'add' function like this (using C-like pseudocode):
UnsignedInteger Add(UnsignedInteger number1, UnsignedInteger number2);
Of course it simply takes two positive numbers and outputs a positive result.
Now consider the case where you need to handle signed values. The function
could be written as:
SignedInteger Add(SignedInteger number1, SignedInteger number2);
That's all fine and obvious. And I realize there is no 'SignedInteger' data type, I'm just
speaking abstractly.
But in reality that function is hiding some details. The 'real' function should look
like this:
UnsignedInteger Add(UnsignedInteger number1, SignBit sign1, UnsignedInteger number2, SignBit sign2);
[also outputs a sign bit with the result]
In this case the sign is explicitly passed as a parameter rather than being
part of the 'SignedInteger' data structure.
The point of that is it clearly shows the two Add() functions are different. The
one that takes positiveonly numbers has fewer parameters, whereas
the other Add() needs to pass around extra bits for the sign.
In math, signed values are kind of treated the same as unsigned. Formulas
don't explicitly treat the sign as an extra bit that is tacked onto a number. They
just treat them as numbers. A lot of the mechanics are left unsaid and swept
under the rug via notation. This causes unintuitive results for some parts of
math.
But the 'reality' is that signed numbers are simply data structures:
struct SignedInteger
{
    UnsignedInteger Number;
    Bit SignBit;
};
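Here's a runnable C version of that explicit-sign Add, to show the mechanics that the usual notation hides (the names and the sign-magnitude representation are my own illustration, not how C actually stores ints):

```c
#include <assert.h>
#include <stdbool.h>

/* Addition with the sign passed around as a separate, explicit bit.
   The magnitudes are plain unsigned numbers; the result's sign comes
   back through an output parameter. Note the extra machinery (compare
   magnitudes, subtract the smaller) that signed notation sweeps under
   the rug. A zero result can even come out "negative" here, which is
   exactly the kind of wrinkle the notation hides. */
unsigned add_signed(unsigned n1, bool neg1,
                    unsigned n2, bool neg2,
                    bool *neg_out) {
    if (neg1 == neg2) {  /* same sign: add magnitudes, keep the sign */
        *neg_out = neg1;
        return n1 + n2;
    }
    if (n1 >= n2) {      /* different signs: subtract smaller magnitude */
        *neg_out = neg1;
        return n1 - n2;
    }
    *neg_out = neg2;
    return n2 - n1;
}
```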
And so in math what you end up with is special treatment of a data structure
that contains a bit and a number, but any other kind of data structure
(say a number with two sign bits) is pushed out into the rain.
I know I am speaking very figuratively and colorfully here but I'm just trying to
keep it light. I think this issue can be stated in a much more formal way. This
is more to give a flavor of what I'm trying to get at.
March 16th, 2015, 02:32 PM

That's a false perception. The difference (in computing) between signed and unsigned numbers is that the high order bit of a 'signed integer' is used to express positive/negative.
March 16th, 2015, 07:13 PM

^^^^
That is something which is generally true on two's-complement machines, but the C standard explicitly says not to rely on it, because not all machines use it.
More reading: the Wikipedia article "Signed number representations".
Also notice that while most machines have signed/unsigned for *integral* types, they don't do it for floating point types (float, double, extended etc.), frequently because those types are implemented with a bit that is explicitly reserved for only the sign and not reused for magnitude. That's why you don't see unsigned doubles, for instance.
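To illustrate (assuming the platform uses IEEE-754 binary64 for double, which virtually all modern machines do), you can inspect that dedicated sign bit directly by copying the double's bytes into an integer:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Extract the sign bit of a double. In IEEE-754 binary64 the top
   bit is reserved purely for the sign, separate from the magnitude,
   which is why there is no "unsigned double". */
int double_sign_bit(double d) {
    uint64_t bits;
    memcpy(&bits, &d, sizeof bits); /* reinterpret bytes, no conversion */
    return (int)(bits >> 63);       /* 1 if the sign bit is set */
}
```

Note that even -0.0 has the sign bit set: the sign really is its own independent field.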
Last edited by Scorpions4ever; March 16th, 2015 at 07:18 PM.
March 22nd, 2015, 07:51 PM

Originally Posted by LegendLength
I don't think that shows anything except that we currently think of math as fundamental and
computing is built on top of that. I'm arguing that math is a subset of processing and
that processing is the "fundamental" thing here. So it's true that any definition of computing
will be at odds to that because we currently define things the other way around.
If you give me any math operation such as addition, I can represent that purely as
a (physical) processing machine. It's just an adder circuit. So if math was never "invented"
we could still get by just fine by using a processing point of view, at least in the case
of arithmetic.
One of the main reasons people resist this idea is because of the view that math
is something that is external to physical reality. It's easy to think math sits in a
special realm that is kind of separate from physics because it is such a fundamental
thing. But that is not the case. When you do math on a blackboard, each symbol
is physically written on a blackboard, it is essentially a mechanical process. Math
only exists as physical manifestations in books, people's brains and blackboards (etc.).
This is not a given. Broadly, there are two philosophies:
- (Formalism) Mathematical symbols are what math is about, so math is nothing but symbolic manipulation.
- (Platonism) Mathematical symbols represent abstract concepts, and math is really about those concepts, not the symbols.
As a mathematician, there's normally no need to subscribe to one view or the other, since they don't lead to different math. However, sometimes the distinction can affect the questions you ask or find interesting. For example, the question
What are the correct axioms for set theory?
is fundamentally platonic: it assumes that there is some abstract "correct" notion of "set," and a given set theory may or may not be a correct representation of it. In the formalistic point of view, however, the question is meaningless: a given set theory is just a system under which certain deductions can be made. Different set theories give you different theorems, and that's it.
Anyway, I bring it up because you seem to be discarding the platonic point of view without any kind of justification.
Also, your argument about being able to represent addition using an adder circuit making the math less "fundamental" than the computation isn't very compelling, since all forms of computation can be and are studied as mathematical objects.
Originally Posted by LegendLength
But the 'reality' is that signed numbers are simply data structures:
struct SignedInteger
{
    UnsignedInteger Number;
    Bit SignBit;
};
Why is UnsignedInteger a primitive type in your reality, while SignedInteger is a data structure?
Finally, maybe you can define "processing" for us? You use it every few sentences, yet this is not a precise term in either computer science or mathematics.
March 22nd, 2015, 10:13 PM

Thanks Lux, formalism is just what I was looking for.
For platonism, I find it difficult to see why you need to think of universal things existing
in another realm. The universals are just defined in the axioms, right?
Take this example: I draw a number line on a blackboard in one part of the world while
someone else draws a number line on a piece of paper. It seems clear they are both
referring to the same universal number line.
I agree that's true, but there's no need for anything special here. There is a kind of hidden
axiom that says "let's assume there is a perfect number line that contains all real numbers",
etc. Just because both people are using the same axioms doesn't mean they are
referring to a special, universal realm. And the axioms, of course, are just mechanical
things like written rules, or sometimes computer programs.
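As a tiny example of "axioms as mechanical rules" (my own sketch): Peano-style addition is defined by just two rewrite rules, add(n, 0) = n and add(n, succ(m)) = succ(add(n, m)), and you can run those rules directly as a program.

```c
#include <assert.h>

/* Peano-style addition as a mechanical process: the two defining
   rules executed literally, with the successor count modeled as a
   plain unsigned. Nothing here but rule application. */
unsigned peano_add(unsigned n, unsigned m) {
    if (m == 0) return n;           /* rule 1: add(n, 0) = n            */
    return 1 + peano_add(n, m - 1); /* rule 2: succ(add(n, m - 1))      */
}
```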