C is the set of specific integers that are in principle generatable by enumerating stuff that actually exists. For example "The number of electrons created during the lifespan of the universe."
M is the maximum number of bits of data storage in the universe (which is smaller or equal to the largest member of C).
A is the set of algorithms that are precisely describable with M or less bits.
T is the lifetime of the universe divided by the smallest time interval.
B is the set of numbers generatable by applying A to C for T or fewer steps.
F is the largest real finite integer member of B.
QUESTIONS:
Is F+1 a well defined number?
ChrisHowlett thinks yes. You've well defined F above (modulo metaphysical things like "Is there a smallest unit of time"), and +1 is a well-defined function from integers to integers; so F+1 is well defined. It (presumably) just takes one step longer than you've allowed time for.
(PeterTaylor) No, it also takes more memory than he admits possible.
Angoel is unconvinced that even F is a well defined number, because he does not think that algorithms can be described using bits without external context. Any people who disagree are challenged to produce a binary representation of a quicksort that can be understood without external context ;)
PeterTaylor suggests refining the definition to precise specification in SML using the ASCII charset.
In respect to the encoding of the algorithms, I had in mind the work of [Gregory J. Chaitin] --DR
(PeterTaylor) Sure, there are canonical ways of encoding TMs and RMs, but that's boring.
It's an axiom of compsci. Perhaps it's slightly mis-stated up there - but there exists a minimal binary representation of any program. (And yeah, you can have really degenerate ones - so it is only useful when talking about the set of all programs.) So F is well defined - given any language, there are only 2^M programs in it - which is nice and finite - so we can say that any given language has a larger or smaller number of possible algorithms. Then we just have to pick the largest (and we know it's less than or equal to 2^M) --Vitenka
A minimal binary representation of a program is only of use if you have something to run it on. This is the external context that I am speaking of. --Angoel
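Vitenka's counting step can be illustrated at toy scale. A sketch with M = 3 bits standing in for the real M (the function name is mine, for illustration):

```python
def bitstrings_up_to(m):
    """Every nonempty binary string of length <= m: an upper bound on the
    number of distinct programs expressible in m bits."""
    return [format(i, "0{}b".format(length))
            for length in range(1, m + 1)
            for i in range(2 ** length)]

progs = bitstrings_up_to(3)
print(len(progs))  # 14, i.e. 2^(3+1) - 2: finite, so a largest output exists
```

However the decoding is chosen, the set of candidate programs is finite, which is all the argument needs.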
Is there such a thing as the largest conceivable integer (the LCI), beyond which no specific reference can be created?
(PeterTaylor) No, because if there were then LCI + 1 would also be conceivable.
Is there such a thing as the smallest inconceivable integer (the SII)? I take it that if so, then the SII might well be smaller than the LCI.
Again, no. Since SII is the smallest such, SII-1 is conceivable. But then (SII-1)+1 is conceivable, and is SII. --CH
Are you claiming "If integer X is conceivable then X+1 is also conceivable."? I'm not sure that can be right, because combining that with "1 is conceivable." would imply "All finite integers are conceivable.", which contradicts how we are defining conceivable here, which is "A specific integer is conceivable by a human if and only if a referent to that specific integer can be constructed by the human.". I would assert that there exist some integers which can never be referred to specifically by humans because, while finite, they are too large. Obviously, though, I can't give you an example. --DR
And I suppose that if you assert 0 = 1, we're supposed to believe that as well :p. Basically, this question is so riddled with holes, I don't know where to start. Firstly, let's find the smallest integer first, which is impossible, because whatever we assert it is, SI - 1 is smaller. So we need to add a condition of smallest positive inconceivable integer. Then we need to define conceivable better. We can make the smallest positive inconceivable integer one by killing the human before they have the chance to think of any numbers. We can wait until they can count to ten to put the number at eleven. Conceivable is not well defined. And even if we invent an EU standard human, then the SII is only going to be as well defined as the standard human. Basically, if you're going to mess about with infinities in maths, then you can't get away with handwaving woolly definitions and hoping that your point holds. --Angoel
Random aside - I assert that one is the smallest positive inconceivable integer. People may think that they're conceiving it, but really they're only conceiving *one of an object*. Not the pureness that is one itself. And the people that think they're conceiving one are just deluding themselves - they're really just conceiving half of a two --Angoel
As I said, please define 'conceivable'. Currently your approach is to use the algorithm 'writing stuff down on the wiki' and that has finite resources and time - so only finite numbers exist. There has to be a largest - we just won't know what it is until the EndOfWiki?. --Vitenka
I don't think that any numbers are 'inconceivable' I mean, if I started counting I could go on forever (well, I'd die eventually but...) or I could start at a large number I already knew things about and start counting there... I don't need a physical reference for every number... I mean, I can conceive of sqrt(-1) but it doesn't mean I can count i apples... - Naath
Since people are obviously missing it, let me put it on a line of its own:
I am defining the phrase
'The integer X is conceivable to person Y.'
as meaning
'Person Y can construct an absolute referent to the specific integer X.'
Where an absolute referent is a non-self-referential way of specifying a particular single number.
Examples of valid absolute references:
"10"
"42 to the power of 69"
"The Algorithm A1 applied to the Number N1 where N1 is the number of grains of sand on all the beaches in England at midnight last night, and A1 is 'factorial'."
Excellent. We finally have a relatively good definition. Even if it means that the definition changes between people ;) Anyway...
Theorem: Provided the person concerned is able to copy ten words, then there is no such thing as the smallest inconceivable integer.
Proof: Suppose this integer does exist, and call it X. If X is the smallest inconceivable integer, then the person must be able to construct an absolute referent to the integer X-1. Let this absolute referent be R. Now "The integer constructed by R, with one added to it." is an absolute referent for X. If the person is capable of copying this sentence down, they are capable of constructing an absolute referent for X.
Nup - won't work. You've got a perfectly good algorithm for defining your number sets. The algorithm includes a human, but that's ok, humans are smaller than the space we allowed for algorithms. Problem is, you're still bounded by time - no matter how fast your human works, he can only conceive of a finite number of numbers. One of them will be larger than all the others. --Vitenka
Running out of time is covered in the phrase "Provided the person concerned is able to copy ten words." Running out of time would be a reason for them not being able to copy the ten words. Next objection? --Angoel
So, if you change the rules of the problem, it goes away? Wow, applied maths is wonderful. --NT
I haven't changed the problem - I've been given a problem with non-concrete things, such as 'a person Y', and I'm trying to make the best of it :p. My approach so far has been to attempt to compartmentalise out the non-concreteness into a condition in the theorem. If and when I can get people to generally agree this bit, I'll attempt to extend the theorem to cover more cases. --Angoel
Yes, the original version didn't have such messies as people, though it could be we're now talking about a different problem and I failed to notice. I assumed we got to import all the definitions from higher up the page, and that this was intended as an illustration. -- An imperceptive NT
AlexChurchill would argue that it always contained messies such as people; they were just implicit in words like "conceivable" before. Incidentally I agree with Angoel that there's no smallest inconceivable integer; I also note that what's conceivable to any given person varies with time. Most of us haven't been going out and adding one to a number each day of our lives; but the size of numbers I could conceive of increased sharply when I learned about the googolplex, for example, or learned how to apply iteration. (In case people haven't realised how big set B gets - observe that "x=2; repeat (x=x^googolplex) one googolplex times" is a very very short algorithm.) I would quite enjoy the task of increasing the maximum number a given person can conceive of through conversation with that person. --AlexChurchill
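To see how quickly that sort of algorithm runs away, here is a toy Python version with the googolplexes scaled right down - four iterations of cubing standing in for a googolplex iterations of raising to a googolplex (the scaled-down parameters are mine, purely to show the shape of the growth):

```python
x = 2
for step in range(4):             # four iterations stand in for a googolplex
    x = x ** 3                    # cubing stands in for x^googolplex
    print(step + 1, len(str(x)))  # digit count after each step: 1, 3, 9, 25
```

The digit count itself is roughly tripling each time; with the real parameters the digit count of the digit count is what explodes.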
Meet AckermanFunctions. They're a nice way of producing something quite quite unnecessarily large. --TI
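For concreteness, here is a small Python transcription of the standard two-argument Ackermann recurrence (nothing specific to this thread); even tiny inputs make the recursion explode, which is rather the point:

```python
def ackermann(m, n):
    """Standard two-argument Ackermann function: total, but grows faster
    than any primitive recursive function."""
    if m == 0:
        return n + 1
    if n == 0:
        return ackermann(m - 1, 1)
    return ackermann(m - 1, ackermann(m, n - 1))

print(ackermann(2, 3))  # 9
print(ackermann(3, 3))  # 61, i.e. 2^(3+3) - 3; m = 4 is already hopeless to run
```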
Theorem 2: The smallest possible inconceivable integer for a given person is a strictly increasing function of the number of days that that person has been alive. (This assumes that the person is able to use at least ten minutes a day conceiving numbers, and that once they have conceived a number, they can always construct an absolute referent to it).
Proof: Each day, the person goes out and constructs a referent as follows: "The integer constructed by taking the number I constructed a referent to yesterday, and adding one."
I agree. And the same applies to a group of people. Or to the group "all the humans who have or ever will live in this universe.". While we can't specify what that smallest integer is, can we place bounds on where it is likely to be? As a start, I would say that 1 < SII < 10^53 * 10^(5.4 * 10^162) < LCI < infinity --DR
You are hypothesising that SII < 10^53 * 10^(5.4 * 10^162) < 2^(10^164). I construct a referent as follows. "The binary number created by examining a cuboid 1m wide, 1m tall and 10^164m long, with the end starting here, and the cuboid going off in that direction (points into space), and seeing whether an electron / positron pair are created within each 1m cube of that cuboid during the next second. If they are then the corresponding digit of the binary number is a one, otherwise a zero." Every number between 0 and 2^(10^164)-1 is created with nonzero (albeit low) probability. Thus it is possible to create a referent to any of them. Thus, the smallest inconceivable integer must be greater than this value. --Angoel
Minor niggle, but are you sure that 10^(10^162) is smaller than 2^(10^163)? Either way, I did not assert so, which you could be interpreted as claiming. -DR
Hmm. Actually no, you're right. 10^53 * 10^(5.4 * 10^162) = 10^(53 + 5.4 * 10^162) < (2^4)^(53 + 5.4 * 10^162) = 2^(53*4 + 21.6 * 10^162) = 2^(53*4 + 2.16 * 10^163) < 2^(10^164). I've updated the numbers to be 164 in my argument instead.
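For anyone who wants to check that chain mechanically, the comparison can be done in log-space, since the numbers themselves are far beyond anything representable directly (a quick sketch):

```python
import math

# Compare 10^53 * 10^(5.4 * 10^162) against 2^(10^164) via their base-2 logs.
lhs_log2 = (53 + 5.4e162) * math.log2(10)  # log2 of the left-hand side
rhs_log2 = 1e164                           # log2 of the right-hand side
print(lhs_log2 < rhs_log2)  # True: the bound 2^(10^164) does hold
```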
Why stop at 10^163 metres long? Why not longer? Why not *arbitrarily* long - or at least as long as the largest number you so far have? I also dispute the equal probability - although that's not important - unless P(e/p pair created inside one second in any given metre cube)=1/2. --CH
Arbitrarily long doesn't give you an integer. As long as the largest number you have so far would be a possibility, although I suggest using towers rather than this method to provide them. --Angoel
Provided there's a nonzero probability of the e/p pair being created, it doesn't matter what the probability is, because there is a nonzero probability of any of the numbers being created. --Angoel
Because the universe won't fit it. If you arrange your little cubes in a cube, rather than a line, and make their sides the PlanckLength? long, you can only fit ~10^183 little cubes into the universe. This was worked out pretty crudely, assuming the universe to be 28 billion light-years across. --NT, who needs to find more envelopes to work on the back of
What happens when you reach the edge? Do you fall off? And what shape is the universe. Please say it's a disk riding on the back of a turtle. --Angoel
Right, so this method hits a limit at Size-of-Universe in PlanckLength?s cubed times PlankTime?. You can probably squeeze a couple of orders of magnitude extra by adding in other particles, but that's about the limit. --Angoel
I wonder whether you can keep going by saying that the universe should be reset. We'd need to assume an oscillating universe model, but given the number of unproven things people are taking as axioms, I'm sure no one will begrudge me that. --Angoel
I don't think that constructing a reference to a range of numbers is the same as constructing a reference to a specific number. If it is not meaningful to think to yourself "Is integer {my_reference} prime?" and expect to get a single boolean response (rather than an array of booleans) then it isn't a reference to one specific number. --DR
This is a reference to a specific integer. The integer that it turns out to be referring to is a random one from a range of integers. As it is possible that any of the integers can turn up, then the person is able to have constructed a specific referent to any of these integers. It just so happens that they didn't. It's easiest to visualise using the multiple-universes model - you have a person who constructs this reference. The universe splits into 2^(10^164) different universes, in each of which the number was different. However the person in the original universe had the ability to construct them all. --Angoel
Hmm, ok, I see what you are getting at. So, Angoel, do you think there is such a thing as an LCI (for given groups of conceivers and limits on size and duration of universe available)?
Oh, yes. Indeed I think that there is such a thing as an SII (for given groups, etc.). It's just that I'm not convinced that any attempt at upper bounds will necessarily be helpful, since people can be wondrously inventive about getting around limits, and because the limits on size and duration are pretty uncertain. And lower bounds are boring ;) --Angoel
AlexChurchill is still not convinced there's much point defining conceivable in terms of absolute referents. As Requiem said, if we have to imagine (conceive of in some sense) all the weird and wacky abstract mathematical objects we see at university, and our brains get practiced at dealing with these abstract (and/or infinite) objects, why restrict oneself to things with a grounding in crude *matter*? ;) -- a semi-serious AlexChurchill
I think it's a case of the largest integer you can build. rather than the largest one you can conceive of. People tend to have trouble holding more than about 25 items in their head at once, anyway (if the Illuminatus! trilogy is to be believed). ^_^ --Requiem
As an aside, what's the largest integer you can come up with in 100 bytes. External context is any programming language of your choice. --Angoel
Can the programming language be assumed to have a BigInteger? extension, or do we have to code that within the 100 bytes? --AlexChurchill
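For what it's worth, here is one hypothetical entry in plain Python (no BigInteger extension needed, since Python integers are unbounded; not claimed to be anywhere near optimal). It is checked only for length and syntax, since actually running it is out of the question:

```python
# A candidate program of well under 100 bytes: iterate x = x**x many times.
entry = "x=9\nfor i in range(9**9):x=x**x\nprint(x)"

print(len(entry))                  # comfortably under the 100-byte limit
compile(entry, "<entry>", "exec")  # syntactically valid Python; do NOT exec it
```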
(PeterTaylor) Probably depends on what axioms of set theory you assume. I have a 64-state TM which definitely terminates, outputting a number which is probably rather large, if we assume the existence of rank-into-rank cardinals, but whose termination is not known to be provable in ZFC (unless I missed some publication in the last year or so). https://cheddarmonk.org/papers/laver.pdf
Obviously, in practice it would be just an increasing function. I know I haven't spent ten minutes each day conceiving larger and larger numbers... I get by with a few googolplexes at a time. --TimeTrout
The definition is 'Person Y can construct an absolute referent to the specific integer X.' We're only discussing the ability of the person to construct these, not whether they actually go out and construct the referents. Thus it's strictly increasing. --Angoel
Modulo things like Alzheimers, attention deficit disorder or gradually forgetting the maths one learned at university. It doesn't have to be an increasing function (although it usually will be.) --AlexChurchill
The modulo things are covered in the assumptions in brackets in the initial theorem --Angoel
Terminology problem. Please define 'conceivable'. X+1 is not a member of F, because X+1 is not a member of A. --Vitenka
But doesn't "the SII" count as "a referent to that specific integer"? And if so, doesn't that prove the nonexistence of the SII, by contradiction? --Rachael (edit conflict with Vitenka and not sure whether I'm arguing the same point as him)
(PeterTaylor) Definitely a different point - if you argue that humans can conceive of numbers of the order of 10^20, which I understand you to do, then adding one isn't going to make any difference whatsoever to a human's ability to conceive of that number.
If I'm understanding right then this is a fuzzy set, based on the length of number the human brain can happily internalise. Note that this is different from the numbers the human brain can happily construct - you could write 1*10^20 very easily but you couldn't really hold it in your head - CorkScrew
I assume you're not asking "Does the set Z have a size?" but rather "Is there an integer larger than which it is impossible to get in the real world?". In which case, obviously, yes. You can conceive of a greater number (I'm perfectly happy conceiving of things I can't construct, like n-dimensional vector spaces) but you can't build it. Is this your point? --Requiem
Not really. Indeed, I think it obvious that set B contains integers that are larger than the largest member of set C. --DR
Yes, that is obvious for a certain level of obviousness (obviety? don't know). --Requiem
I've missed a line. But the reply is: No. If you start counting at ten billion, then ten billion is one in your scheme of things. You still only enumerate numbers up until you run out of time - and your largest number is no bigger than your identical twin who did the same thing starting at zero. You just happen to speak a different language. --Vitenka
No, it isn't. If you divide two by one, you get two. If you divide ten billion and one by ten billion, you don't get two. It isn't just a difference in language - the numbers are different. --Angoel
I have this vision of hordes of shadowy numbers lurking out there in the dark, beyond the small sphere of light cast by the candle of reason. They are whispering to each other; plotting who knows what. Perhaps they don't like us very much for capturing their smaller brethren with our minds. Or perhaps they just live uniquely numberish lifestyles, out there beyond our ken.
DouglasReay further ponders that we might actually be able to put a size on that 'small sphere of light'.
What limits can we place on how many members there are in B?
M < (a * c / p) ^ 3, where c = 299,792,458 m/s (the speed of light), a = 1.4 * 10^10 (the age of the universe), p = 1.6 x 10^-35 (the Planck length)
< 1.8 * 10^163
sizeof(A) < 2 ^ M (which is 10 ^ (M * ln(2) / ln(10)))
< 10^(5.4 * 10^162)
sizeof(B) < T * sizeof(A)
< 10^53 * 10^(5.4 * 10^162)
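The step from M to sizeof(A) can be sanity-checked with exponent arithmetic alone (a quick sketch in floating point):

```python
import math

M = 1.8e163                     # the bound on bits of storage, from above
exponent10 = M * math.log10(2)  # since 2^M = 10^(M * log10(2))
print(exponent10)               # roughly 5.4e162, as used for sizeof(A)
```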
2 to the 24,036,583rd power minus 1 is a prime number. Aren't you glad to know that? --DR
Useful little web page, allows you to do arbitrary precision maths. For example: [2^(2^(2^(2^2)))]
9^9^9^9 took too much memory to evaluate. So not really suitable for /BigNumbers, then.
How did you expect a web page to display that number? At 1 page a second, full of decimal digits, it would take longer than the life span of the universe to scroll down. --DR
Use a smaller font? ;). OK - 9^9^9^9 mod 3 took too much memory to evaluate. It's still not suitable for /BigNumbers :p.
MoonShadow - any chance of doing the same sort of thing you do with postcodes and google, with this? --DR
Here, put in a bit more detail, is what I was trying to describe.
"I" is the induction function. It takes the place of the symbol "...". The first argument is the number in the sequence actually intended, the following pairs of arguments are sequence number, sequence value.
So, for example, instead of defining the successor function S(N) by saying S(N) = N + 1, or by saying S(0) = 1, S(1) = S(S(0)), S(2) = S(S(S(0))), S(N) = S(S(S(...S(0)...))), we can use I to abbreviate that to: S(N) := I(N, 1, S(0), 2, S(S(0)), 3, S(S(S(0))))
And, because we may want to define lots of functions (more than there are letters in the alphabet), we'd be best off using a universal function F( ) whose meaning (ie 'letter') is defined by the first argument. So A(X,Y) becomes F(1,X,Y), B(X,Y) becomes F(2,X,Y), and so on.
So, to standardise the first few meanings of F
let's make F() with just one argument be the numbers (which we could represent using [ChurchEncoding] as 1 being F(), 2 being F(F()), 3 being F(F(F())) etc)
let's make F(5,A,B,N) be Knuth's Up Arrow - A (^)^N B
And, if instead of using as our variables A, B, C, D, etc (we might run out), let's use N(1) to be variable A, N(2) to be variable B, etc. This will then give us scope to define [Conway Chained Arrow Notation]
Let's make F(6,N(1),N(2),...) be Conway's chained arrow A-->B-->N-->M etc.
If we think about using I() on a decreasing sequence, that will let us define subtraction, which will then let us define division and [the Ackermann function]
Where am I going with all this? Well, suppose we define a number U(N) to be the largest finite number uniquely specified in N characters or less by using the above symbol system and starting from the primitives C(), F(), N(), I() and a few miscellaneous symbols such as ":=", ">", etc. [*1]
So for example: "F(F(),N(F())) := I(N(F()), F(), F(F(),F()), F(F()), F(F(),F(F(),F())), F(F(F()), F(F(),F(F(),F(F(),F())))) ; F(F(),F())" takes just under 100 characters, and only has the value 1 - certainly not the highest valued number possible in 100 symbols, but it does get the successor function defined. By 200 characters you'd have addition defined as well. By 1000 or certainly by 2000 you'd have the first 10 functions defined above (up to Ackermann and Conway) and be into the really big numbers.
You made a slight mistake in that definition. If translated into slightly nicer notation, it comes out as "F(1,x) := I(x, 1, F(1,1), 2, F(1,F(1,1)), 3, F(1,F(1,F(1,1)))) ; F(1,1)" (or, if F() is supposed to be 0, replace every number by a number one less). So F(1,1) = I(1, ....) = F(1,1) = I(1,...) = ... and so on, not actually terminating. It would of course be easy to fix, but the fact that you've come up with something nonterminating by accident strongly suggests that it's possible to define nonterminating things, which you were trying to avoid. --Edwin
Now, suppose in a way analogous to Conway notation we defined U(2000,2) as being U( U(2000) ). How large a number could be uniquely specified (without recourse to tactics interfered with by computing whether something will halt or not, which is a problem suffered by the more loosely specified [Busy Beaver] [*2]) in that many characters?
I proposed at the SaturdayCoffee that U(20,20,20) was the largest finite number yet written.
Until of course someone came along and added a "+1" at the end. :-)
Footnotes: [*1] In fact, to allow new symbols, in the same way that we allowed new variables and new functions, we could define S(1) as "=", S(2) as ";", etc. But that's just being picky. If we wanted an alternate way of writing U(20), U(20,20), U(20,20,20), U(20,20,20,20) etc, I'd suggest standardisation on a symbol not yet used in maths yet easy to write and present on all computer keyboards... "@". So U(20) becomes @20, U(20,20) becomes @20->20, and U(20,20,20) becomes @20->20->20.
[*2] Note that it is intended that the grammar specified above NOT be Turing complete, in that U() is not one of the functions that may be referenced from inside it, nor can it be defined inside it.
I don't think I() is defined well enough for you to use in anything formal like that. What's I(6, 0, 7, 1, 10, 2, 15, 3, 1500002)? --Edwin
That's not quite how you use I(). I() is a direct replacement for "...". You do the definition in terms of the previous high level function, not in terms of the end number. So for instance, you don't define 10 + X as being
For X = 1, 10 + X = 11
For X = 2, 10 + X = 12
For X = 3, 10 + X = 13
...
Instead you'd write it:
For X = 1, 10 + X = 10 + 1
For X = 2, 10 + X = 10 + 1 + 1
For X = 3, 10 + X = 10 + 1 + 1 + 1
...
making use of your already having defined the operation "+ 1". --DR
Corrections: I guess I oversimplified going from succession to addition and from addition to multiplication. You'd have to define 1+B, 2+B and 3+B before using that to define A+B. And similarly 1xB, 2xB and 3xB before defining AxB?. Also, since 20 isn't enough characters to define anything larger than 20, you'd actually want @1000->1000->1000.
I don't think you've actually banned Turing-completeness. What about things like "F(10, N) := F(N, N)"? I'm a bit suspicious, since you seem to be trying to define a model of computation larger than primitive-recursive but smaller than computable, and I haven't come across one before. --Edwin
I don't think you could use I() to define F(10, N) as being F(N, N), because I() doesn't get applied to the end results of a sequence (which could be ambiguous), it gets applied to an explicit progression used to calculate the end results in a sequence. --DR
And if that number isn't big enough, we could take it a step further. Rather than trying to use U to write long strings such as: @1000->1000->1000->1000->1000->1000->1000->1000->1000 and then go the whole route of writing @1000(->1000)^8 and so on, we could invent another function, similar to U, call it V, which was allowed to use U() among its functions (but not, of course, itself). You get the idea. And when you run out of space on a single web page to write a large enough number with V notation, surely someone will attempt to define W() based off that. So let me preempt the effort. I hereby define W(N) as being the largest number comprehensibly, non-self referentially, unambiguously and 'theoretically computable in finite time'-ly definable on a wiki page of N or fewer characters with no reference to mathematics or other material not already referenced on this page.
I think your W is actually pretty much the Busy Beaver function, and isn't itself computable. I'm not sure whether you're still caring about that aspect if it, though. --Edwin
I'm hoping there are two differences. Firstly, if one defines the rules of the grammar carefully enough, one should be able to decide analytically whether a string conforms to the " 'theoretically computable in finite time'-ly " aspect without having to actually run it.
Alan Turing would like a word with you about that --Edwin
Ok, more precisely: for every sequence of characters that is valid according to the rules of some specific non-Turing-complete grammar, that is N characters long, where for that grammar the validity of a sequence can be checked by some means other than running it to see whether it halts or not, there will exist one such sequence with a value higher than or at least as high as any other sequence of the same length or less; and that value, for the specific stated grammar, is W(N). --DR
Secondly, there is the recursive aspect that (were Rado's sigma allowed, which it ain't) would permit the construct Σ(N), Σ(Σ(N)), Σ(Σ(Σ(N))), ... K(M, N), which is a pile of Σ's M deep.
Obviously the claim is that the above grammar (the one using F(), I(), U() and functions defined using them such as V() onwards) counts as a non-Turing-complete grammar where it is theoretically possible to determine whether a sequence has been written validly using the grammar by a means other than running the sequence for a long long time. --DR
I dispute your claim that the grammar is well-defined or non-turing-complete. I() is incredibly problematic -
Firstly, "and so on" is hardly a well-defined operation, --Edwin
Could one define it in terms of a functional non-turing complete text manipulation language. So for example: defining A x B by looking at A x 1 = A, A x 2 = A + A, A x 3 = A + A + A, A x 4 = A + A + A + A, the rule is "Add the string ' + A' to the end of the phrase (B-1) times." --DR
If you want to define an unambiguous value, then you'll need to change I so that it accepts a rule (defined in some ambiguous non-terminating way), rather than a bunch of examples from which the reader is invited to deduce the rule.--Edwin
Fair enough. See answer to MoonShadow below. Assuming, that is, that you actually meant "terminating and non-ambiguous" :-) --DR
and secondly, you accidentally wrote a non-terminating expression in the grammar further up the page --Edwin
I didn't say it was easy to make a valid string (especially if you're having fun by using Church numbers instead of decimal digits), only that it must always be theoretically possible to deterministically detect and decide whether a string is valid by looking at the syntax, without running into the halting problem. And you did detect it, and detected that the string was non-terminating without running it, yes?--DR
The string was valid, though - there was nothing in the grammar to say that it wasn't. It's just that it was nonterminating. And just because I could detect it in that particular case doesn't mean that a computer can detect it in general. (And if you want to say something like "my grammar doesn't allow nonterminating strings, by definition", then you might as well cut the clutter and say "my grammar is all valid computer programs that happen to terminate") --Edwin
I thought the brackets didn't match? The general format I() should have obeyed was: F(k,A) := I(A, 1, {an expression involving functions k-1 or lower}, 2, {an expression involving functions k-1 or lower that has been modified in a systematic obvious way from the previous one, such as adding on a string}, 3, {an expression involving functions k-1 or lower that has been modified in THE SAME systematic obvious way that was used to reach example 2 from example 1}) --DR
"Deduce the general rule by looking at a finite number of specific examples" is not a well-defined function you can use in any sort of formal proof. Please scrap everything above and (if it's possible) rewrite using well-defined operations only. - MoonShadow
Ok, so let's take Edwin's suggestion, and redefine I() so that instead of defining A x B via the progression A x 1 = A, A x 2 = A + A, A x 3 = A + A + A, A x 4 = A + A + A + A, we instead go straight to the explicit rule A x B = "A" + ( " + A" ) x (B-1), defined by repeating the text operations of string insertion and string substitution a specified number of times. Could that be specified in a way you'd consider to be well-defined? --DR
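A minimal Python sketch of what such a restricted I() might amount to (my construction for illustration, not DR's exact proposal): the only control structure is a loop whose bound is fixed before it starts, so termination is never in question.

```python
def times_via_rewriting(a, b):
    """Build A x B from addition by pure string rewriting: append ' + A'
    exactly B-1 times, then substitute the actual value of A."""
    expr = "A"
    for _ in range(b - 1):    # loop bound is fixed in advance: must terminate
        expr = expr + " + A"
    return expr.replace("A", str(a))

s = times_via_rewriting(3, 4)
print(s)        # 3 + 3 + 3 + 3
print(eval(s))  # 12
```

The same bounded-rewriting trick would take addition to multiplication, multiplication to exponentiation, and so on, without ever admitting an unbounded loop.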
I think you could tighten it up to be essentially [Lambda Calculus]. Again, Turing-complete, though. --Edwin
No doubt Lambda Calculus could do the job, as could any number of Turing complete programming languages such as PERL. The question is, though, whether a useful version of I() capable of defining the jump from succession to addition, from addition to multiplication, from multiplication to tetration, etc, could be written in something less powerful than Lambda Calculus that is clearly not Turing-complete? --DR
For Ackermann, as mentioned earlier, you need some additional symbols ("less than" and symbols delineating a switch case syntax) to do A(0,n)=n+1 ; A(m,0)=A(m-1,1) ; A(m,n)=A(m-1,A(m,n-1)). Would those symbols, if available to be written out onto the symbol string using primitive recursive functions, make the process turing complete? --DR
Yes. If you have a way to avoid it, feel free to state it, and also your explanation for why it isn't Turing-complete. --Edwin
How about using the substitution rule used in the [Metamath Proof Explorer] ? Define I() in terms of applying a description of a series of allowable substitutions, to create a new numbered F() function which is the resultant application of that series of allowable substitutions. --DR
So instead of I() trying to construct a multiplication function F(2, ...) from analysing the regular changes between a series of applications of the addition function F(1, ...), you'd have some highly restricted substitution specification language that let you describe explicitly how to go from one to the other.
XKCD Number :== A(g64, g64)
Reay's Number :== W( XKCD Number )
DouglasReay has another question about BigNumbers?:
Suppose you have several functions, each of which, when given a number for each input slot, outputs a larger number. Let's call them P(n), Q(n) and R(n) for the sake of convenience. And suppose, for non-trivial values of n (say, n > 100), they can be ranked into a strict order by which function will output the highest number for the same input. For example: if P(1000) > Q(1000) > R(1000), then P(2000) > Q(2000) > R(2000)
One can define several possible new functions by specifying an order in which to apply the other functions. Obviously, the biggest number resulting from three such applications to the initial starting value would come from M(1000) defined as P(P(P(1000))) {going with our previous example in which P>Q>R} But, what if we want to use each function just once? (with the obvious clarification of allowing a result to be re-used where two inputs are needed) ?
If S(1000) = P(Q(R(1000))) and T(1000) = R(Q(P(1000))), which will be bigger? My intuition says T will be bigger, and that it will be most effective to put the most powerful function on the inside of the composition. But can that be proven? Is there a good way to quantify (or even talk about) the 'second order' property that influences the effect of re-ordering them, and how that changes as the initial input number grows?
Don't confuse T() with Kruskal's TREE(3) --Pallando
Without further constraints, that doesn't hold -- if P(n)=n+6, Q(n)=n+5, and R(n)=n+4, then S and T are identical. --Nozaquirce
(PeterTaylor) If P(x) = 2^x, Q(x) = x^2, R(x) = x then P(Q(R(10))) ~= 1.3E30 and R(Q(P(10))) = 1048576. You want the most powerful function on the outside, in general.
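Spelling that example out in Python (taking R to be the identity, which is what makes the quoted figures come out):

```python
P = lambda x: 2 ** x   # exponential: the most powerful of the three
Q = lambda x: x ** 2   # polynomial
R = lambda x: x        # identity: the weakest

strongest_outside = P(Q(R(10)))  # 2^(10^2) = 2^100, about 1.27e30
strongest_inside = R(Q(P(10)))   # (2^10)^2 = 1048576
print(strongest_outside > strongest_inside)  # True
```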
"In general"? What are the exceptions, and can it be quantified? --DR
(PeterTaylor) The only exceptions I can think of are those where it doesn't matter. Nozaquirce gave one; x -> x^2 and x -> x^3 is another, because whichever order you compose them you get x -> x^6.
(PeterTaylor) Actually, building on that latter example it's possible to get an actual reversal. Consider P(x) = x^3 + 1, Q(x) = x^2 - 1. Then P(Q(x)) = x^6 - 3x^4 + 3x^2 and Q(P(x)) = x^6 + 2x^3. Asymptotically they're equivalent, but the second order terms give QP the edge.
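The reversal is easy to confirm numerically, evaluating both compositions at a few points (a quick check, nothing more):

```python
P = lambda x: x ** 3 + 1
Q = lambda x: x ** 2 - 1

# Q(P(x)) = x^6 + 2x^3 beats P(Q(x)) = x^6 - 3x^4 + 3x^2 at every tested x,
# even though P eventually dominates Q pointwise.
for x in (10, 100, 1000):
    print(x, Q(P(x)) - P(Q(x)) > 0)  # True each time
```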
(DouglasReay) So in general, for any two positive integers m and n, where n > m, is it always possible to define two functions P(x) and Q(x) for which P(x) > Q(x) as x tends towards infinity, but such that P(Q(m)) < Q(P(m)), and the apparent linear trend (just looking at the range of inputs x=m to x=n) is that it is increasingly better to have P() on the inside?
Note: I'm not sure which would be the best measure of the latter: an increasing ratio [ { Q(P(m)) / P(Q(m)) } < { Q(P(n)) / P(Q(n)) } ] or just an increasing difference [ { Q(P(m)) - P(Q(m)) } < { Q(P(n)) - P(Q(n)) } ]