But how relevant are the blueberries?

Intellectual reader, I invite you to imagine with me a malleable set of declaratives. By this I mean a set of logically related statements that can be altered for the purposes of experimentation; we can take away, add, or reposition declaratives and observe what becomes of the rest of the set. Our first observation will be the way in which each component part is related to the others. Only two sorts of logical relationships may exist between any given pair of statements, though these relationships may be described in multiple ways and are best expressed as magnitudes, not booleans. In other words, it is best to discuss the extent to which a certain relationship exists rather than the mere fact of its existence or lack thereof.

[Figure 1: Venn diagram]

We will here only discuss one of the two relationships: that of logical consequence. To describe this relationship, we may refer to declaratives as either “following from” one another or else “being contained” within each other. A concrete example is in order: suppose I held before you a black pen; if I were creative enough, I could talk about the pen forever, because there are infinite truths that may be said of this black pen of mine. But suppose, of all the possibilities, I chose to say to you, “this pen exists”. The use of the demonstrative pronoun ‘this’ brings into language all the infinite qualities that the pen possesses; hence, “this pen is black” follows from, or if you prefer, is contained within, “this pen exists”, because the former is a subset of all the infinite truths contained within the latter.

[Figure 2: Venn diagram]

So picture the two declaratives as a Venn diagram; in this instance, it is not a conventional-looking image (figure 1). But if we were to consider another example, the diagram would look more familiar: suppose instead I said to you, “this pen uses black ink, and all pens that use black ink write clearly”. Now you might reply, being the clever reader you are, with another fact that follows from and is contained within the previous two; “if that is so,” you would answer in your decorous manner, “then this pen writes clearly”. Aside from our admiration for what a sensible and insightful logician this response makes you out to be, we are now struck by the complexity of a logical phenomenon. Presently we have two statements that intersect to form a third (figure 2), so “this pen writes clearly” follows from the union of “this pen uses black ink” and “all pens that use black ink write clearly”.

Kindly notice that each bubble in the diagrams above may vary in size, depending on what order of infinity it represents. Notice further that, in our second example, A and B share certain common facts, a set of declaratives we will call C, but also have some differences. So how closely related are A and B? The answer is a simple measure of area, and it describes a notion that I will call ‘gravity’. To express the formula for gravity, I will refer to the area of a statement X with the symbolic convention ∫X. So the gravity between A and B in our example is Γ = ∫C / (∫A + ∫B).
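To make this concrete with made-up numbers: if ∫A = 40, ∫B = 60, and the shared region gives ∫C = 10, then Γ = 10 / (40 + 60) = 0.1, whereas two statements that share nothing at all have Γ = 0.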

This expression solves two important problems. The first is that of defining a scope, a sector of reality that is coherent. Consider an example: you tell a friend that, on theological grounds, you believe it was immoral for him to steal blueberries from Mr. Dimmesdale, and in his contemplative manner, he says, “but ‘God works all things together for the good of those who love Him’, so my deed will ultimately come to good”. You are both right, but he has misapplied a teleological perspective to an analysis of the action itself. The fact that he brought up exists in a larger scope than the matter you are discussing. And defining a scope is no subjective matter; to express it mathematically, we must first make one more definition: a “gravitational average” is the average gravity that one statement bears on each other member of a set. With that in place, a scope is any set of declaratives such that each member has an equivalent gravitational average.
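As a minimal sketch in code (the matrix of pairwise gravities is a stand-in of my own; only the formula Γ = ∫C / (∫A + ∫B) and the definitions above come from this post):

double gravity(double areaA, double areaB, double sharedArea) {
    // Γ = ∫C / (∫A + ∫B), the measure of area defined above
    return sharedArea / (areaA + areaB);
}

// The average gravity one member bears on each other member of the set.
double averageGravity(double[][] pairwiseGravity, int member) {
    double sum = 0;
    int n = pairwiseGravity.length;
    for (int j = 0; j < n; j++)
        if (j != member) sum += pairwiseGravity[member][j];
    return sum / (n - 1);
}

// A set of declaratives is a scope when every member has an equivalent
// gravitational average.
boolean isScope(double[][] pairwiseGravity) {
    double first = averageGravity(pairwiseGravity, 0);
    for (int i = 1; i < pairwiseGravity.length; i++)
        if (Math.abs(averageGravity(pairwiseGravity, i) - first) > 1e-9)
            return false;          // unequal averages: not a coherent scope
    return true;
}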

The second issue that gravity solves is that of distinguishing normal functioning from dissociative functioning. Dissociative functioning is a section of a proof of actions on which an alternative declarative bears greater gravity than the primal premise. For a more in-depth discussion of this, see Is Hypnosis Self-Evident? A Concise Philosophical Inquiry, in which post I describe the concept of gravity in different terms that nonetheless mean the same thing.

It seems prudent to define one last term: the Quantum Model of Reality. If we picture reality as a black-board with an infinite area, on which each infinitesimal point represents a fact (and those combine to form larger facts), then by the Quantum Model of Reality, we are able to draw lines on the board to sector it off into quantum regions contained within one another; in other words, we can draw a larger circle around a smaller one ad infinitum, where each circle represents a valid scope that is defined in terms of a gravitational average. This is why, elsewhere on this blog, we have referred to reality existing in ‘levels’. In practical application, “God works all things together for the good of those who love Him” can only be discussed in relation to other notions of equal size, and Mr. Dimmesdale’s blueberries still ought to be returned.

Why I am not an Evolutionist

Unlike many, I see no incompatibility between Christian doctrines and the Theory of Evolution.  I don’t think that Christianity is meant to explain all of science for us; instead, I am quite compelled to think the opposite.  The Holy Bible uses the language it uses not to explain the laws of physics to us or tell us how old the earth is, but to explain that which lies beyond the capacity of human finding.  Turning once more to the model presented in “La cima del purgatorio,” one might say that the Bible was written to explain to us all the things that Virgil is incapable of discovering for himself.

With that in place, it quickly becomes clear that any references to “science” that we find in the Bible are not the ultimate intent of their associated rhetoric, but are themselves rhetorical devices being used for the communication of something much more important.  To differentiate between the makeup of a rhetorical strategy and the intention of the rhetoric, consider the case of Larry the Cucumber’s infamous water buffalo song.  Here we have a vacuous vegetable going on with a rather silly song only to be interrupted by some scrupulous other who objects to a discrepancy of complete irrelevance.  The situation is almost comical.  Actually, I think it is comical, maybe even silly.  But I hold it as no less silly to object to a passage in the Bible because it makes allusion to the earth being flat (or something of that sort).  Indeed, at the time the Bible was written, the earth was thought to be flat, and we should hardly expect the text to have gone so far out of its way as to first explain all of science to its readers before making any allusion to the physical universe–that’s just silly!  Instead, the best rhetorical strategy God could have chosen would be to speak of the world in the vernacular of the people he was working through, which happened to include some irrelevant misunderstandings about the physical universe.  This indeed seems to be the strategy He has taken.

For those of you Christians who do not agree with me on this, consider 1 Kings 7:23 which reads, “And he made a molten sea, ten cubits from the one brim to the other: it was round all about, and its height was five cubits: and a line of thirty cubits did compass it round about” (KJV).  If we do some algebra:

d = 10 cubits

r = d / 2 = 5 cubits

c = 30 cubits = 2 π r

30 cubits = 2 · π · 5 cubits

π = 3

We get a mathematical statement that I have disproven more times than I can possibly count.  But I do not hold the exact value of π as being any more relevant to the salvation of souls than the ownership of water buffaloes is to the enjoyment of a humorous little song.  And so it seems to me to be of equally little importance whether or not God created humanity through a long, many-yeared process or a six-day one.  All I care, with regard to the literal facts, is that He created us and did so according to the normative principles that have elsewhere been established as necessary prerequisites to our existence.

One brief side note before I turn directly to the science of evolution: All the inaccuracies that have hitherto been mentioned are perhaps not even as dramatic or detrimental to the purely literal Bible as we might make them out to be.  Consider the following points:

  • Three is less than five percent different from π, which actually makes it an accurate estimate of the irrational number when we account for significant figures.
  • The earth cannot be proven to revolve around the sun, and we indeed have no conclusive evidence that it does.
  • The earth, being roughly egg-shaped, does not really form any exact geometric shape at all, and therefore, to say whether it is flat or round is somewhat subjective.  Parts of it are flat, and other parts are round, but no part of it is perfectly flat or perfectly round.†
  • The order of the creation of species described in Genesis is roughly the same order science is uncovering, and the word that is translated to “day” could also be translated to “period.”  Therefore, the book might be saying that God created the universe in six time periods which are in the same order as science supposes them to be.

However, as I have said, I find all this argument about the physical universe to be largely irrelevant.  Now to evolution:

I find no theoretical inconsistency with the theory of evolution, but I find it hard to accept as a respectable scientific theory on the grounds of plausibility.  Having relatively little knowledge of biology, I will find it useful to comment on the theory from a statistical perspective rather than an empirical one.

Your DNA is made up of approximately six billion (6,000,000,000) genetic base pairs, each of which is in one of four possible arrangements (assuming that mismatches of nucleotides are negligible).  This means that, according to Carl Haub’s estimate for the total number of people that have ever lived, there is less than a one in 3 * 10 ^ 1,800,000,000 chance that you would exist right now, assuming that there cannot be two people with exactly identical DNA, which would further decrease the probability.˚  This number completely excludes the probability of the human race existing, which is dependent on all physical factors that were necessary for its genesis–if there are such identifiable factors–as well as all those which are necessary for its continued prosperity.*

Furthermore, in the world of statistics: if we suppose that apes have DNA that differs from humans’ by two percent, and that humans and apes together have an average of four point two billion base pairs (still using that six billion from earlier, and averaging it with the two point four billion ape base pairs), then forty-two million base pairs had to randomly mutate in order for either species to evolve from their ideal common ancestor (this being one percent of the aforementioned average), and twice that number in order for the whole process to occur.  Hence, the chance of the human race evolving from a common ancestor with apes is less than one in roughly 2 * 10 ^ 25,200,001 for every four point two billion mutations that occur.  We do not currently have any conclusive figure describing the mutation rate of humans or apes (that I know of), but it is thought to be very low.  Hence, if I were to take a single atom off the tip of your nose and throw it randomly into the universe, this single evolutionary step in what is thought to be an immense chain reaction of similar processes is less probable than you finding your missing nose piece without searching for it (based on current estimates of the number of atoms in the universe).
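These figures are far too large to compute directly (see the post word below); the honest route is through logarithms, as in this minimal sketch of my own:

// The number of arrangements of n base pairs with four options each is 4^n,
// and the base-10 exponent of 4^n is simply n * log10(4), about 0.602 * n.
double exponentOf4ToThe(double n) {
    return n * Math.log10(4);
}
// e.g. exponentOf4ToThe(42e6) is roughly 2.5e7, the order of the exponent
// quoted above for the forty-two million mutations.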

It is imperative that you understand that these numbers are incomprehensible (uh … yes … that’s supposed to be funny).

Of course all this work is very rough and depends much more on statistics than science, but the math certainly shows that biologists have some serious explaining to do, if nothing more.  I fear that because many believe that evolution is so relevant to arguments against theism, they have shaded the public’s view of the theory.  Indeed, public perception is so misguided on this matter that people who know nothing about the subject hold it as solid fact.  In reality, it seems that it is a very shaky theory, and if the evolutionists don’t have some clever reason why statistics are irrelevant to its plausibility, then we will all be compelled to call the Theory of Macroevolution “pseudoscience.”  I do, of course, understand that it is a very useful model that can be stimulating to research and the organisation of data, and for that reason I would not propose to throw it out altogether, but would suggest we stop preaching it so religiously as fact–because it is clearly not true.

What bothers me about this situation, and has led me to blog about it, is a concern not with theistic and atheistic argument, but with academic honesty and sincere truth-seeking.  It seems that the voice of those who would point out that the emperor has no clothes has been buried in the overpowering assertiveness of those who would not.  Science has effectively lied to us, and that bothers me for science’s sake as well as for the sake of all academia.

POST WORD:

Here is a program that shows how a geocentric theory of the solar system is just as plausible (but less practical perhaps) as the current heliocentric theory.

And here is a “Super Calculator” that I created with the hope of using it to compute those ridiculous figures I’ve included in this post.  Much to my chagrin, I found that, even as efficient as it is, the program would take many years to arrive at those numbers (that’s how absurd they are!), and was compelled instead to turn to more theoretical methods of “Discrete Mathematics.”  But if you think that a super calculator is the sort of thing you’d like to have floating around on your hard drive, click the link.

(Technically, every word in this article is a “post” word.)

________________________

† As the current theory stands, it is a fractal.  Of course I have stepped outside the scope of the question once I turn to such a theory, but so have the people who proposed the question in the first place.

˚ We shall ignore the negligible probability of two individuals having the same DNA by chance or in the case of identical twins.  This makes the math easier and has little effect on the estimate.

* I realize that this statistic is not all that relevant to evolution, but consider it an interesting prelude to the more relevant information found in the subsequent paragraph.

Ref #1: What’s Recursion

It was recently brought to my attention, thanks to the much-appreciated input of a commenter, that not everyone reading this blog knows what a fractal is.  As I began to think about how to explain the concept, I started to realize that there may be many such topics that I frequent in my writings that readers are unfamiliar with.  Although I try to give enough background information within each post, perhaps that is not always entirely sufficient.  Therefore, I have decided to create a series of “reference posts” explaining various such things for the sake of increasing the overall accessibility of this site.

Recursion: see recursion.

At its most basic level, a recursive algorithm is an algorithm that is somehow defined relative to the same system it is creating.  For example, one might define a geometric sequence recursively as follows:

A₁ = 2

Aₙ = Aₙ₋₁ · 2

This definition would produce the following sequence:

2, 4, 8, 16, 32, 64 … 2^n

In this sequence, I have defined each term relative to the term before it, with the exception of the first term, to which I have given a set starting point.  Since each term is found by doubling the previous term, the overall sequence can be said to be recursive because each part of the sequence is defined relative to another part of the same sequence, and so, more generally, the sequence is defined relative to itself.
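In code, the definition translates directly (a minimal sketch):

long a(int n) {
    if (n == 1) return 2;    // the set starting point: A1 = 2
    return a(n - 1) * 2;     // each term is double the term before it
}
// a(1), a(2), a(3), ... gives 2, 4, 8, 16, 32, 64, ...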

Recursion is, however, a broader concept than this and can extend beyond the world of simple algebra.  In fact, recursion can be found all over nature.  A simple example is when two mirrors are held so that they reflect each other.  One ends up with the image of a mirror inside a mirror inside … This is because the image on the mirror is defined relative to itself.

Another simple example of recursion is proof by induction.  In a proof by induction, one proves that an algebraic expression in n is equivalent to each respective term of a sequence by plugging it in recursively.  I will use the same geometric sequence above as an example:

Prove: if  A₁ = 2  &  Aₙ = Aₙ₋₁ · 2,  then  Aₙ = 2^n

Base case: A₁ = 2 = 2^1.

Assume: Aₙ = 2^n

Then apply the algorithm of multiplying by two to find the next term, and see whether it equals the closed form on the right with n incremented by one:

Aₙ₊₁ = Aₙ · 2 = (2^n) · 2 = (2^n) · (2^1) = 2^(n + 1)

Q.E.D.

Another example of recursion is the algorithm used by a scientific calculator to parse a formula into a computable expression.  That is, if I enter the expression

3 *2+4*3*2 +2

into a scientific calculator, it solves it recursively.  It might, for example, have an evaluate method that takes the first number and applies to it the solution of the rest of the expression (which it finds using the same evaluate method) using the given operation.  If you are familiar with computer science, the code might look something like this (in summary):

double evaluate(String expression) {
    // Split on the first '+' (the lowest-precedence operation) and recurse:
    // everything before it is a product, and everything after it is
    // evaluated by this very same method.
    int plus = expression.indexOf('+');
    if (plus >= 0)
        return evaluateTerm(expression.substring(0, plus))
                + evaluate(expression.substring(plus + 1));
    return evaluateTerm(expression);
}

double evaluateTerm(String term) {
    // The same idea one level down: split on the first '*' and recurse.
    int times = term.indexOf('*');
    if (times >= 0)
        return Double.parseDouble(term.substring(0, times).trim())
                * evaluateTerm(term.substring(times + 1));
    return Double.parseDouble(term.trim());
}

// A fuller version would also accommodate '-', '/', and parentheses;
// this is merely an example that solves the above problem.

Recursion can also be thought of more generally as any process that references itself.  For example, psychology might be considered recursive because it is using the mind to study itself.  Recursion occurs in levels (see “Levels of Recursion”) or iterations.  Every time a recursive algorithm is completed, the system is said to have undergone one iteration or advanced one level of recursion.

Google has a pretty funny joke about recursion.

Orders of Infinity

You had to know it was coming…another calculus post!

If you have absolutely no interest in calculus, then I don’t recommend reading this; it will probably just be frustrating.  I don’t really expect anyone will follow this, but I did my best in wording it; it is a difficult concept.  I might rather call it an anti-concept because it is not an established idea, but an idea of ideas of ideas of …

Here it is:

Allow me to begin with a definition: an “arithmetic dimension,” n, is an algorithm of numeric manipulation that is defined according to a series of additions of the input to itself (or subtractions from itself) repeated to the n-th level of recursion.

Therefore, the first arithmetic dimension is addition/subtraction, the second is multiplication/division, and the third is exponentiation/root.  These are the only commonly used and defined dimensions, but there are in fact infinite arithmetic dimensions.  Consider it this way: addition is pre-defined, multiplication is the addition of the multiplicand to itself repeated the number of times indicated by the multiplier, and exponentiation is the multiplication of the base by itself repeated the number of times indicated by the power.  Therefore, it is clear that the fourth arithmetic dimension is the raising of the input to the power of itself repeated the number of times indicated by the “secondary input.”  Thus it is recursively defined: the n-th arithmetic dimension is the application of the (n-1)-th dimension using the input as both the input and the secondary input, repeated the number of times indicated by the secondary input.

I bother presenting this definition before we begin because I am not aware of its existence elsewhere.  Therefore, I will invent a notation for it: let the n-th arithmetic dimensional operation applied to an input and a secondary input be expressed as

a n: b

and read as “a dim n b.”

I am interested, at present, in only the application of the addition-based side of each of the arithmetic dimensions, so this notation and reading will assume a positive-based definition and only acquire a negative one if 0 ≤ b < 1, as is inherently true from the nature of arithmetic.  The concept I wish to use this for at present (though I’ve already found it has many applications beyond this concept) is that of the orders of infinity.
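For the programmatically inclined, here is a minimal recursive sketch of the positive-based side of “a dim n b” (my own code; the same ladder is elsewhere known as the hyperoperations):

long dim(int n, long a, long b) {
    if (n == 1) return a + b;                // the first dimension: addition
    if (b == 1) return a;                    // one application yields the input
    return dim(n - 1, a, dim(n, a, b - 1));  // repeat the (n-1)-th dimension
}
// dim(2, a, b) works out to a * b, dim(3, a, b) to a^b, and dim(4, a, b)
// to a raised to the power of itself b times, and so on up the dimensions.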

It is often said that there are infinite subsets of infinity.  This statement, while true, only looks at the negative-based arithmetic dimensions.  That is, we can subtract, divide, or root infinity by any finite number and get an output of infinity.  What is looked at less often is the opposite: the positive-based arithmetic dimensional operations applied to infinity.  Infinity can be added to, multiplied by, or raised to the power of any finite number, and once again, the end result is infinity.

Of course, the calculus literate know that while all this is true–that is, while any finite operation applied to infinity outputs infinity–the qualities of the infinity outputted by these different operations vary.  That is, while infinity squared still equals infinity to the first power, the ratio between infinity squared and infinity to the first power is equal to infinity, whereas the ratio between infinity to the first power and itself is equal to 1.  Therefore, when we apply any positive operation of an arithmetic dimension higher than 1 to infinity, we get a higher order of infinity that can be appreciated via other arithmetic operations.  (Notice this statement excludes the first arithmetic dimension, because infinity + x, where x is finite, is still the same order of infinity.)
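In the language of limits, for instance:

LIM X=>∞ (X^2)/X = ∞     LIM X=>∞ X/X = 1     LIM X=>∞ (X + 5)/X = 1

so squaring infinity raises its order, while adding a finite number does not.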

This is a pretty big deal considering the following:  In a very important sense, all the orders of infinity are not equal to each other–they are in fact infinitely different, but that, I will admit, is irrelevant.  The real issue in considering them equal is that it would disrupt a pattern when we start using infinite secondary inputs.  That is, ∞ -1: ∞ (also written as ∞ – ∞) is equal, not to infinity, but to zero, whereas ∞ 1: ∞ would be said to equal ∞.

But now consider something like ∞ ∞: ∞.  You shouldn’t be reading this sentence yet–you should still be considering.

…Ok you can go on reading now.

That above mentioned quantity is the highest arithmetic dimensional highest order of infinity.  However, the raising of such a concept introduces a second set of dimensions: We have thus far defined an arithmetic dimension relative to the use of arithmetic operations, but we might also now consider an “arithmetic dimension” its own operation which can be used, in a similar respect, to define an arithmetic dimension dimension.  I know, the terminology is silly, but it’s the most natural wording that arises.

A second “arithmetic dimension dimension” operation at dimension n, with an input of a and a secondary input of 2, could be written out longhand as follows:

(a n: a) n: a

and a secondary input of 3:

((a n: a) n: a) n: a

After defining this, we could give it some sort of notation (perhaps a c: n: b), and then define the arithmetic dimension dimension dimension.  We could keep going about this to infinity, plugging the algorithm into itself, with each additional dimension requiring an additional input (though we might just default to assigning this input a value of ∞).  And then, if we really had so much time on our hands, we could begin constructing a series of that series, calling the arithmetic dimension the first in the series, the arithmetic dimension dimension the second in the series, and so on to infinity.  Then we could begin to construct a series of that series, and a series of the series of that series, and so on.  In short, there is no limit to the fractal of orders of infinity.

All that probably seemed pointless, but it’s not; my point is this: if one travels to a high enough order on any of these dimensions, the first order of infinity in that dimension is considered equal to zero (the trivial case is to compare ∞^1 to ∞^∞).  That concept, taken and applied to the infinite sets of infinite series of recursion, is a powerful thing.  It is even recursive in itself, because we could use this model of infinity to evaluate the infinite system of dimensions that we have used to arrive at the model.

Spooky.  I know.

A Singular Application of Levels of Recursion

A friend of mine recently showed me the following question which I believe can be found online somewhere (besides here):

If an answer to this question is chosen at random, what is the percent chance that it will be the correct answer?

A. 25%

B. 15%

C. 50%

D. 25%

There is actually nothing wrong with this question.  If one looks at it at the trivial case level, it actually doesn’t have an answer, and therefore, an answer must be assigned arbitrarily in order to see the rest of the system work its way out; thus any answer given is ultimately arbitrary.  The question is, in this sense, like asking “What is the correct answer to this question?”, which is really just nonsense.  However, ignoring that, let’s suppose we assigned our trivial case the answer “B. 15%.”

This selection, while creating an arbitrary answer on this level, the trivial case, causes a relative correct answer of A or D on the next, let’s call it the second, level of recursion.  There being two correct answers on the second level of recursion makes C the right answer on the third level, and thus on the fourth level we are back to A or D.
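To watch the levels turn over, here is a toy sketch of my own; the whole system reduces to “count the options that match the current answer”:

int nextPercent(int percent) {
    int[] options = {25, 15, 50, 25};    // A, B, C, D
    int matches = 0;
    for (int o : options)
        if (o == percent) matches++;     // options that would be correct
    return matches * 25;                 // chance of picking one at random
}
// Starting from the arbitrary B (15%):
// level 1: 15% -> one match   -> 25% (A or D)
// level 2: 25% -> two matches -> 50% (C)
// level 3: 50% -> one match   -> 25% (A or D)
// ... alternating between A/D and C forever.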

This is not a paradox, it is just a matter of an indeterminate level of recursion, which I find, as you probably could have deduced from the title of this website, quite fascinating.

Of course, the absolute answer to this question is that it does not have an intelligible answer any more than does the aforementioned question, “What is the correct answer to this question?”  However, if we assume the trivial case for no reason (i.e. we chose it trivially˚), then I think the most convincing answer would be the “infiniteth” level of recursion, which, because the system has no limit, no end behavior, would be best put into the words “none of the above.”

In a later post, I might well invent a less trivial application or come across it by necessity; I just found this one interesting.

__________________________________

˚ O dear.  I’m really not that funny, am I?

Levels of Recursion

Haskell Curry’s paradox, titled “Curry’s paradox,” is often stated in formal logic as follows:

let A = (A → B)          ! A is a boolean assigned the value “if A then B”; in words, this means “A is true if A’s being true means that B is true”

A → A

A → (A → B)              ! substitution

A && A → B               ! importation

A → B                    ! contraction

A                        ! substitution

B                        ! modus ponens

The logical error lies in the ignorance of levels of recursion.  In reality, there is no such thing as letting A1 = (A1 → B), because that is an inequality.  A1 does not equal A1 → B; it equals A1.  The algebraic analog of this principle would be something like “x = x + 1”, for which there is no real solution, and seeing as there is no such thing as imaginary logic, at least not yet (though there should be), the expression is utter nonsense in logic.  The first statement can remain in the syntax that it is currently in; however, it should be realized that what that statement implies is “Let A1 = (A2 → B).”  Thus recursion in logic works just the same as it does in algebraic recursive sequences (i.e. we never define “a sub n” in terms of “a sub n” but rather in terms of “a sub n plus or minus some integer value”).
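The algebraic analog even runs as code: “x = x + 1” has no solution, but the properly indexed version is an ordinary recursive definition (a minimal sketch of my own):

int x(int n) {
    if (n == 0) return 0;    // a chosen starting point
    return x(n - 1) + 1;     // each level is defined by the level before it
}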

Thus the proof is disproven as follows:

let A1 = (A2 → B)

A1 → A1

A1 → (A2 → B)

A1 && A2 → B              ! This simplifies no further.

This understanding is essential to the functionality of logic and is very relevant all over the place.

The Power Rule

I just thought I’d put this up because most people don’t know it.  The reason I know most people don’t know it is because I’ve never been taught it or read it anywhere, and yet it’s so simple and essential, unlike most everything else I come up with. ;~)

Here it is, a proof for the power rule in differential calculus (people take too much for granted):
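The standard route, for comparison, goes through the limit definition and the binomial expansion:

d/dx (x^n) = LIM h=>0 ((x + h)^n − x^n) / h

= LIM h=>0 (x^n + n·x^(n−1)·h + (terms in h^2 and higher) − x^n) / h

= LIM h=>0 (n·x^(n−1) + (terms in h and higher))

= n·x^(n−1)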

What is a limit really?

In many fields, especially ones related to the physical sciences, small-angle approximations are often used. For example, when x is very small, the sine of x is said to equal x and to equal the tangent of x. These conjectures are then used as theorems in a proof.  In my opinion, this is bad calculus. Such approximations generally serve their purpose well, but they are merely a way to make a very long proof look very short.  In calculations, it’s fine to be a little bit off, but not in proofs.  We often come upon a difficult matter to prove, and a professor takes the shortcut. While these shortcuts do work, in that they bring us to the right answer, they are not the real reason why that answer is right. In reality, the sine of x is never equal to x or the tangent of x unless x is absolutely zero.  Consider the following proof:

notes:

1. Because the recursion can approach infinity at any rate (even an infinite rate), and in application, x is assumed to be a finite constant just greater than zero, this indeterminate quantity (0^inf) evaluates to infinity.  Thus, as we should expect, the anti-proof does not apply to an infinitely small x value or a zero x value.

2. At this point, one may evaluate the expression on the left by flipping and multiplying by the bottom, thus getting sin(0) = 0, which is true; but there is nothing stopping us from moving the cos(x) to the right side of the equation before the self-referencing recursion and thus ending with it on the other side, so the move of multiplying both sides by 1/0 is legal in this case; this way of doing it just makes it more clear.  Of course, if one did start with the cosine on the other side, the proof might also evaluate to 0 = inf.  Admittedly, this proof is somewhat ambiguous overall for these reasons, but the end result (ignoring all the paradoxes) is that a limit is a limit, and any finitely small angle will not satisfy this equation, just as zero does not equal one, or infinity does not equal zero.  If you plug in an infinitely small value for x, all the problems immediately dissolve and the limit is proven.

3. Because, as was said in the previous note, the infinite quantity can approach infinity at any rate (an indeterminate rate), here it could be said to be approaching it at the same rate that x is “approaching” zero (even though x is a finite constant), and thus the limit holds (if we want it to…ha, ha, ha).  Also note, it doesn’t present a problem for us that the power of the cosine had to approach infinity at a rate that would make the power of the cosine evaluate to an infinite quantity, because by continuing to manipulate that original rate of approach, we can also manipulate the rate at which the power of the cosine approaches infinity (which is what we need in this last step).

My Cartesian Point?

My point, ultimately, with all this silliness that no one actually knows (though we do, I think, know more about infinity and zero and the imagination of numbers–ok, I need to get better jokes–than we say we do) is that if you zoom in far enough (as we did here using the infinite recursion), the tangent of x, the sine of x, and x are just as far away from each other when x doesn’t equal zero as zero is from infinity.  Thus, while the approximations are useful when dealing with actual quantities, they are just bad math when used in proofs (even though everyone does it… it’s pretty much like jumping off a cliff of infinite height), because in proofs, we depend on expressions being absolutely equal, and a finitely small number is infinitely larger than an infinitely small number.

What is wrong with the world!?

Answer: Most everything, but here is just a place to start….

BAD CALKALIS!

Does anyone have a reasonable answer to the most difficult questions reason may ask?  What is zero over zero?

Let’s start by considering the following true or false test.

Actually, this is more like a true AND/OR/XOR/NOR false test:

1. T  ||  &  !|  !!  F    This statement is true.

2. T  ||  &  !|  !!  F    This statement is false.

Both of those seem simple enough at first, but are not.  The first temptation may be to mark the first one true and the second one false and be done with the matter, but that’s bad calculus.  It is fine to mark the first one true; if you do so, you are saying that it is a true statement to say that it is a true statement to say that it is a true statement … on to infinity.  But also, consider marking it false.  If you do so, then you are saying that it is a false statement that that is a true statement, and that is also fine.  Notice, marking it false creates, in a sense, a finite chain of logic, or so it appears to at first glance.  In reality, marking it either true or false creates an infinite, self-referencing, recursive chain of reasoning, because the statement in itself is self-referencing and infinite regardless of what you mark it.  However, it is sound reason to say either that a statement that claims itself to be true is a true statement (again, on to infinity) or that such a statement is false, but not both.  It doesn’t make sense for a statement that says it is true to be simultaneously true and false.  Therefore the statement is either true or false, to be marked either with an OR or an XOR if you like.

Next, let’s consider the second statement.  This one is not as simple.  It seems okay at first to just mark it true.  But this is because when one does so, one does not see the infinite, self-referencing, recursive property it possesses as clearly (just like how it was hard to see the infinite recursion in the first one when it was marked false, but it still exists nonetheless).  So let’s begin by marking it false to make it easier to see.  If it is false that that statement is false, then that statement must be true, but if that statement is true, then by what the statement claims, it must be false.  We quickly find that we are going in circles.  Therefore, the statement can be neither true nor false, because if it is true, then by the statement, it must be false and thus it is not true, and if it is false, then by the statement, it must be true and thus cannot be false.  True and false are mutually exclusive qualities; therefore this statement is not true AND false, but rather neither true NOR false.
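The circle is easy to watch in code (a toy sketch of my own):

boolean statement2(boolean assumed) {
    return !assumed;    // “This statement is false”
}
// statement2(true) -> false, statement2(false) -> true: every evaluation
// forces the opposite value, so the marking never settles: true NOR false.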

Bad calculus makes these complex problems seem so much easier, but it is simply wrong.  For example, it is easy to miss the complexity of the second statement by making a simple error.  One might suppose the statement is false, and therefore consider it rewritten as “This statement is true.”  Upon doing that, one might note that it is sound logic to, as was done in the first statement, mark such a statement false.  And with that he or she would have concluded that the statement is false.  This is bad calculus.  The error lies in the way the statement was rewritten: it should not be “This statement is true,” but rather “That statement is true.”  If this method of rewriting the statement is used, one must then ensure the rewritten statement is true in order for the marking (true or false) to be considered accurate.  Thus, the statement that “That statement is true” which says “This statement is false” recreates the self-referencing logic discussed earlier; only by this method, it instead causes an infinite loop of rewriting the statement.

Now let’s consider this same logic in terms of calculus.

First, however, I’d like to present the following model of the categories of logic and math if I may:

Algebra is arithmetic with logic, calculus is algebra with simplified algorithms for computing infinite and non-existing quantities, and logic is calculus without arithmetic.

In calculus, I’d say these problems are most related to problems that involve an indeterminate.  If you are not familiar with the concept of an indeterminate, take the following examples:

1^∞             0*∞              0/0

There are more, but this is a good start.  The first and last of these examples are much like the logic discussed in the previous section because they seem to have an obvious default answer.   For example, zero over zero is most likely “by default” equal to one, and one to the power of infinity is likewise one “by default.”  However, all three of these examples are like the logic discussed in the previous section in that they could be said to equal most anything.  I would likewise say that if the logic from the previous section were actually applied in a full rhetorical situation, the outcome could be anything depending on the situation.   Let’s examine how this works with a few proofs:

Consider the definition:

LIM X=>0   (SIN X)/X = 1

This can be proved by the squeeze theorem or l’Hôpital’s rule.   L’Hôpital’s is more useful for what I want to discuss, however.

This definition is an example of 0/0 equaling the “default” (one).  Now consider a second definition, again provable with l’Hôpital:

LIM X=>0   (SIN 5X)/X = 5

This is an example of 0/0 not equaling one, but equaling five.  If you don’t believe me on this, use l’Hôpital: take the derivative of both the top and the bottom at the point x = 0 and divide them.  The derivative of the top at x = 0 is five (5 * cos(5 * 0) = 5) and the derivative of the bottom is always one.  Therefore, the limit of the function as x approaches zero is five.

This is the nature of an indeterminate; it equals different things in different cases.  It is all a matter of how the incomputable quantities are related to each other.  The most classic example is an integral.  An integral, as you probably know, is any infinite sum of infinitely small parts.  It is, in essence, 0*∞ or ∞/∞, both of which are the same thing.  An integral may also equal different things in different cases, but by definition it must always be finite (assuming it is integer-dimensional and not fractal).  This is simple logic, because if it were infinitely large, it would have to include either infinitely many finite parts or at least one infinite part, and if it were infinitely small, either all the parts would have to have zero magnitude (I’m ignoring negatives for the sake of my point), or it could not be an infinite sum.
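The 0*∞ character of an integral is easy to watch numerically; in this sketch of my own, the parts grow more numerous and smaller at once, and the sum settles on a finite value:

double riemann(int parts) {
    double dx = 1.0 / parts;          // infinitely small in the limit
    double sum = 0;
    for (int i = 0; i < parts; i++) {
        double x = (i + 0.5) * dx;    // midpoint of each part
        sum += x * x * dx;            // a value times a vanishing width
    }
    return sum;                       // for x^2 on [0, 1]: approaches 1/3
}
// riemann(10) = 0.3325..., riemann(1000) = 0.33333325...: more parts, each
// smaller, and the total stays finite.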

Of course your mind will immediately jump to something like this:

[Figure: the area under a curve]

But notice, while this is an area under a curve, and thus can be found using an anti-derivative, it is not technically, by definition, an integral.

That being said, I’d like to bring up a well-known bit of theory that is actually horrible calculus: the Coastline Paradox.  This is a very well-known theory; if you are unfamiliar, click the link.  It is, however, simply wrong.  The only reason the Coastline Paradox is so popular is because it is aesthetically pleasing on a plain level.  It is “merely poetry” (take a look at C. S. Lewis’ essay Is Theology Poetry? if you want to know what I’m talking about).  Convincing others that the length around the coast of Great Britain is infinite makes people feel smart, but there is nothing intellectually honest about it.

Before I go on, I must note that I by no means reject fractal theory; it is fascinating, but I do reject the Coastline Paradox as a miserable anti-example of the highly intriguing theory.

Let’s consider the Paradox as it is made to appear.  It seems, at first, that the proposal is that the coast of Great Britain, modeled by some sort of continuous function, has an infinite length.  This means that we are, from the start, ignoring any rise and fall of the tide. The length of a differentiable and integrable function is beautifully defined by:

L = ∫ from a to b of √(1 + (dy/dx)^2) dx

And for any cusps or discontinuities, the length may be taken from part to part and then summed.  However, in the case of the coast of Great Britain, there can be no discontinuities for obvious reasons, and no cusps, because even if it were physically possible to have an actual geometric cusp, having factored out the rise and fall of the waves, we do not have enough precision to account for one.  But let’s ignore precision for a minute (and we must be careful when we do that).  Let’s suppose Great Britain is frozen in time and the waves are, therefore, neither rising nor falling.  The idea is that if we keep zooming in we will have more and more tiny measurements, and thus, on an infinitely small scale, we have infinite measurements and the coast is infinitely long.  This, in itself, is nonsense.  Every time it is explained, it is done so in a manner designed to pull the wool over your eyes rather than to introduce you to fractals.  Yes, the smaller the measuring device you use to measure the coast, the more measurements you’ll take, but also the smaller each measurement will be.  The Coastline Paradox is portrayed to the public using bad calculus.  It is entirely focused on the fact that the length is an infinite sum, and tries to evade the fact that that sum is one of infinitely small parts.  It is, by definition, an integral, and integrals always evaluate to a finite quantity so long as they are integer-dimensional rather than fractal.

If the Coastline Paradox holds any weight at all, it is not the weight that it is appreciated for.  What it is appreciated for is the idea that integrals can have no finite limit (in integer dimensions), which is absolute rubbish.  It is like a fallacy in the plot of a tragedy, implemented inconspicuously in order to achieve a particular aesthetic (see Aristotle’s Poetics).  This is absolutely false.  It’s bad calculus! All the Coastline Paradox would be suggesting, if it held any weight, is that all of the physical universe is fractal-dimensional; but mind you, this is entirely unrelated to the coast of Great Britain, and such a detail would only evade such a theory.  Nonetheless, let’s examine the theory:

If you are unfamiliar with fractal geometry, consider the Koch Flake, a fascinating piece of geometric calculus.  Here’s a diagram:

[Figure: the first four iterations of the Koch Flake (from Wikipedia)]

The idea is that after infinite iterations, the flake will have an infinite length between any two points.  This is true.  As you may find for yourself with a simple geometric proof, each iteration makes the flake’s perimeter 4/3 times what it was after the last one, and thus, when n is the number of iterations and l is the original length, the flake’s perimeter can be expressed by the following:

L = l · (4/3)^n

Therefore, because the limit of this expression as n goes to infinity is infinity, after infinite iterations the flake will have an infinite perimeter between any two points.  It is also interesting to note that, because the rate at which the area of the flake increases is itself decreasing with each iteration, the area will converge to a finite quantity.  The result: a flake with an infinite perimeter and finite area, a member of a fractal dimension.  This is the logically sound ground on which the Coastline Paradox was likely originally based, but the coastline model is irrelevant.  The problem is that the implied model for the coastline of Great Britain is one of an integer dimension, namely, two-dimensional, but the whole idea of the Paradox is based on the assumption that it is not two-dimensional, but has a dimension of infinite detail, a fractal dimension.  All the Paradox is really saying is that the atom is fractal-dimensional.  There is, of course, no evidence to prove this one way or the other.  It is like saying the universe is infinite.  One can assume the universe is infinite or not, and base one’s reasoning on such an assumption, but not prove it. Therefore, the Coastline Paradox is merely a fancy way of saying “I believe in string theory.”
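For the numerically inclined, here is a quick sketch of my own checking both claims at once: the perimeter runs away while the area settles (on 8/5 of the original triangle, as the geometric series works out):

void kochTrace() {
    double perimeter = 3.0;              // equilateral triangle with side 1
    double area = Math.sqrt(3) / 4;      // its area
    double side = 1.0;                   // side length of the next new bumps
    double newTriangles = 3;             // bumps added by the next iteration
    for (int n = 1; n <= 10; n++) {
        side /= 3;                                  // each bump is 1/3 scale
        area += newTriangles * Math.sqrt(3) / 4 * side * side;
        perimeter *= 4.0 / 3.0;                     // 4/3 times as long
        newTriangles *= 4;                          // each segment spawns four
        System.out.printf("n=%d perimeter=%.2f area=%.5f%n", n, perimeter, area);
    }
    // perimeter grows without bound; area converges to (8/5) * sqrt(3) / 4.
}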