NatureOfKnowledge


What does it mean to know something?

For a start, it means that the knower is in a certain state. A mental physicalist would say that their nervous system is in a specific state; Descartes that their immaterial soul is in a specific state. Others would have different views as to what, exactly, it is that makes up this state, but for the moment that is unimportant. The point is that there is some state of the knower which is part of what it means to know something. We can recognise when someone is in this state because they tend (when not lying or playing silly buggers) to give certain answers to questions: when asked 'What is the capital of Germany?', someone who knows the answer will tend to say 'Berlin'.

This state is distinct from the state of assuming, or believing: someone who knows has, we may assume, reasons for being in that state that are good reasons, as opposed to someone who merely assumes, who has no good reason for being in that state, or someone who believes, who may or may not have good reasons for being in that state but whose reasons are of a different kind to those of the one who knows.

So part of knowing something is the state of the knower.

However, this is not all that knowing something involves. Knowing something also involves the thing-that-is-known being true.

Someone may know that the capital of Germany is Berlin. But they may not know that the capital of Germany is Bonn, because it isn't. However, the state they are in may be identical to the state the knower is in, except for the substitution of the external object of their internal state. That is, they do not believe or assume the capital of Germany is Bonn: their reasons for being in the state they are in are such that, if the object of their state were different, they would know it. Perhaps they have only ever seen mistaken maps.

Indeed, someone can go from knowing something to not knowing it without any change in their state. Someone may know that their car is in such-and-such a place; but, the moment it is stolen, they no longer know that, though their internal state has not changed. Their knowledge has been invalidated by a change in the external world.

Thus, knowledge has both an internal component (the state of the knower) and an external component (the state of the world).


This definition asserts that it is impossible to know something that is false.  Or indeed, anything self-contradictory, unprovable, etc.  A lot of people would disagree, especially with that last one...  --Vitenka (And it says little about SelfReferential? knowledge.  "I know that I am right"...)


It is impossible to know something which is false, by definition. If it's false, you don't know it. Knowing implies having insight into the actual state of the world. If the thing which you 'know' is false, you don't really 'know', do you? Imagine the car situation: you go back to where your car was, insisting that you know it is there, but when it turns out it's not there, you wouldn't keep saying 'I know it is here!', would you? You'd have to modify it to something like 'I thought it was here'; or, more accurately, 'I was in the state of knowing it was here, but it turns out that I was mistaken'.




"Indeed, someone can go from knowing something to not knowing it without any change in their state. Someone may know that their car is in such-and-such a place; but, the moment it is stolen, they no longer know that, though their internal state has not changed. Their knowledge has been invalidated by a change in the external world."
I personally find this concept very interesting. One could argue that the instant you are unable to perceive your car, you lose the ability to know its location. You can believe it hasn't quantum-tunneled away somewhere while behind your back, and your reasons for believing thus could be very good indeed, but you can't know. I have heard some very-well-argued attacks on science based on this principle. The area of the definition above that needs clarifying, then, is what constitutes "good" in "...reasons for being in that state that are good reasons..."; some would argue that nothing short of immediate perception (or not even that! - BrainInAJar?!) is "good", while at the other extreme WAFF counts. - MoonShadow

"Do you know where your car is?"  "Yes"
"Do you know that your car hasn't been stolen?" "Well, no"
"Do you know where your car is?" "Okay, no"




It is impossible to know something which is false, by definition. If it's false, you don't know it.
What about approximations - things which are, strictly speaking, false, but are "good enough" for some purpose or level of precision? You can conceive of a corner case where you end up with a person believing something to be true which is - as far as is testable by him or anyone else he can communicate with - true, but is not *actually* true. Scientific theories come to mind as a field where such corner cases are being continuously superseded. Again, one could have a go at wrapping this into the "when is knowledge justifiable" question above; also, arguably, only an approximation together with its error margins could be considered knowledge in this system. Alternatively, I guess one might end up with a system in which only statements of the form "I currently believe that I observed X" are knowledge, and anything else something weaker. - MoonShadow




DR happens to disagree with most of this, because he does not think that "a true justified belief" is actually a very useful definition for knowledge.  If a parrot has been trained to say "Berlin" every time it is asked the question "What is the capital of Germany?", does it really know it? 

Can a parrot trained to respond with "Berlin" to the spoken question "What is the capital of Germany?" really be said to hold a justified belief, or even any kind of belief at all, about Berlin being the capital of Germany? - MoonShadow

(DR) Ok, I was being nice.  For "Parrot" substitute "Pupil".  The nature of knowledge is something teachers face every day.  Have they really learnt what I am trying to teach them, or are they faking it on a basis of luck, parroting and broken approximations?  Do they really know?  Do they really understand?  A teacher tests understanding by seeing if the pupil can make use of the knowledge, relate it to things they already know and build on it to make accurate predictions (give answers).

(DR) Sure, it is nice to be able to differentiate things you know from things you just guessed at or delusions, but I'm unconvinced that making reference to an external reality is the most useful approach when, in the end, if you are being pedantic/philosophical you can't actually know for 100% certain anything about reality or even trust your perceptions of it.
Indeed, a number of schools of thought hold this; and therefore must amend definitions of knowledge similar to that originally posted to take this into account. Which is part of what I was attempting to say above. - MoonShadow
(DR) as in K-internalism in the linked article.  But I'm thinking of a more Eastern approach here.  In Taoism it makes perfect sense to speak of going beyond words, beyond knowledge and just living in a truth, unaware of it as a fish is unaware of water, yet still acting upon it.  In the teaching sense of knowing implying a network of related pieces of information sufficient to apply reasoning upon (See Wisdom), this is definitely knowledge, even if it has gone beyond conscious awareness.  The craftsman who can no longer state why he carves that piece of stone first when making a statue, but just 'knows' that it is the right way.
Gaptooth put a question to Wang Ni: "Would you know something which all things agreed is true?" "How would I know that?" he replied. "Would you know what you did not know?" "How would I know that?" he replied again. "Then does no thing know anything?" "How would I know that?" he replied again. He then continued, "however, let me try to put this in words: how do I know that what I call knowing is not ignorance? How do I know that what I call ignorance is not knowing? ... Gibbons are sought by baboons as mates, elaphures like the company of deer, loaches play with fish. Maoqiang and Lady Li were beautiful in the eyes of men but when the fish saw them they plunged into the deep and when the birds saw them they flew away. Which of these four knows what is truly beautiful in the world? In my judgment, the principles of Humaneness and Rightness, the paths of True and False are inextricably confused: how could I know how to discriminate between them?" (Graham, p. 58, mod.) --ZuangZi??



It is certainly possible to know things which we do not perceive directly with the senses; indeed, it is possible to know things that cannot be perceived by the senses, by their very nature, such as mathematical results. It is possible to know that the sum of e to the power of i times pi and 1 is zero, but what would it mean to perceive that? Nothing; it cannot be perceived. So perception is not necessary for knowledge.

However, correctness is necessary for knowledge: for it would be possible for someone to be convinced by a faulty proof that the sum of e to the power of i times pi and one is positive one. They would then be in the same internal state as the person with the correct knowledge, but the object of their state would be false, so they couldn't be said to know it. The difference between real knowledge and false knowledge must then rest in something external to the knower.
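(For reference, the mathematical fact in question is Euler's identity; here it is written out explicitly, alongside the false conclusion the imagined faulty proof would reach. The notation is added for clarity and is not part of the original discussion.)

$$e^{i\pi} + 1 = 0 \quad\text{(true)} \qquad\qquad e^{i\pi} + 1 = 1 \quad\text{(the faulty proof's conclusion, false)}$$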

Contrariwise, as was pointed out, even direct sense perception does not guarantee the truth of knowledge: there are delusions and deliberate deceptions which can mean that what we perceive is not what is true. However, to conclude from this that we cannot know anything, ever, is surely to go too far. That we can know things is a familiar concept which we know intuitively; and even those who deny that they can know things act in their daily lives as if they can. The usual counter to this argument runs along the lines that there are plenty of pre-theoretical intuitions we have that turn out, on closer inspection, to be false; however, in general what actually happens in these cases is not that the pre-theoretical understanding must be completely abandoned, but rather that it must be refined. To jettison the idea of knowledge entirely would be such a huge change in our understanding of the world that it would take much stronger evidence than that presented.

As for the definition of knowledge, well, for a start was it not clear enough that 'a true justified belief' is not the claimed definition? Was it not clear enough from the second paragraph (put in specifically to make this point) that 'knowledge' is not in fact defined in terms of 'belief' but is a completely separate concept? Knowledge cannot be a 'true justified belief' because the moment something is a belief, it cannot be knowledge; the two are separate. The state-of-knowing and the state-of-believing are distinct (even if they have the same object).

If a parrot, or pupil, has been trained to say 'Berlin' every time they are asked the correct question, then no, they do not know it, because obviously they are not in the state-of-knowing, which includes some understanding and assimilation of the fact: otherwise we could claim that a computer 'knows' the things which are in its database, which is obvious nonsense. But in fact, it seems that your teachers have a very clear idea of this distinction and what it means to be in the state-of-knowing; their difficulty is in finding ways to tell, from outside, whether someone is in the state-of-knowing or not. But that difficulty is not relevant to what the nature of knowing actually is.

Finally, we have Putnam's jar-bound brains mentioned. It's amazing how often this gets trotted out as if it were a disproof of externalism, when in fact Putnam invented the thought experiment to show how internalism must be false. See http://plato.stanford.edu/entries/brain-vat/ .




"Knowledge cannot be a 'true justified belief' because the moment something is a belief, it cannot be knowledge; the two are separate."
Is that an axiom? I would call "I hold that X" a belief, and I would also be hard-put to describe "I hold that X, for reasons Y which fully justify this position" in the context of X actually being true, as anything other than knowledge. How do you propose I distinguish between the two? Or else what term should I be using instead of "belief" to describe a thing one supposes as part of one's knowledge? Or else what concept am I not grasping? - MoonShadow


Not so much an axiom (or premise, as it's more usually called in a philosophical context) as a matter of definition of terms. Knowledge is when you have (or think you have) sufficient evidence, through sense and reason, to claim that something is true. Belief is when you do not have sufficient evidence, but accept it as true anyway. For instance, you might not know for sure that something is true, but think it is more plausible than the alternatives, so accept it as true. That is a belief. Similarly if you accept something on the basis of a hunch. Whereas if you know something is true, you have sufficient evidence to prove that it is true -- you do not have to accept it on the basis of a hunch or of balance-of-probabilities.

Maybe he is thinking of faith, not belief?  As in HHGG's
"I refuse to prove that I exist," says God, "for proof denies faith, and without faith I am nothing."
See also: [A Hitchhiker's Guide to Epistemology] --DR
Yes, personally, I'd be inclined closer to MoonShadow's definition of "belief" than ChiarkPerson's. I'd say the things you know are a subset of the things you believe. For example, I believe the sun rose today; in fact, I know it did, because its light warmed my bedroom. I believe the sun will rise tomorrow; there's overwhelming evidence suggesting it. (Arguably it might be presumptuous to say I "know" the sun will rise tomorrow (given possibilities of the SecondComing? etc); but I'd argue that I certainly can say I know the sun rose today, because if you start raising issues of "it could be some conspiracy to position a shining sphere that looks like the sun in the sky" or whatever, the whole concept of "knowledge" becomes meaningless and useless.) --AC

Bringing the future into it confuses matters. Let's stick for now with present states.

What, precisely, is your problem with the distinction drawn between knowledge and belief? Perhaps if I put it this way: a belief is something that you can seriously and rationally doubt. Knowledge is something which cannot be rationally doubted, like knowing that the angles of a triangle add up to a straight line.
Looks like the problem we are having is that "belief" is being used to mean slightly different things. ChiarkPerson appears to define knowledge as "claiming something is true while having sufficient evidence to support that claim, while the thing actually happens to be true in reality", and belief as "claiming something is true while for one reason or another being unable to have knowledge of it"; for him, the two states are distinct by definition. AlexChurchill appears to define belief merely as "claiming something is true", and therefore by simple substitution, the definition for knowledge becomes "belief while having sufficient evidence to support it and with the claim actually being true" - i.e. a subset of "belief". I am still to be convinced which definition is more useful - MoonShadow
Where you draw that line is an interesting one, though.  For example, the latter example is false if the universe is non-Euclidean (which, we currently believe, it is...)  This suggests that the best we can say from inside a system is "I believe that this is knowledge" and it requires out-of-system measurements to have actual knowledge.  --Vitenka




"Finally, we have Putnam's jar-bound brains mentioned. It's amazing how often this gets trotted out as if it were a disproof of externalism, when in fact Putnam invented the thought experiment to should how internalism must be false."
I am reading the page you have linked, and failing to find support for this statement. My reading of it implies that Putnam, having repeatedly encountered the jar-bound brains argument that purports to show one cannot know any propositions about the external world, attempted to use the existing concepts to argue the converse. As for the arguments described, I can't quite see why, if *I* were to grow a brain in a jar and immerse it in a fake world, it would not be able to employ the arguments listed to prove to its own satisfaction that it is not a brain in a jar; I am therefore missing something obvious, or the form of argument proposed is flawed - somehow, I suspect the former; I will re-read and see if it makes any more sense. - MoonShadow

Putnam invented the 'brains in vats' argument as an updating of Descartes' 'evil demon' argument. The idea of all sense impressions being totally false has been around for a while, yes, and is not due to Putnam; he just happened to invent the image of the brain in vat or jar, as part of his arguing against semantic internalism. He had repeatedly encountered arguments of the form, but not using the precise imagery.

I think that perhaps what you are missing is that you are considering it from the point of view of the brain in the jar. I don't think that Putnam's argument proves precisely what he wants it to prove (that a particular instance of a brain can prove whether it is in a vat or not); but it does show that a brain in a vat and a brain in a body are not in identical states (even if they cannot prove from their own experience which of the states it is).

No, you are not missing anything.  Putnam's argument is a load of claptrap (or, at most, wildly irrelevant).  He says (to summarise): We can know we are not brains in vats because if a brain in a vat did make the statement "I am a brain in a vat", that statement would be false because the brain, due to its upbringing, would not actually be referring to brains and vats, it would be referring to 'the faked image of a brain as generated by the supercomputer in the virtual world' and ditto for vat. --DR

Precisely how is that claptrap, or irrelevant? As mentioned, I don't think it goes quite far enough to establish that a particular brain can tell whether it is in a vat or not, but it does suffice to demolish internalism by showing that the two brains would be in different states (even if the brain itself couldn't determine which one) and therefore the state depends on factors external to the brain and independent of the sensory input (which is assumed to be identical).



"but it does show that a brain in a vat and a brain in a body are not in identical states"
...but we knew that already; and in fact, the 'evil demon' argument could probably be restated as something like "the brain in a vat and brain in a body states are not identical, but the differences between them cannot be perceived by our senses, and therefore (this 'therefore' step is what Putnam attempted and failed to attack) we cannot know which state we ourselves are in; no propositions based on the evidence of our senses can be relied on if we are indeed in the BIV state, and since we don't know if we are or not, we don't know whether any propositions we make based on the evidence of our senses can be relied on, and therefore we can have no knowledge based on the evidence of our senses." - MoonShadow

No, you misunderstand what Putnam was attacking. He was attacking the idea that you can fully describe a mental state simply by reference to things internal to the mind in question (ie, mental internalism). His argument is that you cannot do this, because you can't determine simply from examining the mind whether the things its thoughts refer to are real or created by an evil demon.

Now, Putnam does try to go farther and claim that this difference can be detected by the brain in question; this is the bit of his argument that is shaky. But the significant thing is that a brain in a vat and a brain in a body have different mental states, even though their internal states are identical (you appear to have conceded this just by saying 'the brain in a vat and brain in a body states are not identical'); therefore mental states, such as knowledge, can depend on factors external to the mind.

Which is the point I was making at the beginning.
I am glad you accept that the attempt to argue a brain can tell whether it is a BIV is shaky. I don't necessarily dispute the point you make at the beginning, I just wish to investigate some disturbing conclusions that may be drawn - in particular, whether we can under that definition have any (non- purely mathematical or similarly derived) knowledge about the world at all, since I could just be a BIV imagining the lot of you. - MoonShadow

Oh, the consequences of externalism are certainly disturbing: that was the whole point. However, how can you say it questions whether we can have knowledge about the world? We certainly can. You only wonder because you are assuming an implied premise that, to know something, the knower must be able to ascertain that they are indeed in a state of knowledge; and that if they cannot ascertain this from simply their internal state and sense impressions, they do not really know it. However, the whole point of externalism is that whether the knower really knows is dependent on more than internal state, so it's unsurprising that this premise could be incompatible; so the question becomes, are the arguments for externalism convincing enough to accept it and so lose the implied premise?

Indeed, I was assuming that; note how I mention above that "good reasons" in "reasons for being in that state that are good reasons" is poorly defined and this could lead to problems? Indeed, I see your "being able to ascertain that they are indeed in a state of knowledge" as simply another way of saying "have reasons for being in that state that are good reasons", with, for instance, the action of "ascertain"  potentially involving the construction of logical arguments to support one's claim based on other things one "knows", but such construction not necessarily being either required or sufficient for every case. If I am to "lose the implied premise", therefore, I must have an alternative, satisfying, definition of "reasons for being in that state that are good reasons" to replace it. Currently, we have arrived at a place where I fail to see how, if you are *not* "able to ascertain you are indeed in a state of knowledge", you can claim you are. It may, for your intents and purposes, be very useful to claim you are, as an approximation - as, indeed, science often does when popularised; but strictly speaking, I would argue, you are not. - MoonShadow

That's another misunderstanding. 'Having good reasons to be in that state' and 'being able to ascertain that they are in that state' cannot be synonymous, because one can have good reasons for being in the state and yet still be wrong. One might have made a mistake in one's reasoning, or be proceeding from a false premise (such as that the universe obeys certain laws, as in the triangle example). In that case one would have good reasons for being in the state of knowledge (apparently-solid reasoning from apparently-true premises), but the thing one 'knows' is false, so one does not really know it. However, one cannot ascertain this from examination of one's internal state alone.

But indeed the whole point is that you can never claim you are in a state of knowledge, because whether you are in a state of knowledge or not is dependent on factors beyond you. Your mistake is in moving from 'can never claim to be in a state of knowledge' to 'can never be in a state of knowledge'. One can be in a state of knowledge without being able to claim that one is in that state, because whether one is in that state is dependent on factors which one cannot ascertain.

But that makes this definition of 'knowledge' worthless.  If there's no way to ascertain whether or not you have that state, what is the point of it being a state at all?  At best you can use it from outside a system to talk about a smaller system (How interesting, four of my boxed kittens think they know something, and one actually does!)  Surely a less strict definition would be of more use?  --Vitenka  (And you can never know that you know something - or at least you cannot arrive at that except by coincidence.  (I know that I know A, I know that I know B... I'm bound to be right eventually) I would have expected knowing to include knowing that you know, etc.)

Let's go through these.

Worthless? No, not at all. It's a claim about mental states: that they are not limited to the contents of our heads, but depend on external factors. If correct, this surely has major implications for our understanding of the mind. Surely that would not be 'worthless'?

A less strict definition might be of more use, but would not tell us nearly so much about the nature of the mental -- which is the whole point of the exercise, to work out what is actually going on when we know something. Can it be reduced to just a pattern of neurons in particular states? If externalism is true, then no, we can never reduce the mental to just descriptions of neurological states (note that it doesn't commit us to rejecting materialism though, at least not in this form: mental states could be reducible to physical states, it's just that the physical states must include the state of the world outside the head as well as the state of the neurons and chemicals inside the head). Surely this is an important result, and so cannot possibly be described as 'worthless'?

You can't arrive at knowing something by coincidence -- that's the whole point of distinguishing between knowledge and happening to think something that happens to be true, a distinction I thought was made clearly above. You can, by coincidence, say something that happens to be true and preface it with the words 'I know'; or you can believe something that happens to be true; but neither of these is the same as knowing the thing.

And you can know that you know something -- inasmuch as you can know anything. Of course, your knowing that you know something is dependent on an external factor (whether your knowledge of the first thing is real knowledge, or not) so you can't ascertain the truth of your statement 'I know that I know this'; but that's just restating the deduction from 'you cannot be certain that you are in a state of knowledge' to 'you cannot be in a state of knowledge' at the second level. If we accept that we can break the link between certainty and knowledge at the first level -- if we accept that it is possible to know something without being able to be certain that one knows it (which it must be, or else one can know nothing) -- then there's no reason not to accept it at the second level (it is possible to know that one knows something without being able to be certain that one knows that one knows it), and at the third level, and so on.

But, above, you've basically just said that without something external - you can't know anything - specifically whether or not a mind knows something or not.  And if we don't know whether we're examining a knowing mind or a believing-incorrectly one or a clueless one then we can't really use the definition to do anything.  All of this feels like it's just CircularLogic at best, or preparing for a sly misinterpretation or additional hidden axiom and an 'Aha!' at worst.  --Vitenka

You're missing the point. We can use the definition to do things. We can use it to decide whether a particular theory of mind can possibly be true. If we accept the definition, then we can conclude that any theory of mind which does not permit externalism must be false. That's a useful thing to know, isn't it? It would prevent us from pursuing blind alleys of thought.



"It's a claim about mental states"
Is it? I'd argue that, if we accept the definition, rather than having to throw non-externalist theories of mind out of the window, we instead lose the ability to describe "knowing" as a mental state; rather, it becomes a state of a larger system that contains a mind in a particular mental state that we don't have a word for. The closest we can get to a mental state becomes something like "believing one knows something". - MoonShadow

Oh, all right. Maybe it isn't a knock-down argument for externalism all by itself. But it certainly supports externalism and provides evidence against internalist theories, and can be combined with other arguments (Putnam's semantic reference arguments about Twin Earth, for example) to give further support for externalism.

But the definition destroys itself.  Both indirectly (since we cannot know whether or not we know something, we cannot then test it to see whether that state needed something external) and directly (we cannot know whether or not the theory is true, without something external)  At best we can say whether or not a theory is consistent with externalism.  Which might be useful in sorting the theories.  --Vitenka

Okay. Firstly, have you ignored all the places where it has been pointed out that the definition does not stop things being known? Indeed, it starts from the premise that knowledge is possible. So to say that it means we cannot know things, you must have completely misunderstood.

However, having said that, I don't think that the truth or otherwise of this theory is something that can be known: I think it could only be believed. But that still doesn't stop it being useful. Either it is true, in which case our theories of mind must adapt to fit it; or it is false, in which case it is, yes, worthless. But in neither case does it 'destroy itself'.

Vitenka: the conclusions from the arguments presented here aren't quite as strong as you suppose. You cannot derive "we cannot know anything" from just the arguments presented; the most you can say is "we cannot depend on the senses" - things like "1+1==2 within my system of mathematics" can still be known. AFAICT, the definition does indeed appear to allow us to know itself and that we cannot know anything that depends on things external to our minds, without being inconsistent. - MoonShadow

Where does this 'cannot depend on the senses?' thing come from?
From the fact that "knowing that you know" is still a part of the definition of knowledge that Vitenka is using. I am attempting to build an argument that you can *still* know other things even when you are in a state where you *know* that version of the definition, without being inconsistent - that it does not completely destroy the set of things one can know, just reduces it. - MoonShadow
The theory doesn't say anything about the senses. And, again, lots of things can still be known. It can still be known that the sun is above the horizon. We cannot of ourselves ascertain that we know it, but that doesn't stop us knowing it unless you add as an additional premise that we can only know things which we can be certain we know: and that is not really a justified premise, because it's only if that were true that we could never know anything. It is your argument which destroys knowledge, not mine.

But if we're uncertain about everything we can't draw any useful conclusion about anything.  Our conclusions might be right, and we might even know they are right - but whether they are or not still depends upon something external.  So you end up with 'externalism might be true under externalism'.  This seems unhelpful at best.  --Vitenka  (Really bowing out now)

Nearly, but no. Your second sentence is exactly right, your first is right but rather pessimistic in tone, but your third... there's no 'might' about truth. Externalism either is correct or it isn't.

On knowing that you know

This phrase is leading to confusion. The definition of knowledge put forward allows knowing that you know. What it does not allow is being certain that you know. It would be useful if this terminology was used.

And -- can the 'being certain' condition actually ever be satisfied? This is where I would argue that it is only with the 'being certain' condition added that knowledge becomes impossible, as it is in fact impossible to ever be certain that any fact you know is actually correct.



"as it is in fact impossible to ever be certain that any fact you know is actually correct."
I know, for instance, the fact that the English-language statement "this statement is false" forms a paradox. I would argue that this knowledge is not dependent on external referents and I can therefore be absolutely certain I know this, regardless of whether I am a BIV, regardless of the truth or otherwise of externalism, and so forth. - MoonShadow
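(A minimal formal sketch of why the sentence is paradoxical, added for illustration and not part of the original discussion: write $P$ for the sentence 'this statement is false', so that $P$ asserts its own falsehood.)

$$P \leftrightarrow \neg P$$

If $P$ is true then, by what it says, it is false; if $P$ is false then what it says holds, so it is true. Neither truth value can be consistently assigned, which is the paradox.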

Interesting example, but not really a counter-example. You're basically, if I understand you right, appealing to the class of purely logical deductions as things which you can be certain that you know. This includes things like a triangle's angles adding up to a straight line, and so on. However, this is not the case: there are two possible sources of error in logical deductions, mistaken premises and mistaken reasoning, and you can never be certain that you have reasoned correctly from correct premises. Of course, in such a simple case as your example, it's very unlikely you have reasoned wrongly; but cleverer people than either of us have made silly mistakes in simple reasoning, so isn't it a bit presumptuous to say you are absolutely certain?
See DouglasReay/TheToposOfParadox --DouglasReay



CategoryPhilosophy
Reading material: [The Analysis of Knowledge]
