I went to a magic show. It seemed like the woman was cut in half, but I know she wasn't.
This shows it's possible to assert something that is contrary to your seemings.
When I think of intuition talk, I think of it as an invitation to the interlocutor: "hey, this seems true to me, does it also seem true to you?". Because if it does not, then we have to debate the issue. Whatever score I get from my seeming, you also get from yours.
I think that "seem" talk is polysemous. I understand the case you're imagining, though I still think (in the jargon of seemings) that you have a seeming such that your perception was mistaken and you witnessed an illusion. I.e. I still think the transparency and parasitism of "intellectual seeming" upon belief is correct.
A very thoughtful post--entertaining too! But I don't think I agree with it, so I'll just note some responses.
I'm not sure that my assenting to P entails it seeming to me that P, or vice versa. For example, it might seem to me that it would be wrong to steal the organs from one person to save five people, but if I become a utilitarian, I will no longer believe that it's true. Surely it would still seem to me that it would be wrong; I would simply have other, stronger convictions that entail the falsity of this.
If this is right, then it's a case where I assert the truth of P (P = it's not wrong to steal the organs), but it doesn't seem to me that P. And likewise, it seems to me that ~P, but I don't believe ~P.
Maybe you want to say that this just couldn't be the case. After all, if your considered judgement is that P is true, then your "considered seeming" must also be that P is true. Fair enough, but from introspection I think that I have things that I believe even though they don't seem right. Even if considered seemings are real, it seems (sorry) like seemings change more slowly than beliefs. In the utilitarian case, I may even eventually have the theory so thoroughly integrated in my thinking that stealing the organs no longer seems wrong (though I have a hard time imagining that), but I think that would come a long time after actually assenting to the truth of utilitarianism. So I think there will at least be some period of time where the two don't coincide.
I still think your argument about recursive justification is interesting though. As a first pass response, I guess I would draw an analogy to the truth predicate. Let's say that the fact that this post is good is evidence that you're a good writer. Then surely "it is true that this post is good" is evidence that you're a good writer, and "it is true that it is true that..." etc.--infinite evidence glitch again! But that of course isn't right. Whatever response you give here, I would guess a similar thing could be said about intuitions.
In the Walsh-Rogan case, I guess I wouldn't say that Walsh provides a new argument when he says that the premises are intuitive. Rather, he simply reports his reasons for believing the premises. Suppose that we're debating whether you're a good writer. I give the argument:
1) This post is good
2) If this post is good, you're a good writer
3) So you're a good writer
If I then say in support of 1 "I read the post and thought that it was good", that doesn't give further reason to believe 1--it's simply a report. Likewise I can then cite literary conservatism: "If S finds X good upon reading it, that is defeasible justification for X's being good". This again doesn't give extra justification. Nonetheless, this also doesn't mean that I'm not justified in judging you a good writer. Intuitions are supposed to be internal sources of justification, and so, like my finding your post good, reporting them won't give evidence for another person, except insofar as it serves as some sort of testimony. Maybe reporting my intuitions will also help you realize that you have the same intuitions, making the reason *you* should believe the thing in question apparent--after all, we aren't aware of all the relevant reasons at all times.
You include a quote that suggests this should be a problem, but I don't think it is. I take it that an argument should only have persuasive force for you to the extent that you agree with the premises. If I present you with an argument against some proposition you believe, you will only be persuaded if the credences you have in the premises are inconsistent with the credence you have in the conclusion.
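One way to make "credences in the premises inconsistent with the credence in the conclusion" precise (my gloss, not anything stated in the exchange above) is the standard probabilistic bound for deductively valid arguments: your improbability in the conclusion can't coherently exceed the summed improbabilities of the premises. A minimal sketch:

```python
def min_conclusion_credence(premise_credences):
    # For a deductively valid argument, probabilistic coherence requires
    # P(not-C) <= sum of P(not-Pi), i.e. P(C) >= 1 - sum(1 - P(Pi)).
    return max(0.0, 1.0 - sum(1.0 - p for p in premise_credences))

# With 0.9 credence in each premise of a two-premise valid argument,
# coherence forces at least 0.8 credence in the conclusion; the argument
# only has persuasive force if your prior credence in the conclusion
# falls below this floor.
print(min_conclusion_credence([0.9, 0.9]))  # approximately 0.8
```

On this gloss, an argument rationally moves you only when your existing conclusion credence sits below the floor your premise credences impose.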
Suppose that Walsh and Rogan were--per impossibile--to figure out *all* of the implications of their views, presenting all possible arguments against each other's views, such that the other person had some credence above zero in the premises. They would consider all these arguments and adjust their credences and beliefs accordingly. Assuming that they still didn't agree, each person would end up with a set of propositions where no new argument could rationally persuade them--Walsh would believe P and Rogan would believe ~P, and that would be the end of the story; they would just have differing brute intuitions.
While that is a sad outcome, surely it would still be irrational for each person to change their beliefs--they should definitely not believe something that *doesn't* seem right to them in this situation. I just think it's a sad fact that some people (probably most) just have differing intuitions, so that they could only come to agree through irrational means. Nevertheless, that seems like the best we can do.
The way I'd describe Transplant as a consequentialist is: I believe that something superficially similar in the real world would be very bad, and I believe that in a highly counterfactual situation actually satisfying the specified conditions of the thought experiment, it would be good. That's where I think the felt conflict is coming from. In neither case do I think that using the syntax "It seems to me that P" means anything different from "I believe that P". Of course, there are cases like optical illusions where they do mark a useful distinction.
The way I understand Huemer and PC is that, as with the truth of modus ponens, there are some statements whose justification cannot depend on further statements, which is OK. We just 'see' that they're right. Huemer's point is that intuitions are inescapable at some point in the chain of justification of e.g. moral truths, which he thinks of as non-natural. While I share your scepticism about using intuitions as a be-all and end-all in a given discourse, I don't think it's fair to brush over the 'barring defeaters' clause and the prima facie status of the seemings.
You can trivially stipulate the semantics for a word such that it's true when in a premise but false when in the conclusion of a modus ponens inference. Modus ponens seems true no more than any given solution to the paradoxes of self-reference seems true. You can really only think such things are "true" if you're encouraged by analytic philosophy's bad methodology to completely ignore every approach to the problem besides the one that seems true to you.
I don't think it's unfair. I think it's transparent such that if you're telling me p, you don't have to also tell me a story about "seemings" and what they mean to you. I believe your claim that p is equivalent to you telling me it seems to you that p, so you're not providing me with any new information. If I dispute p, then I will have to dispute it.
I'm not sure if you're talking about the kind of strange aftertaste during a discussion when people invoke their intuition about something, or about a problem with intuitionistic justification. Let's imagine someone disputes the claim that modus ponens is true. What else is there to say to them other than "well, it seems right to me"? There, the fact that I can't say anything more is not necessarily a problem, right? To me, the problem is not intuition per se, but knowing which intuitions to trust.
I wouldn't say that because it would be completely unconvincing given that's the very thing we dispute. I might appeal to what is taught in logic, what discipline we engage with, and I would listen to their position and maybe change my mind.
I think you're assuming a model where "rock-bottom" is a difference between THINGS called "intuitions", rather than MY model, which is that we disagree about claims; our intuitions just are words we say when we believe claims. The appeal to intuitions is just to restate CLAIM or ~CLAIM. Sure, maybe we can't navigate a way forward in the disagreement (skill issue). I consider myself very imaginative and consider language pretty complicated, so I think there are plenty of ways to proceed.
For the cases where I would say "it's intuitive to me that p" or "it just seems to me that p", I am capable of deconstructing my thought and re-assessing my perspective. I would consider it a vice and rationalisation/bias for me to entrench myself in the thought "Well I do in fact believe this claim so that is what I believe so that is what I believe" without using all the tools I have to go somewhere productive.
Well, yeah, maybe I'm in the grip of a theory, as we used to say. I still think there is nothing you could cite from logic or elsewhere to come up with a reason to think that modus ponens is true. Seeing the premises A and A->B and the conclusion B just seems correct. Maybe you assume that there has to be something someone imaginative could come up with to convince the person denying the truth of modus ponens? I hope that we use the same meaning of "convince" here: I mean giving a fact that speaks in favor of believing p, not stating something that merely causes the other person to believe that p.
>Seeing the premises A and A->B and the conclusion B just seems correct.
Because you're using syntactic devices that are characteristic of the artificial constructions of formal logic. Conduct an empirical study with "P1) If it rains meatballs, I never existed. P2) It rains meatballs. C) I never existed." and ask people whether this is a valid form of inference. Many people will say no, because ordinary material implication isn't as cut-and-dried as a formulation of modus ponens in a specific type of logic.
I'm not sure if we're talking about the same thing. Yes, people will make mistakes when drawing logical inferences. What does that prove about the validity of modus ponens?
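For what it's worth, the validity at issue is a formal property that can be checked mechanically, independently of how people judge particular instances. A minimal sketch, assuming the classical truth-table semantics for the conditional (which, per the objection above, is itself part of what's in dispute):

```python
from itertools import product

def material_implication(a, b):
    # Classical material conditional: A -> B is false only when A is true and B is false.
    return (not a) or b

# Modus ponens is valid iff no assignment makes both premises
# (A and A -> B) true while the conclusion (B) is false.
counterexamples = [
    (a, b)
    for a, b in product([True, False], repeat=2)
    if a and material_implication(a, b) and not b
]

print(counterexamples)  # empty list: no counterexamples
```

Of course, this only establishes validity relative to the stipulated semantics; it gives no independent answer to the sceptic who rejects those semantics.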
There is actually empirical data on this: non-philosophers in Western contexts tend to attribute validity to arguments whose conclusions they agree with, and invalidity otherwise (rather than judging by form).
"I still think there is nothing you could cite from logic or elsewhere to come up with a reason to think that modus ponens is true." This is because you're imagining a case where you've specified in your imagination that nothing changes your mind.
We would have to see what we actually *do* in cases of divergent intuition claims and whether we can get on with things and resolve disputes in order to see if my claim, that we can do just that, is correct.
I think the two perspectives here can be at least somewhat resolved.
Consider that "intuition" may be a general form, and "seems" a rhetorically stronger form of informal "table-setting." This is to say, rather than presuming the whole set of relevant elements and where they should go, we signal "intuition" as general and neutral, "seems" and "appears" as visual cues (to recruit embodied processing), "leaning" and "inclination" as sensorimotor cues, etc.
"Once the table is set," however, it is a category error to use rhetorical elements on the table's contents, unless there is tacit agreement for informal conversation.
For example, to reiterate an already established intuition, which is already providing an "anchor" to some shared syntax for the conversation (the table in this metaphor), is essentially a rhetorical attempt to lend the semantic contents more rhetorical "weight" or "force." However, the inverse should be true, since "leaning" or "seeming" is functionally DETRACTIVE of confidence in setting the table originally, as it signals to the conversational partner an incompleteness in the table setting and invites them to contribute. "I know" crowds the table more than "I intuit."
It would not be a category error, however, to make a formal case for changing the table settings, which in the case of logical debate is usually precluded except in the service of clarification on otherwise elusive semantic divergences.
Excellent
Just wait until you hear about acquaintances!