The Open Mind

Cogito Ergo Sum

Atheism, Morality, and Various Thoughts of the Day…

with 26 comments

I’m sick of anti-intellectuals and the like assuming that all atheists are moral nihilists, moral relativists, post-modernists, proponents of scientism, etc. ‘Dat ain’t the case. Some of us respect philosophy and understand full well that even science requires an epistemological, metaphysical, and ethical foundation in order to work at all and to ground all of its methodologies. Some atheists are even keen on some form of panpsychism (like Chalmers’ or Strawson’s views).

Some of us even subscribe to a naturalistic worldview that holds onto meaning, despite the logical impossibility of libertarian free will (hint: it has to do with living a moral life, which means living a fulfilling life and maximizing one’s satisfaction through a rational assessment of all the available information — which entails BAYESIAN reasoning — including a rational assessment of the information pertaining to one’s own subjective experience of fulfillment and sustainable happiness). Some of us atheists/philosophical naturalists/what-have-you are moral realists as well and therefore reject relativism, believing that objective moral facts DO in fact exist (and therefore science can find them), even if many of those facts are entailed within a situational ethical framework. Some of us believe that at least some number of moral facts are universal, but this shouldn’t be confused with moral absolutism, since both are merely independent subsets of realism. I find absolutism to be intellectually and morally repugnant and epistemologically unjustifiable.

Also, a note for any theists out there: when comparing arguments for and against the existence of a God or gods (and the “Divine Command Theory” that accompanies said belief), keep in mind that an atheist need only hold a minimalist position on the issue (soft atheism) and therefore the entire burden of proof lies on the theist to support their extraordinary claim(s) with an extraordinary amount of evidentiary weight. While I’m willing to justify a personal belief in hard atheism (the claim that “God does not exist”), the soft atheist need only point out that they lack a belief in God because no known proponent of theism has yet met the burden of proof for supporting their extraordinary claim that “God does exist”. As such, any justified moral theory of what one ought to do (above all else) — including but certainly not limited to who one votes for, how we treat one another, what fundamental rights we should have, etc. — must be grounded on claims of fact that have met their burden of proof. Theism has not done this, and the theist can’t simply say “Prove God doesn’t exist”, since this would require proving a null hypothesis, which is not possible, even if it can be proven false. So rather than trying to unjustifiably shift the burden of proof onto the atheist, the theist must satisfy the burden of proof for their positive claim on the existence of a god(s).

A more general goal needed to save our a$$es from self-destruction is for more people to dabble in philosophy. I argue that it should even become a core part of educational curricula (especially education on minimizing logical fallacies/cognitive biases and education on moral psychology) to give us the best chance of living a life that is at least partially examined through internal rational reflection and discourse with those who are willing to engage with us, and the best chance of surviving the existential crisis that humanity (and the many other species that share this planet with us) is in. We need more people to be encouraged to justify what they think they ought to do above all else.


26 Responses


  1. I don’t agree with everything there, but I agree enough to give you a “like”.

    In particular, I’m not in favor of narrow core requirements.

    Neil Rickert

    July 13, 2017 at 12:03 am

    • Hi Neil, thanks for commenting. I’m not in favor of narrow core requirements either, but that doesn’t mean I’m not in favor of any core requirements at all. Do you agree? Or are you against any educational requirements concerning what should be foundational in educational curricula (e.g. math, history, sociology, etc.)? Philosophy is far from narrow, so including it in core education would in fact be quite broad and comprehensive. While we already smuggle philosophical assumptions and concepts into education as it is, we are missing fundamental aspects of philosophy, including the rational self-reflection that accompanies any good practice of it, as well as basic procedures for establishing a coherent worldview with a solid epistemology and so forth. We desperately need help on that front, and fundamental education requirements that aim to include such foundational mortar (so to speak) will help improve our chances by bestowing a solid rational foundation on the next generation, so they can live better lives and make the world a better place.

      Lage

      July 13, 2017 at 8:03 am

      • I’m for broad requirements, such as some science classes, some humanities classes. But when you narrow that down to philosophy, it is too narrow.

        I’m a mathematician, but I have always opposed a mathematics requirement.

        Think of the bright student who did a lot of philosophy reading on his own while in high school. And now you are going to force that student to take a boring watered down philosophy class in college?

        Neil Rickert

        July 13, 2017 at 10:09 am

      • Philosophy ultimately encompasses everything (including science, the humanities, etc.), so I’m not sure what you’re talking about. Perhaps you and I just have different conceptions of what philosophy is. I take it to be the theoretical underpinnings of all of our knowledge and experience, which is as broad as anything can be. There are just more systematic ways of teaching it, and important things that I think need to be emphasized (such as logic and ethics), hence the commentary in my post here. But if your view of philosophy is much narrower, I could see how we might have this misunderstanding of what I meant. Ah, Wittgenstein comes to mind here: communicating ideas effectively to others takes a bit of back and forth for clarification. Always a joy to do so with others who are willing to engage!

        Think of the bright student who did a lot of philosophy reading on his own while in high school. And now you are going to force that student to take a boring watered down philosophy class in college?

        I’m referring to core requirements in grade school, paving the way for more fruitful intellectual endeavors in college. I’m talking about teaching philosophy to empower children to become much more responsible and fulfilled adults. Perhaps I should have specified that any core requirements I had in mind were intended for children’s education — since that’s where the foundation is ultimately established prior to becoming an adult in society. Getting adults interested in more critical self-reflection and moral development is far more difficult and often less fruitful because virtues are harder to cultivate at older ages.

        Lage

        July 13, 2017 at 10:55 am

      • Philosophy ultimately encompasses everything …

        It doesn’t, although philosophers claim that it does.

        And HERE is someone who thinks philosophy is a literary genre.

        Neil Rickert

        July 13, 2017 at 4:04 pm

      • I disagree, though I will better clarify what I meant by “philosophy ultimately encompasses everything”, because I don’t mean to say that “philosophy is everything”. Rather, what I mean is that all of our subjective experiences involve some kind of interpretation and are driven by desires/goals of ourselves and of others, and the interpretations of our own and others’ experiences and behaviors continue to modify our interpretations and desires over time. All of these interpretations, desires, and their effects can be described by philosophical concepts within the various branches of philosophy such as metaphysics, epistemology, ethics, aesthetics, politics, etc., even if that description is limited by language (which is unavoidable).

        As for Barefoot Bum’s comments in that article relating to philosophy and genres of literature, I would say that philosophy (philosophical concepts, generally speaking) is indeed found throughout the various literary genres, and one could say that many philosophers’ literary works (which could be described as forms of literature that include a more formalized philosophical analysis of some concepts) could be categorized as a genre of their own. But that doesn’t mean that philosophy is itself just another literary genre; it’s fallacious to think so. Rather, literature includes philosophy (or perhaps one could say philosophy includes literature) within it, even though philosophy isn’t at all dependent on literature.

        One can do armchair philosophy, experimental philosophy, or simply converse with others in order to analyze various aspects of the human condition and our interactions with the world, and they will inevitably be engaging in philosophical analysis (whether they know it or not). And all of this can occur with no literature necessarily needed, even though it helps immensely to include various products of literature, to build off of the philosophical work written by countless others who were willing to share their ideas with their contemporaries and posterity (whether the works of some fiction writer, or a prominent author of “philosophical works”, or whomever).

        Lage

        July 13, 2017 at 4:58 pm

  2. I get thrown off a bit by the reference to Baynes.

    Objective morality arises naturally from life. All living organisms seek to survive, thrive, and reproduce. And no mammal survives without an early experience of love and care. One can objectively observe which behaviors aid the survival of the individual, the society, and the species. And one can objectively observe which behaviors weaken or destroy it.

    Morality seeks the best good and least harm for everyone. Rule systems (customs, mores, ethics, laws, etc.) serve and are judged by how well they achieve or frustrate that purpose.

    It does not require a formal study of philosophy. It only requires that we teach our children to love wisdom. And demonstrate how that works in our daily lives.

    Marvin Edwards

    July 13, 2017 at 11:43 am

    • I get thrown off a bit by the reference to Baynes.

      I mentioned Bayesian reasoning because all rational reasoning can be shown to follow Bayesian inference based on the prior probabilities of any belief/hypothesis and the consequent probabilities of new evidence one encounters pertaining to said hypotheses/beliefs.
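      To make that concrete, here is a minimal sketch of a single Bayesian update — the numbers are made up purely for illustration, not taken from any real case:

```python
# Bayes' theorem for one hypothesis H and one piece of evidence E:
#   P(H|E) = P(E|H) * P(H) / P(E)
# where P(E) = P(E|H) * P(H) + P(E|~H) * P(~H).

def update(prior, likelihood_if_true, likelihood_if_false):
    """Return the posterior P(H|E) given a prior and the two likelihoods."""
    numerator = prior * likelihood_if_true
    evidence = numerator + (1 - prior) * likelihood_if_false
    return numerator / evidence

# Start fairly skeptical (prior = 0.2), then observe evidence that is
# three times as likely if the hypothesis is true (0.9 vs 0.3):
posterior = update(0.2, 0.9, 0.3)
print(round(posterior, 3))  # 0.429 -> the evidence raised our credence
```

      Each new piece of evidence simply feeds the previous posterior back in as the next prior — that chaining is the “updating” the brain is implicitly doing.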

      Objective morality arises naturally from life. All living organisms seek to survive, thrive, and reproduce. And no mammal survives without an early experience of love and care. One can objectively observe which behaviors aid the survival of the individual, the society, and the species. And one can objectively observe which behaviors weaken or destroy it.

      Yes, and science (or any general methodology relying on Bayesian inference) is the best way to determine which behaviors do this best. The key here is that many people believe they know what will maximize their life fulfillment, but are often incorrect about such conclusions because they aren’t including important facts in their reasoning or are not analyzing them rationally. Which, again, is why I mentioned the need for Bayesian reasoning, since then one can be sure they are giving the best justification they can for any claim relating to supposed moral facts. The only claims to objective moral facts that are justified are those arrived at through a rational analysis of all available information pertaining to what gives us a satisfying/fulfilled life.

      Morality seeks the best good and least harm for everyone. Rule systems (customs, mores, ethics, laws, etc.) serve and are judged by how well they achieve or frustrate that purpose.

      To some degree yes, but many laws, customs, etc., are rife with incorrect reasoning, false beliefs, and built on a foundation of cognitive biases — which is why there is so much moral disagreement in the world (despite a lot of overlap). Most of that disagreement lies in the fact that many people have different bodies of facts (some with many false beliefs) and/or many are reasoning about those facts inconsistently and/or irrationally.

      It does not require a formal study of philosophy. It only requires that we teach our children to love wisdom. And demonstrate how that works in our daily lives.

      I agree with you for sure! No formal study is needed, but teaching our children to love wisdom requires teaching them philosophy. The word philosophy originates from exactly that concept — “the love of wisdom” — and you can’t have such a love of wisdom without cultivating it and learning about it. Formal study in philosophy (to some degree) just helps make it more universally available and prioritizes it in terms of the many hours per week that kids spend learning various things in school. I believe we are currently spending too much time in school (grade school and otherwise) teaching kids what to think when we should be spending the bulk of that time teaching kids how to think. Only then can they fully appreciate a love of wisdom and therefore live a philosophically examined life.

      Lage

      July 13, 2017 at 1:23 pm

      • I’m sorry, but every time I look at the Baynesian method it appears to be a guess multiplied by another guess and divided by more guesses. My tendency at this point is to consider it a bit of a cult.

        Normal thinking by normal people will consider all the available evidence for and against a hypothesis and then decide how likely it is to be true.

        But perhaps you can show me how to apply the Baynesian method to the evaluation as to the likelihood that the Baynesian method produces a more reliable certainty than what we’re already doing.

        Marvin Edwards

        July 13, 2017 at 3:48 pm

      • I’m sorry, but every time I look at the Baynesian method it appears to be a guess multiplied by another guess and divided by more guesses. My tendency at this point is to consider it a bit of a cult.

        Not at all. In fact your brain uses Bayesian (and that’s “Bayesian” not “Baynesian”) reasoning all the time when it is reasoning rationally. The only thing your brain can do is guess by using some form of active inference based on various probabilities of causal outcomes that are modeled in the brain. Trying to use non-Bayesian reasoning (which is irrational) is what you will actually find in so-called cults. That is where you find people and ideologies that claim to be “certain”, avoiding the incorporation of a number of facts in order to fit a preconceived conclusion, support their confirmation bias, etc.

        Normal thinking by normal people will consider all the available evidence for and against a hypothesis and then decide how likely it is to be true.

        Actually most people don’t do this even if they claim to. But if they are actually doing it, then they are implicitly (whether they know it or not) using Bayesian reasoning. Their brain is simply doing the math while keeping the actual numbers out of conscious thought processes, unless one is actually attempting to do the math. In many (if not most) cases though, people don’t actually reason rationally about these likelihoods, because they either have ridiculous priors or consequent probabilities and/or they are excluding information (even if unintentionally/unconsciously so, based on various cognitive biases).

        But perhaps you can show me how to apply the Baynesian method to the evaluation as to the likelihood that the Baynesian method produces a more reliable certainty than what we’re already doing.

        Here’s a good overview to explain things in a pretty accessible manner, at this link.

        I recommend Richard Carrier’s book Proving History as it explains the application of Bayes’ theorem in determining the relative probability of any hypothesis with respect to any alternative hypothesis.

        Lage

        July 13, 2017 at 4:27 pm

      • Carrier appears to put it this way:

        1) We hear a hypothesis explaining a phenomenon, and we form an initial impression as to the likelihood that the hypothesis is correct or incorrect.

        And that makes sense. However, he then says something that does not make sense:

        “If you are behaving rationally, you will base that assignment on your past experience and knowledge, of what’s typical and what’s not. If you are not behaving rationally, you will simply codify your biases and false beliefs, and substitute them for facts at this point. And then it’s just garbage in, garbage out.”

        The problem with that statement is that there is no way to logically distinguish “behaving rationally” from “not behaving rationally”, because “your biases and false beliefs” are functionally identical to “your past experience and knowledge”.

        He might as well have left that out.

        But let’s move on.

        2) He posits the concept of a “likelihood ratio”, to be computed as a ratio of two other ratios:
        A. The probabilityA that the phenomenon is caused by our original hypothesisA, and,
        B. The probabilityB that the phenomenon is caused by hypothesisB, a different explanation.

        Which I presume would be ProbabilityA/ProbabilityB.

        But why would we bother to do that? We don’t need a ratio, but just a simple comparison:
        Is ProbabilityA > ProbabilityB? If so then our original hypothesis is more likely.
        OR
        Is ProbabilityB > ProbabilityA? If so then our alternate explanation is more likely.

        And you’d repeat this with each new hypothesisC-hypothesisZ as they come up.

        Am I missing something?

        Here is the critical point:
        1) If we have sufficient evidence to confidently assign a probability to hypothesisA and hypothesisB, then all we need is the comparison.
        2) If we do NOT have sufficient evidence to confidently assign these probabilities, then NO calculation based upon these truly imaginary numbers will ever reliably take us closer to any truth.

        One final note. I remember as a beginning programmer reading about randomization algorithms. And, ironically, it turned out that the more complex the formula, the less random was the result.

        Marvin Edwards

        July 13, 2017 at 6:31 pm

      • The problem with that statement is that there is no way to logically distinguish “behaving rationally” from “not behaving rationally”, because “your biases and false beliefs” are functionally identical to “your past experience and knowledge”.

        Actually it makes perfect sense. He’s pointing out what goes on in our thought processes and what one is implicitly doing when reasoning rationally versus irrationally. For that point to hold true, one need not know how to distinguish whether one is reasoning rationally or not.

        But if you read on, he later mentions that by knowing about cognitive biases, and knowing that all truly rational reasoning can be translated into a Bayesian form, one can better distinguish the one from the other. That is to say, if one is aware of the fact that we often reason irrationally, and distort many of our beliefs with cognitive biases, then one can apply Bayes’ theorem to a particular hypothesis/belief in order to make one’s priors, consequents, and resultant posterior probabilities transparent to oneself (and to others). This is a lot harder to argue against, even via one’s own reasoning, because making a concerted effort to statistically analyze a claim with actual numbers forces you to actually justify those numbers (or at least try to), and it also opens you up to criticism from others who can see your numbers as well (if you aim to justify a belief publicly in a statistically transparent way).

        One may be biased in performing a clinical trial by knowing which patient they gave the pill to and which they gave the placebo to, but if they discover that they have a bias that can destroy the integrity of the result, then they can control for that by setting up a blind (or double blind) clinical trial. Similarly, one can use various tricks and aids to help us combat our biases with respect to any number of beliefs or claims that we are trying to test, thus being able to better distinguish between “past experiences and knowledge” and the products of “biases and false beliefs”. Often it simply requires more reflection to do this and then one can re-evaluate their memories and have an “ah ha” moment where they realize that their biases were distorting their reasoning.

        To give another example, consider prayer. People can often trick themselves into thinking that prayer actually works, with the laws of physics being violated from time to time just to benefit some desire of theirs, and they do this through selective memory: remembering and prioritizing memories of times when they thought a prayer was “answered”, and forgetting or deprioritizing memories of times when a prayer was not “answered”. Upon rational reflection, however, if one re-evaluates their memories of prayers (for example, by specifically trying to think of all the times prayers have NOT been “answered”, which most who believe in the power of prayer never do), then they can come to a different conclusion despite the same group of memories (pertaining to prayer) being there all along. Many times the information is there and has been all along. You just gotta learn how to grab it.

        He posits the concept of a “likelihood ratio”…But why would we bother to do that? We don’t need a ratio, but just a simple comparison…

        You’ve missed the main point here, which is to establish a likelihood ratio in order to determine HOW MUCH MORE likely one hypothesis is than another. It’s not nearly as epistemically useful to say “Belief A” is more likely than “Belief B” when one could say “Belief A” is 1000 times more likely than “Belief B”, or “Belief A” is 1% more likely than “Belief B”, based on the calculation. One needs to know how likely one is compared to the other, so as to distinguish between comparably likely hypotheses and those with vastly different odds.
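        To illustrate with toy numbers (these are arbitrary, chosen only to show the point):

```python
# A bare ">" comparison says the same thing in both cases below,
# but the likelihood ratio shows HOW MUCH more likely A is than B.
p_a, p_b = 0.0010, 0.0005   # two long-shot hypotheses
q_a, q_b = 0.60, 0.55       # two plausible hypotheses

assert p_a > p_b and q_a > q_b  # both comparisons just say "A beats B"

print(p_a / p_b)            # 2.0  -> A is twice as likely as B
print(round(q_a / q_b, 2))  # 1.09 -> nearly a toss-up
```

        The comparison alone collapses both cases into the same verdict; the ratio preserves the strength of the verdict.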

        Here is the critical point:
        1) If we have sufficient evidence to confidently assign a probability to hypothesisA and hypothesisB, then all we need is the comparison.
        2) If we do NOT have sufficient evidence to confidently assign these probabilities, then NO calculation based upon these truly imaginary numbers will ever reliably take us closer to any truth.

        The best we can do is to assign at least some probability based on our inferences, using as much background information and specific evidence as we can for our competing hypotheses, and then argue a fortiori, so that we can maximize our confidence in being charitable (to combat biases) and in reaching a sound conclusion based on what information we do have. When we don’t have much information, that means we can’t form very strong conclusions (with a high epistemic weight) given that limited evidence.

        But in many cases throughout our day-to-day lives, we are pragmatically driven to decide whether A is more likely than B given what we know, and when we know we can’t assign a high weight to that conclusion, we limit the severity of our decisions based on those conclusions. It’s simply the best we can do, and it’s often quite effective and sufficient for many practical purposes, as the human species has shown given the advancements we’ve made in our knowledge about the world. Whether it’s our personal everyday reasoning or a more externalized/systematized analysis in any branch of science, if it’s valid reasoning, then it can be broken down into a Bayesian form.

        If you have more questions about this, I highly recommend reading Carrier’s Proving History as it addresses all of these sorts of questions (and many more) and gives more sources for those interested in more complex math that underlies the basic concept. It goes into a lot more detail and is very accessible to people with different competencies in mathematics/statistics and epistemology. Check it out if you are interested in more info!

        Also, I recommend looking into the work of various cognitive scientists that have modeled various Bayesian dynamics employed in the brain. Check out much of the predictive coding and other relevant work done by Friston, Clark, Hawkins, and others. They can explain in far more detail as well, including showing how various neural processing schema appear to use a Bayesian network of Markov blankets or chains. Very cool stuff and something I’ve been reading a lot about as a separate hobby I like to dabble in (cognitive science). I also recommend Clark’s book Surfing Uncertainty, which talks about predictive coding (relying on Bayesian causal modeling) and empirical research that supports it.

        Lage

        July 13, 2017 at 11:01 pm

      • The problem is that I’m picking up on what I sense is a bias now. I am hearing a lot of faith, but I’m not seeing the reasoning. And if a mathematical operation has no clear rational basis, then I suspect the formula is a placebo, no better than prayer. It’s like the case of the random number algorithm that became less random the more you added to the formula.

        I could be wrong, of course. But what are the odds of that. 🙂

        Marvin Edwards

        July 14, 2017 at 1:32 am

      • The problem is that I’m picking up on what I sense is a bias now. I am hearing a lot of faith, but I’m not seeing the reasoning.

        That’s a funny joke! Well, to humor you and your attempt at playing the devil’s advocate, I’ll play along. Faith is what results when people fail to use Bayesian reasoning and that can be shown by plugging in the relevant numbers for a belief that is fundamentally not based on evidence (or not very much evidence, if any) and then acting AS IF the posterior probability that results is high. So it’s good that you mentioned the term “faith” because it is a perfect counterexample to Bayesian reasoning, which is why people should learn more about Bayesian reasoning, so they have better chances of believing as many true things and as few false things as possible.
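        As a quick illustrative sketch (with made-up numbers): if the “evidence” for a claim is equally likely whether the claim is true or false, Bayes’ theorem leaves the prior untouched —

```python
# Uninformative evidence: P(E|H) == P(E|~H), so the posterior equals the prior.

def update(prior, likelihood_if_true, likelihood_if_false):
    numerator = prior * likelihood_if_true
    return numerator / (numerator + (1 - prior) * likelihood_if_false)

posterior = update(0.01, 0.5, 0.5)  # the "evidence" carries no information
print(round(posterior, 6))  # 0.01 -> credence didn't move at all
```

        Acting AS IF that posterior were high anyway is precisely the failure mode I’m calling faith.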

        It is true that we all have biases which is why we need to try and apply critical reasoning to our beliefs and ultimately apply Bayesian reasoning. The success and validity of Bayes’ theorem can be shown through the following proof and through its actual efficacy in allowing us to successfully predict the future through a number of various causal interactions and processes. One need only look at the scientific method and how its application and our conclusions based on experiments directly translates into Bayesian reasoning, and then look at how successful the method has been epistemologically. It’s been quite successful indeed! It allows us to make so many successful predictions that we’ve been able to perform vast manipulations of our environment. It’s actually quite breathtaking when you think about it.

        And if a mathematical operation has no clear rational basis, then I suspect the formula is a placebo, no better than prayer.

        I’m not sure what you meant by this comment. Bayes’ theorem quite obviously does have a rational basis, and you need only look at the logical proof I provided above and realize its causal efficacy when applied (in terms of increasing our ability to make more and more successful predictions over time). That shows the ratcheting effect that it has on our expanding body of knowledge. As for placebos, unless this is simply a part of the joke, I think you misunderstand what placebos are. I recommend reading up on them as well. Placebos have less causal efficacy than an actual viable treatment, and Bayesian reasoning can show its viability via the scientific method (which is also used to analyze placebo effects in general). So actually, one needs Bayesian reasoning to properly infer what placebos are and to determine when their effects are more than likely playing a role in some experimental outcome. If you were joking, then excuse me for taking your comment seriously and sucking the laughter out of the room on that one. Sometimes it’s difficult to detect humor when we’re not talking face-to-face so apologies if that’s what I’ve done here.

        I could be wrong, of course. But what are the odds of that.

        Well, you’d have to apply Bayes’ theorem to find that out! 🙂

        Lage

        July 14, 2017 at 10:14 am

      • Ah! Here’s a very simple and understandable demonstration of the Bayes’ Theorem without all the sales talk and the anti-religion bias: https://en.wikipedia.org/wiki/Base_rate_fallacy

        (A) If it is the case that 1 out of a thousand drivers are driving drunk each night, and

        (B) Our breathalyzer shows the drunk driver 100% of the time, but also gives us a false positive (showing a sober person as drunk) 5% of the time,

        Then what are the odds that our driver, who just tested positive, will actually be drunk?

        Since our test is only wrong 5% of the time, we may be tempted to say that there is a 95% chance that our driver is actually drunk. But that would be incorrect, because we’ve ignored the fact that 99.9% of the drivers going through our check point are actually sober and 5% of them will also show up falsely on our breathalyzer as drunk.

        So, if 1000 people go through all our checkpoints in one night, we know that (A) only 1 of them will actually be drunk, but also 5% of the other 999 (= 49.95) will be sober, but will show up as drunk on the breathalyzer. So there will be 49.95 sober plus 1 drunk or a total of 50.95 people who show up as drunk on the breathalyzer. And only 1 of them is actually drunk.

        So, the odds, that any one of the 50.95 people who tested as drunk is actually drunk, is precisely 1 in 50.95, or 1.96% rather than 95%. And that’s a big difference in our certainty.

        Bayes’ Theorem uses a formula to correctly calculate this probability:
        P(A|B) = ( P(B|A) * P(A) ) / P(B)
        (see “Bayes’ Theorem” in Wikipedia for general description).

        Which in the Wikipedia example becomes:
        p(drunk|D) = (p(D|drunk) * p(drunk) ) / p(D) = 0.019627 = 1.96%
        (see “Base Rate Fallacy” in Wikipedia for full details)
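        For what it’s worth, the same calculation can be checked numerically (a direct transcription of the figures above):

```python
# The breathalyzer example, worked through Bayes' theorem.
p_drunk = 1 / 1000          # base rate: 1 in 1000 drivers is drunk
p_pos_given_drunk = 1.0     # the test flags every drunk driver
p_pos_given_sober = 0.05    # 5% false-positive rate on sober drivers

# Total probability of a positive test (law of total probability):
p_pos = p_pos_given_drunk * p_drunk + p_pos_given_sober * (1 - p_drunk)

# P(drunk | positive) = P(positive | drunk) * P(drunk) / P(positive)
p_drunk_given_pos = p_pos_given_drunk * p_drunk / p_pos
print(round(p_drunk_given_pos, 4))  # 0.0196, i.e. about 1.96%, not 95%
```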

        NOTE: We must keep in mind that the output of Bayes’ Theorem is only as reliable as the accuracy of our estimated probabilities and our inclusion of all the relevant factors. If we start off with inaccurate estimates or missing critical factors, then all the math in the world won’t help us.

        Marvin Edwards

        July 14, 2017 at 10:40 am

      • Ah! Here’s a very simple and understandable demonstration of the Bayes’ Theorem without all the sales talk and the anti-religion bias: https://en.wikipedia.org/wiki/Base_rate_fallacy

        Yes, there’s a good example! I never gave any sales talk, though, but rather a description of the theory and its relevance to proper reasoning. I’m not trying to sell anything, so I’m not sure what you meant by that. Another joke perhaps? And as for anti-religion bias, while I am opposed to irrational thinking, which is highly prevalent in religious circles and ideologies, you need not focus on the religion aspect so much as the rationality part. Once one does that, then even people with various religious views can start to see (or at least have a better chance of seeing) why many of their religious beliefs are not rational, not based on proper reasoning. That was part of what facilitated my conversion from Protestant (born-again) Christianity to atheism. Now as an atheist, I try to continue to apply proper reasoning methods in my everyday life, question beliefs I’ve held for a long time, and try to increase the number of true beliefs and reduce the number of false beliefs I hold. I’ll always have a mixture of both, but as long as I actively try to skew the ratio toward truth and rationality, I can be content with that effort.

        NOTE: We must keep in mind that the output of Bayes’ Theorem is only as reliable as the accuracy of our estimated probabilities and our inclusion of all the relevant factors. If we start off with inaccurate estimates or missing critical factors, then all the math in the world won’t help us.

        Exactly! And that was one of Richard Carrier’s points in that article. Garbage in = garbage out, which is why we need to learn more about how to apply proper reasoning and to use the best protocols correctly! I think you’ve now got it, dude! Thanks again for commenting and engaging with me. Always a pleasure to have a friendly discourse with others about such fascinating topics. Knowledge is power, Marvin!

        Lage

        July 14, 2017 at 11:47 am

      • Okay, so “sales talk” would be over selling the influence of Thomas Bayes upon clear thinking. Basically, he simplified a formula for combining multiple probabilities into a single formula by using Algebra. To say that there exists any such thing as “BAYESIAN reasoning” which will solve all the other fallacies in our thinking is going way overboard.

        I would suggest that “BAYESIAN reasoning” is limited to a specific type of probability problem. For example, in the faulty breathalyzer case, it was only necessary to point out our sampling error, and include the 5% of all sober people in our sample, to compute the correct odds. Note that once we did that, we computed the correct odds (1.9627%) BEFORE we even plugged the figures into Bayes’ Theorem.

        Like you, I was raised in a Protestant church, the Salvation Army. But I had no anti-science bias. When I asked my mother (she and dad ran the church) about evolution, she said that “a thousand years are as a day to the Lord” and that it was possible that evolution was the tool he used. And I think I first heard about Einstein’s time-dilation on a TV program produced by the Moody Bible Institute of Science. At one of the Bible Conferences in Florida, an officer was doing a Bible lesson with chemistry. He mentioned that his brother had created a red dye out of gold when investigating Moses turning the golden idol into blood.

        I’m a Humanist and a Unitarian Universalist. So, from my perspective, every word of the Bible was written by well-meaning people. And in a mixed-family where mother remained a Christian all her life, I respect the emotional/spiritual benefits provided by religion as well as its deliberate moral training of our children through stories.

        As to practical everyday moral problems, I believe that all people who profess morality are the allies of all others, regardless of their notions of Gods and afterlives. So I’m not really into attacking anyone’s religious beliefs, even though I will challenge their moral judgment when needed.

        Marvin Edwards

        July 14, 2017 at 3:24 pm

      • Okay, so “sales talk” would be over selling the influence of Thomas Bayes upon clear thinking. Basically, he simplified a formula for combining multiple probabilities into a single formula by using Algebra. To say that there exists any such thing as “BAYESIAN reasoning” which will solve all the other fallacies in our thinking is going way overboard.

        This misses the point of what I was saying. My point was that all rational/valid reasoning can be broken down into a Bayesian form, which means that if one wants to reason correctly about any two competing hypotheses/beliefs (i.e., is A more likely than not-A?), it will follow a Bayesian form. The more information one includes in determining the priors and so forth, the more likely that Bayesian form will correspond to actual factual states about the world. I wasn’t selling the influence of Bayes himself, since he actually played a relatively small role in the overall theoretical development; it was many of his successors who built on his simple formula and discovered how it could be applied more broadly to any form of reasoning/inference.

        I would suggest that “BAYESIAN reasoning” is limited to a specific type of probability problem.

        Yes and no. It’s limited to probability problems that involve any two (or more) competing hypotheses/beliefs/etc., and thus has a bearing on how beliefs should be updated over time given the amount of information at any point in time, which basically covers everything that we can claim to believe is true or false.
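        To make the hypothesis-comparison point concrete, here is a minimal sketch (the numbers are purely illustrative): in odds form, Bayes’ theorem says the posterior odds of A over not-A equal the prior odds times the likelihood ratio of the evidence.

```python
def posterior_prob(prior_a, p_evidence_given_a, p_evidence_given_not_a):
    """Posterior P(A | evidence) for two competing hypotheses A and not-A.

    Uses the odds form of Bayes' theorem:
    posterior odds = prior odds * likelihood ratio (the Bayes factor).
    """
    prior_odds = prior_a / (1 - prior_a)
    bayes_factor = p_evidence_given_a / p_evidence_given_not_a
    posterior_odds = prior_odds * bayes_factor
    return posterior_odds / (1 + posterior_odds)

# Example: a 50/50 prior, with evidence 4x more likely under A than not-A
print(posterior_prob(0.5, 0.8, 0.2))  # 0.8
```

        The same function applies to any pair of competing beliefs: only the prior and the two likelihoods change.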

        Note that once we did that, we computed the correct odds (1.9267%) BEFORE we even plugged the figures into Bayes’ Theorem.

        Yes, but you’ve glossed over a critical point here. Bayesian reasoning WAS necessary to update your belief correctly. If you started with the belief that the probability was 95%, but were missing information, then once you got new information suggesting that the actual probability was closer to 2%, you applied Bayes’ theorem to that update in belief. You can ask: given the base rate fallacy information and the math you were presented with to illustrate the error, what is the probability that your previous belief in a 95% probability is still true, rather than the 2% probability, given the new information? The probability is vanishingly small (if not approaching zero) that, given that new information, the 95% figure is still the correct/true belief. You DID use Bayesian reasoning to arrive at the updated belief, but because you assigned huge priors (perhaps unconsciously/automatically) to the corrected math being the more accurate answer, you updated your belief automatically, no further questions asked. But Bayesian reasoning was needed for the proper rational belief updating. You’ve simply isolated the base rate fallacy math from the updating of beliefs, which was a logical error. See what I mean?
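        The updating idea can be sketched as repeated application of the same rule (a minimal illustration with made-up numbers, assuming the pieces of evidence are independent):

```python
def update(prior, likelihood_ratio):
    """One Bayesian update: convert the prior to odds, multiply by the
    likelihood ratio P(evidence | H) / P(evidence | not-H), convert back."""
    odds = prior / (1 - prior)
    odds *= likelihood_ratio
    return odds / (1 + odds)

# Start out 95% confident in H, then see two pieces of evidence,
# each 20x more likely if H is false (illustrative numbers only).
belief = 0.95
for lr in (0.05, 0.05):
    belief = update(belief, lr)

print(round(belief, 3))  # 0.045
```

        Even a strong prior gets washed out by enough contrary evidence; that is the “proper rational belief updating” in miniature.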

        Like you, I was raised in a Protestant church, the Salvation Army. But I had no anti-science bias. When I asked my mother (she and dad ran the church) about evolution, she said that “a thousand years are as a day to the Lord” and that it was possible that evolution was the tool he used. And I think I first heard about Einstein’s time-dilation on a TV program produced by the Moody Bible Institute of Science. At one of the Bible Conferences in Florida, an officer was doing a Bible lesson with chemistry. He mentioned that his brother had created a red dye out of gold when investigating Moses turning the golden idol into blood.

        How is this anecdote relevant? Don’t get me wrong, I don’t mind expanding the conversation a bit, but why did you mention this?

        I’m a Humanist and a Unitarian Universalist. So, from my perspective, every word of the Bible was written by well-meaning people.

        I think that’s a very naive belief. To think that absolutely none of it was written by people with an intent to maintain dogmatic ideological power over others’ beliefs/behaviors, to usurp the property (including slaves) of those around them, to justify invading foreign tribes/nations, etc., is beyond ridiculous. And even granting the idea that they were ALL well-meaning (which is ludicrous) still wouldn’t negate the harm and suffering that those writers (and their leaders/followers) caused for more than two millennia with their awful moral prescriptions (though not all of them were awful), and the cultural calamities that followed (and which still exist today).

        And in a mixed-family where mother remained a Christian all her life, I respect the emotional/spiritual benefits provided by religion as well as its deliberate moral training of our children through stories.

        Actually, I agree in part with you here, in that I acknowledge the benefits that some get from religion, but there are ways of getting those benefits that don’t require dangerous beliefs that harm many others (even if the harm is not intended by the proponent of said beliefs). The same goes for moral training. If one removed all of the morally reprehensible content of the Bible and of the sets of beliefs that many followers hold, then it would be far more permissible, though still the wrong way to go about moral education. Moral education should be built on a solid epistemology: rationally reasoning about what we ought to do above all else, given the information we have about what maximizes one’s chances of a fulfilling life. There’s some overlap there, in that some of the myths and stories in religions actually make important/valid moral points, but we can dispense with the lies, including the heaven-hell doctrine, the idea that some supernatural deity is the creator/decider of what is moral and what is not, and the idea of souls (including in embryos, etc.). It is that kind of reasoning that leads to people killing homosexuals, oppressing women, owning slaves, rejecting science (and medical advancements), etc. So we can certainly acknowledge the bits of good, but then one must discard the rest.

        Lage

        July 14, 2017 at 4:42 pm

      • I have an upside-down cup on the table. It contains one of six possible coins: a penny, a dime, a nickel, a quarter, a half-dollar, or a silver dollar. The coin will be either heads up or tails up.

        What coin is under the cup? Is it heads up?

        Marvin Edwards

        July 14, 2017 at 5:22 pm

      • Ooh, a riddle! I love it! If this is truly the only information you’re giving me (and assuming it’s not a trick question, a play on words, etc.), with no other context, then ignoring any background information, I’d say there’s an equal 1/6 chance that it’s any of the six coins and an equal 50% chance that it’s heads or tails (up).

        However, if I also include my background information (and assuming I’m allowed to do so for the purposes of your hypothetical), then I also have to factor in which coins you are likely to use for a riddle/trick. The most common coin used in magic tricks, coin tosses, and other similar events is likely the quarter (based on my previous experience), even though the penny is the most common coin in circulation in the U.S. So I’d give a probability of between 60% and 90% that it’s a quarter; even though I think the probability is likely on the higher end, I’ll argue a fortiori that it’s only 60% likely to be a quarter.

        As for heads or tails being up: although the odds are technically 50/50 without any more information, if I include background knowledge of which way quarters tend to face after coin flips and in a number of magic tricks, it’s slightly more likely than 50% (since there’s some possibility that you flipped the coin before placing it under the cup to make its orientation random, and experiments have shown that quarters are more likely to land heads up), but not much more probable, so I’ll simply say between a 50% and 51% chance of it being heads up. If I were allowed to get more information from you, those odds could change (e.g., have you performed this trick before, are you randomly grabbing a coin from a jar with an equal number of each coin you mentioned, how big is the cup and could a particular-sized coin even fit under it, etc.).

        So the numbers are subject to revision if you can give more information, but from my background knowledge and what you’ve given me here, those would be my estimated probabilities of the most likely outcome. Fun stuff!
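        For what it’s worth, the no-background-information reading of the riddle can be enumerated directly (a minimal sketch; the uniform prior over coins and faces is the stated assumption):

```python
from fractions import Fraction

# The riddle's sample space: 6 possible coins, each either heads or tails up.
coins = ["penny", "dime", "nickel", "quarter", "half-dollar", "silver dollar"]
faces = ["heads", "tails"]
outcomes = [(c, f) for c in coins for f in faces]

# With a uniform prior (no background information), each of the
# 12 joint outcomes is equally likely.
p_each = Fraction(1, len(outcomes))
p_quarter = sum(p_each for c, f in outcomes if c == "quarter")
p_heads_up = sum(p_each for c, f in outcomes if f == "heads")

print(p_quarter, p_heads_up)  # 1/6 1/2
```

        Folding in background knowledge (quarters favored in tricks, heads-up bias, etc.) just means replacing the uniform prior with a non-uniform one; the enumeration itself stays the same.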

        Lage

        July 14, 2017 at 6:48 pm

      • The information I wanted to convey is this: There is a 100% probability that the coin is what it is and there is a 100% probability that either the face or tail is up. There is a 0% probability that it is any other coin and there is a 0% probability that the down side is up.

        Whatever likelihood we compute, it is always a possibility that even the least probable event is what actually happened.

        The same applies to whether there was a preacher named Jesus traveling and preaching around the time of, let’s say, Jesus.

        And this is another reason why putting too much faith in “BAYESIAN reasoning” would itself be a fallacy. And that’s why I rolled my eyes at you and Carrier when Carrier suggested this is the way everyone is reasoning already and you suggested that our only alternative to BAYESIAN reasoning is irrationality.

        Small basket. Tons of eggs.

        Marvin Edwards

        July 14, 2017 at 7:45 pm

      • The information I wanted to convey is this: There is a 100% probability that the coin is what it is and there is a 100% probability that either the face or tail is up. There is a 0% probability that it is any other coin and there is a 0% probability that the down side is up.

        Yes, but that statement is epistemically useless and therefore not relevant to our belief about what the odds actually are that it’s any one of those options you listed.

        Whatever likelihood we compute, it is always a possibility that even the least probable event is what actually happened.

        Again, irrelevant to what one should conclude is likely to be true given what we know. We could also have a Cartesian demon fooling us about everything, but that’s neither likely nor relevant given the epistemology we have available to successfully infer causal relations about the world. A minute chance of something being true does not make it rational to believe it’s true, and it’s logically fallacious to believe otherwise (a common fallacy committed by Christians and the like all the time).

        The same applies to whether there was a preacher named Jesus traveling and preaching around the time of, let’s say, Jesus.

        Yes, but it’s not rational to believe in remote possibilities. I recommend you read Carrier’s On the Historicity of Jesus to see why that’s not a likely claim. It’s likely there were preachers named Jesus, because that was a common name and there were many preachers at the time, but the claim that Christianity began with a historical Jesus is actually not very likely (between 1/3 and 1/13000, arguing a fortiori). His book is the most recent peer-reviewed, academically published comprehensive scholarship that addresses that very question.

        And this is another reason why putting too much faith in “BAYESIAN reasoning” would itself be a fallacy

        It’s not faith when there’s evidence to support it. You don’t understand the definition of faith if you think that logical conclusions are based on faith (at least blind faith: believing without good evidence and reasoning). If the evidence were to change such that Bayesian reasoning consistently led to false beliefs (and some other system was demonstrated with evidence to be better), then my belief in Bayesian reasoning would change, but the paradox there is that I’d be using Bayesian reasoning to update that belief too. So it’s not logically possible to avoid it in correct reasoning. Knowledge is power, Marvin, and I’m confident that if you keep digging into this topic you’ll see what I mean. Statistics are not something our brains evolved to do well in many contexts, so it takes hard work and lots of study to better understand how to apply them explicitly.

        Lage

        July 14, 2017 at 8:52 pm

      • There is sufficient evidence to presume someone named Jesus was a traveling preacher. The name was popular at the time (even the criminal Barabbas’s first name was Jesus). And we know there were other traveling preachers at the time (e.g., John the Baptist). In fact, there might have been several preachers named Jesus wandering around at the time. The claim that there wasn’t at least one preacher named Jesus would be improbable. Paul seems to have been a real person, and since he wrote of his encounters and arguments with Peter in his letters, it is likely that Peter was real as well.

        We can toss all the miracles and myths, of course. Just like Jefferson did in his version of the NT.

        So, one of my bad feelings about “BAYESIAN reasoning” (as distinct from Bayes’ Theorem) is that it appears it can produce inaccurate conclusions with inappropriate levels of certainty. (That “faith” thing).

        Hmm. You said it best, “A minute chance of something being true is not rational to believe is true and it’s logically fallacious to believe otherwise…”.

        Marvin Edwards

        July 14, 2017 at 10:00 pm

      • The claim that there wasn’t at least one preacher named Jesus would be improbable.

        Exactly, as I said in my previous comment.

        So, one of my bad feelings about “BAYESIAN reasoning” (as distinct from Bayes’ Theorem) is that it appears it can produce inaccurate conclusions with inappropriate levels of certainty. (That “faith” thing).

        It can, but if one reasons rationally with charitable, a fortiori ranges of probabilities, then it’s likely to produce a degree of certainty proportional to the evidence, which is exactly why one needs to apply it correctly. Garbage in (i.e., using faith, etc.) equals garbage out. But in all cases, it’s the best we’ve got for any epistemology that aims to arrive at accurate beliefs. Levels of certainty arrived at through non-Bayesian reasoning are always less likely to be justified. That’s logic, baby!

        Lage

        July 14, 2017 at 11:49 pm

      • Anyway, thanks for letting me get Bayes off my chest. Pardon my venting. 🙂

        Marvin Edwards

        July 15, 2017 at 6:56 am

        No problem. Bayes’ theorem and its implications are often misunderstood, so I’m more than happy to try to shed light on the matter and spread it to others. Any cognitive tools that we can share with others will only improve one another’s epistemologies and so forth. We can all learn a lot through friendly discourse. As always, knowledge is power! Peace and love, Marvin.

        Lage

        July 15, 2017 at 8:28 am

