Operationalizing a Christian Foundation for Statistical Inference

by Andrew Hartley

Andrew Hartley is the author of Christian and Humanist Foundations for Statistical Inference: Religious Control of Statistical Paradigms. For more information on this work, please visit the Resource Book page. Guest author Steve Bishop posted an interview with Andrew as part of his series on Christian Mathematicians.

In my 2008 book, Christian and Humanist Foundations for Statistical Inference (Resource Publications; hereinafter, Foundations), I sketched out what I see as a biblically consistent philosophy of statistics. Since then, a few statisticians and mathematicians have requested a framework for “operationalizing” that philosophy. What, they ask me, does the philosophy entail practically for the Christian doing or teaching statistical inference in the present age? This post takes a first pass at addressing these requests without, however, offering a self-contained explanation that can stand apart from that book or from a suitable background in reformational philosophy (sphere sovereignty, sphere universality, etc.).

At the outset, I should set some realistic expectations concerning the faith-statistics integration that is possible now. As will become clear below, fully implementing the philosophy of statistics introduced in 2008 is impossible without a large-scale reformation of both the producers & the consumers of statistical reports and results. Only so much can be done in this fallen world, where sin crosses all boundaries and infects all people. As you and I consider “Christian statistics,” we should therefore consider the differences between how we might practice such statistics now and how we might do it in the New Jerusalem, with our “new bodies.” Caveat lector.

This post requires qualification also in that the present times—the early decades of the 21st century—impose their own distinctive threats to a biblically consistent statistics, so that the features of such a statistics that I stress here might not be the ones to stress in a different time and place. The humanistic “science” & “personality” motives have shaped much of the foundations and practice of present-day statistics; therefore, as we outline a statistics consistent with the Philosophy of the Law Idea (PLI), we are compelled to emphasize the elements of the PLI that address those motives. No one should be surprised if, decades or centuries from now, different threats arise, necessitating different Christian responses.

Discussion of the main topic here is set in the narrow context of the classroom, for that setting may offer the best chance of transforming statistical practice. A philosophy of statistics deals, after all, with fundamentals, and statistics educators take on a primary role in introducing us all to statistical fundamentals. Therefore, this post is framed with undergraduate college professors and high school teachers in mind, and offers examples and explanations that, hopefully, they can use easily in lectures, small-group discussions, and student exercises without a great deal of modification. Furthermore, this post assumes that the students, too, are seeking to conceptualize and practice statistics in ways that honor the Lord; indeed, much of what this post promotes is impossible without such an attitude, regardless of the efforts of the educator.

Some Christian principles for our vocations and sciences that should be conveyed in the classroom are as follows:

  1. All aspects (kinds of laws and properties) of human experience (quantitative, spatial, kinetic,…) are equally valid, important, and dependent upon God. No aspect can be reduced to any other.
  2. All the aspects, though mutually non-reducible, are nonetheless connected. As Herman Dooyeweerd has said, all functioning is “analogical.”
  3. Sin has affected humans—and, thus, the world we are called to care for as vicegerents—in all the aspects, though in their direction, not their structure.
  4. God’s redemptive work, through the cross, frees His people to reclaim what sin has stolen and to liberate the earth from its frustration, progressively, albeit with fits and starts.
  5. Science in general, and statistical inference in particular, must be founded on pre-scientific experience (what Dooyeweerd calls the “naïve attitude of thought”), so that it enhances rather than replaces that experience.

What follows here assumes that the reader is familiar already with these principles; hence, it does not explain them or even cite many of the wonderful lectures and books that lay them out.

I propose “operationalizing” those principles in an introductory statistics course using a fairly lengthy illustration—a story, if you will—although this post gives only an outline for the illustration. It introduces the discipline of statistics by discussing a subjective decision analysis problem, first without empirical data, & then—bringing in Bayesian analysis—with such data. Presenting the entire illustration will require several weeks, and multiple detours to clarify the concepts that are more difficult to grasp. In the end, though, if successful, the illustration will give a broad overview of statistical inference (purposes, foundations, ways of thinking, and so on) while helping the statistician model the above principles.

Here are some insights, consequences of the principles mentioned above, that should surface—and be emphasized—as the teacher discusses the illustration with students:

  1. Founding Inference on the Pre-scientific “Naïve Attitude”: Statistical inference requires combining new data with previous knowledge. Data alone do not suffice for making statements about the unknown from the known; rather, prior beliefs should influence post-analytic belief. Statistical inference—like other scientific activities—should enhance, and not replace, everyday ways of thinking (Dooyeweerd’s “naïve attitude”). To the extent the data do not overwhelm (“swamp”) prior belief, statistical inference depends on forming reasonable, realistic priors. Thus, inference is best when those forming the priors are experts in their subject matter, honest and level-headed in their assessments of prior evidence and of the applicability of that evidence to novel situations, and willing and able to distinguish between reasonable beliefs and the beliefs that would benefit them personally. An awareness of the importance and limitations of prior opinion will, in later, more detailed theoretical discussions, ease students into a discussion of Bayes Theorem. This insight helps to inoculate students against the humanist “control” motive. Jan C. Geertsema, too, has discussed the necessary founding of statistical inference in the “naïve attitude,” borrowing from Stoker the idea of the “contextual view of science.”
  2. Objective Implications of Data: Prior opinion affects statistical inference; however, the data, too, have an impact, and the more data that are available, the more precise (in most inferential situations) are point and interval estimates. Thus, data place constraints on justifiable scientific belief. This seems to cohere well with the PLI, which implies that all things are subject to the laws of all the aspects all the time, implying in turn that beliefs (scientific or otherwise), while qualified by laws of the pistic (fiducial) aspect, are also subject to quantitative laws. In other words, contrary to the humanist “freedom” or “personality” motive, norms exist for beliefs, & we are not free to believe whatever we wish.
  3. Decisions: If we take as given that decision making is rational when it maximizes overall expected net benefit, then this decision making requires incorporating utilities, that is, the possible, but sometimes only partly certain, costs and benefits of the candidate decisions. The expected net benefit of a decision is a synthesis of the quantitative sizes of the (usually multiple) possible payoffs and the probabilities of those payoffs. In this way, even if a particular possible state of nature is extremely unlikely, acting as if it were true might be prudent. For instance, if a patient exhibits symptoms consistent with a rare and deadly disease, treating the patient may be sensible even if infection is unlikely (a small numerical sketch of this kind of reasoning follows this list). This insight draws into statistics economic laws and properties, that is to say, laws and properties about investments, sacrifices, and rewards; therefore, it points up an additional means by which statistics must acknowledge a multiplicity of aspects.
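To make the rare-disease example concrete, an instructor might walk students through a small calculation like the one below. It is only a sketch: the probability and the losses are hypothetical numbers I have chosen for illustration, not figures from Foundations or from any clinical source, and the losses are expressed so that the rationality criterion above amounts to choosing the action with the smaller expected loss.

    # Expected-utility sketch for the rare-disease decision (hypothetical numbers).
    # Losses are negative benefits, so the rational choice under the criterion
    # described above is the action with the smaller expected loss.

    p_disease = 0.02  # assumed probability that the patient has the disease

    loss = {
        ("treat", "diseased"): 10,    # treated in time: modest cost and side effects
        ("treat", "healthy"): 10,     # unnecessary treatment: the same modest cost
        ("wait", "diseased"): 1000,   # untreated deadly disease: catastrophic loss
        ("wait", "healthy"): 0,       # no treatment needed, no loss
    }

    def expected_loss(action):
        """Probability-weighted loss of an action over the two states of nature."""
        return (p_disease * loss[(action, "diseased")]
                + (1 - p_disease) * loss[(action, "healthy")])

    for action in ("treat", "wait"):
        print(action, expected_loss(action))  # treat -> 10.0, wait -> 20.0

The particular numbers matter less than the synthesis they illustrate: the decision weighs the sizes of the possible payoffs together with their probabilities, so treating is prudent even though the disease is unlikely.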

What follows here, then, is the outline of the illustration. An instructor would, in most teaching contexts, flesh out the outline with numerous other examples & more details:

  • Every part of human life involves risk-taking decisions, i.e., allocating resources in the presence of uncertainty about the outcomes of the allocations. Many high-school graduates, for instance, must decide whether to begin working full-time immediately, or to postpone work to attend college in the hopes that additional education will yield additional opportunities and rewards several years hence.
  • We generally want to make the decisions which maximize expected rewards or, in other words, which minimize risk.
  • We handle most such decisions best through intuition & common sense, without any formal statistical analysis. In these cases, we don’t engage in what Roy A. Clouser calls “abstraction,” much less “high abstraction,” so that our deciding remains in the “naïve attitude.” E.g., when I make such minor decisions as whether to store a small sum of money in my wallet or in a bank, I do so quite informally. I fairly quickly & casually consider my own subjective feelings (which are qualified by Dooyeweerd’s “sensory” aspect) about the probability—viz., strength of belief—that, say, I will lose my wallet, relative to the probability that the bank will fail. Those probabilities certainly possess quantitative laws and properties, but they involve laws and properties of every other aspect, too, and I do not focus on (or “abstract out”) the quantitative ones to the exclusion of the others.
  • On the other hand, if we did demand abstract quantitative thinking for even the most minor of decisions, we would incur numerous detriments. Obviously, we would slow daily life almost to a standstill; perhaps less obviously, though, we would also risk neglecting, or giving short shrift to, aspects of the decision other than the quantitative one, e.g., losing sight of the
    • social impacts of the decision, such as the social interactions I enjoy when walking to the bank
    • chemical impacts of the decision, such as the pollution I might cause if I drove my automobile to the bank
    • ethical impacts of the decision, such as the ability of the bank to loan more money to needy businesses when I entrust the bank with my money

As long as we remain in the naïve attitude, we may be better able to appreciate these and other impacts of our decisions which are qualified by a wide array of aspects. So, one of the first main messages to convey to statistics students is that statistical reasoning can enhance everyday, pre-scientific experience, but we should not insist on “scientistically” imposing that reasoning indiscriminately.

  • However, when the implications of making a sub-optimal decision are serious (e.g., major loss of life or property), probabilistic & statistical reasoning are often justified, to improve the chances of identifying decisions that are optimal from a strictly quantitative perspective. That is, such reasoning, though it does require some expertise & time, is available to enhance our intuition. That enhancement can, in turn, enrich human life, as long as quantitative findings are subsequently integrated properly back into the multi-aspectual fabric of everyday existence.
  • As an example of one of these more consequential decision-making contexts, imagine a society or a large for-profit firm deciding whether to embark on a trip to a far-away planet, in the hopes of locating—and returning to Earth with—a large quantity of a precious metal. Such an endeavor carries the potential of great rewards, but also entails great risks to both life and property; hence, deciding whether to attempt the mission deserves careful consideration & systematic analysis.
  • Say that, upon analysis of the possible outcomes of the trip, scientists determine that
    • The probability of success is 47%.
    • The reward, upon success, would be $50 million.
    • The cost of the trip would be $25 million.
  • Given these costs, benefits & so on, the expected net benefit of the trip would be $50 million × 0.47 − $25 million = −$1.5 million, which is less than 0; hence the trip seems unjustified, at least from a financial perspective (recognizing, though, that many other considerations together might nonetheless justify the trip). Later, if the statistics course is advanced, the instructor might justify, using so-called “Dutch Book” arguments, decision making based on this type of probabilistic rule. (A worked sketch of this calculation, and of the Bayesian updating described below, appears after this list.)
  • This discussion has illustrated decision analysis under uncertainty WITHOUT random data informing the decision. One might call it “statistical” decision making; however, no “statistics” are involved, so if it needs a name, we might better call it “probabilistically supported” decision making.
  • On the other hand, as an example of decision making under uncertainty informed by random data, suppose data were collected on successful & unsuccessful space missions. Among a sample of historical missions, 85% (say) were successful. This summary statistic changes the analysis of the decision by updating the probability of success for the prospective mission. Such updating leads the class, in turn, to Kolmogorov’s definition of conditional probability, and Bayes Theorem (which can be taken as a consequence of that definition).
  • Suppose as well that, following Bayesian reasoning, the probability of success for the mission is updated from the prior 47% to the posterior 63%. The expected net benefit of the trip then becomes $50 million × 0.63 − $25 million = $6.5 million, so that the trip is now justified financially.
  • The instructor of an advanced statistics class might, at this point, discuss briefly with students the principle of stable estimation, the upshot of which is that, as the number of data and/or their precision increases, the impact of a wide range of prior probability distributions decreases. In other words, the principle shows that even “two people with widely divergent prior opinions but reasonably open minds will be forced into arbitrarily close agreement about future observations by a sufficient amount of data” (Edwards et al., 1963). While students in introductory statistics courses do not need to learn every facet of this principle, they should know that it exists and have a basic overview of its implications (a small sketch of the principle also appears after this list). This will help them appreciate the principle that data place limitations on beliefs.
  • In summary, we take risks and seek to maximize expected returns on our efforts & investments, whether we are deciding where to store a little money or whether to embark on a mission to another planet. The first scenario calls, though, for intuitive, informal judgment, whereas the second calls for comparatively careful, analytic measurements & comparisons of risks & rewards. The levels of abstraction and study appropriate for these scenarios fall along a continuum, and many other points along the continuum exist, too; less formal approaches would be appropriate when deciding whether to build a fence around the perimeter of a one-family house to keep out potential intruders, but more formal analyses would be helpful when deciding whether to research & develop a new anti-diabetes drug. Informal intuition is most beneficial, one might argue, when decisions must be made quickly and/or the possible losses from sub-optimal decisions are small. Formal analyses are useful, though, when sufficient time & expertise are available for them, & the possible gains from optimizing the decision are large. All statistical methods strike some balance between these types of intuition & formalism.
  • When the practicing statistician wishes to perform inference or assess potential decisions, & needs to select reasoning & methods appropriate for the situation, awareness of this continuum of formalism—as we might call it—can remind him/her that each of the various levels of formalism carries costs and benefits. It also sets a context, though, for appreciating the potential usefulness of some statistical methods that, for any of a variety of reasons, do not provide ideal results but are cost-effective in their ease of computation. The results of some methods based on asymptotic distributions, for instance, have higher-than-necessary standard errors or even biases, but are quickly derived with existing computer software or even hand calculators. On the other hand, as could be conveyed next to students, some frequentist statistical results, though they are statements about data given parameters, can be re-interpreted as Bayesian—and, therefore, inferential—results, at least approximately.
  • Having introduced statistical inference using Bayes Theorem, the instructor can turn to a discussion of statistical inference using frequentist statistics. The intermediate or advanced statistics student must understand Neyman-Pearson frequentist reasoning, & Ronald A. Fisher’s hybridization of frequentist & Bayesian reasoning (if the latter can be “understood” & not only “felt”); however, discussions of these approaches & their underlying philosophies should aim primarily to show how their results can, in certain circumstances, be interpreted inferentially, viz., Bayesianly (a small numerical check of these correspondences appears below). The possibility of such correspondences is fairly easy to convey to students familiar with calculus; they can be shown that, when making a posteriori inferences about a single normal mean m with a known standard deviation, given a flat prior distribution, a sample, and a sampling stopping rule independent of the data,
    • the maximum likelihood estimator equals the posterior mean,
    • the p-value of the one-sided hypothesis H0: m≤0 equals the posterior probability that m≤0,
    • the 95% symmetric confidence interval for m equals the 95% highest posterior density interval for m.
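Stepping back from the correspondences for a moment, here is a worked sketch of the space-mission numbers above, in the sort of code an instructor might show. The outline gives a prior success probability of 47%, a historical success rate of 85%, and a posterior of 63%, but it does not specify the model linking them; a beta-binomial model is one natural possibility, and in the sketch below the prior’s “strength” and the size of the historical sample are assumptions chosen purely so that the posterior mean reproduces the 63% of the illustration. Bayes Theorem, P(H | D) = P(D | H) P(H) / P(D), which follows from Kolmogorov’s definition of conditional probability, P(A | B) = P(A and B) / P(B), reduces in this conjugate setting to the simple posterior-mean formula used here.

    # Worked sketch of the space-mission decision (beta-binomial model assumed).

    REWARD = 50.0   # $ millions, received only if the mission succeeds
    COST = 25.0     # $ millions, spent regardless of the outcome

    def expected_net_benefit(p_success):
        """Expected net benefit (in $ millions) of attempting the mission."""
        return REWARD * p_success - COST

    # Decision WITHOUT data: only the prior probability of success is available.
    prior_p = 0.47
    print(round(expected_net_benefit(prior_p), 2))      # -1.5: trip not justified financially

    # Decision WITH data: update the prior using historical missions.
    # The prior is Beta(a, b) with mean 0.47; its "strength" and the historical
    # sample are hypothetical, chosen to reproduce the 63% posterior in the text.
    prior_strength = 27.5
    a, b = 0.47 * prior_strength, 0.53 * prior_strength
    successes, n = 17, 20                               # an 85% historical success rate
    posterior_p = (a + successes) / (prior_strength + n)
    print(round(posterior_p, 2))                        # 0.63
    print(round(expected_net_benefit(posterior_p), 2))  # 6.5: trip now justified financially

For the stable-estimation discussion, a similar sketch can show two hypothetical analysts, one optimistic and one pessimistic about mission success, being driven toward agreement as the (again hypothetical) number of historical missions grows while the observed success rate stays at 85%.

    # Stable estimation / "swamping" of priors in the same beta-binomial setting.
    priors = {"optimist": (8.0, 2.0),    # Beta(8, 2): prior mean 0.80
              "pessimist": (2.0, 8.0)}   # Beta(2, 8): prior mean 0.20

    for n in (0, 20, 100, 1000):         # number of historical missions observed
        successes = round(0.85 * n)      # hold the observed success rate at 85%
        for name, (a, b) in priors.items():
            post_mean = (a + successes) / (a + b + n)
            print(n, name, round(post_mean, 3))
    # As n grows, both posterior means approach 0.85, however divergent the priors.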

Returning to the Bayesian-frequentist correspondences listed above: the student will then appreciate why frequentist methods, despite being developed using deductive (rather than inferential) reasoning, sometimes do support the progress of science & decision making. S/he will also have access to a larger “toolbox” of statistical methods, some of which are easier to implement than Bayesian ones and yet are nearly as accurate.
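For students with a little programming background, the three correspondences can also be checked numerically. The sketch below uses a made-up sample and a known standard deviation, relies on NumPy and SciPy, and assumes the flat-prior, data-independent stopping-rule setting described above.

    # Numerical check of the Bayesian-frequentist correspondences for a single
    # normal mean m with known standard deviation and a flat prior.
    import numpy as np
    from scipy import stats

    sigma = 2.0                                     # known standard deviation
    x = np.array([0.8, 1.6, -0.3, 2.1, 0.9, 1.4])   # hypothetical sample
    n, xbar = len(x), x.mean()
    se = sigma / np.sqrt(n)

    # Under a flat prior, the posterior of m is Normal(xbar, se**2).
    # 1. The maximum likelihood estimator equals the posterior mean: both are xbar.
    print(xbar)

    # 2. The one-sided p-value for H0: m <= 0 equals the posterior P(m <= 0 | data).
    p_value = 1 - stats.norm.cdf(xbar / se)                  # P(Xbar > observed xbar | m = 0)
    posterior_prob = stats.norm.cdf(0, loc=xbar, scale=se)   # P(m <= 0 | data)
    print(p_value, posterior_prob)                           # numerically equal

    # 3. The 95% confidence interval equals the 95% highest-posterior-density interval.
    z = stats.norm.ppf(0.975)
    print(xbar - z * se, xbar + z * se)                      # the same interval under either reading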

  • All students should be informed, though, that such Bayesian-frequentist correspondences do not hold, or cannot be verified to hold, if certain conditions are not met. They often do not hold, for instance, if Bayesian priors are not flat, if the sampling rule depends on the data, or if the parameter space is discrete and the sample space continuous, or vice versa. Therefore, if the statistician interprets frequentist results inferentially, s/he should do so discriminately, ensuring that their inherently deductive meanings can be re-interpreted inductively, at least approximately. The challenges of performing these checks can, though, make frequentist results harder to use, to the extent that the statistician might opt for Bayesian ones instead.
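The first of these failure modes is easy to demonstrate alongside the check above: replacing the flat prior with an informative normal prior breaks the equality between the one-sided p-value and the posterior probability. The prior and the data below are, again, hypothetical.

    # With an informative Normal(mu0, tau**2) prior on m, the one-sided p-value
    # no longer matches the posterior probability that m <= 0.
    import numpy as np
    from scipy import stats

    sigma = 2.0
    x = np.array([0.8, 1.6, -0.3, 2.1, 0.9, 1.4])   # same hypothetical sample as above
    n, xbar = len(x), x.mean()
    se = sigma / np.sqrt(n)

    p_value = 1 - stats.norm.cdf(xbar / se)          # the prior plays no role here

    mu0, tau = -1.0, 1.0                             # a sceptical prior, for illustration
    post_var = 1 / (1 / tau**2 + n / sigma**2)       # conjugate normal-normal update
    post_mean = post_var * (mu0 / tau**2 + n * xbar / sigma**2)
    posterior_prob = stats.norm.cdf(0, loc=post_mean, scale=np.sqrt(post_var))

    print(p_value, posterior_prob)                   # noticeably different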

The above pedagogy will, I hope, convey the principles for statistical inference I’ve outlined in Foundations, & suggest how that inference can be performed so as to cohere with those principles. I do want to forestall a possible misunderstanding, though: the educator & the student should not take the use of subjective Bayesian inference above as an indication that Christian statistics is necessarily Bayesian. Foundations tried to show that this inference is appealing because of its direct, inductive statements about hypotheses given data, because of its founding on Kolmogorov’s (what seems to me) compelling, self-evident definition of conditional probability, & because it seems to permit non-reductive inference. Nonetheless, as a human formulation, subjective Bayesian inference is in need of reformation because, if nothing else, sin has impaired our ability to construct realistic priors. We have trouble distinguishing what we believe from what we hope to be true. Even our desire to reflect our true beliefs in priors is weakened; for, we sometimes indicate we believe something merely because we want others to believe it or because we want the conclusion of an analysis to benefit us. Plainly, then, Bayesianism cannot serve as a panacea for statistics in all respects. In any case, when statistical approaches other than subjective Bayesianism are proposed, I hope that the principles for statistics illustrated above can constitute a starting place for judging whether they are more suitable.

I would welcome your comments and questions on this post; if you would like to collaborate with me on refining it or on subsequent related research, please ask Josh Wilkerson (jwilkerson<at>godandmath<dot>com) to send me your contact information.

REFERENCE

Edwards W, Lindman H, Savage LJ (1963). Bayesian Statistical Inference for Psychological Research. Psychological Review 70(3), 193–242.
