In Defense of Secular Religions
Modern ideals of scientific and moral progress promise that rational inquiry can cleanse our beliefs of inherited bias and contingency. The history of those ideals suggests that this is neither possible nor desirable.

In a heartfelt essay penned in 1877, the British mathematician William Kingdon Clifford defined what he called “cosmic emotion”: a new type of feeling conditioned by the scientific advances of his age. Writing in the wake of Darwin’s disruptive revelations, Clifford found in evolutionary theory cause not for disenchantment, but for a renewed sense of expansive moral duty. He described this as the feeling that swells within when one considers “the universe,” or “sum of things,” as one “great aggregate of events,” alongside the resultant pang of dutiful desire to make this world a better place. The starry heavens above; the moral law within.
For Clifford, this emotion arose from immanent contemplation of nature’s unity and intricacy—devoid of any relation to an extrinsic Creator transcending it. Indeed, he contrasted his “cosmic emotion” with older conceptions in which nature was made—by an infallible architect—to be flawless and perfect. In more modern and godless conceptions such as his, by contrast, nature is only good to the degree that creatures fallibly attempt to make it so. It is because the world lacks an architect that it can be improved. Clifford’s contemporaries apprehended his notion in precisely this way: as the sense of purpose that arises when dwelling upon “Nature as apprehended by an age of scientific culture.”
Such a conception, therefore, arguably represents the epitome of a godless world: a world wherein, beyond our own actions, there is no external guarantee, and we are moral agents adrift within an otherwise amoral cosmos. It is infinitely telling, after all, that the phrase “making the world a better place” itself only entered parlance toward the end of the 1800s.
Such a conception would have seemed alien to earlier generations: the confidence that our actions can matter in the grander scheme, in ways that wouldn’t otherwise have come to pass. But the venerable secret lying at the root of godless modernity is that its core tenets and conceptual tools also emerged—entirely accidentally—as byproducts of earlier speculations on God’s unlimited, untrammelled sovereignty.
It was the late medieval fixation on divine omnipotence, luring God-fearing theologians to obsessively picture how Sovereign Divinity could have made the world otherwise, that encouraged people to begin articulating the ways we, humble humans, might—through combined effort—ourselves make it otherwise.
Not only did this ancient, terrifying, and tenebrous conception—of a Deity with power beyond limit—foment modern ideas of ethics, it also lies upstream of all scientific attempts to predict nature by grasping its laws and tracking its uncertainties. Herein lie the despotic origins of liberatory modernity, the forgotten roots that transmogrified into our contemporary world of silicon and suffrage.
But should this pedigree undermine our belief in the veracity and efficacy of our inherited worldviews? Ultimately, I argue, it shouldn’t. Because some accidents are happy ones; and this might be one of the most felicitous of all. To demonstrate this, however, we must first establish what a happy accident even is.
Exaptation and Eucatastrophe
A 900-page copy of Thackeray’s Vanity Fair can be used, successfully, as a doorstop. Thackeray didn’t write the novel for this purpose, of course. Importantly, something similar often takes place in biological evolution. It is called “exaptation.” Exaptation describes the happenstance whereby a trait or behavior that originally arose to serve one function—or none at all—thereafter fortuitously comes to be employed for a new, alien, adaptive usage. A classic example is feathered plumage: initially evolved for thermoregulation, later co-opted to produce the serendipity of avian flight.
Modernity itself, I argue, should be seen as an exaptation of older theological ideas. Another instance—amongst many throughout history—of the happy accident. Most of our cherished practical precepts, looked at the right way, can be seen as mutations of obsolesced outlooks. Like the bird—accidentally unshackled from planar life, thereafter enabled to soar beyond—we are latter-day beneficiaries of conceptual structures originally produced for alien purposes. The history of ideas, like that of life, is a conservative thing: forced to make do with the material it has inherited. But, as we shall see, this need not make us masters of suspicion.
Since the end of World War Two, many intellectuals—beginning with Karl Löwith—have unveiled the theological roots of modern ideas as a means to debunk or delegitimize them. Their aim is to puncture, on its own terms, modernity’s pretension of standing cleanly apart: of being conceptually distinct from the superstitious past it disavows and therefore purports to overcome. They apply a hermeneutics of suspicion. On the other hand, many contemporary champions of the modern precept of progress naively take for granted claims that modernity is entirely desacralized and voided of any faith or trace of its premodern past. Contemporary rationalists and optimists assume that, given enough time, we can optimize our beliefs and expunge them of all contingency or arbitrariness. The basic belief is that inquiry is like a marble rolling into a funnel: there are many paths to the bottom, but the same final position is always reached regardless—making history irrelevant and its influence inert.
But both of these broad positions assume that contingency is damaging to the legitimacy of beliefs or goals. The following is an attempt to stake a different position, between suspicion and naivety, by showing not only that legitimate precepts—such as the injunction “to make the world a better place”—can arise from historic accident, but that they indeed must so arise. Because it is only by abiding with the accidental and arbitrary that we can come to call anything, including the architecture of our worldviews, our own. They can be said to be earned—rather than imposed—precisely because they required prior overcomings and retain their traces; and what’s more, even if something can never entirely overleap its origins and reach some imagined global optimum, that does not mean unending local improvement is illegitimate.
In 1939, J.R.R. Tolkien defined what he called a “eucatastrophe,” articulating it as an unprecedented yet joyous break from the prior course of events: “a sudden and miraculous grace.” Like a catastrophe, but for the better.
Here’s my candidate for a eucatastrophe. Somewhere in Africa, probably tens of thousands of years ago, potentially hundreds of thousands, a string of sounds fell from the lips of one of our forebears. It expressed, for the first time, what could properly be called a counterfactual.
It might have been expressed in awe, in urgency, in annoyance. We will never know. The utterer will remain forever nameless and unsung. But it was the first linguistic utterance, articulated on this planet, describing something not currently actual.
Complex language includes expressions that can articulate what is not, in addition to what immediately is. This, presumably, is a necessary feature of any shared system of complicated communication. Because such utterances, of what can be in excess of what is, allow rules—and, thus, what’s grammatical and ungrammatical—to be articulated and maintained. But they therefore also allow the carving up of what’s proper and improper. The difference between “ought” and “is” rests, after all, in the fact that the former doesn’t need to concretely exist to be meaningful.
Our ancestors will have undoubtedly felt a sense of propriety long before they were recognizably human—that is, talkative and technological. But it would have taken the miracle of complex language to fully initiate our species into a world of right and wrong.
Norms will have existed long before creatures like us could talk about them—in some diffuse way, as appetites or conventions—but, after these linguistic developments, they could be discussed and contested, disputed and discarded, and, thus, refined.
Thereafter delaminating from the absolutism of the actual, our kind stepped forth into the spacious world of what’s merely possible: of what-could-have-been; of what-could-yet-be; of what-ought-to-be.
Possibility and Virtue
But unfolding all the things we now recognize as following from this—including Clifford’s “cosmic emotion” and the precept of “improving the world”—would take many further millennia. Knowledge builds, but not like a bullet shot from a pistol. Hence, the precept of “improving the world”—at least in the form we now take for granted—was lacking from most of the ancient world. Writing from the Aegean in the fourth century BC, Aristotle often pronounced that—whether through art or ingenuity—humans could not improve on nature’s precedent. This, in turn, was linked to a limitation of language itself: namely, one concerning premodern definitions of what we now call “possibility.”
In his Metaphysics, Aristotle declared that “evidently it cannot be true to say ‘this is capable of being but will not be.’” Or, put differently, if something can happen, it must reliably actually happen. This presumption derives from a definition of possibility that was prevalent throughout the ancient world, though Aristotle was the first to make it explicit. Across his writings, he consistently defined “impossible” as that which never takes place; “necessary,” as that which is always the case; and “possible,” as that which is sometimes so.
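Put schematically, in notation Aristotle himself never used, this temporal-frequency picture of modality amounts to the following, with p standing for a state of affairs and t ranging over moments of time:

```latex
% A schematic rendering (not Aristotle's own notation) of the
% "diachronic" or temporal-frequency conception of modality:
\begin{align*}
\text{possible}(p)   &\iff \exists t\; p(t)       && \text{(sometimes the case)}\\
\text{necessary}(p)  &\iff \forall t\; p(t)       && \text{(always the case)}\\
\text{impossible}(p) &\iff \forall t\; \neg p(t)  && \text{(never the case)}
\end{align*}
```

On this rendering, a possibility that never becomes actual is a contradiction in terms: to be possible just is to happen at some time or other.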
Though elegantly intuitive, this formulation obstructs articulation of possibilities that have never once happened before. And what you cannot clearly articulate you cannot yet clearly conceive. Such a formulation blunts sensitivity, in other words, to entirely unprecedented potentials. Accordingly, Aristotle couldn’t help but affirm that it is solely “of actual things already existing that we acquire knowledge.” This is the same as denying that humans can concretely know anything about things which haven’t already concretely occurred. It is a denial of grasping the future as an open, uncertain, undecided place.
Aristotle—alongside other Greek and, later, Roman writers—readily accepted the ramifications of this for wider human history. That is, they assumed that all things humanly possible have already come to pass—that everything that can happen already has.
Hence Aristotle’s conviction that none of us can meaningfully improve this world, at least in ways that haven’t already unfolded and aren’t guaranteed to later unravel anyway. He professed the belief that everything humanly achievable, and knowable, had already been achieved: not once nor twice nor occasionally, but infinitely many times. Hence, over time, our human world stays basically the same.
This applied as much to theoretical advance as to the moral order. Plato, for example, proclaimed that every single permutation of human “goodness” and “badness”—alongside every possible social arrangement, from the most benighted to the most beneficent—has been passed through and will, inevitably, return.
Even in his Republic—which envisioned one of the first recognizable utopias—Plato pauses halfway through the text to clarify that he is envisioning a situation that “has been” and will inevitably return, again, “hereafter.” After all, all that’s possible reliably becomes actual.
Again, this can be tied back to contemporaneous assumptions about possibility, which exhaustively anchored what’s possible to tangible manifestation within time. Today, philosophers call this the “diachronic” conception of potentiality. In such a scheme, that which hasn’t yet happened becomes indistinguishable from that which will never happen. It truncates the space of possibility to trivial circulations and rejiggings of what’s already been precedented.
On this view, there’s little point in seeking radically unprecedented improvements, certainly not ones that would otherwise never have come to pass. Every renovation, and its opposite, will eventually take place regardless, such that all fleeting impacts of our individual decisions will eventually wash from the face of the world anyway.
Hence, Classical ethics was primarily “aretalogical” in tone, or focused on virtue. That is, it centered on following established habits—or the behaviors of admirable precursors—and cultivating oneself in ways that have well-proven precedent for flourishing.
Indestructible Value
The arrival—and eventual ascendancy—of Abrahamic religion represented an alien intrusion into this pagan worldview. With the spread of Christianity and later Islam, there arrived a germinal sense of the universally unprecedented.
Roughly three centuries before Jesus’s life, Aristotle had reasoned that, if history cycles, we cannot properly say we come after our ancestors, because, in another sense, we also come before them. But, after the crucifixion on Golgotha, this could no longer apply—at least, for followers of the new creed. God can only sacrifice himself once, you see. If one time was not enough, then the act—of the saving of the souls of everyone, everywhere—wouldn’t have been fulfilled. Divinity, being omnipotent, doesn’t work in half-measures.
Which is to say, Jesus’s sacrifice, at least for his disciples, forced acceptance that something had transpired that had never once happened before, anywhere, nor could happen again. Precisely the same applied, for early Muslims, to the arrival of the Prophet’s teaching around a half-millennium later.
Nonetheless, old habits—just like crucified sons of gods—do die hard. Monotheistic belief in a superhuman, supervising deity sustained the old belief in a moral equilibrium in the world that outstrips all human activity or decision. Our world, by necessity, is the best world, given God’s omnibenevolent orchestration. And true, genuine perfection can no more be polluted than it can be improved upon.
So went the perennial Scholastic platitude: “ens et bonum convertuntur,” or “being and good are convertible terms.” Or, in the words of Anselm of Canterbury, writing around 1080 AD: “whatever is, is right.” As Nicholas d'Autrécourt put it a couple of centuries later—extending this line of reasoning—existence must be “always perfect to the same extent.” The universe, he pronounced, must house, over time, an invariant “complement of good.”
Variation, after all, would introduce imperfection into the perfect world, which is pure contradiction. In other words, the net “bonum,” or “good,” in the universe was not considered destructible: not variable nor contingent upon local events or happenstances. If something bad happens here, it was thought, something good happens over there, balancing out the whole. Perfection couldn’t be orchestrated any other way.
Nor could any potential be terminally lost from existence: if possibility is something that sometimes happens and impossibility something that never will, then for a thing to stop partaking in existence altogether would turn a possibility into an impossibility—again, introducing contradiction. The Creator can’t suffer any extinction or loss of His creations.
As in the ancient world, possibility was still defined solely as that which sometimes happens and sometimes doesn’t. This blocks apprehension of irreversibly wasted opportunities, eternally frustrated potentials, and irrecuperable harms. Limiting possibility to a narrowly “diachronic” application—as denoting that which comes and goes, but only ever as return of what’s come or gone before—means it can only describe reshufflings of the cosmic card deck, never genuine losses or novelties.
Put simply, on such a view, the world, at large, cannot get better, or worse, in the way we now acknowledge it can. Indelible stains on the human record—as we now conceive of the atrocities of the mid-twentieth century—as much as unprecedented reformations—as we now conceive of social renovations such as universal suffrage—were unthinkable.
For many medieval thinkers, therefore, the total “bonum” of our cosmos was considered invariable: globally conserved through all local interactions and shifts. Value, whatever that might ultimately mean, was considered indestructible and invariant.
Writing around 400 AD, Augustine summed it up, explaining that “variable good” is governed, in the last instance, by the divine “immutable good.” The “goodness” of created beings can, locally, “be augmented and diminished,” he admitted. But, from the perspective of the sum of things, all variations must ultimately average out as “good”—because “their author is supremely good.”
This was readily compatible with elder, pagan beliefs whereby things never really change. It was continuous with the earlier view of Lucretius, who—writing over four centuries before Augustine—claimed that, through all its possible permutations, the “aggregate of things palpably remains intact.” All losses are necessarily matched by compensatory rebirths, somewhere or somewhen.
The Confusion of the Muʿtazilites
Such belief was maintained into the Medieval epoch by the Aristotle-inspired school of thought known as scholasticism. Scholasticism taught that everything is the way it is for a deducible and neatly demonstrable reason, such that it strictly couldn’t have been any other way.
A kindred rationalism flourished in the Middle East during the Islamic Golden Age, dating roughly from 700 to 1300 AD, in the teachings of the Muʿtazilite theologians and the Aristotle-inspired falāsifa (i.e. philosophers), who sought to elucidate the rational necessity of the cosmos. This triggered a counter-reaction from what came to be known as Ashʿarism. The Ashʿarites instead foregrounded God’s limitless will, forging a position known as voluntarism. This new movement began unbinding the old view of a rational world where everything is the way it is for reasons of necessity, such that nothing can be radically altered for better or worse.
Voluntarists reproached the Muʿtazilites for their rationalist encroachments: for elevating chains of deductive demonstration over divine decision. Exalting God’s freedom and omnipotence over such rationalist straitjacketing, they asserted the centrality of Divine Will in every single worldly happenstance and so sought to strip the cosmos of any inherent, indwelling, independent rational order.
A prominent example came from the Persian theologian al-Ghazālī and his Tahāfut al-Falāsifah (or, The Confusion of the Philosophers) of 1095. Here, the Sunni theologian wanted to dismantle scholastic faith in the logical demonstrability of the connection between cause and effect. To do this, he conjured up unrealized—yet logically plausible—events that could disrupt regular causality. One can picture a flame not burning dry wool, or a dropped object falling upwards, after all.
As such, Islamic voluntarists notably moved away from the elder construal of “possible” as that which doesn’t happen at one moment but does happen at another. Instead, the term began being deployed to express plausible alternates to what does happen at one moment, without any obligatory need for them to come to pass at another. Contemporary philosophers call this a “synchronic” conception of possibility.
In developing such arguments, voluntarist theologians divorced possibility from concrete realization within time: emancipating its range of application from the known, familiar, and precedented course of events. Implicitly, they were conceiving of it, instead, as a space of plausible alternates, untethered from actual manifestation. A space carved up by purely logical relations of compatibility and incompatibility, rather than one barricaded by the fickle horizons of established, tangible, chronicled experience. The human imagination could begin drifting from here and now—from the absolutism of the actual—toward far more exotic vistas.
Abrading Reality to the Kernel
Though it was already tacit in Augustine’s 5th-century proclamation that “He could have done, but didn’t want to”—expressing the belief that there are myriad worlds God might have made, but simply didn’t—this powerful new logic of possibility was first made properly explicit around the turn of the 14th century. It came from the quill of the Scottish friar Duns Scotus. He wrote:
I do not call something contingent because it is not always […] the case, but because the opposite of it could be actual at the very moment it occurs.
Making clear what remained tacit in prior voluntarist thinkers, Scotus here finally consummated the divorce of possibility from precedent and obligatory later manifestation.
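Rendered in the same schematic fashion as before (again, a gloss rather than Scotus’s own notation), the synchronic conception relativizes contingency to a single moment, with the diamond standing for a purely logical possibility that need never be realized at any time:

```latex
% A schematic gloss on Scotus's "synchronic" contingency: p is contingent
% at moment t iff p holds at t while its opposite could (logically) have
% held at that very same t.
\[
  \text{contingent}(p, t) \iff p(t) \;\wedge\; \Diamond\,\neg p(t)
\]
```

The modal operator no longer quantifies over the parade of times at all; it ranges over alternatives to the actual at one and the same instant.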
Scotus was known as “the Subtle Doctor” in his day, due to his sprawling spiderwebs of syllogism; the modern word “Dunce” derives from later attempts to ridicule the perceived pedantry of his method. This seems unfair to Scotus, in my eyes, given the world-transforming consequences of his innovations.
Empowered by these culminating developments, late medieval theologians—bewitched by omnipotence, drunk on divine freedom—started excitedly conjuring all the ways in which this world could have been forged otherwise. They imagined worlds unconstrained by the principles imposed by Aristotle’s philosophy—Aristotle then still being considered an overriding authority. Some modern scholars have even written about what they call a “Medieval Multiverse.”
Buttressing God’s freedom in selecting between countless worlds involved pushing the claim that ours is not the way it is for any binding reason or logical necessity, but is so purely because of arbitrary choice. This eventually led to the claim—articulated most vociferously by William of Ockham—that none of the dictates of our language or reason can restrict what reality can be. (Hence Ockham’s eponymous razor, which originally stated that we should not multiply abstract entities needlessly, but cut away such clutter to get to existence’s minimalist core.)
Motivated by the conviction that our mental categories—abstractions, generalisations, propositional structures—cannot place limits upon what God can do, Ockham imagined existence completely voided of all such mental contents. To do this, he used the newly strengthened conception of possibility. He imagined counter-to-fact worlds wherein all such mental categories are procedurally eliminated, abrading reality to its mind-independent kernel. He called this the “Principle of Annihilation,” conceiving of it as a thought-experimental benchmark for establishing the independent reality of singular objects beyond any of our conceptual relations to them. In other words, here was the primal scene of modern scientific realism.
Importantly, this simple acknowledgement—that existence would continue without any minds like ours within it—can only ever be articulated counterfactually. Hence, such articulations would have to wait for the late medieval liberation of possibility from precedent. Put differently, Ockham’s annihilatory method was the origin of our modern conception of robust mind independence. It’s no coincidence that, writing centuries later, Galileo deployed an identical vocabulary of annihilation, pondering the counterfactual erasure of every last human experience. “I think,” explained the Tuscan astronomer, “that tastes, odours, colours, and so on are no more than mere names, so far as the object [is] concerned.” “Hence,” he continued, “if the living creature were removed, all these qualities would [also] be wiped away and annihilated.”
In this way, Ockham’s deployment of the new language of possibility lies at the root of the important Early Modern distinction—integral to the founding of modern science—between “primary” and “secondary” qualities. Or, between the ways things seem to us and the ways they actually are.
The compulsion of earlier theologians to imagine worlds entirely otherwise than the apparent one therefore led to the later discovery that it, in fact, is otherwise than it intuitively appears to us, and, thereafter, to the scientists’ demonstration of this fact. Conjuring alien worlds underwrote the epochal discovery that the very world we inhabit already is alien: made of invisible atoms and impersonal forces, not of odours, colours, and intentions.
This emancipation of possibility from experiential precedent also provided an essential precursor when it came to formulating the very idea of a “law of nature.” That is, as a parameter that can be imagined as having been otherwise without entailing logical contradiction. For the scholastics, because only impossible things never come to be, the ways of the world are the way they are—and cannot be any other way—as a matter of pure logic alone. Accordingly, it was assumed the world’s workings could be illuminated—from the proverbial armchair—via deductive methods. But conceiving of nature as having “laws” that are contingent constraints, akin to a sovereign’s diktats, meant they could no longer be deduced from afar. Instead, they had to be investigated through messy encounters with the world. They must be ascertained empirically and inductively, rather than via subtle syllogism alone.
This conceptual shift likewise provided the tools for the proper articulation of these peculiar, new-fangled laws. Just as Ockham had to orchestrate counterfactuals to stumble toward the notion of an entirely mind-independent object, so too did Galileo have to concoct unobservable, because physically impossible, limit cases—such as a frictionless plane—to throw into relief counterintuitive laws like that of inertia.
How Duns Scotus Inspired Probability Theory
But that’s not all. This unshackling of possibility from time’s sequential passage also fomented our modern ability to track nature’s uncertainties. Historians, that is, have often remarked how curious it is that—despite the fact that almost all other fields of mathematics find their first flourishing in the ancient world—the study of probability remained absent until modernity’s dawn.
There are myriad, subtle reasons for this. One amongst them, hitherto unacknowledged, is the lack of “synchronic” definitions of possibility prior to Duns Scotus’s contribution around 1300. After all, conceiving of each throw of the dice as the expression of a wider space of simultaneous alternate outcomes is requisite for grasping probability theory. Without a grasp of “synchronic” possibility, there could be no such visualisation.
Of course, in sortition, the casting of lots, and astragali—that is, heel bones used as dice—there is a rich prehistory to the understanding of luck and randomness. Yet this was often apprehended as inscrutable “fortuna” rather than anything formally tractable: until, that is, Gerolamo Cardano’s Liber de Ludo Aleae, which was written around 1550 and published posthumously in 1663.
In this pathbreaking work, Cardano—a polymath who often resorted to gambling to support himself—conducted the first real experiments with the mechanics of chance. Cardano’s breakthrough—deceptively intuitive to us now, yet at the root of the entire edifice of the modern world—rested in conceptualising each dice-throw as the expression of a wider set of simultaneous possibilities. In deploying numerical notation to track frequencies within this reference class, Cardano invented the modern field of probability and made the future—and its hazards—enumerable. In this way, these breakthroughs, tracing back to the Subtle Doctor and further beyond, lie upstream of our modern world of financial markets and profitable risk. The focus of humans began being dragged further and further into the future, and has continued tending this way—at accelerative pace—ever since.
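To see what this conceptualisation amounts to in modern terms, here is a minimal sketch (in present-day notation and a present-day programming language, neither of which Cardano had) of treating a cast of two dice as one point within a space of thirty-six simultaneous alternatives, with probabilities read off as shares of that reference class. It is an illustration of the idea, not a reconstruction of the Liber de Ludo Aleae.

```python
from itertools import product
from fractions import Fraction

# The "reference class": every simultaneous alternative outcome of a cast
# of two dice, each treated as equally possible at the moment of the throw.
sample_space = list(product(range(1, 7), repeat=2))  # 36 outcomes

def probability(event) -> Fraction:
    """Probability as the share of the space of alternatives an event occupies."""
    favourable = [outcome for outcome in sample_space if event(outcome)]
    return Fraction(len(favourable), len(sample_space))

print(probability(lambda o: sum(o) == 7))    # 6/36 -> 1/6
print(probability(lambda o: o[0] == o[1]))   # a doublet: 6/36 -> 1/6
print(probability(lambda o: sum(o) >= 10))   # 6/36 -> 1/6
```

The arithmetic is trivial; the picture it presupposes is not. Each throw is legible only against a backdrop of outcomes that could have occurred at that very moment but did not, which is precisely the synchronic space of alternatives that the older definitions of possibility had no room for.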
Science As Exaptation of Voluntarism
But not only did the subtle doctors—the omnipotence-struck theologians—accidentally provoke the discovery that the world in fact is otherwise. They also, eventually, opened the floodgates of moral expectation, by bequeathing the expressive tools—again, in an act of exaptation—with which to petition that this world must be made otherwise.
As mentioned, ancient and medieval theories of ethics were, by and large, “aretalogical.” That is, rooted in cultivating virtue, or, recapitulating the habits of prior upstanding example. This, clearly, fits with a solely “diachronic” conception of possibility. If all possible actions have been perpetrated before, ethics becomes a matter of embodying best precedent.
However, the new logic of “synchronic” possibility introduced a new way of formulating the individual will. Namely, as an echo of the divine one. It enabled the conceptualization of each and every decision as a selection between alternate courses of action: some of which may thereafter remain forever unrealized, despite the fact that all of them remain simultaneously possible in the moment of deciding.
Moral action could begin becoming less about following established example and more about attempting to renovate the world in ways that wouldn’t otherwise reliably come to pass. Rather than being a mere matter of time and inevitability, consequences could be construed as entirely contingent upon the will of the agent. In other words, if it’s the case that not all outcomes manifest themselves regardless, the selection between them becomes meaningful.
In this way, evaluative outcomes could start falling under the jurisdiction of decision rather than character or habit. Just as God’s untrammelled will had freely selected the world he created, our human wills—in their limited ways—could begin shaping the world we inherited. What’s more, they could begin shaping it in ways as-yet-unchronicled, given possibility’s coincident untethering from precedent.
Already, writing in the 13th century, the English polymath Roger Bacon stated that “individuals, cities, and whole regions can be changed for the better” through study and education. He mistily foresaw the development of “chariots” capable of moving without animals and even, apparently, flying machines—devices that could relieve humans and beasts of burdensome toil.
Suddenly, it began seeming to some that this world could be made better, in ways that wouldn’t otherwise transpire, by human effort alone. Though it took centuries to fully take hold, this would later blossom into the modern idea that we can ameliorate our lot in ways that meaningfully cumulate across the generations.
Modern Ethics an Exaptation, Too
The late medieval reformulation of the ethical will—as based on decision between simultaneously alternate possibilities rather than recapitulation of upstanding precedent—provided the matrix within which the two major modern systems of morality could emerge and develop: utilitarianism and deontology.
Utilitarianism appeals to the idea that the aggregate good of the world is—in the last instance—variable, which depends on the idea that not all possible outcomes will come to pass regardless. From this premise comes the principle that, when selecting between possible courses of action, we should always select the one that maximizes this aggregate. Coupled with the receding sense of any wider guarantee of superhuman cosmic justice, this imperative becomes articulable as a demand to make this world a better place than it otherwise would have been. Or, in the influential 1884 words of the utilitarian philosopher Henry Sidgwick, we must act “from the point of view of the universe” itself. (Notably, Clifford, when articulating his idea of a “cosmic emotion,” claimed the phrase came to him in conversation with Sidgwick.)
Similarly, deontology—the other dominant modern ethical outlook—also requires that every decision be formulated as a selection between alternates, rather than mere cultivation of reliable habits. Unlike utilitarians, deontologists care less about the consequences of our actions and focus more on their rational consistency and universal applicability. Such a rule of conduct remains meaningful even if it is never fully obeyed by any limited, finite being. It is a governing ideal whose force over our individual actions derives not from having manifested anywhere or anywhen, but from its rational authority alone. The imagined utopia of Immanuel Kant, wherein all agents act in consistency with selfless laws, derives, unlike that of Plato, no authority from having already existed. Its motivating power, therefore, compels us to, again, make the world better than it would have otherwise ever been.
Making the World a Better Place (Than Otherwise)
Given that these two outlooks—opposed yet conjoined in their new view of the human ability to forge the world otherwise—emerged in the 1700s and effloresced in the 1800s, it should be no shock that the phrase “making the world a better place” also took hold during that latter century.
The phrase appears to have been ensconced in popular consciousness by none other than Charles Dickens in his 1848 novel Dombey and Son. Midway through, the narrator professes hope that the human collective—“like creatures of one common origin”—might rally themselves in “tending to one common end, to make the world a better place.” Much-quoted, the phrase becomes increasingly prevalent from then to the present day: eventually gaining the prominence it now holds as a secular call to make our local cosmos better than it would otherwise be.
Prior to this, there are a few scattered usages of phrases such as “reforming the world” and “to better the world,” trailing back into the 1700s, but these are often found in overtly theological contexts. That is, in connection to proselytizing and spreading the Word. But, after all, the Abrahamic prophets who delivered that message were, in many ways, the first world-reformers—manifestations of untrammelled Divine Will in human flesh, accidental proof that incarnate agents can alter everything in ways entirely unprecedented.
The Phylogeny of Worldviews
Presciently, the Swedish chemist Svante Arrhenius wrote the following all the way back in 1908:
With ideas, it is the same as with living things. Many seeds are sown, but only a few germinate. Of the organisms that develop from them, the majority are selected out in the struggle for existence, and only a very few survive. In the same way, ideas that are best adapted to the natural world are selected.
There is no doubt a truth to this. However, Arrhenius’s stress on “adaptation” alone also implies that, given enough time, all inquirers will somehow arrive at the same—that is, the most “adapted”—ideas, regardless of starting points. This would expunge all meaningful history from the process.
But cumulative processes—like the development of lifeforms or worldviews—are, again, conservative. In biology, the reality of exaptation proves this. Chance and history play an insuperable role. Adaptation is not some boundaryless search for the best of all traits possible, but a tinkering toward local betterment within the constraints entrenched by a chancy past. Some features later become useful purely for fortuitous reasons, rather than for reasons of prior selective pressure. I believe it is the same with ideas, given knowledge is also a cumulative, genealogical process. It is always a scaffold or bricolage of what came before. We cannot guarantee our current ideas would not be wildly different if other religions had collectively flourished in the past. Nor can we guarantee those other unrealized possibilities wouldn’t explain facts or guide actions just as efficaciously as—or perhaps better than—those we live and breathe and have our being within today.
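The force of that constraint can be seen in a toy model: a purely illustrative sketch of greedy hill-climbing on an invented rugged landscape (the landscape, step size, and starting points are all assumptions of the example, not claims of the essay), in which searchers that begin from different chance positions settle on different local peaks. It models neither real evolution nor real intellectual history; it only shows how a cumulative, stepwise process remains hostage to where it happened to begin.

```python
import math
import random

def fitness(x: float) -> float:
    # An arbitrary rugged landscape with several local peaks.
    return math.sin(3 * x) + 0.5 * math.sin(7 * x)

def hill_climb(start: float, step: float = 0.01, iters: int = 10_000) -> float:
    """Greedy local improvement: accept a small random move only if it is better."""
    x = start
    for _ in range(iters):
        candidate = x + random.uniform(-step, step)
        if fitness(candidate) > fitness(x):
            x = candidate
    return x

random.seed(0)
starts = [random.uniform(0, 4) for _ in range(5)]  # five "chancy pasts"
for s in starts:
    peak = hill_climb(s)
    print(f"start {s:.2f} -> settles near x = {peak:.2f}, fitness = {fitness(peak):.2f}")
# Different starting points typically settle on different local peaks:
# betterment is real, but it is local and path-dependent.
```

Each run improves on its starting point, often considerably; none of them surveys the whole landscape. That, in miniature, is the difference between local betterment and a boundaryless search for the best of all possible traits or ideas.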
This is one reason why it isn’t productive, today, to accuse ideas or ideologies of being “mere secular religions”—because, in a sense, everything is, save for avowed religion itself.
Modernity As Felix Culpa
But, once more, should this inability to overcome the contingent past make us suspicious of our beliefs? Should it make us doubt their veracity? Should we question their truth?
I argue no. We need not be masters of suspicion, like Löwith and his followers; but neither need we naively assume that our opinions can ever entirely overleap their historically arbitrary origins. The point being, we can imagine such overleaping, and we can desire it desperately, but this is not the same as treating it as a guarantee. Ideally, we might imagine a tract of time wherein the contingency of our beliefs—their rootedness in the chanciness of what came before—washes out. That is, the entire space of alternate positions becomes available, such that there can be a convergence toward optimality across all domains. On the surface, this resembles the notion of Charles Sanders Peirce, the late 19th-century American pragmatist, who defined truth as that which “everyone will believe in the end” given enough time, investigation, and assiduous elimination of competing alternate stances.
But Peirce, admirably circumspect, allowed that the amount of time required for this could, perhaps, be longer than the probable lifespan of Homo sapiens. It could even, he implied, be greater than the combined lifespans of all intelligent species and sapient investigators, cosmically construed.
And here’s the point: Peirce himself was alert to the fact that this is a regulative ideal. Or, in other words, an act of faith. He wrote:
Our perversity and that of others may indefinitely postpone the settlement of opinion; it might even conceivably cause an arbitrary proposition to be universally accepted as long as the human race should last.
There may always remain facts unturned, alternates unconceived, exclusions missed. This isn’t an argument not to seek them out, of course. Rather, it is to recommend sensitivity to the fact that the genealogies of our beliefs—and the constraints they confer—may always redound, interminably, warping our actions in invisible ways, great and small.
The latitude and leeway for unconceived alternate beliefs is very easy to underestimate. History matters in evolution because not all possible animals that can exist ever will, such that what propagated in the past meaningfully constrains what can propagate in the future. So too are there likely far more possible conceptual positions than can ever be meaningfully searched through, meaning inherited constraints likely cannot ever be fully overcome. Our “perversity” of belief may be insuperable.
For these reasons, Peirce himself regarded his notion of “the end of inquiry” entirely as a regulative ideal: as something that shepherds present inquiry but carries no concrete guarantee of ever being reached.
It is a hope we must presuppose to get questioning off the ground in the first place. That is, we act as if all questions—moral as much as material—have a best answer, which all would eventually happily agree on, given enough time. If we didn’t, why bother inquiring and correcting others in the first place? We must act as if there is some single best solution, upon which everyone could converge, thus overstepping all arbitrariness inherited from the origins of our elder beliefs. This is an ideal presupposed primordially by inquiry, as it’s what motivates us to jettison old beliefs when we happen upon new ones that are better. It is what motivates us to argue and hone each other’s positions, through unceasing, ever-renewing debate, within our generation and across generations over time. But, as Peirce well knew, this is entirely an act of faith. Truth, that is, is a motivating hope, not a destiny—a far-off lodestar we simply must believe in.
In this way, faith always has lain, and always will lie, deep at the heart of thinking and acting in the world. Will modernity ever fully overcome its sacral origins? Almost certainly not, but this in no way invalidates our sense of determination when we consider the starry heavens above. Modernity may have been a happy accident—a liberatory force accidentally exapted from lurid visions of limitless divine dictatorship—but so too was our evolution and, likely, the miraculous emergence of life itself. Lucky eucatastrophes, all of them. For me, it’s easier to be thankful for something that didn’t have to have happened.
The sooner we come to terms with the fact we are creatures with indelible traces of our entire past always within us, the sooner we might build a future worthy of that heritage, and, thus, become conduits of thoughts worthy of the title “cosmic emotion.”