AMOR FATI: LET’S GET IT ON
By Christopher Brunt
The premeditatio malorum, or premeditation of evils, is a Stoic exercise to develop resiliency in the face of uncertainty—according to whoever is behind the @dailystoic’s Twitter feed. The @dailystoic is always trying to get me to imagine bad things happening, for my own good, as if I needed any help in this department. “Rehearse them in your mind: exile, torture, war, shipwreck. All the terms of our human lot should be before our eyes,” says Seneca. I am a neurotic, news-addicted writer in 2019, meaning I am constantly on Twitter, and more than sufficiently fearful of tomorrow—conjuring up war, shipwreck, and exile right around the corner has become a natural reflex of my personality.
Thankfully, we have the gnomic wisdom of Marcus Aurelius to round out the picture: “What stands in the way becomes the way,” writes the philosopher-king. Aurelius and his Stoic forebears advise us not just to steel ourselves for the inevitable misfortunes that lie ahead, but to reinterpret the past and let that reinterpretation dictate the terms of the present—to cultivate amor fati: a love for what happens. This idea can be seen as a countervailing force against a particularly American refusal to admit failure. To see it, name it, and sit with it with some amount of patience, to let it be a teacher. To understand failure not as a verdict, but as a lens.
I never, ever would have believed, as a teenage poet with more ego than body mass (or, for that matter, work ethic), that it would take me this long to get a book out. True story: once, as a freshman in college, a professor with whom I’d been talking about the Platonic Forms or the Eleusinian Mysteries or some such esoteric bauble asked me where I wanted to end up in life and, as we happened to be standing right underneath it, I’d pointed to the top of the library wall, where the names of the masters were etched in stone. “Right up there after Brontë,” I said. What. A fucking. Asshole. Now I can forgive that young man his arrogance and his missing-of-the-point if you can, reader, for he was eighteen years old and drunk with poetry and in love with the way life felt on his greedy little senses. Surely there are worse ways to be eighteen. But a couple of weeks from now, I will turn a robust and mellowed thirty-seven, and on my desk sit two unpublished manuscripts that I’ve spent much of the last eight years at work on, and very recently I ripped up a contract for one of them. (There is a story there and I will tell it, but first things first.)
Youthful delusions of grandeur aside, once you’ve passed through several life stages without experiencing major (i.e. public) success, it becomes possible to continue to hope and work for that success while at the same time losing the ability to imagine it. Maybe there should be a word for this: disillusionedish. To become so acclimated to obscurity that the notion of its opposite—namely, fame—seems a dimension of biographical reality only open to other kinds of people. Even though, as writers toiling in obscurity for years and years, we’re bound to know some of these kinds of people, the famous ones, the elect. They’re our teachers and mentors and colleagues and friends, and they don’t seem so different from us, other than the fact that by some mysterious movement of fate their book came out, and did very well, and now they’re occasionally to be found on a stage accepting an award or being asked by an auditorium crammed with readers to opine on the human condition, or the role of art in this or that, or the Way We Live Now. But each of us only gets one career, and a literary career without marked success is, by the definitions we’re employing here, one marked by failures: failure to develop, failure to finish, failure to publish, failure to publish enough, failure to beat the odds, know the right people, win acclaim, spark a movement, inspire the masses, and so on. A writer in his or her twenties may not even regard these as failures so much as “not-yets”—surely these feats can be safely regarded as next year’s harvest from this year’s steady labor. But at what point should a writer look back and conduct an audit? At what point should there be a reckoning?
Granted, to speak of literary success/failure like this is to speak of it as a public, even commercial phenomenon. So be it. It is worth thinking about failure on just these terms since it is by the measure of those terms that nearly everyone who tries to become a writer will fail for at least some part of their career.
Last May, an independent publisher offered to buy my poetry collection. In four consecutive years on the book prize treadmill, my collection had been named a finalist over and over again only to walk off empty-handed, and I’d grown accustomed to girding myself up year after year to go through the process all over again—the submissions, the entry fees, the six-to-nine month wait, the announcement of the finalists, and then the worst part of all—that last month or two of waiting while the judge anoints the winning collection out of the handful of finalists, and then the great let-down, or cynical sigh of resignation, when it’s someone else’s book that is chosen. (The contest process, by the way, is how the vast majority of debut poetry collections get published in the United States. It is in every way an abomination, and virtually the only path new poets have to book publication.) But here, finally, last May, a lifeline: from a well-known poetry publisher with an impressive back catalogue and a team of highly regarded editors. Even though I hadn’t won their contest, they still wanted to bring out my book. I was ecstatic. And yet—something was off. The cognitive dissonance was there at the outset, but as we sometimes do when we want something very badly to not be true, I worked hard to ignore it.
And so, over the last six months, I prepared to capitalize on the first major success I’ve had as a writer: curating materials for a website, hunting for a good photographer for my author photos, pinning down the art I wanted for the cover, putting together a plan to represent the book at festivals and readings, and also, dreamily trying to conjure the book itself as an object out in the world—resting there on the shelf between Jericho Brown and Bukowski at your friendly neighborhood bookshop, waiting for a stranger to pick it out, take it home, and read it. A writer’s first book is not just a token of professional status, not just a ticket to the dance—it’s the sword pulled out of the stone. It’s a miracle, a coronation, and a wedding all at once. (To whom does our first book wed us? To readers! To Literature! To all the other Writers-With-Books out there in the world. It is a very large, confusing wedding.)
While I was planning and conjuring and mixing my metaphors, my new press was capsizing, sending out panicked and confusing flares from their little sea of turmoil. Here is what I never got from the publisher in the six months they had my book under contract: a date or even target year for publication, an actual conversation by telephone with anyone who worked there, a straight answer to any question I ever asked.
And here is what I did get, finally, in a late October email from the director of the press: an inquiry as to whether I’d be willing to subsidize the publication of my book by preordering hundreds of copies, whether I could personally secure a wealthy patron for the press, whether I’d be willing to switch over to a crowdfunding or even “co-op” (i.e., vanity press) model for my book, whether I could guarantee the book would be reviewed via my own media contacts, and whether I could articulate some claim of urgency that would necessitate my book coming out before 2021–22. All of these questions were framed as the conditions that would enable the press to decide whether my book was worth the risk of publishing after all. Never mind the absurdity and affront—all of these questions had been answered already by the very contract the press had offered me and which I had signed half a year before.
I pulled the plug. My manuscript is no longer under contract, no longer slated for publication. This fact—premeditatio malorum be damned—still hurts. What I thought was my first big success turned out to be a spectacular failure, and a colossal waste of time. I want, at least for now, to regard it that way, and to try to learn something by looking through the lens of failure at myself and at my work.
There is the obvious lesson about trusting your instincts, even and especially when they’re telling you something you do not want to be true because it is in direct conflict with your strongest desires. But there’s also something here about how I’ve chosen to define success, and how that definition is both A) reasonable and clear-eyed about the literary industry, and B) a prison of my own making.
Seeing the publication of my first collection as the necessary step to graduate from one phase of my literary career to the next means that I am stalled in this phase until that happens for me, and I will therefore cede dangerous amounts of sovereignty over my work, state of mind, and wellbeing to virtually anyone who comes along to make this possible. I have made myself very vulnerable indeed. Meanwhile, the single-minded pursuit of the book contract also takes my focus away from the poems themselves, especially the need to be writing new ones. It is very hard to be a poet without a book. It is hopeless to be a poet without any new poems. I’ve spent the last year believing my poetry collection was on its way out into the world while devoting all of my writing time and energy to trying to finish my novel, and now, after this catastrophe, I have to retool, learn again how to be a working novelist and working poet at the same time. Something that, at eighteen ignorant years old, I would have said was not only natural, not only doable, but desirable.
Somewhere in the fourth year of drafting my novel, I realized, out of the blue, what it was about. I’d long known what was in it, of course: Spinoza and Leibniz, international espionage and poison plots, a slave revolt and a voyage across the Atlantic, magical-realist weirdness and point-of-view hijinks and nested narratives, but I hadn’t realized what it was about-about—the grand or meta-theme—until that moment when, all at once, it smacked me upside the head. My novel, so I discovered, is organized by a network of failures, private and public, personal and political, intimate and world-historical. It is, by those lights, an honest account of how I see the world and, alas, myself.
We commonly look to breakthroughs as the defining moments not just in our lives but in history: as the inflection points in our societies and cultures. That first book comes out, a writer is born! John Hancock signs Jefferson’s eloquent list of grievances—America is born! The Beatles play Ed Sullivan—the Sixties are born! This makes obvious sense. Someone effectively acts, things that were once one way are now another. But as we’ve seen, this way of seeing progress as comprising only the breakthrough itself can be distorting, counterproductive, and even dangerous to the enterprise or inquiry in question. Perhaps looking at history through the lens of our failures, not only at the personal but at the civilizational level, can better focus the eye on the meaning of these transformations and upheavals. (In seeking to understand the genesis of this country, do we learn more from the triumph of 1776 or the monstrous stain of 1619?) Perhaps it is our failures, in other words, that explain what it feels like to be alive at a certain time.
Baruch Spinoza and Gottfried Leibniz are not often, or really ever, thought of as failures. I wouldn’t call them that either, as it would be both rude and largely inaccurate, but they did both suffer a slew of failures in their extraordinary lives and work. Spinoza we remember for more or less inventing the notion of modern secular society, while Leibniz was a Da Vinci-like polymath who contributed to dozens of fields and shares credit with Isaac Newton for inventing calculus. But I was attracted to them both as novel subjects and as characters precisely because their failures were so spectacular and personally meaningful. Spinoza’s first great work, the Theological-Political Treatise, was vehemently condemned as heretical and seditious by nearly everyone, including Leibniz, and summarily banned throughout the continent. His follow-up, the seminal work of rationalist philosophy titled The Ethics, was never published during his lifetime.
Formally excommunicated for heresy from his Jewish community in Amsterdam—even before the publication of his incendiary Treatise—Spinoza lived a monkish existence in the Dutch suburbs and finally the Hague, eking out a living as a lens-grinder, ever in the crosshairs of religious and monarchical authorities across Europe. Predicting correctly that The Ethics would meet a firing squad reception the moment it debuted, and that his freedom and very life would be in jeopardy thereafter, he secreted the manuscript away and left instructions to his closest friends as to what to do with it after he died. While his body was still warm, an inter-faith posse of church goons came to seize the infamous document, but found not a trace of it in the dead man’s flat, for it was locked inside his big work desk which was right then floating on a barge upriver to Amsterdam, where it would safely reach his trusted underground publisher and be disseminated to be read in secret wherever banned books were read, vexing the powers-that-be for the next two hundred years.
Leibniz is another story. As furious and manic in his output as Spinoza was serene, cautious, and regimented, Leibniz saw most of his grandest designs come to naught. Appearing hat in hand at Louis XIV’s Paris court in 1672—for which he learned French in a matter of weeks—he attempted to sell the Sun King’s ministers on The Egypt Plan, a proposal to unite all the warring realms of Europe behind a new crusade down the Nile. That is, this champion of humanism was pitching, at the dawn of the Enlightenment, a full-blown holy war against a Muslim nation, not out of theological conviction but out of some fever dream of political expediency. Louis’s ministers laughed off the proposal, thank God, as they had far more serious and profit-minded business at hand (namely, invading the Low Countries). If Leibniz were alive today, he would probably end up as Trump’s national security advisor. But in my novel, his most historically impactful failure is not one of his manifold diplomatic boondoggles or wild-eyed business schemes, but a philosophic failure, secretly embedded in his “best of all possible worlds” thesis—the failure of God to give us a better world.
In my research following Leibniz and Spinoza as their paths cross and then diverge, I became interested more broadly in the failure of the Enlightenment to extinguish the things it’s meant to have extinguished: superstition and religious bigotry, tribalism and barbarity, feudalistic economies and stupid, evil, mass death. Their world, on the cusp of a new modernity, begins to look more like ours than not: the post-Westphalian order giving rise to the nation-state and with it, bellicose nationalism, the Trans-Atlantic slave trade birthing global capitalism and the system of racial hierarchy that persists today in its wake. And that brings us to perhaps the biggest failure of the Enlightenment tradition. Even radical thinkers like Spinoza failed to construct a humanism that extended to, well, all of humanity.
For their part, Spinoza and Leibniz are strangely silent on the question of slavery and abolition, though one can imagine—and I do—that Leibniz was only interested in the slave trade insofar as he could see a way to turn a profit from it. Spinoza simply never makes mention of it in his copious writings on politics, systems of government, law, and ethics, though the Spinoza family business was an international shipping concern and the headquarters of the Dutch East India and Dutch West India Companies were a few canals over from the house where he grew up. The silence of this otherwise saintly figure, to me, rings out with tragic, unmet possibility.
Their successors in the High Enlightenment are worse than silent: Hume, Voltaire, and Kant all explicitly exclude Africans and their diasporic descendants from membership in the human community and the natural rights afforded them thereby, relegating them to an inferior sub-species status, while endorsing a hierarchical construct of race that placed white Europeans at the top. From that picture of “humanism” sprouts the bogus race science of the 19th century and the genocidal slaughters of the 20th. As Adorno and Horkheimer had it, and Aimé Césaire after them, “At the end of formal humanism and philosophical renunciation, there is Hitler.”
Under Leibniz’s cosmology, one begins to see a kind of parody of a system in which exalted ends—authored by God himself—justify whatever evil means may come. (This is how one arrives at an Egypt Plan in order to “unite the Church” and “extend Christian charity” across the globe.) It is how one speaks of empire, even if one knows full well that the real ends are not a united church or a Europe at peace, or, for that matter, human rights and democracy, but rather, as Césaire has it, the “blood-stained money piling up in your coffers.” Common in Enlightenment thought was a view of Providence as “reason actualized on the global scale,” a characterization which the postcolonial philosopher Tsenay Serequeberhan applies to Kant, but which could certainly extend to Leibniz. In the worst cases, this notion of Providence can justify horrific crimes (e.g. the Egypt Plan). Put simply, if God has an ultimate agenda, which the universe in its deterministic fashion is endeavoring to body forth, and that agenda happens, sadly, to include things like the Trans-Atlantic slave trade, then the rational response to a question like abolition is a kind of amor fati dressed in the philosophic garb of the Age of Reason. But even in its less theologically risible versions, it underwrites a form of Enlightenment quietism: a belief in “natural processes so powerful and inevitable that there is no point in attempting to counter them,” as philosopher Mark Larrimore wrote of Kant’s putative worldview. For this bloody world must be, as Leibniz’s most famous idea has it, the best of all possible worlds. If God could have made a better world, so this thinking goes, one without slavery, war, famine, etc., he would have done so.
Spinoza did not believe in Providence, he believed in blessedness—not a state of being we enter into by the grace of a merciful deity, but the natural outcome of seeing all life under the “aspect of eternity,” from the vantage point of God-or-Nature. In other words, blessedness is what comes from taking the longest possible view. The ancient Stoics called this the View From Above. Writing more than a millennium prior, Marcus Aurelius puts the idea quite beautifully:
One who would converse about human beings should look on all things earthly as though from some point far above, upon herds, armies, and agriculture, marriages and divorces, births and deaths, the clamour of law courts, deserted wastes, alien peoples of every kind, festivals, lamentations, and markets, this intermixture of everything and ordered combination of opposites.
In Spinoza’s case, however, this same View From Above that stipulates religious tolerance and egalitarian social views can be used to justify political quietude just as readily as Leibniz’s quasi-deterministic Providence. If one zooms far enough out and sees systematic injustice as only one part of the grand scheme of things, as a minor chord within an inconceivably vast and beautiful harmony, is one not thereby absolved of the personal, moral responsibility to fight against that injustice?
The ancient Stoics are doggedly deterministic in their worldview, yet they still exhort us to be agents toward the good, even as we accept our place in the machinery of fate, even as we acknowledge how little falls within our individual control. The Enlightenment thinkers at the dawn of the modern period, Spinoza and Leibniz included, fail to compass the human cost of their quietude, fail even to compass who is human to begin with. We live in the rubble of that failure now.
Maybe it’s inevitable, whenever a writer spends years of their life with a cast of characters from a particular historical period, that the historical time being imagined will come to resemble more and more the writer’s own. But writing this novel has forced me to see our era not so much as a reprise of the Early Enlightenment, but as its bookend. What began in the seventeenth century seems to be hurtling to its end right now, as media and technology short-circuit civic discourse to a terminally irrational state, as global capitalism endgames toward dystopia and a desiccated planet, as democracy dismantles itself worldwide in the span of our own lifetimes. We are, if we’re honest, in the grip of an interlocking host of spectacular, systemic failures. It is time to look again at the beginning and to see where and by whom the seeds of destruction were first planted.
To do that, we have to choose a lens that allows us to see what has been obscured. Sometimes that lens is a mirror. We may come away from it chagrined, ashamed, bowled over by how much work these long-running failures demand of us. But after the pain of seeing clearly, so we hope, comes renewed purpose and a new plan of attack, comes the serious work we were meant to do. What stands in the way becomes the way, so we’d better damn well know what it is we’re looking at.