
The Theory of Moral Sentiments

The Theory of Moral Sentiments is Adam Smith’s first book. Compared to The Wealth of Nations, his magnum opus, it is less well known. Stephen Dubner discussed it extensively in a Freakonomics series, which argued that Smith has been misread by modern economists like Milton Friedman, and that the real Adam Smith was in fact an “affable moral philosopher” rather than “the patron saint of cutthroat capitalism”. The podcast piqued my interest in Adam Smith and his theory of moral sentiments. The book was not an easy read for me, as it took some time to adjust to the 18th-century writing style. However, I think the time was well spent.

Central to Smith’s theory is the proposition that the perception of right and wrong comes from sense and feeling rather than reason. Human happiness, according to Smith, chiefly “arises from the consciousness of being beloved”. Because we desire to be loved by our brethren—taken to mean relatives, friends, neighbors, and countrymen—we seek their approval and avoid their disapprobation. It is through this pursuit of love and happiness that humans acquire sympathy, the ability to share and approve of the feelings or interests of another person. However, to truly sympathize with another’s feelings—to empathize with them (although Smith never used this term)—we must first overcome our own selfishness.

To make this crucial point, Smith proposes a thought experiment, which imagines how “a man of humanity in Europe” would react to the news that a huge earthquake had suddenly destroyed China and all its people. He would, Smith wrote,

“express very strongly his sorrow for the misfortune of that unhappy people, he would make many melancholy reflections upon the precariousness of human life, and the vanity of all the labours of man, which could thus be annihilated in a moment. He would too, perhaps, if he was a man of speculation, enter into many reasonings concerning the effects which this disaster might produce upon the commerce of Europe, and the trade and business of the world in general.”

However, after “all this fine philosophy was over”, the man would return to his regular life as if nothing had happened. Indeed, an accident of trivial scale—compared to that catastrophe in China—befalling him, say the loss of a little finger, would cause him to lose more sleep than he would over “the destruction of that immense multitude”. If this is so, Smith asks, would this person be willing to sacrifice the lives of all those Chinese to prevent that “paltry misfortune to himself”? Smith claims humankind has never produced a villain capable of entertaining such a horrific thought. On this point I disagree with him, though his faith in humanity is understandable. After all, Smith never witnessed the world wars, heard of the Holocaust, or met the infamous dictators of the 20th century.

Smith claims what prevents most people from placing their own interests above the greater interests of others is an impartial spectator that grows and resides within them.  The impartial spectator is “the great judge and arbiter of our conduct”, who teaches us that

“we are but one of the multitude, in no respect better than any other in it; and that when we prefer ourselves so shamefully and so blindly to others, we become the proper objects of resentment, abhorrence, and execration. It is from him only that we learn the real littleness of ourselves, and of whatever relates to ourselves, and the natural misrepresentations of self-love”.

Thus, to become a moral person is to forge and train this impartial spectator, and to be guided by him. There is a subtle but crucial difference between a moral person and a virtuous one: the former merely follows the impartial spectator’s rules, whereas the latter adopts and embodies his moral sentiments. In some sense, the virtuous person becomes a proxy of the spectator, unified with him in both spirit and conduct, thereby entering a state of spiritual freedom in which the bounds of moral constraint are no longer felt.

Impartiality is central to many theories of morality. For example, John Rawls’ “veil of ignorance” serves as an instrument of impartiality in his theory of justice. Smith’s impartial spectator also resembles what a Confucian would call the “inner sage” (内圣), or the “innate moral knowledge” (良知) in Wang Yangming’s Theory of Mind (心学). The unifying state achieved by a virtuous person, I believe, is 知行合一 (the unity of knowledge and action) in the Theory of Mind, and the process through which one arrives at that state is called 致良知 (extending the innate moral knowledge). Like Smith, Wang also emphasizes sympathy as the approach to morality. In Instructions for Practical Living (传习录), he writes,

“世之君子惟务致其良知,则自能公是非,同好恶,视人犹己,视国犹家,而以天地万物为一体。” (If the gentlemen of the world simply devote themselves to extending their innate moral knowledge, they will naturally be able to judge right and wrong impartially, share common likes and dislikes, regard others as themselves and the state as their own family, and take heaven, earth, and the myriad things as one body.)

Thus, with the help of the impartial spectator (良知), the virtuous person (君子) can be just (公是非) and have empathy (同好恶,视人犹己).

Smith believes moral norms first emerge to forbid actions that inflict pain on a person, such as endangering their life and body, depriving them of their possessions and property, and violating their right to basic liberty. This is because humans are disposed to sympathize with sorrow more strongly than with joy. Moral norms are extremely important, as they form the laws of justice, without which human society cannot survive. Yet, the sense of justice only enables people to behave with minimum propriety and decency. To Smith, it is a mere “negative virtue” that does no real positive good.

Throughout much of the book, Smith explains the transition from adhering to basic moral norms to cultivating positive virtues. The mechanism is still sympathizing, and the secret is to overcome the less desirable aspects of human nature.

What makes us jealous of the success or good fortune of another person?  Again, the reason is that humans are generally more focused on avoiding pain than seeking happiness. As a result, it is more difficult for us to will the good of our brethren—i.e., to truly love them—than to avoid harm to their person and property.  The sentiment of envy is strongest when the person is regarded as an upstart.  As Smith notes,

“The man who, by some sudden revolution of fortune, is lifted up all at once into a condition of life, greatly above what he had formerly lived in, may be assured that the congratulations of his best friends are not all of them perfectly sincere.”

However, thanks to the impartial spectator, we are also ashamed of our own envy, and “often pretend, and sometimes really wish to sympathize with the joy of others”. A man who has fought and won this battle with envy is capable of that magnanimous act of willing the good of his brethren, loving them as much as he loves himself. He may also learn to maintain prudence and humility no matter what stellar successes he has just achieved and how much he thinks he is entitled to boast about them. Sympathy reminds him that, by overly displaying joy in his achievements, he could arouse among his brethren envy and jealousy, and the shame and self-pity that come with them. Therefore, he always “endeavors, as much as he can, to smother his joy, and keep down that elevation of mind with which his new circumstances naturally inspire him.”

Smith was not a utilitarian, despite being revered as the father of economics—which is built on the notion of the utility-maximizing homo economicus—and enshrined as a god of capitalism. As the book makes abundantly clear, Smith did not endorse, much less celebrate, cold-blooded self-interest. His famous “invisible hand” explains why society can work well despite, not because of, its members being utterly self-interested. Surprisingly, he made the same point in this book, which was first published in 1759, seventeen years before The Wealth of Nations. He writes that the rich,

“though they mean only their own conveniency… are led by an invisible hand to make nearly the same distribution of the necessaries of life, which would have been made, had the earth been divided into equal portions among all its inhabitants, and thus without intending it, without knowing it, advance the interest of the society, and afford means to the multiplication of the species.”

While Smith believes that self-interest can be guided toward positive outcomes by the invisible hand, he clearly opposes such consequentialism in matters of morality. He was deeply troubled by the fact that “the world judges by the event, and not by the design”, which he called “the great discouragement of virtue” throughout the ages. Smith conceded that, in the realm of justice, punishment should be proportional to the consequences of our actions rather than our intentions. However, he forcefully argued that the opposite should apply when assessing our own character and conduct.

In this regard, Smith is nearly a moral idealist. He believes we should strive for “exact propriety and perfection” rather than settle for the lower level “which is commonly attained” by most people. Smith argues that focusing on the inferior standard is what led many historical figures to become arrogant, presumptuous, and extravagantly self-admiring.  Self-admiration may be necessary for their success, as it drives the great men to pursue ventures that a more cautious mind would never consider.  “When crowned with success”, however, this presumption “has often betrayed them into a vanity that approached almost insanity and folly”, and “precipitated them into many rash and sometimes ruinous adventures”.  Somehow, Elon Musk’s face crossed my mind when I read the above passage.

Since to be loved by others generally means to receive their attention and praise, a great deal of human energy has been consumed by the struggle to stand out and be recognized.  Smith refers to this desire for attention and praise as “vanity”.  Although vanity is not inherently a vice, it becomes problematic when it is directed towards the wrong objects. Therefore, writes Smith,

“the great secret of education is to direct vanity to proper objects”.

Because a man sees wealth and power attract attention and submission, he is often compelled to pursue them. Similarly, observing that fame and glory earn respect and praise, he aspires to be famous and honored. Consequently, he mistakenly equates these pursuits with achieving love and happiness. Smith tells us that

“nature has endowed a man, not only with a desire of being approved of, but with a desire of being what ought to be approved of.”

Wealth, power, fame, and glory all signal approval from others, but not necessarily “what ought to be approved of”. To Smith, pursuing praise and pursuing what is praiseworthy are distinctly different. The former often leads us to chase misguided objects of vanity, while the latter inspires a genuine love of virtue. A virtuous man derives little pleasure from praise where it is not due; instead, he often feels the greatest satisfaction in doing what is praiseworthy, even though “no praise is ever to be bestowed upon it”. Thus, “to be that thing which deserves approbation” is “an object of the highest” importance to him. If he succeeds in this endeavor, he no longer needs approval from others. He becomes assured of “the perfect propriety of every part of his own conduct” and content with his self-approbation, which, according to Smith, is virtue itself, the only thing he can and should care about.

Smith’s emphasis on praiseworthiness rather than praise, and on self-approbation rather than approval by others, appears to be rooted in Stoicism. Smith writes that the Stoics believed

 “human life…ought to be regarded but as a mere two-penny stake. …Our only anxious concern ought to be, not about the stake, but about the proper method of playing. If we placed our happiness in winning the stake, we placed it in what depended upon causes beyond our power, and out of our direction. We necessarily exposed ourselves to perpetual fear and uneasiness, and frequently to grievous and mortifying disappointments. If we placed it in playing well, in playing fairly, in playing wisely and skillfully; in the propriety of our own conduct in short; we placed it in what, by proper discipline, education, and attention, might be altogether in our own power, and under our own direction. Our happiness was perfectly secure, and beyond the reach of fortune.”

In a nutshell, to shield our happiness from the whims of fortune, we should remain as indifferent as possible to praise, recognition, and all the superficial allurements of vanity. This philosophy aligns with a precept I learned many years ago from a Chinese author: 但行好事,莫问前程 (focus on doing the right thing, rather than on achieving the perfect outcome). It also echoes my favorite quote from Daniel McFadden’s Nobel Prize autobiography:

“My parents taught me that to lead a virtuous life, I should be modest, take my satisfaction from work done well, and avoid being drawn into competition for status and rewards.”

This idea is precisely what I have been trying to tell any of my doctoral students who would listen: To truly enjoy academia, you must find joy in the research itself, independent of any external rewards it might bring, whether that’s funding, awards, or even the opportunity to change the world.

Marco Nie

April 14, 2024, Evanston, IL.

Chip War

Chris Miller masterfully tells the story of the spectacular rise of the semiconductor industry (the “chip”) and its ever-growing entanglement with geopolitics (the “war”). It’s a fascinating narrative, filled with ups and downs, twists and turns, heroes and villains, and a cast of victors and losers – well worth reading for its own sake. It is a must-read if you want to understand the current U.S.-China relationship and the slow-moving crisis hanging over the Taiwan Strait. Semiconductors have become central to the U.S.-China relationship, with one side aggressively playing catch-up and the other striving to maintain its waning lead. Taiwan has the misfortune of being caught in the middle of this seemingly inevitable epic clash, not so much because it offers a beacon of hope for “the free world” as because it houses Taiwan Semiconductor Manufacturing Company (TSMC), the sole fabricator of the world’s most sophisticated chips.

As I read about the legends of semiconductors unfolding in the book, I came to realize my own ignorance about an industry that has profoundly transformed humanity.

I did not know William Shockley, who, along with two other scientists at Bell Labs, invented the transistor. He later started a company called Shockley Semiconductor Laboratory that counted Gordon Moore (yes, the Moore after whom Moore’s law is named) and Robert Noyce among its first hires. The pair would later rebel against Shockley and go on to become giants of the burgeoning industry. They first founded Fairchild Semiconductor, which supplied the computing power that landed men on the moon in the 1960s, and then Integrated Electronics, or Intel – a household name in today’s tech world.

I had never heard of Texas Instruments (TI) before I read the book. But among TI’s early employees were Jack Kilby, who won a Nobel Prize in Physics in 2000 for inventing the integrated circuit (集成电路); Jay Lathrop, who pioneered the use of photolithography (光刻) in transistor fabrication; and Morris Chang, an immigrant from mainland China and the founder of TSMC.

Nor could I distinguish between memory chips and logic chips, PC chips and smartphone chips, or deep ultraviolet (DUV) lithography and extreme ultraviolet (EUV) lithography. What struck me the most, however, is the incredible difficulty of keeping up with Moore’s law, which posits that the number of transistors on a microchip doubles approximately every two years. Indeed, cutting-edge chips have become so complex that TSMC is the only manufacturer in the world capable of fabricating them at scale. TSMC does this with “ultra-pure silicon wafers and specialized gases from Japan” and machinery that “can etch, deposit, and measure layers of materials a few atoms thick”. Supplied by only five companies, these tools themselves took decades and an astronomical amount of money to develop, and their core technologies are closely guarded trade secrets. Take the development of EUV lithography, for example. The project was launched in the early 1990s thanks to a $300-million investment from Intel. However, it wasn’t until nearly 30 years and billions of dollars in spending later that the Dutch manufacturer ASML finally introduced EUV scanners to the market in 2018, at a price of $100 million apiece for an expected lifetime of four years. For a layman like me, it is mind-boggling to read just how the scanner produces enough EUV light for fabrication:

The best approach was to shoot a tiny ball of tin measuring thirty-millionths of a meter wide moving through a vacuum at a speed of around two hundred miles per hour. The tin is then struck twice with a laser, the first pulse to warm it up, the second to blast it into a plasma with a temperature around half a million degrees, many times hotter than the surface of the sun. This process of blasting tin is then repeated fifty thousand times per second to produce EUV light in the quantities necessary to fabricate chips.

It does sound like a miracle, as Miller put it, that something this delicate not only works, but “does so reliably enough to produce chips” that can make lots of money.

This sums up the history of chips. What about war?  The book describes three chip wars that took place between the U.S. and her rivals in different eras.

The war with the Soviet Union, fought mostly in the first half of the Cold War, was won with relative ease. The USSR treated its semiconductor industry as a weapons program, similar to its treatment of nuclear and space technology. In hindsight, this strategy was a huge mistake, as the immensely profitable civilian applications of semiconductors turned out to be a driving force for innovation that no level of government spending could hope to rival. Faced with the lack of progress, the Soviets tried to copy U.S. technology through espionage. Yet this did not work either. For one, even the most skilled spies cannot steal all the technical know-how involved in complex production processes. More crucially, the “copycat” mindset inevitably condemned the Soviets to a perpetual game of catch-up, rather than allowing them to lead the way.

Japan was a much greater threat. Thanks to favorable technology transfer and trade policies that the U.S. willingly offered in exchange for Japanese support of America’s global order, Japan’s semiconductor industry evolved from a niche player specializing in consumer electronics in the 1960s and 1970s into a formidable powerhouse in the 1980s. By 1985, Japan had begun to outspend the U.S. in capital investment for semiconductors, and by the end of that decade it had become the dominant supplier of the world’s Dynamic Random-Access Memory (DRAM) chips (90% market share) and lithography equipment (70%). Trade disputes soon ensued. The skirmish started with the U.S. accusing Japan of espionage, double-dealing, and dumping. It escalated to the point where the U.S. openly threatened tariffs, ultimately compelling Japan to impose quotas on its exports of DRAM chips to the U.S. in 1986. This did not help Silicon Valley recover its lost ground, however. Eventually, nearly all American companies, including Intel, were driven out of the DRAM and lithography markets.

Carried away by their astonishing success, the Japanese began to dream about, in the words of Sony founder Akio Morita, overcoming the United States economically and becoming “number one in the world”. The U.S. was understandably frightened by the pent-up nationalism revealed in The Japan That Can Say No—which Morita co-authored—and by the gloomy prospect of relying on a foreign country to maintain the most important edge of her military. In response, the U.S. launched a campaign to curtail Japan’s dominance in the chip-making industry. The core strategy involved mobilizing South Korea (Samsung), Taiwan (TSMC), and, to a lesser extent, Mainland China to erode Japan’s competitive advantages by enabling cut-throat competition against her companies. It worked like magic. In 1998, Japan’s share of the DRAM market fell to 20% from a near monopoly less than a decade earlier, while South Korea dethroned Japan to become the largest producer of memory chips. Not only did Japanese firms suffer tremendous share losses in the DRAM market, but they also missed the emerging opportunities in the nascent PC market. In what Miller dubbed one of the greatest comebacks in industry history, Intel, under Andy Grove’s leadership, reinvented itself as the king of microprocessors for PCs. For what seemed like an eternity in this fast-paced industry, Intel was the icon of the PC industry, as the blue trademark of its processors became the most recognizable feature on most PCs sold globally. Indeed, I remember the first PC I ever owned—which my five college roommates and I purchased in 1995 with pooled funds—simply as a 486, because it was powered by Intel’s 486 microprocessor. According to Miller, that very chip was the first ever with over a million transistors!

This brings me to the latest, still ongoing chip war with China. On the surface, the plot of the Chinese episode bears some resemblance to the Japanese one: the wary incumbent hegemon, spooked by the rapid ascent of an upstart, is compelled into massive counteractions to neutralize the threat, real or imagined. However, unlike Japan, China has never really overtaken the U.S. in any high-end technology area of the semiconductor industry. Not even close. According to Miller, toward the end of the 2010s, China had less than 1% of the global chip design software tool market, about 2% of core intellectual property related to “the building blocks of transistor patterns”, 4% of silicon wafers, 1% of fabrication machinery, 5% of chip design, and 7% of fabrication, concentrated in non-cutting-edge chips. If that is the case, has the U.S. overreacted with her heavy-handed sanctions and embargo against China’s tech sector?

On this point, Miller’s insights on the crackdown on Huawei were particularly enlightening. He acknowledged that the charges against Huawei, which included theft of intellectual property, ties with the Chinese military, and violation of U.S. sanctions on Iran, were “ultimately a sideshow” – basically a euphemism for made-up excuses. The real issue was, Miller wrote,

That a company in the People’s Republic of China had marched up the technology ladder… Its annual R&D spending now rivaled American tech giants…, it was the most successful exporter [of all Chinese tech companies], giving it detailed knowledge of foreign markets. It not only produced hardware for cell towers, [but] it also designed cutting-edge smartphone chips. It had become TSMC’s second biggest customer, behind only Apple.

Therefore, the real question was: “Could the United States let a Chinese company like this succeed?” That is a rhetorical question in case you did not catch the drift. But why?

I can think of several reasons.  

First, unlike Japan, China is not a liberal democracy. Judging by what has been going on in the country since the early 2010s, China has absolutely no interest in becoming one anytime soon. To make things worse, under its current leader, China has repeatedly asserted that perhaps her system, rather than America’s, should be the model that the rest of the world admires, envies, and emulates. Even when Morita lectured Americans about the superiority of the Japanese system, it was seen in Washington as a serious provocation – and he wasn’t even talking about authoritarianism.

Second, unlike Japan, China has never pledged allegiance to the America-led world order. In fact, in the past decade, China has decisively shifted from biding her time as a halfhearted participant in that order to openly flirting with the idea of challenging it, economically, technologically, and, if necessary, militarily.

Third, China has increasingly embraced nationalism as a rallying cry for its people to coalesce around the current regime. However, the inherent logic of this political agenda requires the “unification” of the motherland, regardless of the cost. Whether this stance is a concrete national policy or merely a slogan to appease the little pinks on the internet remains to be seen. Yet it does place China on a collision course with Taiwan and the U.S. When push comes to shove, the U.S. could find herself in a treacherous standoff with what she now regards as a “peer competitor”. The stakes are incredibly high. Retreating from the American commitment to Taiwan’s security would spell the end of the current global order, potentially plunging the world into chaos. More importantly, losing Taiwan could hand China a golden opportunity to erode America’s technological supremacy, which has been a cornerstone of her national security since at least World War II.

As of this writing, China has been denied access not only to high-end chip-making technology but also to the high-end chips themselves. Lacking essential tools (e.g., EUV scanners) and raw materials (e.g., pure silicon wafers), China’s semiconductor industry, as well as her tech sector in general, is likely to fall behind. Indeed, it has already missed out on the latest gold rush in AI, particularly the triumph of large language models, partly because its access to computing power (GPUs) has been severely restricted by sanctions.

Could China break this “neck-strangling” (卡脖子) situation entirely through its own initiatives? Before reading this book, I thought there must be a way, if the nation threw its entire weight behind the challenge.  I am much more pessimistic now. If there’s one thing I’ve learned from the book, it’s that the creation of cutting-edge chips can no longer be achieved within the borders of a single country, not even the U.S. Moreover, the pursuit of technological innovations as a nationalistic project may not be a sound strategy for long-term success, as demonstrated by the failure of the USSR.

Could the chip war have been averted had China chosen a different path in the early 2010s?  What impact will the current conflict have on the “China Dream” and the lives of 1.4 billion Chinese? No one knows the answer.  One can only hope that the war remains confined to the realm of chips and continues to be fought by scientists and engineers with their computers, rather than soldiers with guns and missiles.

 

Marco Nie

Wilmette, IL

3/3/2024

Crusaders

Dan Jones is a great chronicler. He knows how to turn dry events into vivid stories, which characters to focus on so that his narrative always has a human anchor point, and when to make witty quips without coming across as overly opinionated. Some writers have the talent to captivate their audience with no more than the charm of their language – I think Jones is one of them.

“Crusaders” covers nearly four centuries of medieval history, from the end of the eleventh century CE, when Pope Urban II began to preach the Holy War against the infidels in the east, to the conquest of Jerusalem by the Ottoman empire in 1517. Officially, crusading met its calamitous end in 1291, when Acre, the last stronghold of the Christian kingdoms in the east, fell into the hands of the Mamluks. However, as a phenomenon, crusading continued until Columbus’s discovery of America––which was “full of things to trade or steal, and teeming with people to subjugate, convert or kill”—convinced Western Christendom that its future “lay to the west, not the east”.

Out of this eventful and bloody chapter of human history stand a few prominent and complicated characters that I think deserve some ink, even in a brief book review.

Richard the Lionheart, the legendary king of England who spent most of his adult life in France, was the commander-in-chief of the Third Crusade. Rumored to be gay, Richard was famed for his martial prowess, courage, and generosity. He was also a man of letters who loved lyric poetry and music and courted the poets of the High Middle Ages. Under Richard’s leadership, the crusaders retook Acre and delivered a string of humiliating blows to the army of the mighty sultan Saladin of the Ayyubid dynasty, but ultimately fell short of seizing Jerusalem itself. The struggle ended with a negotiated truce that placed the coastal towns between Jaffa and Acre under Christian rule, while allowing Christian pilgrims and merchants access to the Holy City. Although the settlement helped stabilize the Kingdom of Jerusalem for decades to come, it forever transformed crusading from a religious imperative into an enterprise of colonization.

Like many powerful men of his age, Richard was often reproached in history books for being lustful, greedy, and cruel. I suspect some of Richard’s vices were exaggerated by the clergymen who resented being forced to pay for his military adventures. That said, the extent of Richard’s cruelty is indisputable. The most notorious episode was the execution of 2,600 unarmed and bound prisoners of war at Acre, in retaliation for Saladin’s failure to fulfill his promise to “return the relic of the True Cross and pay his bounty”. Technically legal as it may have been, noted Jones, this despicable act of cruelty was “excessive even by the standards of the day”. Little wonder Richard’s name acquired such infamy in the Muslim world that it was often invoked by impatient moms to calm their unruly children.

Enrico Dandolo, the doge of Venice, was the hero––or the villain, depending on whom you ask––of the Fourth Crusade. He took the cross at the incredibly advanced age of 95, having gambled his country on a military alliance according to which Venice would equip and supply the Fourth Crusade in exchange for 85,000 silver marks. When Dandolo realized his airheaded partners could not pay their dues, he decided to save Venice from bankruptcy by what essentially amounted to organized robbery. His first target was the city of Zara, a possession of King Emeric of Hungary, who was not only a pious Christian but also a fellow crusader. Zara’s sacking infuriated Pope Innocent III, as he had explicitly forbidden it. As a result, all Venetian crusaders were “excommunicated”, i.e., officially expelled from the Catholic Church. Dandolo couldn’t have cared less. He soon seized another opportunity that promised even more money, by injecting the crusaders into a conspiracy aimed at dethroning the Byzantine emperor. There is no space to recount the entire drama – suffice it to say that it led to the siege and fall of Constantinople in 1204. Once again, Dandolo’s allies failed to hold up their side of the bargain, so it seemed as if he had almost no choice but to help himself to what had been promised to him. For three days, the crusaders vandalized the richest city in Christendom. The estimated total value of the loot amassed during their plundering is believed to be around 900,000 silver marks. If this figure is accurate, then Venice’s investment in the Fourth Crusade yielded a staggering tenfold return. Dandolo thus exemplified the notion of prospering by doing God’s bidding – a modern entrepreneur from Silicon Valley would recognize this as the medieval version of “doing well by doing good”.

At the time, many ancient and medieval Roman and Greek works were stolen and sent back to Venice. The most notable were the four bronze horse statues from the Hippodrome, believed to have been crafted in the second or third century CE.    When I visited Venice in the summer of 2023, a replica of these magnificent statues was indeed, as Jones teased, “still proudly displayed at Saint Mark’s Basilica.”  Our Venetian tour guide was careful not to dishonor what is considered a national treasure in her country. The horses, she told us, were “brought back” from Constantinople 800 years ago.

Dandolo died a year after the fall of Constantinople. He was 98 and had been visually impaired for more than three decades. The crusaders understandably cheered what they had accomplished under the command of this aged and frail man as a miracle. To many a Christian, however, the brutal sacking of Constantinople was a dark and scandalous chapter in the history of their faith. The cruel irony—a mission sanctioned by the Catholic papacy resulting in the destruction of the spiritual capital of Eastern Orthodoxy—was simply beyond the pale. Jones summarizes Dandolo’s controversial involvement in the crusade aptly:

“He had bravely defied his physical disability and his decrepitude, and his pragmatic leadership and dauntless personal valor were beyond question. Yet in the end Dandolo had turned his talents to a wholly disreputable end, playing a leading part in a dreadful episode that, even by the cruel standards of the crusading era, thoroughly deserved the epithet leveled against it by Choniatēs: ‘Outrageous.’”

Another fascinating historical figure from this era is the leader of the Sixth Crusade, Frederick II, emperor of the Holy Roman Empire. His famed grandfather, Frederick I “Barbarossa”, drowned while attempting to cross a river during the Third Crusade. About 750 years later, Adolf Hitler, in a seemingly ironic twist, named his ill-fated Russian campaign after the elder Frederick. However, Frederick II succeeded where his progenitor faltered. Through an agreement reached with the Ayyubid sultan Al-Kamil, he regained control of Jerusalem in 1229, a feat that three costly crusades had failed to accomplish in four decades. To be sure, Frederick II enjoyed good fortune, as the Ayyubids were distracted by potential conflicts with their Muslim brethren in Syria and Mesopotamia. However, there is no question that the emperor’s intelligence, personality, political acumen, and breadth of knowledge also played a crucial role. Frederick II was, in the words of Jones, “a shockingly liberal intellectual and a bluntly pragmatic ruler”. He spoke six languages, including Arabic and Greek, and boasted a reputation as a polymath.

Frederick was a man with an insatiable curiosity about the natural world that extended far beyond the tenets of Christian Scripture. He loved natural sciences, astrology, logic, rhetoric, medicine, law, philosophy and mathematics…(and) surrounded himself with Latin, Greek, Muslim and Jewish tutors, advisers, poets, scholars and bureaucrats. Well into adulthood, he retained a personal Arab tutor in logic, and he corresponded with Jewish and Arab scholars in southern Spain.

In short, Frederick was a philosopher king in the Platonic ideal, reminiscent of figures like Marcus Aurelius of the Roman Empire and Kangxi of the Qing Dynasty in China.

Paradoxically, the “greatest and least bloody crusading victory”, won by Frederick, was met with universal condemnation rather than exaltation among his fellow crusaders. When the emperor left Acre, it was reported, he was “hated, cursed, and vilified”. Why? Ostensibly, the reason was that his participation in the Sixth Crusade was technically illegal, because he had been excommunicated by the pope for allegedly failing to honor his previous crusading pledge. However, his quarrels with the papacy ran deep and only worsened following his triumph in the east. Eventually, the most successful crusader of his time would himself become the target of a crusade officially endorsed by the Catholic church. Although Frederick “could be infuriating, overbearing and self-serving”, concluded Jones, it is still difficult to “conceive of a greater perversion of the institutions and language of crusade than for such a war to be preached against” him.

Beneath the veneer of glory surrounding these crusading kings and generals lay unspeakable violence, horrific human suffering, and ferocious atrocities.  After all, as Jones noted, “there was precious little time for thoughts of human rights on either side” of the crusading divide.

When Baldwin II of the Kingdom of Jerusalem laid siege to Aleppo in 1124––toward the end of his futile effort to break into the Syrian interior—his army reportedly engaged in “elaborate rituals of depravity” against the Muslim residents. According to Jones, the crusaders

“raided Muslim funeral chapels, took coffins to repurpose as storage chests for their camp, then goaded the citizens with the sight of their dead relatives’ corpses being grotesquely desecrated…Whenever the Franks captured an Aleppan Muslim, they cut off his hands and testicles.”

During the Fifth Crusade, Damietta, the third-largest city in Egypt, endured a siege lasting a year and a half. Even the battle-hardened crusaders were apparently horrified by what they saw in the once-thriving city. It had been transformed into a “fetid, disease-ridden graveyard, inhabited by mere skeletons and ghosts.” The few survivors were overwhelmed, unable to bury the countless corpses that littered the streets, and the stench “was too much for most people to bear”. Shocked as they might have been, the crusaders showed little pity, much less remorse. Soon enough, wrote Jones, “Christian thieves” began to “run around taking what they could” and to force starving Muslim children to undergo baptism.

When Jerusalem fell to the raid of a Khwarizmian (花剌子模) mercenary army of the Ayyubid sultan in 1244—only 15 years after Frederick’s diplomatic victory—it was utterly devastated. The Khwarizmians hunted down and slaughtered six thousand Christian civilians trying to flee the abandoned city. Then, on August 23,

the Khwarizmians entered the almost empty city of the Israelites and in front of the Sepulchre of the Lord they disemboweled all the remaining Christians who had sought refuge inside its church. … The marble around Christ’s tomb was either smashed or scavenged and the tombs of all the crusader kings of Jerusalem buried near Calvary were opened and their bones tossed away. Elsewhere other highly revered Christian churches and shrines received the same treatment: the priory at Mount Sion, the tomb of the Virgin Mary in the valley of Jehosophat and the Church of the Nativity in Bethlehem were all desecrated.

Ironically, the Khwarizmians were themselves victims at the hands of an even more formidable force. About 25 years earlier, the horde of Genghis Khan had besieged and pillaged Samarkand, the capital of their empire. In some sense, he was indirectly responsible for the terrible losses of the Christians in 1244, as the collapse of the Khwarizmian empire had left its jobless soldiers to scatter about, much like a deadly shock wave sweeping through the Middle East. The Mongols, of course, did not discriminate between Christians and Muslims. When they captured Baghdad, arguably “the most civilized of cities” at the time, they killed at least 100,000 Muslims. Yet their worst crime against humanity was probably the destruction of the great city’s House of Wisdom, a library that “contained the largest and most sophisticated collection of books on earth” – so many books were thrown into the Tigris, wrote Jones, “that the water was said to have flowed black with ink.”

No medieval horror movie would be complete without mentioning the hideous crimes against Jews. In fact, the First Crusade marked a tragic turn in the fortunes of the Jewish diaspora in Western and Central Europe.

In 1096, even before leaving their own country for the First Crusade, French and German crusaders turned on local Jewish communities. At Mainz, they stormed the residence of Archbishop Ruthard, where seven hundred Jews had sheltered under his protection. The indiscriminate slaughter by this mob was so appalling that many desperate Jews killed each other to avoid execution by the “weapons of the uncircumcised”. Similar mass murders took place elsewhere. In Cologne, according to Jones, “young men and women threw themselves into the Rhine and fathers killed their children rather than see them fall into the hands of the enemy”. This “orgy of anti-Semitic violence”, collectively known as the Rhineland massacres, is widely seen as a harbinger of what was to come for Jews in Europe over the next millennium.

About a hundred years later, the fervent zeal ignited by the Third Crusade engulfed the English populace. Months of riots against England’s Jews ensued. During this period, it was not uncommon to witness mobs chasing and assaulting Jews in the streets, forcing them into coerced baptisms. The worst incident occurred in York in March 1190, when hundreds of Jews, seeking refuge in the city’s castle, were either killed or driven to mass suicide. The persecution of Jews in England would continue and culminate in 1290, when the country officially expelled its Jewish population and enacted a ban that would last nearly four centuries.

Shortly after I finished reading “Crusaders”, on October 7th, 2023, Hamas militants perpetrated the worst mass murder of Jews since the Holocaust. There is no need to recite the details of the crimes. Antony Blinken, the US Secretary of State, summed it up well: “depravity in the worst imaginable way”. Viewing this incident in the context of the Crusades, however, I felt I had seen the movie before. The latest version is set on the same stage and has a similar plot, though played by different actors. In this movie, it was Jews, rather than Christians, who were the infidels that Muslims tried to expel from the land they believed was rightfully theirs.

History has never stopped projecting the conflicts in Palestine through the lens of the Crusades. When British general Edmund Allenby marched into Jerusalem as a victor in 1917, ending the four-hundred-year control of the Holy City by the Ottoman Turks, he allegedly proclaimed that “the wars of the crusades are now complete”. Whether he said it or not, the forecast was wrong. The British mandate of Palestine would give way to the rebirth of the Jewish state, in what many Muslims saw as a continuation of the medieval crusades, only this time with Jews and Christians as co-conspirators. Surely that was how Osama Bin Laden saw it. In the “Letter to the American People”, now widely circulated thanks to TikTok, he wrote,

Palestine has been under occupation for decades, and none of your presidents talked about it until after September 11. … You should be aware that justice is the strongest army and security offers the best livelihood; you lost it by your own making when you supported the Israelis in occupying our land and killing our brothers in Palestine.

Likewise, President George W. Bush once likened the US response to the 9/11 attack to a crusade, warning the American people that “this crusade, this war on terrorism, is going to take a while”.  

Even the rhetoric sounds eerily similar, and it always invokes some version of a just war, i.e., the “violence that was regrettable but legitimate and even moral, so long as it was undertaken to protect the state and would ultimately serve to produce or restore peace.”  Bin Laden put it more bluntly, “it is a sin to kill a person without proper, justifiable cause, but terminating his killer is a right.”  What remains unsaid and perhaps unknowable, however, is who gets to decide what causes are proper and justifiable, and how far back in history one must trace them.

Hence, the life-and-death struggle for the Holy Land, waged in the name of that One True Faith, has never really ended. And the idea of crusading will perpetuate cycles of violence and suffering as long as there are crusaders on Earth.

 

Marco Nie, Northwestern University

December 30, 2023

 

 

The Song of Achilles

I read The Song of Achilles about two years ago and wrote a short review then, but never got the chance to post it here. This is one of the few fiction books I have read cover to cover since I turned 40 – thanks to my daughter’s recommendation.


My 11-year-old daughter has lately fallen in love with Greek mythology and has filled her bookshelf with the likes of Percy Jackson and The Trials of Apollo. Frustrated with my complete ignorance of the subject, she tried repeatedly to get me to read some of her books. She marveled at The Song of Achilles all the time and insisted I must read it because it is simply “too good” to pass over. Eventually, I caved in despite my reluctance—novels have largely ceased to interest me, let alone a novel about Achilles, whose story has become a cultural cliché, even in China. Who could forget the heel that his mom famously failed to wash in the magic spring?

It turns out I enjoyed the book more than I thought I could. Madeline Miller kept me constantly guessing at the theme of the book, and she managed to outwit me at every turn. Initially, it seemed the book was about the love between two young men: Achilles and the narrator, Patroclus. Then I thought the focus was the insanity of the Trojan War, and how it transforms an innocent boy into a monstrous killing machine. At one point, Miller mocked nationalism and advocated humanitarian principles, proclaiming through Chiron (a centaur) that “nations were the most foolish of mortal inventions” and “no man is worth more than another, wherever he is from”. Eventually, I realized that the central plot may be the ancient conflict between a jealous mother and her son’s spouse (a son-in-law, in this case). Achilles’s mom, Thetis, refused to endorse his relationship with Patroclus till the very end, even after the two were buried together. In the eyes of the jealous mom, Patroclus is an unattractive mortal unworthy of Achilles, a man who cannot bear offspring for him, and, above all, someone who committed the unforgivable sin of sharing the love of her son. More fundamentally, Thetis and Patroclus fought hard to bring about different versions of Achilles in the book: Thetis wants a god-like, ruthless warrior, while Patroclus prefers an empathetic, creative human. It seems to me that this discrepancy, not the prophecy, is what finally sealed the tragic fate of the couple.

Having finished the book, I must say I don’t quite understand why my daughter and her friends like it so much. It is a book written for adults, with content that I imagine some parents might find objectionable for kids her age. I know for a fact that in my generation such a book would have been considered off-limits for an 11-year-old. But, hey, we live in a different age, don’t we?

Solomon’s Ring

Legend has it that King Solomon’s ring, also known as the Seal of Solomon, conferred on him the ability to command the supernatural and to speak with animals. Despite the enticing title, the book has nothing to do with King Solomon and his famous ring, or Jewish history, or the Israel-Palestine conflict (since this topic is on everyone’s mind these days…). Instead, it consists of interesting stories about the animals that the author raised in order to observe their behavior. Widely considered “the father of ethology”, Konrad Lorenz won the Nobel Prize in Physiology or Medicine in 1973 for his foundational contributions to the study of animal behavior. King Solomon’s Ring, published in 1949 and written for a popular audience, remains his best-known book. Lorenz was a controversial figure due to his association with Nazism, which apparently came to light only after his death. According to Wikipedia, not only was Lorenz a Nazi, but he served as a psychologist in the notorious Office of Racial Policy during the war. In his application for party membership, Lorenz pledged to devote “his whole scientific work to the ideas of the National Socialists”. That said, I found no racial slurs, dog-whistles, or anything that can be construed as remotely antisemitic or hateful in the book. Quite the contrary: the book was a relaxing and enjoyable read that made me giggle more than any book in recent memory. Beyond fascinating facts about animals, the reader will also be confronted with thought-provoking questions concerning human nature and the relationship between men and animals.

Lorenz described many species of animals that he kept in and around his home, ranging from fish and birds to dogs and monkeys. Notably, he did not keep these animals in captivity but instead let them – to the extent possible – wander freely around his property, even in his office. In some sense this was the mandate of his work, since only free-ranging animals can “be themselves” and thereby reveal their natural behaviors. But to Lorenz these animals were more than just research subjects. He lived with them, bonded with them, and cherished their company. He saw humanity in these animals – or animal traits in humans, depending on your perspective – because humans, in a quite literal sense, are their descendants. As a result, his writing adores and humanizes them.

I was never a big fan of animals. Growing up in a small, poor city in China, where few families kept pets in their homes, I was naturally disposed to be afraid of most animals, including dogs and cats. Yet I think even I would find the gaze of Lorenz’s beloved dog, Tito, irresistible. Tito was an Alsatian (or German Shepherd), famous for being “exaggeratedly faithful”. Lorenz recalled that Tito would remain lying at his feet for hours and hours as he worked at his desk, and

she was far too tactful to whine or to call attention to herself by the slightest sign. She just looked at me. And this gaze of the amber-yellow eyes in which was written the question “Are you ever going to take me out?”, was like the voice of conscience and easily penetrated the thickest walls.

Lorenz injected a delightful sense of humor into his storytelling that is truly infectious. I remember several instances when I laughed so loudly in my office that people in the hallway could probably hear me. His vivid account of the territorial battle between two stickleback fish is a great example. He wrote, describing how the distance from a male fish’s nest is a reliable predictor of the strength of not only his will but also his actual ability to defeat his rival,

In the immediate neighborhood of his nest, even the smallest male will defeat the largest one…. The vanquished fish invariably flees homeward and the victor, carried away by his successes, chases the other furiously, far into its domain. The further the victor goes from home, the more his courage ebbs, while that of the vanquished rises in proportion. Arrived in the precincts of his nest, the fugitive gains new strength, turns right about and dashes with gathering fury at his pursuer. A new battle begins, which ends with absolute certainty in the defeat of the former victor, and off goes the chase again in the opposite direction.

On another occasion, Lorenz saw a father jewel fish accidentally swallow, at the same time, his own baby—a duty he routinely performs to save his children from drowning—and an earthworm, his favorite food. The father thus faced a dilemma, as in his mouth were two different things “of which one must go into the stomach and the other into the nest”. Lorenz recalled with amusement what unfolded next,

The fish stood stock still with full cheeks, but did not chew. If ever I have seen a fish think, it was in that moment! … For many seconds he stood riveted and one could almost see how his feelings were working. Then he solved the conflict in a way for which one was bound to feel admiration: he spat out the whole contents of his mouth: the worm fell to the bottom, and the little jewel fish, becoming heavy in the way described above, did the same. Then the father turned resolutely to the worm and ate it up, without haste but all the time with one eye on the child which “obediently” lay on the bottom beneath him. When he had finished he inhaled the baby and carried it home to its mother.

Using his jackdaw bird colony, Lorenz repeatedly explores what appears to be an important theme of the book: the similarities and differences between human and animal behaviors.

He observed how jackdaws teach their young about the danger of an enemy by making a rattling sound at the sight of a dangling black object. This is remarkably “human” for two reasons. First, knowledge is passed on to the next generation through “learning” rather than “inheritance”. Second, like jackdaws, humans also fall victim to such blind, instinctive reactions (the black object). I am certain Lorenz had his former Führer in mind when he asked,

“Do not whole peoples all too often react with a blind rage to a mere dummy presented to them by the artifice of the demagogue?”

Lorenz observed that a “married” jackdaw couple would not only take each other to love and to cherish till death do them part, but also, apparently, maintain “the glowing fires of the first season of love” throughout their marriage. Even after many years, he wrote, “the male still feeds his wife with the same solicitous care, and finds for her the same low tones of love, tremulous with inward emotion, that he whispered in his first spring of betrothal and of life”. At first glance such a relationship feels amazingly human; but if you pause and think again, you realize it is in fact quite nonhuman, if not superhuman. Although humans may live in a life-long marital union, Lorenz lamented, they tend to forget “the thrilling enchantment of courtship’s phrases entirely” as time goes on, and only perform the ritual of their marriage “with the mechanical apathy common to other everyday practices”.

It is well known that a definite order – by which each animal is afraid of those above it in rank – exists in many social animals. Lorenz’s jackdaw colony is no exception. The interesting twist is that a female jackdaw can acquire a higher rank by marrying a male who ranks above her – a form of social mobility that is, unfortunately, not available to males (again, how very human this is!). If the bird marries the king, she will be granted by every member of the colony the status of a queen. When this happens, the news of the marriage, and hence the promotion of the wife, spreads quickly through the colony. The funniest part of the story is how the newly crowned queen, having suddenly risen far beyond her own station, would “conduct herself with the utmost vulgarity” when she encounters other jackdaws whom she had to look up to only a few days earlier:

She lacked entirely that noble or even blasé tolerance which jackdaws of high rank should exhibit towards their inferiors. She used every opportunity to snub former superiors, and she did not stop at gestures of self-importance, as high-rankers of long standing nearly always do.

Establishing a pecking order is one way by which social animals resolve conflicts without suffering excessive casualties. Lorenz mentioned another mechanism, which I shall call the surrender’s inhibition. According to this law, a victor emerging from a bloody battle for dominance is inexplicably “forbidden” from hurting the loser, as long as the latter surrenders, i.e., offers his adversary the most vulnerable part of his body as a submissive gesture. Humans evidently have inherited the habit of making submissive gestures (e.g., kneeling and bowing) when facing a dominant aggressor. Unfortunately, such an appeal to mercy is not as foolproof among humans as it is in the animal world. Homer’s heroes, noted Lorenz, often killed supplicants “without compunction”. Bai Qi, a general of the Qin Kingdom, killed 400,000 surrendered soldiers after the Battle of Changping, a prelude to the kingdom’s brutal campaign to unite China under imperial rule. The Mongols, of course, had an abhorrent reputation for indiscriminately slaughtering entire cities when they faced even the slightest resistance during their conquests. Nor do we have to go back to primeval or medieval times for evidence of our species’ sub-animal barbarity. About three weeks ago, on October 7th, 2023, Hamas militants invaded Israel and killed more than 1,000 civilians, including many children and elderly people – many of the victims, I imagine, would have begged for their lives, but to no avail. Why?

Lorenz argues that the surrender’s inhibition is a result of evolutionary adaptation. That is, for a species to survive, it must develop a social inhibition to prevent the abuse of its lethal weapons, which could otherwise endanger the existence of the species. However, we humans make our weapons “of our own free will” rather than grow them on our bodies as dictated by nature. Because human weaponry developed so rapidly relative to the time scale of evolution, our instincts could not keep up with it, leading to a lack of adequate inhibition on its usage. There is a certain truth to this argument. However, humans also have far more reasons to murder members of their own species than the imperative of survival. Ideology, for example, offers a powerful motive for the mass killing of infidels, heretics, or those who happen to have an intolerable identity. In the end, Lorenz expressed optimism that humans can learn from animals, that if anyone slaps us on the right cheek, we should, as the Bible teaches us, turn to him the other also. This is not so that our enemy may strike us again, explained Lorenz, “but to make him unable to do it”. I admire his faith in humanity and wish he were right, but I am deeply skeptical that this age-old wisdom would have saved anyone who was killed by Hamas fighters on October 7th.

Team of Rivals

Doris Goodwin’s ‘Team of Rivals’ was the first presidential biography I ever read.  Biography was not among my favorite genres, but I did have a desire to learn more about Abraham Lincoln.  He is widely considered the greatest American president. In fact, to many even that title seems an understatement.  Tolstoy once wrote that Lincoln ‘was bigger than his country—bigger than all the Presidents together…and as a great character he will live as long as the world lives’.    Like most people, I’ve heard about the highlights of Lincoln’s remarkable life as I passed through grade schools: the self-made lawyer and politician haunted by family tragedies, the epic struggle to end slavery while forging a truly United States of America, and the ultimate sacrifice for the cause at the zenith of his career.  Still, I am not quite sure how to make Tolstoy’s melodramatic assessment. The book partially solved the puzzle for me.

Goodwin’s narrative is constructed around, and often from the perspectives of, Lincoln’s key cabinet members who were once his rivals:  Salmon Chase (Secretary of Treasure), Henry Seward (Secretary of State), Edward Bates (Attorney General), and Edwin Stanton (Secretary of War). The first three men ran against him for the nomination of the Republican party, and Stanton, when serving with Lincoln as co-counsel in a lawsuit, not only questioned the then country lawyer’s legal expertise but openly ridiculed him as ‘a gorilla and an imbecile’.  As Goodwin follows Lincoln’s footsteps from the humble origins to the poignant end, she recounts many stories of these rivals, often quoting extensively from their public speeches and private letters.  This helps unlock the mystery in Lincoln’s persona that ‘led countless men, even old adversaries, to feel bound to him in admiration’.

Lincoln 'possessed extraordinary empathy' and a 'melancholy temperament', wrote Goodwin.  These qualities might be the result of the tragic losses he endured from an early age – by 26, he had already lost the three women dearest to him: his mother, his only sister, and his first love. Empathy can be a curse 'in a world environed by cruelty and injustice' because, as Goodwin noted, fellow-feeling for the misery of others inevitably causes pain and suffering.  It also sometimes made him appear weak, lacking the will to do what had to be done in difficult situations. His attorney general confided to a friend that Lincoln, despite coming 'very near being a perfect man', was 'unfit to be entrusted with the pardoning power', because he too easily succumbed to touching stories and women's tears. Yet empathy was also a powerful tool for Lincoln to gain the respect, trust, and devotion of others by understanding their motives and desires.  It rendered him a remarkably magnanimous man, with an incredible capacity to forgive even those who had opposed, wronged, and betrayed him.

Goodwin also lauded Lincoln's 'literary genius' and his mastery of rhetorical power.  His ability to explain intricate concepts through storytelling, coupled with a sharp sense of humor, was unparalleled among his contemporaries.  In the strictest sense of the word, Lincoln might not have been as great an orator as Seward, who could deliver stirring, completely improvised speeches to a crowd for hours.  Lincoln was much more careful with his words, but he perfected 'a language of enduring clarity and beauty' that made him an extremely persuasive and effective communicator.

Lincoln believed in 'the better angels of our nature', a phrase he coined in his first inaugural address.  When asked whether George Washington was a perfect man, he once told a friend that he preferred to believe in the possibility of human perfection.  His entire life may be seen as the pursuit of becoming that perfect, inspiring human being he envisioned. It is this unwavering conviction to 'engrave his name in history', Goodwin noted, that underscores Lincoln's greatness, carrying him through a dreary childhood, political failures, personal tragedies, the disintegration of his beloved Union, and the devastating military defeats in the early phase of the Civil War.

But Lincoln was also a realist.  Unlike Chase and Seward, who had advocated for radical abolitionist policies on moral grounds, Lincoln carefully charted a moderate path confined within the limits set by public opinion on slavery. His famous Emancipation Proclamation was timed and framed to be perceived by the people of the North as an indispensable instrument for winning the war and preserving the Union, rather than as a necessary step to end slavery once and for all.  Goodwin sees nothing wrong with politicians going along with public opinion, even if that means they must slightly bend their moral compass.  If anything, that expediency made Lincoln 'the most truly progressive man of the age', because he neither 'wasted strength in premature struggles' with the public nor waited to be 'dragged by the forces of events'.  To be sure, Lincoln did owe much of his success to his exceptional ability to read and follow the will of the people.  But that does not make him 'the most truly progressive'.  Based on what I gathered from the book, Lincoln was more of a pragmatist, a shrewd politician, maybe even a 'political genius' (as Goodwin likes to call him). Yet he does not seem to have had the burning conviction to reshape the world in the image of his ideology that many a great man of history possesses.  That difference, I think, is precisely what sets Lincoln apart from (or above, depending on how much you love him) that league of great men.

I was always curious about Lincoln's views on race.  According to the book, Lincoln was against slavery but did not believe in racial equality.  He said the physical difference between whites and blacks would 'probably forever forbid their living together upon the footing of perfect equality.'  As a result, he was not in favor "of making voters or jurors of n****, nor of qualifying them to hold office, nor to intermarry."  Nor did he say these things merely to win white voters.  Lincoln was a passionate advocate for colonization, the idea of aiding freed slaves to establish a colony in Central America. To sell this proposal to the country, he even convened a conference of freed slaves at the White House, where he said in his opening remarks, "you and we are different races. We have between us a broader difference than exists between almost any other two races."  By today's standards, therefore, Lincoln is a textbook racist. Should harboring racism in the 19th century diminish his greatness?  I imagine Tolstoy and Goodwin would dismiss such a thought as quintessential presentism. But many on today's political left would probably disagree with them.

‘Team of Rivals’ is a thick book of nearly 1,000 pages, of which about a quarter are notes.  It is meticulously researched and elegantly written, though at times the lengthy quotes and extravagant details about the lives of people in Lincoln's outer orbit feel a bit excessive. If you don't want to read the whole book, do not miss the last chapter, in which Goodwin describes how Lincoln met his destiny.  I finished that chapter on an airplane – I still remember the tears in my eyes that I had to hastily cover when a flight attendant asked me if I needed a drink. That rarely happens to me.  I shall end with a quote taken from the very end of the book.

“With his death, Abraham Lincoln had come to seem the embodiment of his own words—’With malice toward none; with charity for all’. The deathless name he sought from the start had grown far beyond Sangamon County and Illinois, reached across the truly United States, until his legacy, as Stanton had surmised at the moment of his death, belonged not only to America but to the ages—to be revered and sung throughout all time.”

Can Artificial General Intelligence ever be Human Compatible?

When I was in graduate school in the early 2000s, the phrase Artificial Intelligence, or AI, did not have the mesmerizing power it possesses today. The field might have been slowly recovering from the twilight of the 1990s, but it remained an obscure subject that did not exactly inspire enthusiasm among graduate students –– certainly not in my field of study.  I might have been more biased against AI research than most in my cohort, having acquired a distaste for it from the Dreyfus brothers' contentious book, Mind Over Machine, which I interpreted at the time, perhaps oversimplistically, as a rebuke of AI's aspirations.  Much has happened since then. In the past decade, AI has made breathtaking progress that enabled computers to navigate complex urban environments and beat the best human Go players.  The Dreyfus brothers would probably read the news of these developments with astonishment and disbelief, though they might still not be ready to withdraw their opposition. For me, the last straw was ChatGPT, the chatbot that demonstrates human- and superhuman-level performance in tasks that I never thought could be done by computers in my lifetime: writing essays, producing art, and even achieving top-1% scores on the GRE verbal test, all delivered instantly by conversing fluently in natural language.  I became convinced that I needed to reassess my outdated opinions about AI.  This conviction led me to delve into Human Compatible, a book written by Stuart J. Russell in 2019, whose work I initially came across on Sam Harris's podcast.  Russell is a world-renowned AI researcher at UC Berkeley, where, ironically from my perspective, the Dreyfus brothers spent most of their teaching careers.

Russell began by defining human intelligence loosely as the ability to achieve one’s objectives through actions.  He believed AI should be described and assessed similarly. Yet, he argued that the focus should not be the “strength” of that ability, but rather its “usefulness” to humanity.  In his words (the emphasis is mine), “machines are beneficial to the extent that their actions can be expected to achieve our objectives.”

Paradoxically, a machine that strives to achieve our goals could still be a grave danger to us.  For one thing, humans do not always know their real objectives.  Steve Jobs famously said, "people don't know what they want until you show them." Russell quipped about the perils of "getting exactly what you wish for", as anyone who has ever been granted three wishes by a god can relate to.  He calls this the King Midas problem, because the legendary king of Greek mythology demanded that everything he touched turn into gold, only to regret his ill-fated wish later.  Second, a rigid, human-specified goal can often be best achieved by violating norms and values that we humans consider common sense.  In a thought experiment, Russell imagined that a super-intelligent machine, asked by its human masters to cure cancer, decides to deliberately induce tumors in human beings so that it can carry out medical trials of "millions of potentially effective but previously untested chemical compounds".  However quickly this strategy might deliver a cure, it is an abhorrent violation of the established ethical standards in the field of medicine. This is the infamous value alignment problem in AI research.

At this point, most readers would probably breathe a sigh of relief and dismiss these so-called dangers as the illusions of doomsayers.  Surely no machines that we know of can grant us wishes or cure cancer without any human supervision, right? Russell warned that such complacency is dangerous and irresponsible, given the rapidly improving competence of AI systems. Contrary to what Hollywood movies lead us to believe, a conscious machine is not necessarily dangerous even if it hates humans. But a highly competent one surely is.

When it comes to the future of AI competence, Russell can be described as a cautious optimist. Not only does he believe artificial general intelligence, or AGI, is possible, but he once predicted "it would probably happen in the lifetime of my children". He reminded us, furthermore, that he is "considerably more conservative" than most active AI researchers, adding that it is entirely possible AGI could come much sooner than his humble forecast.  In part, Russell's confidence stems from the seemingly boundless computing power available to machines. At the time of his writing, the fastest computer on earth, the Summit machine at the Oak Ridge National Laboratory, had a raw processing capacity on par with the human brain, roughly 10^17 operations per second (ops).  But this is infinitesimal compared to what machines could acquire in theory: 10^51 ops for a laptop-sized computer, according to an estimate "based on quantum theory and entropy".
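To get a feel for the gap between those two numbers, a quick calculation helps (the figures are the ones quoted above; the comparison itself is mine):

# Comparing Summit's capacity with the theoretical bound Russell cites.
summit_ops = 1e17          # operations per second, roughly on par with a human brain
laptop_limit_ops = 1e51    # theoretical limit for a laptop-sized computer
gap = laptop_limit_ops / summit_ops
print(f"gap: {gap:.0e}x")  # 1e+34
years = gap / (365 * 24 * 3600)
print(f"Summit would need ~{years:.1e} years to match one second of such a laptop")

In other words, Summit would have to run for about 3 × 10^26 years to perform the operations that such a laptop could, in theory, execute in a single second.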

To be sure, faster does not mean more intelligent.  As Russell said, a faster machine may simply "give you the wrong answer more quickly".  According to him, reaching AGI still awaits several conceptual breakthroughs that may be hard to come by, which include: (i) understanding and extracting information from natural language; (ii) cumulative learning and discovery, which is essential to advancing science; (iii) planning and executing activities hierarchically to achieve complex objectives (e.g., going to Mars); and (iv) becoming an autonomous thinker that can manage its own mental activity (i.e., knows what and when to think).

Russell asserted that natural language technology was "not up to the task of reading and understanding millions of books", and that even though the existing language models can "extract simple information from clearly stated facts", they can neither "build complex knowledge structure from text" nor engage in "chains of reasoning with information from multiple sources".  That was four years ago.  Today it seems clear that our first line of defense against AGI has already begun to fall with the advent of ChatGPT.  While this entirely unexpected breakthrough may have caught Russell himself by surprise, it actually proves that he was right all along: we must embrace and prepare for a future in which AGI is an integral part, not in spite of, but precisely because of, the huge uncertainty.

Russell thinks super-intelligent machines can understand the world far better and more quickly, cooperate with each other far more effectively, and look much further into the future with far greater accuracy than any human could ever hope to do.  In a nutshell, in a world with AGI,

“there would be no need to employ armies of specialists in different disciplines, organized into hierarchies of contractors and subcontractors, in order to carry out a project. All embodiments of AGI would have access to all the knowledge and skills of the human race, and more besides.”

What does this extraordinary technological triumph mean for human society?

First, the omnipotent AGI would drive up factor productivity to such a level that scarcity and poverty would be eliminated. When "the pie is essentially infinite", Russell asked, why fight each other for a larger share? If this utopia sounds familiar, it is because Karl Marx said the same thing about communist society.  This crowning achievement, however, will come at the cost of shattering job losses. Russell believed few of us could keep our jobs: it is delusional, he argued, to think AGI will create more new jobs than it renders obsolete, or that it will enhance workers rather than replace them.  His metaphor of "the worker in an online-shopping fulfillment warehouse" is as enlightening as it is frightening.  He wrote,

“She is more productive than her predecessors because she has a small army of robots bringing her storage bins to pick items from; but she is a part of a larger system controlled by intelligent algorithms that decide where she should stand and which items she should pick and dispatch. She is already partly buried in the pyramid, not standing on top of it. It’s only a matter of time before the sand fills the spaces in the pyramid and her role is eliminated.”

The implication seems clear: no matter how indispensable you think you are, there will come a time when you too will be replaced.   That said, Russell told us everything will be just fine, if only humans could, as Keynes had famously advised 90 years ago, cope with their permanent plight of joblessness by learning “the art of life itself”.

Second, we must solve the alignment problem before entrusting all human affairs to AGI and retiring to the purer pursuit of happiness.  The solution to this problem is Russell's expertise and the essence of the book. Russell argued that AGI development must follow the "Principles for Beneficial Machines", which state that "(i) the machine's only objective is to maximize the realization of human preferences; (ii) the machine is initially uncertain about what those preferences are and (iii) the ultimate source of information about human preferences is human behavior."  In a nutshell, Russell's machine would continuously learn and strive to fulfill the preferences of its human masters. Whenever in doubt, it defers to them, pausing its actions and seeking permission before proceeding.
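To make the "defer when in doubt" behavior concrete, here is a toy sketch of my own; the actions, numbers, and decision rule are invented for illustration and are not from the book. The machine holds several hypotheses about what its owner wants, acts only when every hypothesis approves, and otherwise pauses to ask:

# Toy illustration of principles (ii) and (iii): the machine is uncertain about
# human preferences and defers to the human when its plausible models disagree.
HYPOTHESES = [
    {"make_tea": 1.0, "reorganize_kitchen": 0.5, "sell_the_car": -5.0},
    {"make_tea": 0.8, "reorganize_kitchen": -0.5, "sell_the_car": 2.0},
]

def decide(action, hypotheses):
    utilities = [h[action] for h in hypotheses]
    if min(utilities) > 0:       # every plausible preference model approves
        return f"do {action}"
    if max(utilities) <= 0:      # every model disapproves
        return f"skip {action}"
    return f"ask before doing {action}"   # models disagree: defer to the human

for act in ("make_tea", "reorganize_kitchen", "sell_the_car"):
    print(decide(act, HYPOTHESES))

Run it and the machine happily makes tea, but asks before reorganizing the kitchen or selling the car, because its preference models disagree about those actions.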

I am skeptical that these principles would be enough to save us from an AGI apocalypse.  The last part of the book discusses extensively the imperfection of humanity, which is "composed of nasty, envy-driven, irrational, inconsistent, unstable, computationally limited, complex, evolving, heterogeneous" individuals.  Given that our species leaves so much to be desired, it seems strange to insist that AGI must learn from our behaviors and help advance our (often) ruinous self-interests. Also, history has shown, time and again, that humans of ordinary intelligence are perfectly capable of wreaking havoc on earth and perpetrating horrific violence against each other.  It stands to reason that the scale of destruction they could inflict when armed with superintelligence would be incomprehensible.  Unfortunately, that infinite pie Russell promised won't eradicate human conflicts, because humans fight and kill as much for differences and status as for survival.

To his credit, Russell did concede that AGI must mind the interests of others as well as those of its own master.  Having reviewed the theories of ethics, he suggested that utilitarianism –– which advocates maximizing the sum of everyone's utilities while treating their preferences equally –– might work.  Comparing utilities across individuals is meaningful and doable, Russell reasoned, and therefore machines can be trained to master the science of ethics through what he called inverse reinforcement learning.  What he did not elaborate on, though, is what mechanism would be used to reconcile the inevitable conflicts between private and public interests. Humans invented pluralistic politics to deal with this ancient and intricate problem. However, super-intelligent machines are likely to find such politics too messy, too stupid, and too ineffective for their taste. Instead, they may favor a top-down approach that promises to "optimize" everything for everyone.  Unfortunately, this very promise has been made and broken before, often with devastating consequences.
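The intuition behind inverse reinforcement learning can be sketched in a few lines as well; again, this is a toy of my own making, with hypothetical actions and numbers rather than anything from the book: given observed behavior, pick the candidate reward function that best explains it.

# Toy sketch of the idea behind inverse reinforcement learning: infer which
# candidate reward function best explains the choices a human is observed to make.
CANDIDATE_REWARDS = {
    "prefers_tea":    {"make_tea": 1.0, "make_coffee": 0.2},
    "prefers_coffee": {"make_tea": 0.2, "make_coffee": 1.0},
}
observed_choices = ["make_coffee", "make_coffee", "make_tea"]   # hypothetical observations

def best_explanation(observations):
    # Score each hypothesis by the total reward it assigns to the observed behavior.
    scores = {name: sum(reward[a] for a in observations)
              for name, reward in CANDIDATE_REWARDS.items()}
    return max(scores, key=scores.get)

print(best_explanation(observed_choices))    # -> prefers_coffee

Real inverse reinforcement learning works over sequential decisions and probabilistic models, of course, but the direction of inference is the same: from behavior back to the preferences that generated it.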

Even if Russell’s “beneficial principles” ensure AGI never evolve into a tyrant – a big IF – they are still vulnerable to the “wireheading” trap, which is “the tendency of animals to short-circuit normal behavior in favor of direct stimulation of their own reward system”. Once the machines learn about the shortcut – say, directly stimulating a human’s brain to release pleasure-inducing chemicals – they would exploit it relentlessly to maximize the “total happiness” of humanity.  This tactic is not in violation of Russell’s principles because simulated happiness is still happiness, and to many it is an authentic experience.  The reader may recall that, in the famous movie The Matrix, many people willingly choose that virtual experience (the blue pill) over the real one (the red pill). Even Pascal admitted, “the heart has its reasons, which reason does not know”.  How could you blame AGI for gleefully encouraging their human masters to want what their heart loves more than their reason does?

Perhaps the gravest concern for humanity in the era of AGI will be the potential loss of autonomy.  For our civilization to endure, Russell explained, we must recreate it "in the mind of new generations".  With AGI, this is no longer necessary, since machines can store our knowledge and essentially "run our civilization for us".  What is the point, for any individual, of spending a significant portion of their life acquiring knowledge and skills they have no use for, except to preserve our collective autonomy? Sadly, human nature being what it is, this tragedy of the commons may trap us all for eternity.

Russell’s writing exhibits a delightful wit, and the breadth of his knowledge in social sciences is remarkable, especially considering he specializes in computer science.  The book would make a stimulating but comfortable read for anyone who has some basic understanding of game theory and machine learning. A reader without such a background may find some materials less accessible.  Nevertheless, if Russell wanted to assuage the public’s concerns about AI safety, he might have fallen short.  If anything, the book had rendered me more pessimistic about AGI’s human compatibility.  While the Dreyfus brothers may be wrong about the superiority of mind over machines, deep down, I still wish they were right after all. To end on a desperately needed positive note, allow me to indulge a favorite quote from their book (again, the emphasis is mine):

“The truth is that human intelligence can never be replaced with machine intelligence simply because we are not ourselves “thinking machines”. Each of us has, and use every day, a power of intuitive intelligence that enables us to understand, to speak, and to cope skillfully with our everyday environment. We must learn what this power is, how it works, where it fits into our lives, and how it can be preserved and developed.”

How the world really works

My former colleague and mentor, Prof. David Boyce, loved Vaclav Smil's How the World Really Works.  In a short email sent earlier this year, he urged me to read it, adding, "of the nearly 100 books I read this year, this one was the best".  As encouragement he even mailed a hardcopy to me all the way from his retirement home in Arizona, to my pleasant surprise.  I had never heard of Smil before.  According to Wikipedia, he is a prolific and decorated author who counts Bill Gates among his fans.  An immigrant from the Czech Republic, he holds a PhD in geography but has written on a wide variety of topics ranging from energy and environment to economics and public policy.

I don't quite know how to make sense of the book's seemingly pretentious title. If not for David's recommendation, the title alone would probably have turned me away.  Having read the book, I suspect Smil chose the title to hide the controversial thesis of the book, which I read as an earnest pushback against the current "mainstream" climate policies and initiatives. Had the book been titled to reflect this position, however, I imagine many people on the left would reject it out of hand as a manifesto from yet another climate change denier.  Here is Smil's thesis in a nutshell:

“Complete decarbonization of the global economy by 2050 is now conceivable only at the cost of unthinkable global economic retreat, or as a result of extraordinarily rapid transformations relying on near-miraculous technical advances. But who is going, willingly, to engineer the former while we are still lacking any convincing, practical, affordable global strategy and technical means to pursue the latter?”

Let me first unpack how he reached this conclusion.

Citing Ludwig Boltzmann, Smil argues that free energy (i.e., energy available for conversion) is "the object of struggle for life".  In the past two centuries, humans have gradually gained access to "a tremendous amount of energy at low cost" from burning fossil fuels.  This has largely transformed life on earth, from the scarcity and misery that plagued much of human history to the abundance and comfort that so many today take for granted.  By 2020, the annual energy consumed by an average person had reached 34 GJ, equal to the energy content of about 0.8 tons of crude oil.  If a person were to source this amount of energy from physical labor, Smil estimates, they would need 60 adult servants working non-stop, day and night.  In affluent countries, this number would rise to approximately 200 to 240.  Clearly, before the energy revolution, only a very small minority could ever hope to avoid the hard labor necessary to sustain and advance civilization.  As Thomas Piketty explained in his Capital in the Twenty-First Century, in such a world social inequality was not only inevitable but perhaps necessary, because "if there had not been a sufficiently wealthy minority, no one would have been able to worry about anything other than survival." Moreover, "without a fortune it was impossible to live a dignified life".  Of course, the energy revolution did not eradicate inequality; but living a dignified life and thinking beyond mere survival is no longer the privilege of the super-rich. For this newfound luxury we have the fossil fuel industry to thank.
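Smil's "energy servants" comparison is easy to sanity-check with a few lines of arithmetic. The figures below are the ones quoted above; the per-servant output is simply what those figures imply, not a number I verified independently:

# Back-of-the-envelope check of Smil's "energy servants" comparison.
SECONDS_PER_YEAR = 365 * 24 * 3600

annual_energy_j = 34e9                 # 34 GJ per person per year (the 2020 figure quoted above)
continuous_power_w = annual_energy_j / SECONDS_PER_YEAR
print(f"continuous draw: {continuous_power_w:.0f} W")         # ~1,080 W, around the clock

servants = 60                          # the number of "energy servants" quoted above
per_servant_w = continuous_power_w / servants
print(f"implied output per servant: {per_servant_w:.0f} W")   # ~18 W, day and night

# By the same arithmetic, the affluent-country figure of 200-240 servants
# corresponds to roughly 113 to 136 GJ per person per year.
print(f"200 servants -> {200 * per_servant_w * SECONDS_PER_YEAR / 1e9:.0f} GJ/year")

So the average person draws the equivalent of about 1.1 kW continuously, and each of Smil's 60 servants would have to deliver useful work at roughly 18 watts, day and night, to match it.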

Central to Smil’s argument is, therefore, the observation that humanity has become deeply dependent on the cheap energy provided by fossil fuels.  The book explores this dependency in the production of electricity, food, industrial materials, and transportation.

  • Although the share of renewable energy (hydropower, solar, and wind) in global electricity generation had reached 32% by 2022, fossil fuels (coal and natural gas) remained the dominant source (about 60%). As the uptake of renewables continues, however, the challenge lies not so much in converting solar and wind energy to electricity as in addressing their uneven spatiotemporal distribution.  Tackling this challenge requires the ability to store massive amounts of electricity and to transmit it across vast distances. The former is contingent upon a technological breakthrough, and the latter, even if we tolerate the cost of transmission, needs expensive infrastructure that currently does not exist. As Peter Nihan pointed out, there is a reason why "95 percent of humanity sources its electricity from power plants less than fifty miles away".  Indeed, Germany has had to keep almost 90% of its fossil fuel power plants as backup, even though more than half of the country's electricity is now generated from renewable sources.
  • Agriculture depends on fossil fuels for synthetic fertilizers, among other things (e.g., power for machinery). Smil estimates that more than two thirds of the nitrogen needed for growing crops worldwide is supplied by fertilizers produced from natural gas using the Haber-Bosch process. If we decided to feed crops only with organic wastes, he concluded, more than half of the current global population would be wiped out, and those lucky enough to stick around would struggle to afford regular consumption of meat.  To drive home the crucial importance of fossil fuels to our food supply, Smil painstakingly calculated the life-cycle "oil contents" of several staple food items. Perhaps the most memorable example was the tomato grown in the heated greenhouses of Almería, Spain, which consumes more than half a liter of diesel fuel per kilogram of edible fruit. In contrast, a kilogram of chicken, the most "efficient" meat in terms of energy conversion, can be produced with as little as 0.15 liters of diesel fuel.
  • Smil also surveyed the ubiquitous presence of cement, steel, plastics, and ammonia in our lives. The production of these "four pillars of modern civilization", as he likes to call them, relies heavily on energy- and carbon-intensive processes, collectively accounting for about one sixth of the global energy supply and a quarter of all fossil fuel consumption.  Smil asserts that we won't be able to displace these materials anytime soon, given how extensively they are used today. Nor can they be readily decarbonized, because their established production processes have no "commercially available and readily deployable mass-scale alternatives".
  • As for transportation, there are two major obstacles. First, electric motors are still far from a viable substitute for the turbofan engines currently powering long-haul aviation.  After all, the energy density of today's best Li-ion batteries amounts to only about 5% of that of jet fuel.  Second, the supply of raw materials needed to build batteries – lithium, cobalt, and nickel, to name a few – may not be able to keep up with the enthusiasm of EV advocates.  To reach a 50% EV market share globally by 2050, Smil estimated that the demand for lithium, cobalt, and nickel would have to grow by factors of 20, 19, and 31, respectively.  Take cobalt for example.  A quick Google search shows that, as of now (2022), the world has a cobalt reserve of about 8.3 million tons and an annual production of about 190,000 tons. Per Smil's estimation, the production of cobalt would have to rise to nearly 4 million tons in 2050, or nearly half of the entire current reserve (a quick check of the arithmetic follows below).
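Here is that back-of-the-envelope check, using only the figures quoted in the list above:

# Sanity-checking the cobalt numbers cited above (2022 figures).
current_production_t = 190_000     # tonnes per year
reserve_t = 8_300_000              # tonnes
growth_factor = 19                 # Smil's projected growth in cobalt demand by 2050

production_2050_t = current_production_t * growth_factor
print(f"implied 2050 production: {production_2050_t / 1e6:.2f} Mt/year")      # ~3.6 Mt
print(f"as a share of today's reserve: {production_2050_t / reserve_t:.0%}")  # ~43%, in a single year

In other words, by mid-century a single year of cobalt production would consume over 40 percent of today's known reserve, which is precisely the strain Smil is pointing to.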

Having explained why we will be stuck with fossil fuels for the foreseeable future, Smil turned to what he considered hyperbolic responses to the unfolding climate crisis.  To be sure, Smil is no climate change denier. However, he does raise serious concerns about climate science and the way it is being portrayed to mobilize mass action.  Smil tells us that the cutting-edge global climate models have contributed little to advance our understanding of the greenhouse effect and its long-term consequences; the scientific community, after all, has been "aware of them for more than 150 years, and in a clear and explicit manner for more than a century".  He also questions the value of performing long-term forecasts with these large, ostensibly sophisticated, and complex models.  Such exercises may produce headlines decorated with impressive numbers.  However, riddled with "layered and often questionable assumptions", they are little more than "computerized fairy tales", whose primary function is to help the users "reinforce their own prejudices or to dismiss plausible alternatives", rather than to reliably inform decision making. Smil's distaste for complex forecasting models reminds me of Douglass B. Lee who, in his famous Requiem for Large-Scale Models, criticized the development of integrated land use and transportation models for the purpose of infrastructure planning. To explain why a more complex model isn't necessarily better, Lee wrote,

“Including more components in a model generates the illusion that refinements are being added and uncertainty eliminated, but, in practice, every additional component introduces less that is known than is not known”.

In a nutshell, the climate models cannot really tell us what is going to happen in 30 years, and to believe otherwise is "to mistake the science of global warming for the religion of climate change".  Thus, Smil rejects the grim warning that our fossil-fueled civilization will soon collapse unless we immediately take drastic action to decarbonize the world economy.  He also dismisses the grandiose claims that technological breakthroughs will somehow save humanity from this impending calamity, if only we have faith in them.  He ruthlessly mocks the "techno-optimists" –– who promise that 80 percent of the global energy supply can be decarbonized by 2030, and that an economy fueled by 100% renewables actually "needs less energy, costs less, and creates more jobs" –– and likens them to "green hymn" singers.

So, what is Smil for?  First, he prefers steady and mundane strategies to "sudden desperate actions aimed at preventing a catastrophe". Two specific actions he suggested do make sense: reducing food waste, which shockingly amounts to a third of the overall food supply, and curtailing the ownership of SUVs, whose wide adoption over the past decade has more than offset the decarbonization gains from the slow uptake of EVs.  Second, he wants us to "be agnostic about the distant future", to admit the limits of our understanding, to "approach all planetary challenges with humility", and to recognize that no amount of planning can assure ultimate success.

Smil likes to build his argument around numbers and facts.  However, absorbing all the numbers can sometimes become such a mental burden that the reader may be distracted from the flow of the book.  Of course, this may be a feature rather than a bug; after all, Smil also wrote a popular book called Numbers Don't Lie.  The chapter discussing risks and life expectancy seems a little baffling to me: it may be interesting in its own right, but it is a poor fit for the main theme.  That said, the book is a joy to read overall: Smil writes elegantly, his arguments are well constructed, and his conclusions convincing.  Harsh as his critique of the climate modelers may be, it did resonate with me –– and I am a bit of a modeler myself.  However, I am probably not the kind of audience that Smil intends (and needs) to win over. To young liberals like AOC and Greta Thunberg, Smil's even-handed message may be too conservative to swallow.  They might even find his lectures on "how the world really works" nerdy and old-fashioned, if not condescending and insulting.  Many a climate action enthusiast would probably never have the time and patience to hear the old man out anyway, preoccupied as they are by the continuous flow of bad news imploring them to do something, anything, here and now, and at any cost if necessary.

The End of the World is just the Beginning

It's a little embarrassing to admit that I was drawn to the book largely because of its provocative title. The "end of something" is one of my favorite genres – somehow part of me just cannot resist that whiff of fatalism.  In any case, if you crave apocalypse, Peter Zeihan will not disappoint.

I should first clarify that the "End" spoken of here is not really the "world" itself, but rather the "Order": the US-led, post-Cold War world order that centers on globalization.  Here is Zeihan's verdict on the Order, in his characteristically assertive tone:

“The globalization game is not simply ending. It is already over. Most countries will never return to the degree of stability or growth they experienced in 2019.”

Let me first walk you through why Zeihan thinks the game is doomed.

First and foremost, the Order is not normal. It was possible only because the sole superpower on earth, the US, guaranteed global security by suspending geopolitical competition.  Zeihan asserted that our current era is "the most distorted moment in human history" and thus cannot be sustained indefinitely.

Second, globalization has been subsidized by America’s massive military spending and voluntary de-industrialization of her heartland. However, in the past five decades, this policy has squeezed the once mighty American middle class so hard that a major course correction seems inevitable.

Third, globalization went hand in hand with industrialization, urbanization, and the women's rights movement, which, while pulling billions out of poverty, have depressed birth rates below replacement level in all but a handful of countries that "have managed a high degree of development".  Where these processes were artificially accelerated thanks to the rapid diffusion of technologies –– the so-called latecomer advantage –– populations are also aging at an artificially accelerated pace, fast approaching what Zeihan called "postindustrial demographic collapse".  In fact, Zeihan claims that many countries have already passed the demographic point of no return.  The shrinking population will pull the rug out from under the consumption-based global economy.

To summarize Zeihan's proposition: the Order is inherently unsustainable, can no longer be sustained as of today, and has already produced its own gravedigger, the impending population crash.

Well, that explains the “end”. What about “the beginning” part, namely what is going to happen when the Order dissolves?

The first casualty is long-haul transportation.  According to Zeihan, once the US withdraws from policing the ocean surface, the global shipping industry will kiss goodbye to its most important asset: an impeccable safety record. Even a small uptick in the risk of losing cargo to pirates or rogue states will drastically increase transportation costs, in the form of rising insurance premiums, lost time, and disruptions to today's hyper-efficient supply chains.  Without reliable and cheap transportation, moving raw materials and goods halfway around the world would make no economic sense.  As a result, every country must become less specialized and more self-sufficient –– growing all (or most) of one's own food, rather than importing it from another continent, will become the new norm.  The countries that have selected (or been selected) to turn their entire economies into niche specialties at the behest of globalization will face upheavals, if not existential threats.  Unfortunately, not every country will make it.  Zeihan predicts that the places that did not have "the right geography to make a go of civilization" before the Order will experience not only depopulation –– a euphemism for mass starvation –– but also de-civilization (whatever that means).

The next victim is what Steven Pinker would call the Long Peace.  Without effective law enforcement, the world will morph back into the jungle it once was. Under the rule of Darwinism, smaller nation states will have trouble protecting and feeding themselves.  A natural coping strategy is to coalesce around regional hegemons to form military and economic alliances that would look disturbingly similar to the great powers of past centuries. As these new empires begin to quarrel over resources and territories, violence ensues. Indeed, war already returned to Europe when Putin's Russia launched its bid to regain control over Ukraine about a year ago. Many people thought Putin had committed a huge blunder. However, if the future were to unfold as described in Zeihan's book, the invasion may well be understood as a strategic imperative: grabbing "the granary of Europe" to ensure Russia can feed her own people when things go south.

While desolation will be widespread, not every country will suffer equally. Zeihan thinks the US and its neighbors will be doing just fine, because collectively they are endowed with rich natural resources, relatively young and still growing populations, and above all a powerful military that can secure industrial inputs and protect trade routes wherever needed.  America's European allies, however, will not be so lucky.  The shockwave will break Europe up into small blocs led by the legacy powers – the likes of the UK, France, Germany, and Turkey – which unfortunately can no longer count on colonialism and imperialism to get ahead as in the good old days.

That the biggest loser will be China, Zeihan is absolutely certain.  The first and foremost problem for China is demography.  Most peoples in the world are getting older, but the Chinese would allow no one to beat them at the game of speed, aging included.  Even according to official data, China's population already began to shrink in 2022, with a fertility rate standing at 1.3 and (most likely) still dropping.  Thanks in part to a ruthless but successful family planning scheme, China has become "the fastest-aging society in human history", and at this point her demographic collapse is inescapable and imminent. Second, China is highly specialized in low-value-added manufacturing, for which long-haul transportation is indispensable.  This economic model must be completely restructured to cope with a post-Order world. However, the transformation will dramatically slow economic growth, thereby undermining the foundation of legitimacy and stability for the Chinese polity.  Third, China could even lose full access to the resources essential to support her current population, including agricultural products and their inputs (fossil fuels and fertilizers), because she does not have a navy capable of projecting power a continent away.  In fact, as Zeihan remarked contemptuously, the Chinese navy "can't make it past Vietnam, even in an era of peace."

Specious as Zeihan's doomsday theory might sound, he did attempt to back it up with witty geopolitical analysis and a (re)interpretation of the history of technology and economics.  In fact, most pages of the book are filled with such material, which, unlike the hysterical predictions, often makes for a more enjoyable read.  However, Zeihan's central thesis is so preposterous that it hardly deserves a serious rebuttal.  History tells us that doomsday predictions, especially ones this extreme, rarely come true.  It is almost certain that the Order won't end anytime soon, and that when the end does come, it won't be in the fashion Zeihan imagines.

Zeihan is right about the formidable challenges posed by rapidly aging populations, and about the unprecedented nature of the current demographic shift.  Older societies will grow more slowly because their people work and consume less on average. However, a slower accumulation of wealth does not have to trigger a panicked stampede and tear the world apart in its wake.  Living in an older world could simply mean we must fix our deeply entrenched obsession with perpetual exponential economic growth.

Zeihan is right about America's withering commitment to global security and leadership.  It may be true that the cost of upholding the Order has become too high for any single country to bear. However, it does not follow that the US and her allies would sit idly by, watching the Order collapse before their eyes.  If, as Zeihan prophesied, most countries will be so much worse off without the Order, why would they not fight with everything at their disposal to keep it alive?

Zeihan is also right about the worldwide retreat from globalization. The trend has been accelerated dramatically by COVID-19, which exposed the startling vulnerability of the current system to large-scale disruptions and forced many countries and corporations to reconsider the premium they place on resilience and reliability.  However, this does not mean international criminals and thugs will come out overnight in droves, wipe out intercontinental commerce, and shatter the Earth Community into pieces.  Homo sapiens has known better for far too long to willingly return to the dark ages.

Sometimes I doubt Zeihan actually believes his outlandish predictions. After all, he seems too smart to fall for the fallacies.  Maybe he thinks crying wolf gets people's attention anyway, not only that of ordinary readers like me, but also of politicians and even world leaders.  My other theory is that he was writing to vent his grievances.  To be sure, he pointedly denied this allegation, claiming in the epilogue that his book is not "a lamentation for the world that could have been".  Yet, right after this disclaimer, he grumbled about America's "lazy descent into narcissistic populism".  He chastised the Europeans for their inability to come together for "a common strategic policy".  His loathing of China and Russia feels strangely personal, and his harshest words and most vicious prophecies are always reserved for them, especially China.  Here is a remarkable paragraph he wrote at the end of the book.

“China and Russia have already fallen back on instinct, heedless of the lessons of their own long sagas. In the post–Cold War era, the pair benefited the most by far from American engagement, as the Order …created… the circumstances for the greatest economic stability they have ever known. Instead of seeking rapprochement with the Americans to preserve their magical moment, they instead worked diligently—almost pathologically—to disrupt what remained of global structures. Future history will be as merciless to them as their dark and dangerous pasts.”

In some sense China was indeed the largest beneficiary of the Order. However, this does not mean her incredible fortune will continue if she simply promises to stay the course.  A geopolitical analyst like Zeihan should know that strategic decisions are Markovian: they are always driven by the national interest in the future, not by the rewards received in the past. Could China preserve her magical moment by simply "seeking rapprochement with the Americans"?  I doubt it.  Once China is deemed to have become too powerful for the Order to contain, she must either faithfully subscribe to the Order's ideology or conspire to replace it with a new world order.  Judging by recent developments, China has unequivocally rejected the first option.  Is her choice a stupid and fatal mistake, the lesser of two evils, or, as Toutiao (头条) News would have you believe, about to usher in the greatest era in the five thousand years of Chinese history?  The die has been cast; only time can answer the question.

Games without rules

Before August 2021 I knew almost nothing about Afghan history. Nor did I care.  As a country, Afghanistan seemed neither interesting nor important, culturally or geopolitically.  Yes, it is famous for feverish Islamism, extreme poverty, and brutality against women; but there are plenty of such failed states to go around in the world.  Yes, it is nicknamed the "graveyard of empires"; but to most Chinese, there is nothing mysterious about burying empires in what Chairman Mao would call the "boundless ocean of people's war".

Then, in April 2021, President Biden announced the plan to withdraw from Afghanistan by the end of August that year.  Shortly after, Taliban soldiers began to emerge from caves and tunnels. As they swept through the country with breathtaking speed, their opponents, more than 300,000 strong and trained, equipped, and paid for by NATO, simply melted away.  To be sure, the Americans did not think highly of the Afghan legions procured with their money, initially predicting they could not hold off the Taliban offensive for more than a year.  Yet they were still caught completely off guard when the Afghan government collapsed in mid-August, well before the deadline of the planned withdrawal. If Americans had dreamed of a gracious if melancholy farewell from a country they thought they had liberated and rebuilt, the dream turned into a nightmare that will be remembered for generations to come.

Like most observers, I watched the events unfolding in Afghanistan that summer with shock, amusement, and confusion.  How could a poorly trained guerrilla force defeat a larger, better-equipped national army in just a few months? Why didn't most Afghans fight harder to protect their political freedom, personal liberty, and women's rights, the things that Americans insisted they should cherish the most? Even Biden seemed genuinely baffled at the Afghans' lack of will "to fight for their own future" despite Americans having given them "every tool they could need".  These questions prompted me to look for answers in Afghan history.  The book I stumbled on was Games Without Rules by Tamim Ansary, an Afghan American author who was born in Kabul after WWII. Ansary covers the 250-year history of modern Afghanistan, starting from its legendary founder, Ahmad Shah Baba, and ending with the Islamic Republic of the 21st century.  An easy and enjoyable read, the book did not just answer most of my questions; it answered them head on, as if the author knew the questions would be asked ten years later.

First, a few things that surprised me.

I once thought that Afghans had always lived under a somewhat barbarous regime similar to the Taliban's, and that it was the Americans who incidentally liberated them from subjection to their antiquated institutions.  I was wrong.

The Taliban movement was in fact a new phenomenon that bears little resemblance to most of the Afghan regimes that came before it.  The reign of Abdur Rahman Khan (1880-1901) –– also known as the Iron Amir –– may be a close match in terms of brutality and religious rigidity, but he is also remembered by many as the king who united Afghanistan under one flag and set her on the path toward modernization.  Like many peoples that came into contact with the West in the past two centuries, Afghans have gone through, sometimes not on their own initiative or terms, multiple iterations of modernization projects.  Amanullah Khan (1919-1929), who fought for and won Afghan independence from the British Empire, was a radical reformer.  Among his daring edicts was a new law meant to replace Shari'a, which guaranteed many basic human rights, including freedom of religion and women's rights – yes, a hundred years ago, Amanullah's code already proclaimed that no girl should be denied the right to education and no woman should be required to wear the burqa.  However, Amanullah's reform was way ahead of its time.  Afghans rebelled and kicked him out of the country; he ended up in Italy as a refugee, where he spent the rest of his life working as a carpenter.  After a few years of turmoil, the reign of Zahir Shah (1933-1973) charted a more moderate and successful trajectory, which culminated in the enactment of the 1964 constitution.  By introducing free elections, a parliament, civil and political rights, and universal suffrage –– and effectively banning members of the royal family from holding high-level government offices –– the constitution created a modern democratic state that is, in principle, similar to the Islamic Republic of the 2000s. By the early 1960s, Ansary wrote,

“in the big city of Kabul, women were beginning to appear in public showing not just their faces but their arms, their legs, even cleavage. Afghan girls of the elite technocratic class were beginning to cotton to Western fashions. They were wearing miniskirts and low-cut blouses. Nightclubs were popping up, which served beer and wine and whiskey—and not just to foreigners. Afghans were drinking and making no bones about it.”

So, how did Afghanistan descend from this lovely modern democracy to the Taliban's Islamic Emirate? Well, it had much to do with geopolitics.

Contrary to my naïve preconception, Afghanistan has been enormously important to the struggles of the great powers, especially those between Russia and the West. In the 19th century, the Russians attempted to reach the Indian Ocean from Central Asia. Determined to protect their enormous trade interests in the region from Russian interference, the British took Afghanistan as their protectorate by force.  If the objective was to stop the Russians, the British succeeded.  However, their control of the country was always fragile and treacherous.  According to Ansary, they had "won jurisdiction of every patch of Afghan territory their guns could cover—but not one inch more". Eventually, after countless lives on both sides were lost to violence, and after a world war that permanently weakened Europe, the British granted Afghans independence. The domination of great power politics, however, did not fade away. Instead, it morphed into a form that briefly became a benefaction, when the Russians and the Americans, in their attempts to recruit Afghans to their causes in the Cold War, offered extravagant aid packages.  In the 1950s and 1960s, the two superpowers "constructed over twelve hundred miles of superb paved roads through some of the planet's most difficult terrain", which connected "all of Afghanistan's major cities". Unfortunately, this relatively peaceful and prosperous era was interrupted by the rise of the communist movement in the late 1960s.  Social unrest ensued, followed by three coups d'état in the 1970s.  From the upheavals a deeply unpopular communist regime emerged in 1978, whose internal strife soon killed its pro-Soviet leader, Nur Mohammed Taraki, and forced his slayer and successor, Hafizullah Amin, to consider jumping ship to the Americans. The Soviet Union intervened, plunging into a 10-year war from which she would never recover.  Like the British in the 19th century, the Russians soon discovered that their war machine could easily crush the Afghan army and state, but not the Afghan people.  Frustrated by the tenacious opposition led by the Mujahideen (Islamic jihadists), the Russians resorted to a scorched-earth policy aimed at depopulating rural Afghanistan. Their grotesque tactics did little to win the war but unleashed a humanitarian catastrophe of epic proportions.  According to Ansary, a million Afghans were killed and six million displaced in 1985 alone.

An entire generation of Afghan boys would grow up in the refugee camps and receive their education in religious madrassas (schools).  Having suffered through the worst childhood on earth, they were "allowed to imagine that it might be their destiny to establish the community that would save the world".  From the schools of these refugee camps would rise the loyal followers of Mullah Omar, the founder of the "student movement", or the Taliban (literally, "students").  Under Omar's leadership, the Taliban would win a bloody civil war in the 1990s, only to be dethroned a few years later in the wake of America's anti-terrorist crusade.

The rest is history.

Let me get back to the questions that drew me to this book in the first place. Why didn't the Afghan people fight harder for their freedom? The short answer is that there were two Afghan peoples: the westernized urban elites and the common folk of the countryside. The "Afghan people" often spoken of in the Western media might refer only to the former.  While the elites considered the Taliban an archenemy, the masses did not see the Taliban's moral and religious imperatives as conflicting with theirs.  And while the elites were supposedly in charge, they never gained full control of the other Afghanistan.  Most importantly, when push came to shove, they had no idea how to "fight the fight and win the war".

Why is Afghanistan so deeply divided?  As a collection of tribes and ethnic groups loosely coalesced around an Islamic culture over a tough terrain, Afghanistan is an inherently weak state. This made it very hard for anyone, even the most powerful country in the world, to penetrate the layers of physical and cultural barriers that have historically separated urban centers from rural communities. Without a strong state, most Afghans naturally turned to tribal and religious authorities for such basic state services as security, law enforcement, and education. Ansary likened ruling Afghanistan through a puppet government to swinging a pot by grasping its handle: the foreign powers thought they could swing the pot however they wanted; yet, because the handle was never firmly attached to the pot, they often ended up shattering the pot while holding nothing but a useless handle.

The innate weakness of the Afghan state was further compounded by the powerful legacy of Islam and the recurrent interventions by the West.  Unfortunately, Islam and the West have long been at odds with each other, and the animosity has only grown stronger in the past century.  As a result, the head of the Afghan state faces a constant dilemma.  On the one hand, needing the support of the West––money, permission, or both––to secure power and modernize the country, they must subscribe, or at least pay lip service, to Western values.  On the other hand, they cannot afford to alienate the masses, who remain loyal to traditional values, lest they be thrown out of the palace like Amanullah.  The balance between the two acts is so delicate that few could make it work, not for long anyway.  Consequently, modernization in Afghanistan, because it is "foreign" in name and in essence, has actually widened the cultural and wealth chasm between the elites, who welcomed Western influences, and the masses, who continued to resist them.  Any attempt by a foreign power to correct course by direct intervention, regardless of method or intention, only serves to pour fuel on the fire.

Seen in this light, the Bush plan to rebuild Afghanistan after the 2001 invasion was doomed from the beginning.  On display in that 20-year nation-building project, largely funded by American taxpayers, was not so much America's idealism as her arrogance and ignorance of history.  Biden was right to cut the losses as soon as he could.  In the end, Ansary told us, Afghanistan will probably do okay, regardless of who is in charge, if only other countries are willing to leave her alone.  Let's see if the world will heed his advice this time.