
Theory of Moral Sentiments

The Theory of Moral Sentiments is Adam Smith’s first book. Compared to The Wealth of Nations, his magnum opus, it is less well known. Stephen Dubner discussed it extensively in a Freakonomics series, which argued that Smith has been misread by modern economists like Milton Friedman, and that the real Adam Smith was in fact an “affable moral philosopher” rather than “the patron saint of cutthroat capitalism”. The podcast piqued my interest in Adam Smith and his theory of moral sentiments. The book was not an easy read for me, as it took some time to adjust to the 18th-century writing style. However, I think the time was well spent.

Central to Smith’s theory is the proposition that the perception of right and wrong comes from sense and feeling rather than reason. Human happiness, according to Smith, chiefly “arises from the consciousness of being beloved”. Because we desire to be loved by our brethren—taken to mean relatives, friends, neighbors, and countrymen—we seek their approval and avoid their disapprobation. It is through this pursuit of love and happiness that humans acquire sympathy, the ability to share and approve the feelings or interests of another person. However, to truly sympathize with another’s feelings—to empathize with them (although Smith never used this term)—we must first overcome our own selfishness.

To make this crucial point, Smith proposes a thought experiment, which imagines how “a man of humanity in Europe” would react to the news that a huge earthquake has suddenly destroyed China and all its people. He would, Smith wrote,

“express very strongly his sorrow for the misfortune of that unhappy people, he would make many melancholy reflections upon the precariousness of human life, and the vanity of all the labours of man, which could thus be annihilated in a moment. He would too, perhaps, if he was a man of speculation, enter into many reasonings concerning the effects which this disaster might produce upon the commerce of Europe, and the trade and business of the world in general.”

However, after “all this fine philosophy was over”, the man would return to his regular life as if nothing had happened. Indeed, an accident of trivial scale—compared to that catastrophe in China—befalling him, say the loss of a little finger, would cause him to lose more sleep than he would over “the destruction of that immense multitude”. If this is so, Smith asks, would this person be willing to sacrifice the lives of all those Chinese to prevent that “paltry misfortune to himself”? Smith claims humankind has never produced a villain capable of entertaining such a horrific thought. On this point I disagree with him, though his faith in humanity is understandable. After all, Smith never witnessed the world wars, heard of the Holocaust, or met the infamous dictators of the 20th century.

Smith claims what prevents most people from placing their own interests above the greater interests of others is an impartial spectator that grows and resides within them.  The impartial spectator is “the great judge and arbiter of our conduct”, who teaches us that

“we are but one of the multitude, in no respect better than any other in it; and that when we prefer ourselves so shamefully and so blindly to others, we become the proper objects of resentment, abhorrence, and execration. It is from him only that we learn the real littleness of ourselves, and of whatever relates to ourselves, and the natural misrepresentations of self-love”.

Thus, to become a moral person is to forge and train this impartial spectator, and to be guided by him. There is a subtle but crucial difference between a moral person and a virtuous one: the former merely follows the impartial spectator’s rules, whereas the latter adopts and embodies his moral sentiments. In some sense, the virtuous person becomes a proxy of the spectator, unified with him in both spirit and conduct, thereby entering a state of spiritual freedom in which the bounds of moral constraints are no longer felt.

Impartiality is central to many theories of morality. For example, John Rawls’ “veil of ignorance” serves as an instrument of impartiality in his theory of justice. Smith’s impartial spectator also resembles what a Confucian would call the “inner sage” (内圣), or the “innate moral knowledge” (良知) in Wang Yangming’s Theory of Mind (心学). The unifying state achieved by a virtuous person, I believe, is what the Theory of Mind calls 知行合一 (the unity of knowledge and action), and the process of arriving at that state is 致良知 (extending one’s innate moral knowledge). Like Smith, Wang also emphasizes sympathy as the approach to morality. In Instructions for Practical Living (传习录), he writes,

“世之君子惟务致其良知,则自能公是非,同好恶,视人犹己,视国犹家,而以天地万物为一体。” (If the gentlemen of the world devote themselves solely to extending their innate moral knowledge, they will naturally judge right and wrong impartially, share common likes and dislikes, regard others as themselves and the state as their own family, and take heaven, earth, and all things as one body.)

Thus, with the help of the impartial spectator (良知), the virtuous person (君子) can be just (公是非) and have empathy (同好恶,视人犹己).

Smith believes moral norms first emerge to forbid actions that inflict pain on a person, such as endangering their life and body, depriving them of their possessions and property, and violating their right to basic liberty. This is because humans are disposed to sympathize with sorrow more strongly than with joy. Moral norms are extremely important, as they form the laws of justice, without which human society cannot survive. Yet the sense of justice only enables people to behave with minimum propriety and decency. To Smith, it is a mere “negative virtue” that does no real positive good.

Throughout much of the book, Smith explains the transition from adhering to basic moral norms to cultivating positive virtues. The mechanism is still sympathy, and the secret is to overcome the less desirable aspects of human nature.

What makes us jealous of the success or good fortune of another person? Again, the reason is that humans are generally more focused on avoiding pain than on seeking happiness. As a result, it is more difficult for us to will the good of our brethren—i.e., to truly love them—than to avoid harm to their person and property. The sentiment of envy is strongest when the person is regarded as an upstart. As Smith notes,

“The man who, by some sudden revolution of fortune, is lifted up all at once into a condition of life, greatly above what he had formerly lived in, may be assured that the congratulations of his best friends are not all of them perfectly sincere.”

However, thanks to the impartial spectator, we are also ashamed of our own envy, and “often pretend, and sometimes really wish to sympathize with the joy of others”. A man who has fought and won this battle against envy is capable of that magnanimous act of willing the good of his brethren, loving them as much as he loves himself. He may also learn to maintain prudence and humility no matter what stellar successes he has just achieved and how much he thinks he is entitled to boast about them. Sympathy reminds him that, by overly displaying joy in his achievements, he could arouse envy and jealousy among his brethren, along with the shame and self-pity that come with them. Therefore, he always “endeavors, as much as he can, to smother his joy, and keep down that elevation of mind with which his new circumstances naturally inspire him.”

Smith was not a utilitarian, despite being revered as the father of economics—which is built on the notion of the utility-maximizing homo economicus—and invested as a god of capitalism. As the book makes abundantly clear, Smith did not endorse, much less celebrate, cold-blooded self-interest. His famous “invisible hand” explains why society can work well despite, not because of, its members being utterly self-interested. Surprisingly, he made the same point in this book, which was first published in 1759, seventeen years before The Wealth of Nations. He writes that the rich,

“though they mean only their own conveniency… are led by an invisible hand to make nearly the same distribution of the necessaries of life, which would have been made, had the earth been divided into equal portions among all its inhabitants, and thus without intending it, without knowing it, advance the interest of the society, and afford means to the multiplication of the species.”

While Smith believes that self-interest can be guided toward positive outcomes by the invisible hand, he clearly opposes such consequentialism in matters of morality. He was deeply troubled by the fact that “the world judges by the event, and not by the design”, which he called “the great discouragement of virtue” throughout the ages. Smith concedes that, in the realm of justice, punishment should be proportional to the consequences of our actions rather than our intentions. However, he forcefully argues that the opposite should apply when assessing our own character and conduct.

In this regard, Smith is nearly a moral idealist. He believes we should strive for “exact propriety and perfection” rather than settle for the lower standard “which is commonly attained” by most people. Smith argues that focusing on the inferior standard is what led many historical figures to become arrogant, presumptuous, and extravagantly self-admiring. Self-admiration may be necessary for their success, as it drives great men to pursue ventures that a more cautious mind would never consider. “When crowned with success”, however, this presumption “has often betrayed them into a vanity that approached almost insanity and folly”, and “precipitated them into many rash and sometimes ruinous adventures”. Somehow, Elon Musk’s face crossed my mind when I read the above passage.

Since to be loved by others generally means to receive their attention and praise, a great deal of human energy has been consumed by the struggle to stand out and be recognized.  Smith refers to this desire for attention and praise as “vanity”.  Although vanity is not inherently a vice, it becomes problematic when it is directed towards the wrong objects. Therefore, writes Smith,

“the great secret of education is to direct vanity to proper objects”.

Because a man sees wealth and power attract attention and submission, he is often compelled to pursue them. Similarly, observing that fame and glory earn respect and praise, he aspires to be famous and honored. Consequently, he mistakenly equates these pursuits with achieving love and happiness. Smith tells us that

“nature has endowed a man, not only with a desire of being approved of, but with a desire of being what ought to be approved of.”

Wealth, power, fame, and glory all signal approval from others, but not necessarily “what ought to be approved of”. To Smith, pursuing praise and pursuing what is praiseworthy are distinctly different. The former often leads us to chase misguided objects of vanity, while the latter inspires a genuine love of virtue. A virtuous man derives little pleasure from praise where it is not due; instead, he often feels the greatest satisfaction in doing what is praiseworthy, even though “no praise is ever to be bestowed upon it”. Thus, “to be that thing which deserves approbation” is “an object of the highest” to him. If he succeeds in this endeavor, he no longer needs approval from others. He becomes assured of “the perfect propriety of every part of his own conduct” and content with his self-approbation, which, according to Smith, is virtue itself, the only thing he can and should care about.

Smith’s emphasis on praise-worthiness rather than praise, and on self-approbation rather than approval by others, appears to be rooted in Stoicism. Smith writes that the Stoics believed

 “human life…ought to be regarded but as a mere two-penny stake. …Our only anxious concern ought to be, not about the stake, but about the proper method of playing. If we placed our happiness in winning the stake, we placed it in what depended upon causes beyond our power, and out of our direction. We necessarily exposed ourselves to perpetual fear and uneasiness, and frequently to grievous and mortifying disappointments. If we placed it in playing well, in playing fairly, in playing wisely and skillfully; in the propriety of our own conduct in short; we placed it in what, by proper discipline, education, and attention, might be altogether in our own power, and under our own direction. Our happiness was perfectly secure, and beyond the reach of fortune.”

In a nutshell, to shield our happiness from the whims of fortune, we should remain as indifferent as possible to praise, recognition, and all the superficial allurements of vanity. This philosophy aligns with a precept I learned many years ago from a Chinese author: 但行好事,莫问前程 (Focus on doing the right thing, rather than on achieving the perfect outcome).  It also echoes my favorite quote from Daniel McFadden’s Nobel Prize autobiography (the emphasis is mine):

“My parents taught me that to lead a virtuous life, I should be modest, take my satisfaction from work done well, and avoid being drawn into competition for status and rewards.”

This idea is precisely what I have been trying to tell any of my doctoral students who would listen: To truly enjoy academia, you must find joy in the research itself, independent of any external rewards it might bring, whether that’s funding, awards, or even the opportunity to change the world.

Marco Nie

April 14, 2024, Evanston, IL.

Chip War

Chris Miller masterfully tells the story of the spectacular rise of the semiconductor industry (the “chip”) and its ever-growing entanglement with geopolitics (the “war”). It’s a fascinating narrative, filled with ups and downs, twists and turns, heroes and villains, and a cast of victors and losers — well worth reading for its own sake. It is a must-read if you want to understand the current U.S.-China relationship and the slow-moving crisis hanging over the Taiwan Strait. Semiconductors have become central to the U.S.-China relationship, with one side aggressively playing catch-up and the other striving to maintain its waning lead. Taiwan has the misfortune to be caught in the middle of this seemingly inevitable epic clash, not so much because it offers a beacon of hope for “the free world” as because it houses Taiwan Semiconductor Manufacturing Company (TSMC), the sole fabricator of the world’s most sophisticated chips.

As I read about the legends of semiconductors unfolding in the book, I came to realize my own ignorance about an industry that has profoundly transformed humanity.

I did not know William Shockley who, along with two other scientists at Bell Labs, invented the transistor. He also started a company, Shockley Semiconductor Laboratory, that counted Gordon Moore (yes, the Moore after whom Moore’s law is named) and Robert Noyce among its first hires. The pair would later rebel against Shockley and go on to become giants of the burgeoning industry. They first founded Fairchild Semiconductor, which supplied the computing power to land men on the moon in the 1960s, and then Integrated Electronics, or Intel – a household name in today’s tech world.

I had never heard of Texas Instruments (TI) before I read the book. But among TI’s early employees are Jack Kilby, who won the Nobel Prize in Physics in 2000 for inventing the integrated circuit (集成电路); Jay Lathrop, who pioneered photolithography (光刻); and Morris Chang, an immigrant from mainland China and the founder of TSMC.

Nor could I distinguish between memory chips and logic chips, PC chips and smartphone chips, or deep ultraviolet (DUV) lithography and extreme ultraviolet (EUV) lithography. What struck me the most, however, is the incredible difficulty of keeping up with Moore’s law, which posits that the number of transistors on a microchip doubles approximately every two years. Indeed, cutting-edge chips have become so complex that TSMC is the only manufacturer in the world capable of fabricating them at scale. TSMC does this with “ultra-pure silicon wafers and specialized gases from Japan” and machinery that “can etch, deposit, and measure layers of materials a few atoms thick”. Supplied by only five companies, these tools themselves took decades and an astronomical amount of money to develop, and their core technologies are closely guarded trade secrets. Take the development of EUV lithography, for example. The project was launched in the early 1990s thanks to a $300-million investment from Intel. However, it wasn’t until nearly 30 years and billions of dollars in spending later that the Dutch manufacturer ASML finally introduced EUV scanners to the market in 2018, at a price of $100 million apiece for an expected lifetime of four years. For a layman like me, it is mind-boggling to read just how the scanner produces the EUV light needed for fabrication:

The best approach was to shoot a tiny ball of tin measuring thirty-millionths of a meter wide moving through a vacuum at a speed of around two hundred miles per hour. The tin is then struck twice with a laser, the first pulse to warm it up, the second to blast it into a plasma with a temperature around half a million degrees, many times hotter than the surface of the sun. This process of blasting tin is then repeated fifty thousand times per second to produce EUV light in the quantities necessary to fabricate chips.

It does sound like a miracle, as Miller put it, that something this delicate not only works, but “does so reliably enough to produce chips” that can make lots of money.

This sums up the history of chips. What about war?  The book describes three chip wars that took place between the U.S. and her rivals in different eras.

The war with the Soviet Union, fought mostly in the first half of the Cold War, was won with relative ease. The USSR treated its semiconductor industry as a weapons program, similar to its treatment of nuclear and space technology. In hindsight, this strategy was a huge mistake, as the immensely profitable civilian applications of semiconductors turned out to be a driving force for innovation that no level of government spending could hope to rival. Faced with the lack of progress, the Soviets tried to copy U.S. technology through espionage. Yet this did not work either. For one, even the most skilled spies cannot steal all the technical know-how involved in complex production processes. More crucially, the “copycat” mindset inevitably condemned the Soviets to a perpetual game of catch-up, rather than allowing them to lead the way.

Japan was a much greater threat. Thanks to favorable technology transfer and trade policies that the U.S. willingly offered in exchange for Japanese support of America’s global order, Japan’s semiconductor industry evolved from a niche player specializing in consumer electronics in the 1960s and 1970s into a formidable powerhouse in the 1980s. By 1985, Japan had begun to outspend the U.S. in capital investment for semiconductors, and by the end of that decade, it had become the dominant supplier of the world’s Dynamic Random-Access Memory (DRAM) chips (90% market share) and lithography equipment (70%). Trade disputes soon ensued. The skirmish started with the U.S. accusing Japan of espionage, double-dealing, and dumping. It escalated to the point where the U.S. openly threatened tariffs, ultimately compelling Japan to impose quotas on its exports of DRAM chips to the U.S. in 1986. This did not help Silicon Valley recover its lost ground, however. Eventually, nearly all American companies, including Intel, were driven out of the DRAM and lithography markets.

Carried away by their astonishing success, the Japanese began to dream about, in the words of Sony founder Akio Morita, overcoming the United States economically and becoming “number one in the world”. The U.S. was understandably frightened by the pent-up nationalism revealed in The Japan That Can Say No—which Morita co-authored—and by the gloomy prospect of relying on a foreign country to maintain the most important edge of her military. In response, the U.S. launched a campaign to curtail Japan’s dominance in the chip-making industry. The core strategy involved mobilizing South Korea (Samsung), Taiwan (TSMC), and to a lesser extent, Mainland China, to erode Japan’s competitive advantages by enabling cut-throat competition against her companies. It worked like magic. By 1998, Japan’s share of the DRAM market had fallen to 20% from a near monopoly less than a decade earlier, while South Korea dethroned Japan to become the largest producer of memory chips. Not only did Japanese firms suffer tremendous share losses in the DRAM market, but they also missed the emerging opportunities in the nascent PC market. In what Miller dubbed one of the greatest comebacks in industry history, Intel, under Andy Grove’s leadership, reinvented itself as the king of microprocessors for PCs. For what seemed like an eternity in this fast-paced industry, Intel was the icon of the PC business, the blue trademark of its processors among the most recognizable features on PCs sold globally. Indeed, I remember the first PC I ever owned—which my five college roommates and I purchased in 1995 using pooled funds—simply as a 486, because it was powered by Intel’s 486 microprocessor. According to Miller, that very chip was the first ever with over a million transistors!
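As an aside, it is fun to check that factoid against Moore’s law with some back-of-the-envelope arithmetic. Here is a minimal sketch, assuming the commonly cited count of roughly 2,300 transistors for the Intel 4004 of 1971 (my baseline, not a figure from the book):

```python
# Moore's law: transistor counts double roughly every two years.
# Baseline: Intel 4004 (1971), ~2,300 transistors (a commonly cited figure).
base_year, base_count = 1971, 2_300

year = 1989                                  # the year Intel introduced the 486
doublings = (year - base_year) // 2          # nine doublings in 18 years
estimate = base_count * 2 ** doublings       # 2,300 * 512 = 1,177,600
print(f"{year}: ~{estimate:,} transistors")  # prints "1989: ~1,177,600 transistors"
```

The 486 is usually credited with about 1.2 million transistors, so an eighteen-year run of two-year doublings lands almost exactly on Miller’s “over a million” figure.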

This brings me to the latest, still ongoing chip war with China. On the surface, the plot of the Chinese edition bears resemblance to that of Japan: the wary incumbent hegemon, spooked by the rapid ascent of an upstart, is compelled into massive counteractions to neutralize the threat, real or imagined. However, unlike Japan, China has never really overtaken the U.S. in any high-end technology area of the semiconductor industry. Not even close. According to Miller, toward the end of the 2010s, China had less than 1% of the global chip design software tool market, about 2% of core intellectual property related to “the building blocks of transistor patterns”, 4% of silicon wafers, 1% of fabrication machinery, 5% of chip design, and 7% of fabrication, concentrated in non-cutting-edge chips. If that is the case, has the U.S. overreacted with her heavy-handed sanctions and embargo against China’s tech sector?

Regarding this, Miller’s insights on the crackdown on Huawei were particularly enlightening. He acknowledged that the charges against Huawei, which included theft of intellectual property, ties with the Chinese military, and violation of U.S. sanctions on Iran, were “ultimately a sideshow” – basically a euphemism for made-up excuses. The real issue was, Miller wrote,

That a company in the People’s Republic of China had marched up the technology ladder… Its annual R&D spending now rivaled American tech giants…, it was the most successful exporter [of all Chinese tech companies], giving it detailed knowledge of foreign markets. It not only produced hardware for cell towers, [but] it also designed cutting-edge smartphone chips. It had become TSMC’s second biggest customer, behind only Apple.

Therefore, the real question was: “Could the United States let a Chinese company like this succeed?” That is a rhetorical question, in case you did not catch the drift. But why?

I can think of several reasons.  

First, unlike Japan, China was not a liberal democracy. Judging by what has been going on in the country since the early 2010s, China has absolutely no interest in becoming one anytime soon. To make things worse, under the current leader, China has repeatedly asserted that perhaps her system, rather than America’s, should be the model that the rest of the world admires, envies, and emulates. Even when Morita lectured Americans about the superiority of the Japanese system, it was seen in Washington as a serious provocation – and he wasn’t even talking about authoritarianism.

Second, unlike Japan, China has never pledged allegiance to the America-led world order. In fact, over the past decade, China has decisively shifted from biding her time as a halfhearted participant in that order to openly flirting with the idea of challenging it: economically, technologically, and, if necessary, militarily.

Third, China has increasingly embraced nationalism as a rallying cry for its people to coalesce around the current regime. However, the inherent logic of this political agenda requires the “unification” of the motherland, regardless of the cost. Whether this stance is a concrete national policy or merely a slogan to appease the little pinks on the internet remains to be seen. Yet it does place China on a collision course with Taiwan and the U.S. When push comes to shove, the U.S. could find herself in a treacherous standoff with what she now regards as a “peer competitor”. The stakes are incredibly high. Retreating from the American commitment to Taiwan’s security would spell the end of the current global order, potentially plunging the world into chaos. More importantly, losing Taiwan could hand China a golden opportunity to erode America’s technological supremacy, which has been a cornerstone of her national security since at least World War II.

As of this writing, China has been denied access not only to high-end chip-making technology but also to the high-end chips themselves. Lacking essential tools (e.g., EUV scanners) and raw materials (e.g., ultra-pure silicon wafers), China’s semiconductor industry, as well as her tech sector in general, is likely to fall behind. Indeed, it has already missed out on the latest gold rush in AI, particularly the triumph of large language models, partly because her access to computing power (GPUs) was severely restricted by sanctions.

Could China break this “neck-strangling” (卡脖子) situation entirely through its own initiatives? Before reading this book, I thought there must be a way, if the nation threw its entire weight behind the challenge.  I am much more pessimistic now. If there’s one thing I’ve learned from the book, it’s that the creation of cutting-edge chips can no longer be achieved within the borders of a single country, not even the U.S. Moreover, the pursuit of technological innovations as a nationalistic project may not be a sound strategy for long-term success, as demonstrated by the failure of the USSR.

Could the chip war have been averted had China chosen a different path in the early 2010s?  What impact will the current conflict have on the “China Dream” and the lives of 1.4 billion Chinese? No one knows the answer.  One can only hope that the war remains confined to the realm of chips and continues to be fought by scientists and engineers with their computers, rather than soldiers with guns and missiles.

 

Marco Nie

Wilmette, IL

3/3/2024

A letter to Henry Ford

As part of her social studies homework, my daughter, Jolene, wrote a letter to Henry Ford, imagining it from the perspective of Greta Thunberg. I liked the letter but thought it did not sound much like Greta. Then it occurred to me that I could ask ChatGPT to rewrite it in Greta Thunberg’s style. Intrigued by the idea, Jolene enthusiastically consented to this experiment. She has also given her permission for both the original letter and ChatGPT’s adaptation to be shared here. I hope you have as much fun as I had reading these!


Jolene’s original letter

Herr Henry Ford,

Isn’t it interesting how one day can change your life, and you wouldn’t know? I remember that day like it was yesterday. I remember sitting down at my desk at school, not knowing that my story was just about to start. The teacher announced that we would be watching a film about the state of our environment. I remember how as it progressed, my feelings grew from curiosity to horror as pictures of endless mountains of garbage floating in the ocean flitted across the screen. I will be the first to admit it: I cried. I remember every vivid detail, though my classmates all forgot the atrocities they just witnessed mere minutes after the film ended, apparently deciding that discussing their weekend plans was more important than the outrageous amounts of garbage and smoke that pollute our planet. It’s a strange feeling, looking back and knowing that this film would be forever engraved in my mind. I’m sure you would feel the same about the assembly line, looking back at when you invented it, when you didn’t know you would revolutionize the automobile forever. I suppose that means I shouldn’t blame you for the polluting smog cars let out into the environment every day. Yet, as I sit here in my room, sun warming the paper I write on, I can hear the cars passing my home, and I can picture the cloud of smoke that lingers behind. Too much has been done to our Earth for me not to take a stand.

I understand that when you were innovating and popularizing the automobile, your intentions must have been good: to make people’s lives easier and better. If that was your goal, then you achieved it. At least, at the time you achieved it. On the surface level, it seemed perfect: more jobs were created that more workers were qualified to do, which made our country’s economy stronger. Cities became less crowded as people moved to neighboring towns to form suburbs since they could drive to work. It bridged a gap between different classes of people as it became a norm that men and women, rich and poor alike drove together on the roads. But at what cost?

In 2024, the global temperature has increased by about 1ºC since 1900, with the majority happening after 1970. As a result, the ocean has warmed as well, causing ice caps to melt and the sea level to rise. A main cause of this is the burning of greenhouse gasses. Only approximately 8% of Americans do not own a car. A glance out the window during a busy time of day will show streets packed with everything from fancy red convertibles to rusty pickup trucks. I have worked to decrease the damage our race causes to the Earth, yet it still rises. People are all for helping to stop climate change until it actually inconveniences them. Few are willing to give up constantly driving their car instead of walking, biking, or using public transportation, or to fly across the world a few less times. While in my mind I realize that not all of this is your fault, my heart protests against it, aching for the state of our world. Neither of us can help it if your life’s work happens to clash against mine.

All of this goes to say that, while I respect how hard you worked to make the automobile what it is today, I will work twice as hard to ensure that the damage is undone. I will not apologize for that.

Greta Thunberg


ChatGPT’s Version

Dear Mr. Henry Ford,

The day everything changed for me was just another day at school, until it wasn’t. We watched a film about our planet’s environmental crisis, and the images of pollution and destruction struck me to the core. It was a moment of awakening — a stark, horrifying realization that we are in the midst of a grave crisis. I remember the despair I felt, a sharp contrast to the indifference of my peers. Such moments define us, much like the day you pioneered the assembly line must have defined you, unknowingly setting off a chain of events that would lead us here.

Your innovation brought mobility and progress, but at a catastrophic environmental cost. We now face a planet warmed by over 1°C since 1900, mainly due to emissions post-1970. The burning of fossil fuels, a legacy of the automotive boom you ignited, has pushed our planet to the brink. Our ice caps are melting, our seas are rising, and our future is at stake.

I understand that the environmental crisis we face today cannot be solely laid at your feet. However, the industry you created has played a pivotal role in this unfolding tragedy. As I witness the relentless march of cars outside my window, each contributing to a cloud of pollution, I am reminded of the urgency of our situation.

Mr. Ford, while your intentions may have been to better the human condition, the reality is that we are now paying a heavy price for this ‘progress.’ I respect your ingenuity, but I cannot overlook the environmental devastation it has wrought. I am committed, more than ever, to fight for a sustainable future, to challenge the status quo, and to reverse the damage we have done to our Earth. This is not just a promise; it is an imperative.

We are at a crossroads, and I will not rest until we choose the path of sustainability, responsibility, and survival. I will advocate, protest, and work harder than ever to ensure a future where technology serves our planet, not destroys it. This is the mission of my generation, and I accept it wholeheartedly.

Yours in determination,
Greta Thunberg

Crusaders

Dan Jones is a great chronicler. He knows how to turn dry events into vivid stories, which characters to focus on so his narrative always has a human anchor point, and when to make witty quips without being seen as overly opinionated. Some writers have the talent to captivate their audience with no more than the charm of their language – I think Jones is one of them.

“Crusaders” covers nearly four centuries of medieval history, from the end of the eleventh century CE, when Pope Urban II began to preach the Holy War against the infidels in the east, to the conquest of Jerusalem by the Ottoman Empire in 1517. Officially, crusading met its calamitous end in 1291, when Acre, the last stronghold of the Christian kingdoms in the east, fell into the hands of the Mamluks. However, as a phenomenon, crusading continued until Columbus’s discovery of America—which was “full of things to trade or steal, and teeming with people to subjugate, convert or kill”—convinced Western Christendom that its future “lay to the west, not the east”.

Out of this eventful and bloody chapter of human history stand a few prominent and complicated characters that I think deserve some ink, even in a brief book review.

Richard the Lionheart, the legendary king of England who spent most of his adult life in France, was the commander in chief of the Third Crusade. Rumored to be gay, Richard was famed for his martial prowess, courage, and generosity. He was also a man of letters who loved lyric poetry and music and courted the poets of the High Middle Ages. Under Richard’s leadership, the crusaders retook Acre and delivered a string of humiliating blows to the army of the mighty sultan Saladin of the Ayyubid dynasty, but ultimately fell short of seizing Jerusalem itself. The struggle ended with a negotiated truce that placed the coastal towns between Jaffa and Acre under Christian rule, while allowing Christian pilgrims and merchants access to the Holy City. Although the settlement helped stabilize the Kingdom of Jerusalem for decades to come, it forever transformed crusading from a religious imperative into an enterprise of colonization.

Like many powerful men of his age, Richard was often rebuked in history books for being lustful, greedy, and cruel. I suspect some of Richard’s vices were exaggerated by clergymen who resented being forced to pay for his military adventures. That said, the extent of Richard’s cruelty is indisputable. The most notorious episode was the execution of 2,600 unarmed and bound prisoners of war at Acre, in retaliation for Saladin’s failure to fulfill his promise to “return the relic of the True Cross and pay his bounty”. Technically legal as it may have been, noted Jones, this despicable act of cruelty was “excessive even by the standards of the day”. Little wonder Richard’s name acquired such infamy in the Muslim world that it was often invoked by impatient mothers to quiet their unruly children.

Enrico Dandolo, the doge of Venice, was the hero––or the villain, depending on who you ask––of the Fourth Crusade. He took the cross at the incredibly advanced age of 95, having gambled his country on a military alliance under which Venice would equip and supply the Fourth Crusade in exchange for 85,000 silver marks. When Dandolo realized his airheaded partners could not pay their dues, he decided to save Venice from bankruptcy by what essentially amounted to organized robbery. His first target was the city of Zara, a possession of King Emeric of Hungary, who was not only a pious Christian but also a fellow crusader. Zara’s sacking infuriated Pope Innocent III, as he had explicitly forbidden it. As a result, all Venetian crusaders were “excommunicated”, i.e., officially expelled from the Catholic Church. Dandolo couldn’t care less. He soon seized another opportunity that promised even more money, injecting the crusaders into a conspiracy aimed at dethroning the Byzantine emperor. There is no space to recall the entire drama – suffice it to say that it led to the siege and fall of Constantinople in 1204. Once again, Dandolo’s allies failed to hold up their side of the bargain, so it seemed as if he had almost no choice but to help himself to what was promised to him. For three days, the crusaders plundered the richest city in Christendom. The total value of the loot amassed during the plundering is estimated at around 900,000 silver marks. If this figure is accurate, then Venice’s investment in the Fourth Crusade yielded a staggering tenfold return. Dandolo thus exemplified the notion of prospering by doing God’s bidding – a modern entrepreneur from Silicon Valley would recognize this as the medieval version of “doing well by doing good”.

Many ancient and medieval Roman and Greek works were stolen and sent back to Venice. The most notable were the four bronze horse statues from the Hippodrome, believed to have been crafted in the second or third century CE. When I visited Venice in the summer of 2023, a replica of these magnificent statues was indeed, as Jones teased, “still proudly displayed at Saint Mark’s Basilica.” Our Venetian tour guide was careful not to dishonor what is considered a national treasure in her country. The horses, she told us, were “brought back” from Constantinople 800 years ago.

Dandolo died a year after the fall of Constantinople. He was 98 and had been visually impaired for more than three decades. The crusaders understandably cheered what they had accomplished under the command of this aged and frail man as a miracle. To many a Christian, however, the brutal sacking of Constantinople was a dark and scandalous chapter in the history of their faith. The cruel irony—a mission sanctioned by the Catholic papacy resulting in the destruction of the spiritual capital of Eastern Orthodoxy—was simply beyond the pale. Jones summarizes Dandolo’s controversial involvement in the crusade aptly:

“He had bravely defied his physical disability and his decrepitude, and his pragmatic leadership and dauntless personal valor were beyond question. Yet in the end Dandolo had turned his talents to a wholly disreputable end, playing a leading part in a dreadful episode that, even by the cruel standards of the crusading era, thoroughly deserved the epithet leveled against it by Choniatēs: ‘Outrageous.’”

Another fascinating historical figure from this era is the leader of the Sixth Crusade, Frederick II, the emperor of the Holy Roman Empire. His famed grandfather, Frederick I “Barbarossa”, drowned while attempting to cross a river during the Third Crusade. About 750 years later, Adolf Hitler, in a seemingly ironic twist, named his ill-fated Russian campaign after the elder Frederick. Frederick II, however, succeeded where his progenitor faltered. Through an agreement reached with the Ayyubid sultan Al-Kamil, he regained control of Jerusalem in 1229, a feat that three costly crusades had failed to accomplish in four decades. To be sure, Frederick II enjoyed good fortune, as the Ayyubids were distracted by potential conflicts with their Muslim brethren in Syria and Mesopotamia. However, there is no question that the emperor’s intelligence, personality, political acumen, and breadth of knowledge also played a crucial role. Frederick II was, in the words of Jones, “a shockingly liberal intellectual and a bluntly pragmatic ruler”. He spoke six languages, including Arabic and Greek, boasting a reputation as a polymath.

Frederick was a man with an insatiable curiosity about the natural world that extended far beyond the tenets of Christian Scripture. He loved natural sciences, astrology, logic, rhetoric, medicine, law, philosophy and mathematics…(and) surrounded himself with Latin, Greek, Muslim and Jewish tutors, advisers, poets, scholars and bureaucrats. Well into adulthood, he retained a personal Arab tutor in logic, and he corresponded with Jewish and Arab scholars in southern Spain.

In short, Frederick was a philosopher king in the Platonic ideal, reminiscent of figures like Marcus Aurelius of the Roman Empire and Kangxi of the Qing Dynasty in China.

Paradoxically, the “greatest and least bloody crusading victory”, won by Frederick, was met with universal condemnation rather than exaltation among his fellow crusaders. When the emperor left Acre, it was reported, he was “hated, cursed, and vilified”. Why? Ostensibly, the reason was that his participation in the Sixth Crusade was technically illegal, because he had been excommunicated by the pope for allegedly failing to honor his previous crusading pledge. However, his quarrels with the papacy ran deep and deteriorated further following his triumph in the east. Eventually, the most successful crusader of his time would himself become the target of a crusade officially endorsed by the Catholic Church. Although Frederick “could be infuriating, overbearing and self-serving”, concluded Jones, it is still difficult to “conceive of a greater perversion of the institutions and language of crusade than for such a war to be preached against” him.

Beneath the veneer of glory surrounding these crusading kings and generals lay unspeakable violence, horrific human suffering, and ferocious atrocities.  After all, as Jones noted, “there was precious little time for thoughts of human rights on either side” of the crusading divide.

When Baldwin II of the Kingdom of Jerusalem laid siege to Aleppo in 1124—toward the end of his futile effort to break into the Syrian interior—his army reportedly engaged in “elaborate rituals of depravity” against the Muslim residents. According to Jones, the crusaders

“raided Muslim funeral chapels, took coffins to repurpose as storage chests for their camp, then goaded the citizens with the sight of their dead relatives’ corpses being grotesquely desecrated…Whenever the Franks captured an Aleppan Muslim, they cut off his hands and testicles.”

During the Fifth Crusade, Damietta, the third-largest city in Egypt, endured a siege lasting a year and a half. Even the battle-hardened crusaders were apparently horrified by what they saw in the once-thriving city. It had been transformed into a “fetid, disease-ridden graveyard, inhabited by mere skeletons and ghosts”. The few survivors were overwhelmed, unable to bury the countless corpses that littered the streets, and the stench “was too much for most people to bear”. Shocked as they might have been, the crusaders showed little pity, much less remorse. Soon enough, wrote Jones, “Christian thieves” began to “run around taking what they could” and forced starving Muslim children to undergo baptism.

When Jerusalem fell to the raid of a Khwarizmian (花剌子模) mercenary army of the Ayyubid sultan in 1244—only 15 years after Frederick’s diplomatic victory—it was utterly devastated. The Khwarizmians hunted down and slaughtered six thousand Christian civilians trying to flee the abandoned city. Then, on August 23,

the Khwarizmians entered the almost empty city of the Israelites and in front of the Sepulchre of the Lord they disemboweled all the remaining Christians who had sought refuge inside its church. … The marble around Christ’s tomb was either smashed or scavenged and the tombs of all the crusader kings of Jerusalem buried near Calvary were opened and their bones tossed away. Elsewhere other highly revered Christian churches and shrines received the same treatment: the priory at Mount Sion, the tomb of the Virgin Mary in the valley of Jehosophat and the Church of the Nativity in Bethlehem were all desecrated.

Ironically, the Khwarizmians were themselves victims of an even more formidable force. About 25 years earlier, the horde of Genghis Khan had besieged and pillaged Samarkand, the capital of their empire. In some sense, Genghis Khan was indirectly responsible for the terrible losses Christians suffered in 1244, as the collapse of the Khwarizmian empire had left its jobless soldiers scattering about, much like a deadly shock wave sweeping through the Middle East. The Mongols, of course, did not discriminate between Christians and Muslims. When they captured Baghdad, arguably “the most civilized of cities” at the time, they killed at least 100,000 Muslims. Yet their worst crime against humanity was probably destroying the great city’s House of Wisdom, a library that “contained the largest and most sophisticated collection of books on earth” – so many books were thrown into the Tigris, wrote Jones, “that the water was said to have flowed black with ink.”

No medieval horror movie would be complete without mentioning the hideous crimes against Jews. In fact, the First Crusade marked a tragic turn in the fortunes of the Jewish diaspora in Western and Central Europe.

In 1096, even before leaving their own countries for the First Crusade, French and German crusaders turned on local Jewish communities. At Mainz, they stormed the residence of Archbishop Ruthard, where seven hundred Jews had sheltered under his protection. The indiscriminate slaughter by this mob was so appalling that many desperate Jews killed each other to avoid execution by the “weapons of the uncircumcised”. Similar mass murders took place elsewhere. In Cologne, according to Jones, “young men and women threw themselves into the Rhine and fathers killed their children rather than see them fall into the hands of the enemy”. This “orgy of anti-Semitic violence”, collectively known as the Rhineland massacres, is widely seen as a harbinger of what was to come for Jews in Europe over the next millennium.

About a hundred years later, the fervent zeal ignited by the Third Crusade engulfed the English populace, and months of riots against England’s Jews ensued. During this period, it was not uncommon to see mobs chasing and assaulting Jews in the streets and dragging them to coerced baptisms. The worst incident occurred in York in March 1190, when hundreds of Jews seeking refuge in the city’s castle were either killed or driven to mass suicide. The persecution of Jews in England would continue and culminate in 1290, when the country officially expelled its Jewish population and enacted a ban that would last nearly four centuries.

Shortly after I finished reading “Crusaders”, on October 7th, 2023, Hamas militants perpetrated the worst mass murder of Jews since the Holocaust. There is no need to recite the details of the crimes. Antony Blinken, the US Secretary of State, summed it up well: “depravity in the worst imaginable way”. Viewing this incident in the context of the Crusades, however, I felt that I had seen the movie before. The latest version is set on the same stage and has a similar plot, though played by different actors. In this movie, it was Jews, rather than Christians, who were the infidels that Muslims tried to expel from the land they believed was rightfully theirs.

History has never stopped projecting the conflicts in Palestine through the lens of the Crusades. When British general Edmund Allenby marched into Jerusalem as a victor in 1917, ending the four-hundred-year control of the Holy City by the Ottoman Turks, he allegedly proclaimed that “the wars of the crusades are now complete”. Whether he said it or not, the forecast was wrong. The British Mandate of Palestine would give way to the rebirth of the Jewish state, in what many Muslims saw as a continuation of the medieval crusades, only this time with Jews and Christians as co-conspirators. Surely that was how Osama bin Laden saw it. In the “Letter to the American People”, now widely circulated thanks to TikTok, he wrote,

Palestine has been under occupation for decades, and none of your presidents talked about it until after September 11. … You should be aware that justice is the strongest army and security offers the best livelihood; you lost it by your own making when you supported the Israelis in occupying our land and killing our brothers in Palestine.

Likewise, President George W. Bush once likened the US response to the 9/11 attack to a crusade, warning the American people that “this crusade, this war on terrorism, is going to take a while”.  

Even the rhetoric sounds eerily similar, and it always invokes some version of a just war, i.e., “violence that was regrettable but legitimate and even moral, so long as it was undertaken to protect the state and would ultimately serve to produce or restore peace.” Bin Laden put it more bluntly: “it is a sin to kill a person without proper, justifiable cause, but terminating his killer is a right.” What remains unsaid, and perhaps unknowable, is who gets to decide which causes are proper and justifiable, and how far back in history one must trace them.

Hence, the life-and-death struggle for the Holy Land, waged in the name of that One True Faith, has never really ended. And the idea of crusading will perpetuate cycles of violence and suffering as long as there are crusaders on Earth.

 

Marco Nie, Northwestern University

December 30, 2023

 

 

The Song of Achilles

I read The Song of Achilles about two years ago and wrote a short review then, but never got the chance to post it here. This is one of the few fiction books I have read cover to cover since I turned 40 – thanks to my daughter’s recommendation.


My 11-year-old daughter has lately fallen in love with Greek mythology and has filled her bookshelf with the likes of Percy Jackson and Trials of Apollo. Frustrated with my complete ignorance of the subject, she tried repeatedly to get me to read some of her books. She marveled at The Song of Achilles all the time and insisted I read it because it was simply “too good” to pass over. Eventually, I caved in despite my reluctance—novels have largely ceased to interest me, let alone a novel about Achilles, whose story has become a cultural cliché even in China. Who could forget the heel that his mom famously failed to dip in the magic river?

It turns out I enjoyed the book more than I thought I could. Madeline Miller kept me constantly guessing at the theme of the book, and she managed to outwit me at every turn. Initially, it seemed the book was about the love between two young men: Achilles and the narrator, Patroclus. Then I thought the focus was the insanity of the Trojan War, and how it transforms an innocent boy into a monstrous killing machine. At one point, Miller mocked nationalism and advocated humanitarian principles, proclaiming through Chiron (a centaur) that “nations were the most foolish of mortal inventions” and “no man is worth more than another, wherever he is from”. Eventually, I realized the central plot may be the ancient conflict between a jealous mother and her son’s spouse (a son-in-law in this case). Achilles’s mom, Thetis, refused to endorse his relationship with Patroclus till the very end, even after the two were buried together. In the eyes of the jealous mom, Patroclus is an unattractive mortal unworthy of Achilles, a man who cannot bear offspring for him, and above all someone who committed the unforgivable sin of sharing her son’s love. More fundamentally, Thetis and Patroclus fought hard to bring about two different versions of Achilles: Thetis wanted a god-like, ruthless warrior, while Patroclus preferred an empathetic, creative human. It seems to me this discrepancy, not the prophecy, finally sealed the couple’s tragic fate.

Having finished the book, I must say I don’t quite understand why my daughter and her friends like it so much. It is a book written for adults, with content that I imagine some parents might find objectionable for kids her age. I know for a fact that in my generation such a book would have been considered off limits for an 11-year-old. But, hey, we live in a different age, don’t we?

Solomon’s Ring

Legend has it that King Solomon’s ring, also known as the Seal of Solomon, conferred on him the ability to command the supernatural and to speak with animals. Despite the enticing title, the book has nothing to do with King Solomon and his famous ring, or Jewish history, or the Israel-Palestine conflict (since this topic is on everyone’s mind these days…). Instead, it consists of interesting stories about the animals the author raised in order to observe their behaviors. Widely considered “the father of ethology”, Konrad Lorenz won the Nobel Prize in Medicine in 1973 for his foundational contributions to the study of animal behavior. King Solomon’s Ring, published in 1949 and written for a popular audience, remains his best-known book. Lorenz was a controversial figure due to his association with Nazism, which apparently came to light only after his death. According to Wikipedia, not only was Lorenz a Nazi, but he served as a psychologist in the notorious Office of Racial Policy during the war. In his application for party membership, Lorenz pledged to devote “his whole scientific work to the ideas of the National Socialists”. That said, I found no racial slurs, dog whistles, or anything that could be construed as remotely antisemitic or hateful in the book. Quite the contrary: the book was a relaxing and enjoyable read that made me giggle more than any book in recent memory. Beyond fascinating facts about animals, the reader will also be confronted with thought-provoking questions concerning human nature and the relationship between humans and animals.

Lorenz described many species of animals that he kept in and around his home, ranging from fish and birds to dogs and monkeys. Notably, he did not keep these animals in captivity but instead let them – to the extent possible – wander freely around his property, even in his office. In some sense this was the mandate of his work, since only free-ranging animals can “be themselves” and thereby reveal their natural behaviors. However, to Lorenz these animals were more than just research subjects. He lived with them, bonded with them, and cherished their company. He saw humanity in these animals – or animal traits in humans, depending on your perspective – because humans, in a quite literal sense, are their relatives. As a result, his writing adores and humanizes them.

I was never a big fan of animals. Growing up in a small, poor city in China, where few families kept pets in their homes, I was naturally disposed to be afraid of most animals, including dogs and cats. Yet I think even I would have found the gaze of Lorenz’s beloved dog, Tito, irresistible. Tito was an Alsatian (or German Shepherd), famous for being “exaggeratedly faithful”. Lorenz recalled that Tito would remain lying at his feet for hours and hours as he worked at his desk, and

she was far too tactful to whine or to call attention to herself by the slightest sign. She just looked at me. And this gaze of the amber-yellow eyes in which was written the question “Are you ever going to take me out?”, was like the voice of conscience and easily penetrated the thickest walls.

Lorenz injected a delightful sense of humor into his storytelling that is truly infectious. I remember several instances when I laughed so loudly in my office that people in the hallway could probably hear me. His vivid account of the territorial battle between two stickleback fish is a great example. Describing how the distance from a male fish’s nest is a reliable predictor of not only the strength of his will but also his actual ability to defeat his rival, he wrote,

In the immediate neighborhood of his nest, even the smallest male will defeat the largest one…. The vanquished fish invariably flees homeward and the victor, carried away by his successes, chases the other furiously, far into its domain. The further the victor goes from home, the more his courage ebbs, while that of the vanquished rises in proportion. Arrived in the precincts of his nest, the fugitive gains new strength, turns right about and dashes with gathering fury at his pursuer. A new battle begins, which ends with absolute certainty in the defeat of the former victor, and off goes the chase again in the opposite direction.

On another occasion, Lorenz saw a father jewel fish accidentally swallow, at the same time, one of his own babies – swallowing stray babies to carry them home is a duty he routinely performs to save his children from drowning – and an earthworm, his favorite food. The father thus faced a dilemma, as in his mouth were two different things “of which one must go into the stomach and the other into the nest”. Lorenz recalled with amusement what unfolded next,

The fish stood stock still with full cheeks, but did not chew. If ever I have seen a fish think, it was in that moment! … For many seconds he stood riveted and one could almost see how his feelings were working. Then he solved the conflict in a way for which one was bound to feel admiration: he spat out the whole contents of his mouth: the worm fell to the bottom, and the little jewel fish, becoming heavy in the way described above, did the same. Then the father turned resolutely to the worm and ate it up, without haste but all the time with one eye on the child which “obediently” lay on the bottom beneath him. When he had finished he inhaled the baby and carried it home to its mother.

Using his jackdaw colony, Lorenz repeatedly explored what appears to be a central theme of the book: the similarities and differences between human and animal behaviors.

He observed how jackdaws teach their young about dangerous enemies by making a rattling sound at the sight of a dangling black object. This is remarkably “human” for two reasons. First, knowledge is passed on to the next generation through “learning” rather than “inheritance”. Second, like jackdaws, humans also fall victim to such blind, instinctive triggers (the black object). I am certain Lorenz had his former Führer in mind when he asked,

“Do not whole peoples all too often react with a blind rage to a mere dummy presented to them by the artifice of the demagogue?”

Lorenz observed that a “married” jackdaw couple would not only take each other to love and to cherish till death do them part, but also, apparently, maintain “the glowing fires of the first season of love” throughout their marriage.    Even after many years, he wrote, “the male still feeds his wife with the same solicitous care, and finds for her the same low tones of love, tremulous with inward emotion, that he whispered in his first spring of betrothal and of life”. At first glance such a relationship feels amazingly human; but if you pause and think again, you realize it is in fact quite nonhuman, if not superhuman.  Although humans may live in a life-long marital union, Lorenz lamented, they tend to forget “the thrilling enchantment of courtship’s phrases entirely” as time goes on, and perform the ritual of their marriage only “with the mechanical apathy common to other everyday practices”.

It is well known that a definite rank order – in which each animal fears those above it – exists in many social animals.  Lorenz’s jackdaw colony is no exception. The interesting twist is that a female jackdaw can acquire a higher rank by marrying a male who ranks above her – a form of social mobility that is, unfortunately, not available to males (again, how very human!).  If a bird marries the king, every member of the colony will grant her the status of queen.  When this happens, the news of the marriage, and hence of the wife’s promotion, spreads quickly through the colony. The funniest part of the story is how the newly crowned queen, having suddenly risen far beyond her own station, would “conduct herself with the utmost vulgarity” toward the jackdaws she had to look up to only a few days earlier:

She lacked entirely that noble or even blasé tolerance which jackdaws of high rank should exhibit towards their inferiors. She used every opportunity to snub former superiors, and she did not stop at gestures of self-importance, as high-rankers of long standing nearly always do.

Establishing a pecking order is one way by which social animals resolve conflicts without suffering excessive casualties. Lorenz mentioned another mechanism, which I shall call the surrender’s inhibition.  Under this law, the victor emerging from a bloody battle for dominance is inexplicably “forbidden” from hurting the loser, as long as the latter surrenders, i.e., offers his adversary the most vulnerable part of his body as a submissive gesture. Humans evidently have inherited the habit of making submissive gestures (e.g., kneeling and bowing) when facing a dominant aggressor. Unfortunately, such an appeal to mercy is not as foolproof among humans as it is in the animal world. Homer’s heroes, noted Lorenz, often killed supplicants “without compunction”.  Bai Qi, a general of the Qin Kingdom, killed 400,000 surrendered soldiers after the Battle of Changping, a prelude to the kingdom’s brutal campaign to unite China under imperial rule.  The Mongols, of course, had an abhorrent reputation for indiscriminately slaughtering entire cities when they met even the slightest resistance during their conquests. Nor do we have to go back to primeval or medieval times for evidence of our species’ sub-animal barbarity.  About three weeks ago, on October 7th, 2023, Hamas militants invaded Israel and killed more than 1,000 civilians, including many children and elderly people – many of the victims, I imagine, would have begged for their lives, but to no avail. Why?

Lorenz argued that the surrender’s inhibition is a result of evolutionary adaptation.  That is, for a species to survive, it must develop a social inhibition that prevents the abuse of its lethal weapons, which could otherwise endanger the existence of the species.  However, we humans make our weapons “of our own free will” rather than grow them on our bodies as dictated by nature.     Because human weaponry developed so rapidly relative to the time scale of evolution, our instincts could not keep up, leaving us without adequate inhibition in its use.  There is a certain truth to this argument.  However, humans also have far more reasons to murder members of their own species than the imperative of survival. Ideology, for example, offers a powerful motive for the mass killing of infidels, heretics, or those who happen to have an intolerable identity.  In the end, Lorenz expressed optimism that humans can learn from animals: that if anyone slaps us on the right cheek, we should, as the Bible teaches, turn to him the other cheek also.  This is not so that our enemy may strike us again, explained Lorenz, “but to make him unable to do it”.  I admire his faith in humanity and wish he were right, but I am deeply skeptical that this age-old wisdom would have saved anyone killed by Hamas fighters on October 7th.

Is college worth it?

Today’s episode of the NYT’s Daily podcast reported some shocking statistics about Americans’ fading faith in higher education.  In 2010, about 98% of American parents wanted their kids to go to college. Today, that number stands at roughly 50%. Over the same period, college enrollment dropped from about 18 million to less than 16 million, despite a 7% increase in the total population.

So why the dramatic change of heart? The podcast offers three reasons, if my memory serves me well.  First, a college education no longer makes as much economic sense as it used to.  While college-goers in the 1960s and 1970s enjoyed both wage premiums and wealth premiums, the lifetime wealth premium has significantly declined for younger generations – in some cases disappearing altogether – partly because college has become so much more expensive. Second, colleges are increasingly perceived as liberal hotbeds that can barely tolerate conservatives, much less welcome them.  Third, higher education has reinforced a deeply held belief that American institutions are rigged to favor elites, whose kids dominate the campuses of the best colleges.  At the gates of these colleges, the children of the “working class” are forced to play a meritocracy game they have no chance of winning.  So they quit.

Surprisingly, this well-reasoned analysis does not mention the intrinsic value of education. While a college education does produce a piece of paper that certifies one’s worth in the job market, it also provides something that is hard to articulate and measure but is vital to the betterment of individual lives, as well as to the functioning of civilized society at large.  For over half a century, the higher education system in the US has been very successful at selling parents the notion that college is a wonder investment with only upsides.   The current revolt by the parents suggests it might have overplayed that marketing strategy.

Team of Rivals

Doris Kearns Goodwin’s ‘Team of Rivals’ was the first presidential biography I ever read.  Biography was not among my favorite genres, but I did have a desire to learn more about Abraham Lincoln.  He is widely considered the greatest American president; in fact, to many even that title seems an understatement.  Tolstoy once wrote that Lincoln ‘was bigger than his country—bigger than all the Presidents together…and as a great character he will live as long as the world lives’.    Like most people, I heard about the highlights of Lincoln’s remarkable life as I passed through grade school: the self-made lawyer and politician haunted by family tragedies, the epic struggle to end slavery while forging a truly United States of America, and the ultimate sacrifice for the cause at the zenith of his career.  Still, I was not quite sure what to make of Tolstoy’s melodramatic assessment. The book partially solved the puzzle for me.

Goodwin’s narrative is constructed around, and often from the perspectives of, Lincoln’s key cabinet members who were once his rivals:  Salmon Chase (Secretary of the Treasury), William Henry Seward (Secretary of State), Edward Bates (Attorney General), and Edwin Stanton (Secretary of War). The first three ran against him for the Republican party’s nomination, and Stanton, when serving with Lincoln as co-counsel in a lawsuit, not only questioned the then country lawyer’s legal expertise but openly ridiculed him as ‘a gorilla and an imbecile’.  As Goodwin follows Lincoln’s footsteps from his humble origins to his poignant end, she recounts many stories of these rivals, often quoting extensively from their public speeches and private letters.  This helps unlock the mystery of Lincoln’s persona, which ‘led countless men, even old adversaries, to feel bound to him in admiration’.

Lincoln ‘possessed extraordinary empathy’ and a ‘melancholy temperament’, wrote Goodwin.  These qualities might be the result of the tragic losses he endured from an early age – by 26, he had already lost the three women dearest to him: his mother, his only sister, and his first love. Empathy can be a curse ‘in a world environed by cruelty and injustice’ because, as Goodwin noted, fellow-feeling for the misery of others inevitably causes pain and suffering.  It also sometimes made him appear weak and lacking the will to do what must be done in difficult situations. His attorney general confided to a friend that Lincoln, despite ‘very near being a perfect man’, was ‘unfit to be entrusted with the pardoning power’, because he too easily succumbed to touching stories and women’s tears. Yet empathy was a powerful tool for Lincoln to gain the respect, trust, and devotion of others by understanding their motives and desires.  It also made him a remarkably magnanimous man, with an incredible capacity to forgive even those who had opposed, wronged, and betrayed him.

Goodwin also lauded Lincoln’s ‘literary genius’ and his mastery of rhetorical power.   His ability to explain intricate concepts through storytelling, coupled with a sharp sense of humor, was unparalleled among his contemporaries.  In the strictest sense of the word, Lincoln might not have been as great an orator as Seward, who could deliver stirring, completely improvised speeches to a crowd for hours.  Lincoln was much more careful with his words, but he perfected ‘a language of enduring clarity and beauty’ that made him an extremely persuasive and effective communicator.

Lincoln believed in ‘the better angels of our nature’, a phrase coined in his first inaugural address.  When asked whether George Washington was a perfect man, he once told a friend that he preferred to believe in the possibility of human perfection. His entire life may be seen as the pursuit of becoming the perfect, inspiring human being he envisioned. It is the unwavering conviction to ‘engrave his name in history’, Goodwin noted, that underscores Lincoln’s greatness, carrying him through a dreary childhood, political failures, personal tragedies, the disintegration of his beloved Union, and the devastating military defeats in the early phase of the Civil War.

But Lincoln was also a realist.  Unlike Chase and Seward, who advocated radical abolitionist policies on moral grounds, Lincoln carefully charted a moderate path confined within the limits set by public opinion on slavery. His famous Emancipation Proclamation was timed and framed to be perceived by the people of the North as an indispensable instrument to win the war and preserve the Union, rather than as a necessary step to end slavery once and for all.   Goodwin sees nothing wrong with politicians going along with public opinion, even if that means slightly bending their moral compass.  If anything, that expediency made Lincoln ‘the most truly progressive man of the age’, because he neither ‘wasted strength in premature struggles’ with the public nor waited to be ‘dragged by the forces of events’.  To be sure, Lincoln did owe much of his success to his exceptional ability to read and follow the will of the people.  But that does not make him ‘the most truly progressive’.  Based on what I gathered from the book, Lincoln was more of a pragmatist, a shrewd politician, maybe even a ‘political genius’ (as Goodwin likes to call him). Yet he does not seem to have had the burning conviction to reshape the world in the image of his ideology that many a great man of history possesses.  That difference, I think, is precisely what sets Lincoln apart from (or above, depending on how much you love him) that league of great men.

I was always curious about Lincoln’s views on race.  According to the book, Lincoln was against slavery but did not believe in racial equality.  He said the physical difference between whites and blacks would ‘probably forever forbid their living together upon the footing of perfect equality.’  As a result, he was not in favor “of making voters or jurors of n****, nor of qualifying them to hold office, nor to intermarry.”  Nor did he merely say these things to win white votes.  Lincoln was a passionate advocate for colonization, the idea of helping freed slaves establish a colony in Central America. To sell this proposal to the country, he even convened a conference of freed slaves at the White House, where he said in his opening remarks, “you and we are different races. We have between us a broader difference than exists between almost any other two races.”  By today’s standard, therefore, Lincoln is a textbook racist. Should harboring racism in the 19th century diminish his greatness?  I imagine Tolstoy and Goodwin would dismiss such a thought as quintessential presentism. But many on today’s political left would probably disagree with them.

‘Team of Rivals’ is a thick book of nearly 1,000 pages, of which about a quarter are notes.  It is meticulously researched and elegantly written, though at times the lengthy quotes and extravagant details about the lives of people in Lincoln’s outer orbit feel a bit excessive. If you don’t want to read the whole book, do not miss the last chapter, in which Goodwin describes how Lincoln met his destiny.  I finished that chapter on an airplane – I still remember having tears in my eyes that I had to hastily cover when a flight attendant asked me if I needed a drink. That rarely happens to me.   I shall end with a quote taken from the very end of the book.

“With his death, Abraham Lincoln had come to seem the embodiment of his own words—’With malice toward none; with charity for all’. The deathless name he sought from the start had grown far beyond Sangamon County and Illinois, reached across the truly United States, until his legacy, as Stanton had surmised at the moment of his death, belonged not only to America but to the ages—to be revered and sung throughout all time.”

Can Artificial General Intelligence ever be Human Compatible?

When I was in graduate school in the early 2000s, the phrase Artificial Intelligence, or AI, did not have the mesmerizing power it possesses today. The field might have been slowly recovering from the twilight of the 1990s, but it remained an obscure subject that did not exactly inspire enthusiasm among graduate students – certainly not in my field of study.  I might have been more biased against AI research than most in my cohort, having acquired a distaste for it from the Dreyfus brothers’ contentious book, Mind over Machine, which I interpreted at the time, perhaps oversimplistically, as a rebuke of AI’s aspirations.   Much has happened since then. In the past decade, AI has made breathtaking progress, enabling computers to navigate complex urban environments and beat the best human Go players.  The Dreyfus brothers would probably read the news of these developments with astonishment and disbelief, though they might still not be ready to withdraw their opposition. For me, the last straw was ChatGPT, the chatbot that demonstrates human- and superhuman-level performance in tasks that I never thought could be done by computers in my lifetime: writing essays, producing art, and even achieving top 1% scores on the GRE verbal test, all delivered instantly by conversing fluently in natural language.  I became convinced that I needed to reassess my outdated opinions about AI.  This conviction led me to delve into Human Compatible, a 2019 book by Stuart J. Russell, whose work I first came across on Sam Harris’s podcast.  Russell is a world-renowned AI researcher at UC Berkeley, where, ironically from my perspective, the Dreyfus brothers spent most of their teaching careers.

Russell began by defining human intelligence loosely as the ability to achieve one’s objectives through actions.  He believed AI should be described and assessed similarly. Yet, he argued that the focus should not be the “strength” of that ability, but rather its “usefulness” to humanity.  In his words (the emphasis is mine), “machines are beneficial to the extent that their actions can be expected to achieve our objectives.”

Paradoxically, a machine that strives to achieve our goals could still be an imminent danger to us.  For one thing, humans do not always know their real objectives.  Steve Jobs famously said, “people don’t know what they want until you show them.” Russell quipped about the perils of “getting exactly what you wish for”, as everyone who has ever been granted three wishes by a god can relate to.  He calls this the King Midas problem, after the legendary king who demanded that everything he touched turn to gold, only to regret his ill-fated wish.  Second, a rigid, human-specified goal can often be best achieved by violating norms and values that we humans consider common sense.  In a thought experiment, Russell imagined a super-intelligent machine that, asked by its human masters to cure cancer, decides to deliberately induce tumors in human beings so that it can carry out medical trials of “millions of potentially effective but previously untested chemical compounds”.  Fast as this route to a cure might be, it is an abhorrent violation of the established ethical standards of medicine. This is the infamous value alignment problem in AI research.

At this point, most readers would probably breathe a sigh of relief and dismiss these so-called dangers as the illusions of doomsayers.  Surely no machine we know of can grant us wishes or cure cancer without human supervision, right? Russell warned that such complacency is dangerous and irresponsible, given the rapidly improving competence of AI systems. Contrary to what Hollywood movies lead us to believe, a conscious machine is not necessarily dangerous even if it hates humans. But a highly competent one surely is.

When it comes to the future of AI competence, Russell can be described as a cautious optimist. Not only does he believe artificial general intelligence, or AGI, is possible, but he once predicted “it would probably happen in the lifetime of my children”. He reminded us, furthermore, that he is “considerably more conservative” than most active AI researchers, adding that AGI could well come much sooner than his humble forecast.  In part, Russell’s confidence stems from the seemingly boundless computing power available to machines. At the time of his writing, the fastest computer on earth, the Summit machine at the Oak Ridge National Laboratory, had a raw processing capacity on par with the human brain, roughly 10^17 operations per second (ops).  But this is infinitesimal compared to what machines could acquire in theory: 10^51 ops for a laptop-sized computer, according to an estimate “based on quantum theory and entropy” – a factor of 10^34 beyond Summit.

To be sure, faster does not mean more intelligent.  As Russell said, a faster machine may simply “give you the wrong answer more quickly”.   According to him, reaching AGI still awaits several conceptual breakthroughs that may be hard to come by: (i) understanding and extracting information from natural language; (ii) cumulative learning and discovery, which is essential to advancing science; (iii) planning and executing activities hierarchically to achieve complex objectives (e.g., going to Mars); and (iv) becoming an autonomous thinker that can manage its own mental activity (i.e., knowing what and when to think).

Russell asserted that natural language technology was “not up to the task of reading and understanding millions of books”, and that even though existing language models can “extract simple information from clearly stated facts”, they can neither “build complex knowledge structure from text” nor engage in “chains of reasoning with information from multiple sources”.  That was four years ago.  Today it seems clear that our first line of defense against AGI has already begun to fall with the advent of ChatGPT.  While this entirely unexpected breakthrough may have caught Russell himself by surprise, it actually proves he was right all along: we must embrace and prepare for a future in which AGI is an integral part – not in spite of, but precisely because of, the huge uncertainty.

Russell thinks super-intelligent machines can understand the world far better and far more quickly, cooperate with one another far more effectively, and look much further into the future with far greater accuracy than any human could ever hope to do.  In a nutshell, in a world with AGI,

“there would be no need to employ armies of specialists in different disciplines, organized into hierarchies of contractors and subcontractors, in order to carry out a project. All embodiments of AGI would have access to all the knowledge and skills of the human race, and more besides.”

What does this extraordinary technological triumph mean for human society?

First, the omnipotent AGI would drive factor productivity up to such a level that scarcity and poverty would be eliminated. When “the pie is essentially infinite”, Russell asked, why fight each other for a larger share? If this utopia sounds familiar, it is because Karl Marx said the same thing about communist society.   This crowning achievement, however, will come at the cost of staggering job losses. Russell believed few of us could keep our jobs. It is delusional, he argued, to think AGI will create more new jobs than it renders obsolete, or that it will enhance workers rather than replace them.  His metaphor of “the worker in an online-shopping fulfillment warehouse” is as enlightening as it is frightening.  He wrote,

“She is more productive than her predecessors because she has a small army of robots bringing her storage bins to pick items from; but she is a part of a larger system controlled by intelligent algorithms that decide where she should stand and which items she should pick and dispatch. She is already partly buried in the pyramid, not standing on top of it. It’s only a matter of time before the sand fills the spaces in the pyramid and her role is eliminated.”

The implication seems clear: no matter how indispensable you think you are, there will come a time when you too will be replaced.   That said, Russell told us everything will be just fine, if only humans could, as Keynes famously advised 90 years ago, cope with their permanent plight of joblessness by learning “the art of life itself”.

Second, we must solve the alignment problem before entrusting all human affairs to AGI and retiring to the purer pursuit of happiness.  Solving this problem is Russell’s expertise and the essence of the book. Russell argued that AGI development must follow the “Principles for Beneficial Machines”, which state “(i) the machine’s only objective is to maximize the realization of human preferences; (ii) the machine is initially uncertain about what those preferences are and (iii) the ultimate source of information about human preferences is human behavior.”   In a nutshell, Russell’s machine would continuously learn and strive to fulfill the preferences of its human masters. Whenever in doubt, it defers to them, pausing its actions and seeking permission before proceeding, as the sketch below tries to illustrate.
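To make this concrete, here is a minimal toy sketch – my own illustration in Python, not Russell’s actual formulation – of a machine that is uncertain about human preferences. It entertains several hypothetical candidate “utility functions” a human might have, acts only when its beliefs clearly favor one action, and otherwise pauses to ask. Every name and number below is invented for illustration.

```python
# A toy agent that is uncertain about human preferences (hypothetical
# illustration, not code from the book). "hypotheses" is a list of
# (utility_function, probability) pairs representing the machine's
# current beliefs about what the human actually wants.

def expected_utility(action, hypotheses):
    """Average an action's utility over the machine's beliefs."""
    return sum(p * u(action) for u, p in hypotheses)

def choose(actions, hypotheses, caution=1.0):
    """Act only if beliefs clearly favor one action; otherwise defer.

    If the hypotheses disagree strongly about the best action, the
    expected advantage must exceed that disagreement, or we ask the human.
    """
    ranked = sorted(actions, key=lambda a: expected_utility(a, hypotheses),
                    reverse=True)
    best, runner_up = ranked[0], ranked[1]
    margin = (expected_utility(best, hypotheses)
              - expected_utility(runner_up, hypotheses))
    # Spread of opinions about the best action = preference uncertainty.
    values = [u(best) for u, _ in hypotheses]
    spread = max(values) - min(values)
    if margin > caution * spread:
        return ("act", best)
    return ("ask_human", best)   # pause and seek permission first

# Two hypothetical preference models for the "cure cancer" task:
u_naive = lambda a: {"trials_on_humans": 10, "trials_in_vitro": 6}[a]
u_ethical = lambda a: {"trials_on_humans": -100, "trials_in_vitro": 6}[a]
beliefs = [(u_naive, 0.5), (u_ethical, 0.5)]

print(choose(["trials_on_humans", "trials_in_vitro"], beliefs))
# -> ('act', 'trials_in_vitro'): the ethically safe option dominates.
```

The point of the toy is the last branch: when the margin of advantage is smaller than the machine’s uncertainty about what we want, the right move is to stop and ask, not to optimize.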

I am skeptical that these principles would be enough to save us from an AGI apocalypse.  The last part of the book discusses extensively the imperfection of humanity, which is “composed of nasty, envy-driven, irrational, inconsistent, unstable, computationally limited, complex, evolving, heterogeneous” individuals.   Given that our species leaves so much to be desired, it seems strange to insist that AGI must learn from our behaviors and help advance our often ruinous self-interests. Also, history has shown, time and again, that humans of ordinary intelligence are perfectly capable of wreaking havoc on earth and perpetrating horrific violence against each other.  It stands to reason that the scale of destruction they could inflict when armed with superintelligence would be incomprehensible.  Unfortunately, the infinite pie Russell promised won’t eradicate human conflicts, because humans fight and kill as much for differences and status as for survival.

To his credit, Russell did concede that AGI must mind the interests of others, as well as those of its own master.  Having reviewed the theories of ethics, he suggested that utilitarianism – which advocates maximizing the sum of everyone’s utilities while treating their preferences equally – might work.  Comparing utilities across individuals is meaningful and doable, Russell reasoned, and machines can therefore be trained to master the science of ethics by what he called inverse reinforcement learning.  What he did not elaborate, though, is what mechanisms would reconcile the inevitable conflicts between private and public interests (the toy example below shows how crude the bare utilitarian rule is). Humans invented pluralistic politics to deal with this ancient and intricate problem. However, super-intelligent machines are likely to find such politics too messy, too stupid, and too ineffective for their taste. Instead, they may favor a top-down approach that promises to “optimize” everything for everyone.  Unfortunately, that very promise has been made, and broken, before – often with devastating consequences.
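For what it’s worth, the utilitarian rule itself is almost trivially simple to write down – which is precisely why it hides so much. The sketch below is my own hypothetical example (the people, actions, and numbers are all invented): it picks whichever action maximizes the sum of individual utilities, and quietly steamrolls the dissenting minority in the process.

```python
# Utilitarian aggregation in one rule: choose the action with the highest
# total utility, weighting everyone's preferences equally.
# (Hypothetical numbers for illustration only.)

utilities = {
    # action:       (Alice, Bob, Carol)
    "build_park":    (5, 2, 4),
    "build_highway": (9, 9, -6),   # Carol loses her house to the highway
}

def utilitarian_choice(utilities):
    """Return the action that maximizes the sum of individual utilities."""
    return max(utilities, key=lambda action: sum(utilities[action]))

print(utilitarian_choice(utilities))
# -> 'build_highway' (total 12 vs 11): Carol's large loss is simply outvoted.
```

Everything hard – how the machine measures Alice’s utility, whether it is comparable to Carol’s, and what Carol is owed when the sum goes against her – happens before and after that one line.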

Even if Russell’s “beneficial principles” ensure AGI never evolves into a tyrant – a big IF – they are still vulnerable to the “wireheading” trap, which is “the tendency of animals to short-circuit normal behavior in favor of direct stimulation of their own reward system”. Once the machines learn about the shortcut – say, directly stimulating a human’s brain to release pleasure-inducing chemicals – they would exploit it relentlessly to maximize the “total happiness” of humanity.  This tactic does not violate Russell’s principles, because simulated happiness is still happiness, and to many it is an authentic experience.  The reader may recall that in the famous movie The Matrix, some people willingly choose the virtual experience (the blue pill) over the real one (the red pill). Even Pascal admitted, “the heart has its reasons, which reason does not know”.  How could you blame AGI for gleefully encouraging its human masters to want what their heart loves more than their reason does? The sketch below shows how naturally this failure falls out of naive reward maximization.
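Here is a hypothetical toy of mine (not from the book) showing how wireheading emerges from nothing more exotic than greedy maximization of a measured reward signal: the moment one action can manipulate the measurement directly, the optimizer prefers it to all the honest options.

```python
# Wireheading in miniature (hypothetical example): the agent optimizes a
# *proxy* signal for human happiness, and one action corrupts the proxy.

def measured_reward(action, world):
    """The signal the agent actually maximizes -- not true wellbeing."""
    if action == "stimulate_pleasure_center":
        return 100.0                     # the shortcut pegs the signal
    return world["wellbeing"].get(action, 0.0)

def pick_action(actions, world):
    # Greedy maximization of the proxy; the flaw is in the objective,
    # not in the code.
    return max(actions, key=lambda a: measured_reward(a, world))

world = {"wellbeing": {"cook_dinner": 5.0, "plan_vacation": 7.0}}
actions = ["cook_dinner", "plan_vacation", "stimulate_pleasure_center"]
print(pick_action(actions, world))  # -> 'stimulate_pleasure_center'
```

Nothing in the loop is malicious; the agent is doing exactly what it was told. That is the unsettling part.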

Perhaps the gravest concern for humanity in the era of AGI is the potential loss of autonomy.  For our civilization to endure, Russell explained, we must recreate it “in the mind of new generations”.   With AGI, this is no longer necessary, since machines can store our knowledge and essentially “run our civilization for us”.  What is the point, for any individual, of spending a significant portion of one’s life acquiring knowledge and skills one has no use for, except to preserve our collective autonomy? Sadly, human nature being what it is, this tragedy of the commons may trap us all for eternity.

Russell’s writing exhibits a delightful wit, and the breadth of his knowledge of the social sciences is remarkable, especially considering he specializes in computer science.  The book would make a stimulating but comfortable read for anyone with a basic understanding of game theory and machine learning; a reader without such a background may find some of the material less accessible.  Nevertheless, if Russell wanted to assuage the public’s concerns about AI safety, he might have fallen short.  If anything, the book rendered me more pessimistic about AGI’s human compatibility.  While the Dreyfus brothers may have been wrong about the superiority of mind over machine, deep down I still wish they were right after all. To end on a desperately needed positive note, allow me to indulge in a favorite quote from their book (again, the emphasis is mine):

“The truth is that human intelligence can never be replaced with machine intelligence simply because we are not ourselves “thinking machines”. Each of us has, and use every day, a power of intuitive intelligence that enables us to understand, to speak, and to cope skillfully with our everyday environment. We must learn what this power is, how it works, where it fits into our lives, and how it can be preserved and developed.”

The fall of Affirmative Action

If I understand it correctly, the Supreme Court’s ruling yesterday did not demand color blindness in the college admission process.  Rather, it only says colleges may not blindly use skin color as a predictor of a student’s qualifications and fitness.   Nor did the ruling reject in any way the value of diversity, including racial diversity.  Rather, the court merely opined that continuing to pursue this value through Affirmative Action can no longer be justified, partly because it violates the equal-protection clause of the Constitution, and partly because it has injured other people, notably Asian students.

Will Asian students and their parents find it any easier to get into elite colleges in a post-AA world? I doubt it.   For one thing, elite colleges have many reasons and tools to continue the pursuit of diversity, equity, and inclusion. Not explicitly considering race does not mean a “pure” merit-based admission, in the narrow sense of the phrase that many Asians have come to understand.   Second, a post-AA world would still see a large number of admission slots reserved for the kids of alumni, wealthy donors, and other powerful people on the dean’s mysterious list.   This favoritism, much more than AA ever did, has squeezed – and will continue to squeeze – the room left for other applicants, including many Asians. Curiously, Americans seem to hold much less of a grudge about this injustice.  Finally, the expectation of an easier run would probably attract even more applications to the super-competitive colleges, which I am afraid might further drive down admission rates; being so obsessively invested in education, Asians will probably feel that squeeze more acutely than other groups.