(by Niall Ferguson, Vanity Fair, December 2008)
This year we
have lived through something more than a financial crisis. We have witnessed
the death of a planet. Call it Planet Finance. Two years ago, in 2006, the
measured economic output of the entire world was worth around $48.6 trillion.
The total market capitalization of the world’s stock markets was $50.6
trillion, 4 percent larger. The total value of domestic and international bonds
was $67.9 trillion, 40 percent larger. Planet Finance was beginning to dwarf
Planet Earth.
Planet Finance
seemed to spin faster, too. Every day $3.1 trillion changed hands on
foreign-exchange markets. Every month $5.8 trillion changed hands on global
stock markets. And all the time new financial life-forms were evolving. The
total annual issuance of mortgage-backed securities, including fancy new
“collateralized debt obligations” (C.D.O.’s), rose to more than $1 trillion.
The volume of “derivatives”—contracts such as options and swaps—grew even faster,
so that by the end of 2006 their notional value was just over $400 trillion.
Before the 1980s, such things were virtually unknown. In the space of a few
years their populations exploded. On Planet Finance, the securities outnumbered
the people; the transactions outnumbered the relationships.
New
institutions also proliferated. In 1990 there were just 610 hedge funds, with
$38.9 billion under management. At the end of 2006 there were 9,462, with $1.5
trillion under management. Private-equity partnerships also went forth and
multiplied. Banks, meanwhile, set up a host of “conduits” and “structured
investment vehicles” (SIVs—surely the most apt acronym in financial history) to
keep potentially risky assets off their balance sheets. It was as if an entire
shadow banking system had come into being. Then, beginning
in the summer of 2007, Planet Finance began to self-destruct in what the
International Monetary Fund soon acknowledged to be “the largest financial
shock since the Great Depression.” Did the crisis of 2007–8 happen because
American companies had gotten worse at designing new products? Had the pace of
technological innovation or productivity growth suddenly slackened? No. The
proximate cause of the economic uncertainty of 2008 was financial: to be
precise, a crunch in the credit markets triggered by mounting defaults on a
hitherto obscure species of housing loan known euphemistically as “subprime
mortgages.”
Central banks
in the United States and Europe sought to alleviate the pressure on the banks with
interest-rate cuts and offers of funds through special “term auction
facilities.” Yet the market rates at which banks could borrow money, whether by
issuing commercial paper, selling bonds, or borrowing from one another, failed
to follow the lead of the official federal-funds rate. The banks had to turn
not only to Western central banks for short-term assistance to rebuild their
reserves but also to Asian and Middle Eastern sovereign-wealth funds for equity
injections. When these sources proved insufficient, investors—and speculative
short-sellers—began to lose faith.
Beginning with
Bear Stearns, Wall Street’s investment banks entered a death spiral that ended
with their being either taken over by a commercial bank (as Bear was, followed
by Merrill Lynch) or driven into bankruptcy (as Lehman Brothers was). In
September the two survivors—Goldman Sachs and Morgan Stanley—formally ceased to
be investment banks, signaling the death of a business model that dated back to
the Depression. Other institutions deemed “too big to fail” by the U.S.
Treasury were effectively taken over by the government, including the mortgage
lenders and guarantors Fannie Mae and Freddie Mac and the insurance giant
American International Group (A.I.G.).
By September 18
the U.S.
financial system was gripped by such panic that the Treasury had to abandon
this ad hoc policy. Treasury Secretary Henry Paulson hastily devised a plan
whereby the government would be authorized to buy “troubled” securities with up
to $700 billion of taxpayers’ money—a figure apparently plucked from the air.
When a modified version of the measure was rejected by Congress 11 days later,
there was panic. When it was passed four days after that, there was more panic.
Now it wasn’t just bank stocks that were tanking. The entire stock market
seemed to be in free fall as fears mounted that the credit crunch was going to
trigger a recession. Moreover, the crisis was now clearly global in scale.
European banks were in much the same trouble as their American counterparts,
while emerging-market stock markets were crashing. A week of frenetic
improvisation by national governments culminated on the weekend of October
11–12, when the United
States reluctantly followed the British
government’s lead, buying equity stakes in banks rather than just their dodgy
assets and offering unprecedented guarantees of banks’ debt and deposits.
Since these
events coincided with the final phase of a U.S. presidential-election
campaign, it was not surprising that some rather simplistic lessons were soon
being touted by candidates and commentators. The crisis, some said, was the
result of excessive deregulation of financial markets. Others sought to lay the
blame on unscrupulous speculators: short-sellers, who borrowed the stocks of
vulnerable banks and sold them in the expectation of further price declines.
Still other suspects in the frame were negligent regulators and corrupt
congressmen. This hunt for scapegoats is futile. To understand the downfall of
Planet Finance, you need to take several steps back and locate this crisis in
the long run of financial history. Only then will you see that we have all
played a part in this latest sorry example of what the Victorian journalist
Charles Mackay described in his 1841 book, Extraordinary Popular Delusions
and the Madness of Crowds.
Nothing New
As long as
there have been banks, bond markets, and stock markets, there have been
financial crises. Banks went bust in the days of the Medici. There were
bond-market panics in the Venice of Shylock’s day. And the world’s first
stock-market crash happened in 1720, when the Mississippi Company—the Enron of
its day—blew up. According to economists Carmen Reinhart and Kenneth Rogoff,
the financial history of the past 800 years is a litany of debt defaults, banking
crises, currency crises, and inflationary spikes. Moreover, financial crises
seldom happen without inflicting pain on the wider economy. Another recent
paper, co-authored by Rogoff’s Harvard colleague Robert Barro, has identified
148 crises since 1870 in which a country experienced a cumulative decline in
gross domestic product (G.D.P.) of at least 10 percent, implying a probability
of financial disaster of around 3.6 percent per year.
If stock-market
movements followed the normal distribution, or bell curve, like human heights,
an annual drop of 10 percent or more would happen only once every 500 years,
whereas in the case of the Dow Jones Industrial Average it has happened in 20
of the last 100 years. And stock-market plunges of 20 percent or more would be
unheard of—rather like people a foot and a half tall—whereas in fact there have
been eight such crashes in the past century. The most famous
financial crisis—the Wall Street Crash—is conventionally said to have begun on
“Black Thursday,” October 24, 1929, when the Dow declined by 2 percent, though
in fact the market had been slipping since early September and had suffered a
sharp, 6 percent drop on October 23. On “Black Monday,” October 28, it plunged
by 13 percent, and the next day by a further 12 percent. In the course of the
next three years the U.S.
stock market declined by a staggering 89 percent, reaching its nadir in July
1932. The index did not regain its 1929 peak until November 1954.
That helps put
our current troubles into perspective. From its peak of 14,164, on October 9,
2007, to a dismal level of 8,579, exactly a year later, the Dow declined by 39
percent. By contrast, on a single day just over two decades ago—October 19,
1987—the index fell by 23 percent, one of only four days in history when the
index has fallen by more than 10 percent in a single trading session. This crisis, however, is about much more than
just the stock market. It needs to be understood as a fundamental breakdown of
the entire financial system, extending from the monetary-and-banking system
through the bond market, the stock market, the insurance market, and the
real-estate market. It affects not only established financial institutions such
as investment banks but also relatively novel ones such as hedge funds. It is
global in scope and unfathomable in scale.
Had it not been
for the frantic efforts of the Federal Reserve and the Treasury, to say nothing
of their counterparts in almost equally afflicted Europe,
there would by now have been a repeat of that “great contraction” of credit and
economic activity that was the prime mover of the Depression. Back then, the
Fed and the Treasury did next to nothing to prevent bank failures from
translating into a drastic contraction of credit and hence of business activity
and employment. If the more openhanded monetary and fiscal authorities of today
are ultimately successful in preventing a comparable slump of output, future
historians may end up calling this “the Great Repression.” This is the
Depression they are hoping to bottle up—a Depression in denial.
To understand
why we have come so close to a rerun of the 1930s, we need to begin at the
beginning, with banks and the money they make. From the Middle Ages until the
mid-20th century, most banks made their money by maximizing the difference
between the costs of their liabilities (payments to depositors) and the
earnings on their assets (interest and commissions on loans). Some banks also
made money by financing trade, discounting the commercial bills issued by
merchants. Others issued and traded bonds and stocks, or dealt in commodities
(especially precious metals). But the core business of banking was simple. It
consisted, as the third Lord Rothschild pithily put it, “essentially of
facilitating the movement of money from Point A, where it is, to Point B, where
it is needed.”
The system
evolved gradually. First came the invention of cashless intra-bank and
inter-bank transactions, which allowed debts to be settled between account
holders without having money physically change hands. Then came the idea of
fractional-reserve banking, whereby banks kept only a small proportion of their
existing deposits on hand to satisfy the needs of depositors (who seldom wanted
all their money simultaneously), allowing the rest to be lent out profitably.
That was followed by the rise of special public banks with monopolies on the
issuing of banknotes and other powers and privileges: the first central banks.
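The arithmetic behind fractional-reserve banking is worth making concrete. The sketch below is purely illustrative (the $100 deposit and the 10 percent reserve ratio are invented assumptions, not historical figures): each loan returns to the banking system as a new deposit, so a single deposit can support roughly one-over-the-reserve-ratio times as much bank money.

```python
# Illustrative toy model of fractional-reserve banking (hypothetical numbers).
# A bank keeps a fraction of each deposit as reserves and lends out the rest;
# the lent money is spent, re-deposited elsewhere, and lent again.

def total_deposits(initial_deposit: float, reserve_ratio: float, rounds: int = 50) -> float:
    """Sum the chain of deposits created from one initial deposit."""
    total, deposit = 0.0, initial_deposit
    for _ in range(rounds):
        total += deposit
        deposit *= (1 - reserve_ratio)   # the portion lent out returns as a new deposit
    return total

print(round(total_deposits(100.0, 0.10)))   # ~995, close to the 1/0.10 = 1,000 limit
```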
With these
innovations, money ceased to be understood as precious metal minted into coins.
Now it was the sum total of specific liabilities (deposits and reserves)
incurred by banks. Credit was the other side of banks’ balance sheets: the
total of their assets; in other words, the loans they made. Some of this money
might still consist of precious metal, though a rising proportion of that would
be held in the central bank’s vault. Most would be made up of banknotes and
coins recognized as “legal tender,” along with money that was visible only in
current- and deposit-account statements.
Until the late
20th century, the system of bank money retained an anchor in the pre-modern
conception of money in the form of the gold standard: fixed ratios between
units of account and quantities of precious metal. As early as 1924, the
English economist John Maynard Keynes dismissed the gold standard as a
“barbarous relic,” but the last vestige of the system did not disappear until
August 15, 1971—the day President Richard Nixon closed the so-called gold
window, through which foreign central banks could still exchange dollars for
gold. With that, the centuries-old link between money and precious metal was
broken.
Though we tend
to think of money today as being made of paper, in reality most of it now
consists of bank deposits. If we measure the ratio of actual money to output in
developed economies, it becomes clear that the trend since the 1970s has been
for that ratio to rise from around 70 percent, before the closing of the gold
window, to more than 100 percent by 2005. The corollary has been a parallel growth
of credit on the other side of bank balance sheets. A significant component of
that credit growth has been a surge of lending to consumers. Back in 1952, the
ratio of household debt to disposable income was less than 40 percent in the United States.
At its peak in 2007, it reached 133 percent, up from 90 percent a decade
before. Today Americans carry a total of $2.56 trillion in consumer debt, up by
more than a fifth since 2000.
Even more
spectacular, however, has been the rising indebtedness of banks themselves. In
1980, bank indebtedness was equivalent to 21 percent of U.S. gross
domestic product. In 2007 the figure was 116 percent. Another measure of this
was the declining capital adequacy of banks. On the eve of “the Great
Repression,” average bank capital in Europe
was equivalent to less than 10 percent of assets; at the beginning of the 20th
century, it was around 25 percent. It was not unusual for investment banks’
balance sheets to be as much as 20 or 30 times larger than their capital,
thanks in large part to a 2004 rule change by the Securities and Exchange
Commission that exempted the five largest of those banks from the regulation
that had capped their debt-to-capital ratio at 12 to 1. The Age of Leverage had
truly arrived for Planet Finance.
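A rough sketch with invented numbers shows why balance sheets of 20 or 30 times capital matter: at 30-to-1 leverage, a fall of barely more than 3 percent in the value of a bank's assets is enough to wipe out its capital entirely.

```python
# Hypothetical illustration of balance-sheet leverage.
# With assets worth 30x capital, small asset-price declines erase the equity cushion.

capital = 1.0                 # the bank's own capital (arbitrary units)
leverage = 30                 # assets as a multiple of capital
assets = capital * leverage

for decline in (0.01, 0.02, 1 / 30, 0.05):
    remaining_equity = capital - assets * decline
    print(f"assets fall {decline:.1%} -> remaining equity: {remaining_equity:+.2f}")
# At a decline of exactly 1/30 (about 3.3 percent), remaining equity hits zero.
```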
Credit and
money, in other words, have for decades been growing more rapidly than
underlying economic activity. Is it any wonder, then, that money has ceased to
hold its value the way it did in the era of the gold standard? The motto “In
God we trust” was added to the dollar bill in 1957. Since then its purchasing
power, relative to the consumer price index, has declined by a staggering 87
percent. Average annual inflation during that period has been more than 4
percent. A man who decided to put his savings into gold in 1970 could have
bought just over 27.8 ounces of the precious metal for $1,000. At the time of
writing, with gold trading at $900 an ounce, he could have sold it for around
$25,000. Those few goldbugs who always
doubted the soundness of fiat money—paper currency without a metal anchor—have
in large measure been vindicated. But why were the rest of us so blinded by
money illusion?
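Those figures are easy to verify. The short check below uses only the numbers quoted above (an 87 percent loss of purchasing power between 1957 and 2008, and 27.8 ounces of gold valued at $900 an ounce); no outside data are assumed.

```python
# Back-of-the-envelope check of the inflation and gold figures cited in the text.

years = 2008 - 1957
remaining_purchasing_power = 1 - 0.87                  # 13 cents on the 1957 dollar
avg_inflation = (1 / remaining_purchasing_power) ** (1 / years) - 1
print(f"implied average annual inflation: {avg_inflation:.1%}")    # a little over 4%

ounces_bought_in_1970 = 27.8                           # what $1,000 bought at ~$36/oz
print(f"value at $900 an ounce: ${ounces_bought_in_1970 * 900:,.0f}")   # about $25,000
```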
Blowing Bubbles
In the
immediate aftermath of the death of gold as the anchor of the monetary system,
the problem of inflation affected mainly retail prices and wages. Today, only
around one out of seven countries has an inflation rate above 10 percent, and
only one, Zimbabwe,
is afflicted with hyperinflation. But back in 1979 at least 7 countries had an
annual inflation rate above 50 percent, and more than 60 countries—including Britain and the United States—had inflation in
double digits.
Inflation has
come down since then, partly because many of the items we buy—from clothes to
computers—have gotten cheaper as a result of technological innovation and the
relocation of production to low-wage economies in Asia.
It has also been reduced because of a worldwide transformation in monetary
policy, which began with the monetarist-inspired increases in short-term rates
implemented by the Federal Reserve in 1979. Just as important, some of the
structural drivers of inflation, such as powerful trade unions, have also been
weakened.
By the 1980s,
in any case, more and more people had grasped how to protect their wealth from
inflation: by investing it in assets they expected to appreciate in line with,
or ahead of, the cost of living. These assets could take multiple forms, from
modern art to vintage wine, but the most popular proved to be stocks and real
estate. Once it became clear that this formula worked, the Age of Leverage
could begin. For it clearly made sense to borrow to the hilt to maximize your
holdings of stocks and real estate if these promised to generate higher rates
of return than the interest payments on your borrowings. Between 1990 and 2004,
most American households did not see an appreciable improvement in their
incomes. Adjusted for inflation, the median household income rose by about 6
percent. But people could raise their living standards by borrowing and investing
in stocks and housing.
Nearly all of
us did it. And the bankers were there to help. Not only could they borrow more
cheaply from one another than we could borrow from them; increasingly they
devised all kinds of new mortgages that looked more attractive to us (and
promised to be more lucrative to them) than boring old 30-year fixed-rate
deals. Moreover, the banks were just as ready to play the asset markets as we
were. Proprietary trading soon became the most profitable arm of investment
banking: buying and selling assets on the bank’s own account. There was, however, a catch. The Age of Leverage was
also an age of bubbles, beginning with the dot-com bubble of the irrationally
exuberant 1990s and ending with the real-estate mania of the exuberantly
irrational 2000s. Why was this?
The future is
in large measure uncertain, so our assessments of future asset prices are bound
to vary. If we were all calculating machines, we would simultaneously process
all the available information and come to the same conclusion. But we are human
beings, and as such are prone to myopia and mood swings. When asset prices
surge upward in sync, it is as if investors are gripped by a kind of collective
euphoria. Conversely, when their “animal spirits” flip from greed to fear, the
bubble that their earlier euphoria inflated can burst with amazing suddenness.
Zoological imagery is an integral part of the culture of Planet Finance.
Optimistic buyers are “bulls,” pessimistic sellers are “bears.” The real point,
however, is that stock markets are mirrors of the human psyche. Like Homo
sapiens, they can become depressed. They can even suffer complete
breakdowns.
This is no new
insight. In the 400 years since the first shares were bought and sold on the
Amsterdam Beurs, there has been a long succession of financial bubbles. Time
and again, asset prices have soared to unsustainable heights only to crash
downward again. So familiar is this pattern—described by the economic historian
Charles Kindleberger—that it is possible to distill it into five stages:
(1) Displacement:
Some change in economic circumstances creates new and profitable opportunities.
(2) Euphoria, or overtrading: A feedback process sets in whereby
expectation of rising profits leads to rapid growth in asset prices. (3) Mania,
or bubble: The prospect of easy capital gains attracts first-time investors and
swindlers eager to mulct them of their money. (4) Distress: The insiders
discern that profits cannot possibly justify the now exorbitant price of the
assets and begin to take profits by selling. (5) Revulsion, or
discredit: As asset prices fall, the outsiders stampede for the exits, causing
the bubble to burst. The key point is
that without easy credit creation a true bubble cannot occur. That is why so
many bubbles have their origins in the sins of omission and commission of
central banks.
The bubbles of
our time had their origins in the aftermath of the 1987 stock-market crash,
when then novice Federal Reserve chairman Alan Greenspan boldly affirmed the
Fed’s “readiness to serve as a source of liquidity to support the economic and
financial system.” This sent a signal to the markets, particularly the New York banks: if
things got really bad, he stood ready to bail them out. Thus was born the
“Greenspan put”—the implicit option the Fed gave traders to be able to sell
their stocks at today’s prices even in the event of a meltdown tomorrow.
Having contained
a panic once, Greenspan thereafter had a dilemma lurking in the back of his
mind: whether or not to act pre-emptively the next time—to prevent a panic
altogether. This dilemma came to the fore as a classic stock-market bubble took
shape in the mid-90s. The displacement in this case was the explosion of
innovation by the technology and software industry as personal computers met
the Internet. But, as in all of history’s bubbles, an accommodative monetary
policy also played a role. From a peak of 6 percent in February 1995, the
federal-funds target rate had been reduced to 5.25 percent by January 1996. It
was then cut in steps, in the fall of 1998, down to 4.75 percent, and it
remained at that level until June 1999, by which time the Dow had passed the 10,000
mark.
Why did the Fed
allow euphoria to run loose in the 1990s? Partly because Greenspan and his
colleagues underestimated the momentum of the technology bubble; as early as
December 1995, with the Dow just past the 5,000 mark, members of the Fed’s Open
Market Committee speculated that the market might be approaching its peak.
Partly, also, because Greenspan came to the conclusion that it was not the
Fed’s responsibility to worry about asset-price inflation, only consumer-price
inflation, and this, he believed, was being reduced by a major improvement in
productivity due precisely to the tech boom.
Greenspan could
not postpone a stock-exchange crash indefinitely. After Silicon Valley’s
dot-com bubble peaked, in March 2000, the U.S. stock market fell by almost
half over the next two and a half years. It was not until May 2007 that
investors in the Standard & Poor’s 500 had recouped their losses. But the
Fed’s response to the sell-off—and the massive shot of liquidity it injected
into the financial markets after the 9/11 terrorist attacks—prevented the
“correction” from precipitating a depression. Not only were the 1930s averted;
so too, it seemed, was a repeat of the Japanese experience after 1989, when a
conscious effort by the central bank to prick an asset bubble had ended up
triggering an 80 percent stock-market sell-off, a real-estate collapse, and a
decade of economic stagnation. What was not
immediately obvious was that Greenspan’s easy-money policy was already
generating another bubble—this time in the financial market that a majority of
Americans have been encouraged for generations to play: the real-estate market.
The American Dream
Real estate is
the English-speaking world’s favorite economic game. No other facet of
financial life has such a hold on the popular imagination. The real-estate
market is unique. Every adult, no matter how economically illiterate, has a
view on its future prospects. Through the evergreen board game Monopoly, even
children are taught how to climb the property ladder. Once upon a
time, people saved a portion of their earnings for the proverbial rainy day,
stowing the cash in a mattress or a bank safe. The Age of Leverage, as we have
seen, brought a growing reliance on borrowing to buy assets in the expectation
of their future appreciation in value. For a majority of families, this meant a
leveraged investment in a house. That strategy had one very obvious flaw. It
represented a one-way, totally unhedged bet on a single asset.
To be sure,
investing in housing paid off handsomely for more than half a century, up until
2006. Suppose you had put $100,000 into the U.S. property market back in the
first quarter of 1987. According to the Case-Shiller national home-price index,
you would have nearly tripled your money by the first quarter of 2007, to
$299,000. On the other hand, if you had put the same money into the S&P
500, and had continued to re-invest the dividend income in that index, you
would have ended up with $772,000 to play with—more than double what you would
have made on bricks and mortar.
There is,
obviously, an important difference between a house and a stock-market index.
You cannot live in a stock-market index. For the sake of a fair comparison,
allowance must therefore be made for the rent you save by owning your house (or
the rent you can collect if you own a second property). A simple way to proceed
is just to leave out both dividends and rents. In that case the difference is
somewhat reduced. In the two decades after 1987, the S&P 500, excluding
dividends, rose by a factor of just over six, meaning that an investment of
$100,000 would be worth some $600,000. But that still comfortably beat housing.
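Translated into compound annual growth rates, the comparison looks like this. The sketch uses only the dollar figures quoted above, over the 20 years from early 1987 to early 2007.

```python
# Implied compound annual growth rates from the article's own 20-year figures.

def cagr(start: float, end: float, years: int) -> float:
    return (end / start) ** (1 / years) - 1

outcomes = {
    "U.S. housing (Case-Shiller)":   299_000,
    "S&P 500, dividends reinvested": 772_000,
    "S&P 500, excluding dividends":  600_000,
}
for name, end_value in outcomes.items():
    print(f"{name}: {cagr(100_000, end_value, 20):.1%} a year")
# Roughly 5.6 percent, 10.8 percent, and 9.4 percent respectively.
```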
There are three
other considerations to bear in mind when trying to compare housing with other
forms of assets. The first is depreciation. Stocks do not wear out and require
new roofs; houses do. The second is liquidity. As assets, houses are a great
deal more expensive to convert into cash than stocks. The third is volatility.
Housing markets since World War II have been far less volatile than stock
markets. Yet that is not to say that house prices have never deviated from a
steady upward path. In Britain
between 1989 and 1995, for example, the average house price fell by 18 percent,
or, in inflation-adjusted terms, by more than a third—37 percent. In London, the real decline
was closer to 47 percent. In Japan
between 1990 and 2000, property prices fell by more than 60 percent.
The recent
decline of property prices in the United States should therefore have
come as less of a shock than it did. Between July 2006 and June 2008, the
Case-Shiller index of home prices in 20 big American cities declined on average
by 19 percent. In some of these cities—Phoenix, San Diego, Los Angeles, and Miami—the total
decline was as much as a third. Seen in international perspective, those are
not unprecedented figures. Seen in the context of the post-2000 bubble, prices
have yet to return to their starting point. On average, house prices are still
50 percent higher than they were at the beginning of this process.
So why were we
oblivious to the likely bursting of the real-estate bubble? The answer is that
for generations we have been brainwashed into thinking that borrowing to buy a
house is the only rational financial strategy to pursue. Think of Frank Capra’s
classic 1946 movie, It’s a Wonderful Life, which tells the story of the
family-owned Bailey Building & Loan, a small-town mortgage firm that George
Bailey (played by James Stewart) struggles to keep afloat in the teeth of the
Depression. “You know, George,” his father tells him, “I feel that in a small
way we are doing something important. It’s satisfying a fundamental urge. It’s
deep in the race for a man to want his own roof and walls and fireplace, and
we’re helping him get those things in our shabby little office.” George gets
the message, as he passionately explains to the villainous slumlord Potter
after Bailey Sr.’s death: “[My father] never once thought of himself.… But he
did help a few people get out of your slums, Mr. Potter. And what’s wrong with
that? … Doesn’t it make them better citizens? Doesn’t it make them better
customers?” There, in a
nutshell, is one of the key concepts of the 20th century: the notion that
property ownership enhances citizenship, and that therefore a property-owning
democracy is more socially and politically stable than a democracy divided into
an elite of landlords and a majority of property-less tenants. So deeply rooted
is this idea in our political culture that it comes as a surprise to learn that
it was invented just 70 years ago.
Fannie, Ginnie, and Freddie
Prior to the
1930s, only a minority of Americans owned their homes. During the Depression,
however, the Roosevelt administration created
a whole complex of institutions to change that. A Federal Home Loan Bank Board
was set up in 1932 to encourage and oversee local mortgage lenders known as
savings-and-loans (S&Ls)—mutual associations that took in deposits and lent
to homebuyers. Under the New Deal, the Home Owners’ Loan Corporation stepped in
to refinance mortgages on longer terms, up to 15 years. To reassure depositors,
who had been traumatized by the thousands of bank failures of the previous
three years, Roosevelt introduced federal
deposit insurance. And by providing federally backed insurance for mortgage
lenders, the Federal Housing Administration (F.H.A.) sought to encourage large
(up to 80 percent of the purchase price), long (20- to 25-year), fully
amortized, low-interest loans.
By
standardizing the long-term mortgage and creating a national system of official
inspection and valuation, the F.H.A. laid the foundation for a secondary market
in mortgages. This market came to life in 1938, when a new Federal National
Mortgage Association—nicknamed Fannie Mae—was authorized to issue bonds and use
the proceeds to buy mortgages from the local S&Ls, which were restricted by
regulation both in terms of geography (they could not lend to borrowers more
than 50 miles from their offices) and in terms of the rates they could offer
(the so-called Regulation Q, which imposed a low ceiling on interest paid on
deposits). Because these changes tended to reduce the average monthly payment
on a mortgage, the F.H.A. made home ownership viable for many more Americans
than ever before. Indeed, it is not too much to say that the modern United
States, with its seductively samey suburbs, was born with Fannie Mae. Between
1940 and 1960, the home-ownership rate soared from 43 to 62 percent.
These were not
the only ways in which the federal government sought to encourage Americans to
own their own homes. Mortgage-interest payments were always tax-deductible,
from the inception of the federal income tax in 1913. As Ronald Reagan said
when the rationality of this tax break was challenged, mortgage-interest relief
was “part of the American dream.” In 1968, to
broaden the secondary-mortgage market still further, Fannie Mae was split in
two—the Government National Mortgage Association (Ginnie Mae), which was to
cater to poor borrowers, and a rechartered Fannie Mae, now a privately owned
government-sponsored enterprise (G.S.E.). Two years later, to provide
competition for Fannie Mae, the Federal Home Loan Mortgage Corporation (Freddie
Mac) was set up. In addition, Fannie Mae was permitted to buy conventional as
well as government-guaranteed mortgages. Later, with the Community Reinvestment
Act of 1977, American banks found themselves under pressure for the first time
to lend to poor, minority communities.
These changes
presaged a more radical modification to the New Deal system. In the late 1970s,
the savings-and-loan industry was hit first by double-digit inflation and then
by sharply rising interest rates. This double punch was potentially lethal. The
S&Ls were simultaneously losing money on long-term, fixed-rate mortgages,
due to inflation, and hemorrhaging deposits to higher-interest money-market
funds. The response in Washington
from both the Carter and Reagan administrations was to try to salvage the
S&Ls with tax breaks and deregulation. When the new legislation was passed,
President Reagan declared, “All in all, I think we hit the jackpot.” Some
people certainly did.
On the one
hand, S&Ls could now invest in whatever they liked, not just local
long-term mortgages. Commercial property, stocks, junk bonds—anything was allowed.
They could even issue credit cards. On the other, they could now pay whatever
interest rate they liked to depositors. Yet all their deposits were still
effectively insured, with the maximum covered amount raised from $40,000 to
$100,000, thanks to a government regulation two years earlier. And if ordinary
deposits did not suffice, the S&Ls could raise money in the form of
brokered deposits from middlemen. What happened next perfectly illustrated the
great financial precept first enunciated by William Crawford, the commissioner
of the California Department of Savings and Loan: “The best way to rob a bank
is to own one.” Some S&Ls bet their depositors’ money on highly dubious
real-estate developments. Many simply stole the money, as if deregulation meant
that the law no longer applied to them at all.
When the
ensuing bubble burst, nearly 300 S&Ls collapsed, while another 747 were
closed or reorganized under the auspices of the Resolution Trust Corporation,
established by Congress in 1989 to clear up the mess. The final cost of the
crisis was $153 billion (around 3 percent of the 1989 G.D.P.), of which
taxpayers had to pay $124 billion. But even as the S&Ls were going
belly-up, they offered another, very different group of American financial
institutions a fast track to megabucks. To the bond traders at Salomon
Brothers, the New York
investment bank, the breakdown of the New Deal mortgage system was not a crisis
but a wonderful opportunity. As profit-hungry as their language was profane,
the self-styled “Big Swinging Dicks” at Salomon saw a way of exploiting the
gyrating interest rates of the early 1980s.
The idea was to
re-invent mortgages by bundling thousands of them together as the backing for
new and alluring securities that could be sold as alternatives to traditional
government and corporate bonds—in short, to convert mortgages into bonds. Once
lumped together, the interest payments due on the mortgages could be subdivided
into strips with different maturities and credit risks. The first issue of this
new kind of mortgage-backed security (known as a “collateralized mortgage
obligation”) occurred in June 1983. The dawn of securitization was a necessary
prelude to the Age of Leverage.
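The subdividing of a mortgage pool's payments can be sketched as a toy cash-flow "waterfall." All of the numbers below are hypothetical: incoming payments are handed out in order of seniority, so the senior strip looks safe so long as the pool's losses stay small, while the junior strips absorb the first losses.

```python
# Toy tranche waterfall for a securitized mortgage pool (hypothetical numbers).
# Cash collected from the pool is paid out senior strip first.

def waterfall(collected, tranche_sizes):
    payouts = []
    for size in tranche_sizes:            # ordered from most senior to most junior
        paid = min(collected, size)
        payouts.append(paid)
        collected -= paid
    return payouts

tranches = [70.0, 20.0, 10.0]             # senior / mezzanine / equity, per $100 of pool
for collected in (100.0, 95.0, 85.0, 70.0):
    print(f"pool collects {collected:>5} -> payouts {waterfall(collected, tranches)}")
# Even with 15 percent of the payments missing, the senior strip is paid in full.
```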
Once again,
however, it was the federal government that stood ready to pick up the tab in a
crisis. For the majority of mortgages continued to enjoy an implicit guarantee
from the government-sponsored trio of Fannie, Freddie, and Ginnie, meaning that
bonds which used those mortgages as collateral could be represented as virtual
government bonds and considered “investment grade.” Between 1980 and 2007, the
volume of such G.S.E.-backed mortgage-backed securities grew from less than
$200 billion to more than $4 trillion. In 1980 only 10 percent of the
home-mortgage market was securitized; by 2007, 56 percent of it was. These changes
swept away the last vestiges of the business model depicted in It’s a
Wonderful Life. Once there had been meaningful social ties between mortgage
lenders and borrowers. James Stewart’s character knew both the depositors and
the debtors. By contrast, in a securitized market the interest you paid on your
mortgage ultimately went to someone who had no idea you existed. The full
implications of this transition for ordinary homeowners would become apparent
only 25 years later.
The Lessons of Detroit
In July 2007, I
paid a visit to Detroit, because I had the
feeling that what was happening there was the shape of things to come in the United States
as a whole. In the space of 10 years, house prices in Detroit, which probably
possesses the worst housing stock of any American city other than New Orleans,
had risen by more than a third—not much compared with the nationwide bubble,
but still hard to explain, given the city’s chronically depressed economic
state. As I discovered, the explanation lay in fundamental changes in the rules
of the housing game. I arrived at
the end of a borrowing spree. For several years agents and brokers selling
subprime mortgages had been flooding Detroit
with radio, television, and direct-mail advertisements, offering what sounded
like attractive deals. In 2006, for example, subprime lenders pumped more than
a billion dollars into 22 Detroit Zip Codes.
These were not
the old 30-year fixed-rate mortgages invented in the New Deal. On the contrary,
a high proportion were adjustable-rate mortgages—in other words, the interest
rate could vary according to changes in short-term lending rates. Many were
also interest-only mortgages, without amortization (repayment of principal),
even when the principal represented 100 percent of the assessed value of the
mortgaged property. And most had introductory “teaser” periods, whereby the
initial interest payments—usually for the first two years—were kept
artificially low, with the cost of the loan backloaded. All of these devices
were intended to allow an immediate reduction in the debt-servicing costs of
the borrower.
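A simplified example, with invented loan terms rather than actual Detroit contracts, shows the size of the eventual payment shock on an interest-only mortgage once the teaser period ends.

```python
# Hypothetical interest-only mortgage: the monthly bill at the teaser rate
# versus the bill after the rate resets. The principal is never paid down.

principal = 100_000

def monthly_interest_only(balance: float, annual_rate: float) -> float:
    return balance * annual_rate / 12

teaser_payment = monthly_interest_only(principal, 0.04)   # assumed introductory rate
reset_payment = monthly_interest_only(principal, 0.09)    # assumed post-teaser rate

print(f"teaser payment: ${teaser_payment:,.0f} a month")   # about $333
print(f"reset payment:  ${reset_payment:,.0f} a month")    # $750, more than double
```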
In Detroit only a minority
of these loans were going to first-time buyers. They were nearly all
refinancing deals, which allowed borrowers to treat their homes as cash
machines, converting their existing equity into cash and using the proceeds to
pay off credit-card debts, carry out renovations, or buy new consumer durables.
However, the combination of declining long-term interest rates and ever more
alluring mortgage deals did attract new buyers into the housing market. By
2005, 69 percent of all U.S.
householders were homeowners; 10 years earlier it had been 64 percent. About
half of that increase could be attributed to the subprime-lending boom.
Significantly,
a disproportionate number of subprime borrowers belonged to ethnic minorities.
Indeed, I found myself wondering, as I drove around Detroit, if “subprime” was in fact a new
financial euphemism for “black.” This was no idle supposition. According to a
joint study by, among others, the Massachusetts Affordable Housing Alliance, 55
percent of black and Latino borrowers in Boston
who had obtained loans for single-family homes in 2005 had been given subprime
mortgages; the figure for white borrowers was just 13 percent. More than
three-quarters of black and Latino borrowers from Washington Mutual were
classed as subprime, whereas only 17 percent of white borrowers were. According
to a report in The Wall Street Journal, minority ownership increased by
3.1 million between 2002 and 2007.
Here, surely,
was the zenith of the property-owning democracy. It was an achievement that the
Bush administration was proud of. “We want everybody in America to own
their own home,” President George W. Bush had said in October 2002. Having
challenged lenders to create 5.5 million new minority homeowners by the end of
the decade, Bush signed the American Dream Downpayment Act in 2003, a measure
designed to subsidize first-time house purchases in low-income groups. Between
2000 and 2006, the share of undocumented subprime contracts rose from 17 to 44
percent. Fannie Mae and Freddie Mac also came under pressure from the
Department of Housing and Urban Development to support the subprime market. As
Bush put it in December 2003, “It is in our national interest that more people
own their own home.” Few people dissented.
As a business
model, subprime lending worked beautifully—as long, that is, as interest rates
stayed low, people kept their jobs, and real-estate prices continued to rise.
Such conditions could not be relied upon to last, however, least of all in a
city like Detroit.
But that did not worry the subprime lenders. They simply followed the trail
blazed by mainstream mortgage lenders in the 1980s. Having pocketed fat
commissions on the signing of the original loan contracts, they hastily resold
their loans in bulk to Wall Street banks. The banks, in turn, bundled the loans
into high-yielding mortgage-backed securities and sold them to investors around
the world, all eager for a few hundredths of a percentage point more of return
on their capital. Repackaged as C.D.O.’s, these subprime securities could be
transformed from risky loans to flaky borrowers into triple-A-rated
investment-grade securities. All that was required was certification from one
of the rating agencies that at least the top tier of these securities was
unlikely to go into default.
The risk was
spread across the globe, from American state pension funds to public-hospital
networks in Australia, to town councils near the Arctic Circle. In Norway, for
example, eight municipalities, including Rana and Hemnes, invested some $120
million of their taxpayers’ money in C.D.O.’s secured on American subprime
mortgages. In Detroit the rise of
subprime mortgages had in fact coincided with a new slump in the inexorably
declining automobile industry. That anticipated a wider American slowdown, an
almost inevitable consequence of a tightening of monetary policy as the Federal
Reserve belatedly raised short-term interest rates from 1 percent to 5.25
percent. As soon as the teaser rates expired and mortgages were reset at new
and much higher interest rates, hundreds of Detroit households swiftly fell behind in
their mortgage payments. The effect was to burst the real-estate bubble,
causing house prices to start falling significantly for the first time since
the early 1990s. And the further house prices fell, the more homeowners found
themselves with “negative equity”—in other words, owing more money than their homes
were worth. The rest—the chain reaction
as defaults in Detroit and elsewhere unleashed huge losses on C.D.O.’s in
financial institutions all around the world—you know.
Drunk on Derivatives
Do you,
however, know about the second-order effects of this crisis in the markets for
derivatives? Do you in fact know what a derivative is? Once excoriated by
Warren Buffett as “financial weapons of mass destruction,” derivatives are what
make this crisis both unique and unfathomable in its ramifications. To understand
what they are, you need, literally, to go back to the future.
For a farmer
planting a crop, nothing is more crucial than the future price it will fetch
after it has been harvested and taken to market. A futures contract allows him
to protect himself by committing a merchant to buy his crop when it comes to
market at a price agreed upon when the seeds are being planted. If the market
price on the day of delivery is lower than expected, the farmer is protected. The earliest
forms of protection for farmers were known as forward contracts, which were
simply bilateral agreements between seller and buyer. A true futures contract,
however, is a standardized instrument issued by a futures exchange and hence
tradable. With the development of a standard “to arrive” futures contract,
along with a set of rules to enforce settlement and, finally, an effective
clearinghouse, the first true futures market was born.
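A minimal sketch of that hedge, with invented prices and quantities: whatever the spot price turns out to be on delivery day, the gain or loss on the futures position offsets it, leaving the farmer with the price agreed at planting time.

```python
# The farmer's hedge, in miniature (all numbers invented).
# Selling a futures contract locks in the price agreed at planting time.

agreed_price = 5.00            # dollars per bushel, fixed when the seeds go in
bushels = 10_000

for spot_at_delivery in (3.50, 5.00, 6.50):
    unhedged_revenue = spot_at_delivery * bushels
    futures_gain = (agreed_price - spot_at_delivery) * bushels   # gain/loss on the short futures
    hedged_revenue = unhedged_revenue + futures_gain             # always the agreed price
    print(f"spot ${spot_at_delivery:.2f}: unhedged ${unhedged_revenue:,.0f}, "
          f"hedged ${hedged_revenue:,.0f}")
```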
Because they
are derived from the value of underlying assets, all futures contracts are
forms of derivatives. Closely related, though distinct from futures, are the
contracts known as options. In essence, the buyer of a “call” option has the
right, but not the obligation, to buy an agreed-upon quantity of a particular
commodity or financial asset from the seller (“writer”) of the option at a
certain time (the expiration date) for a certain price (known as the “strike
price”). Clearly, the buyer of a call option expects the price of the
underlying instrument to rise in the future. When the price passes the agreed-upon
strike price, the option is “in the money”—and so is the smart guy who bought
it. A “put” option is just the opposite: the buyer has the right but not the
obligation to sell an agreed-upon quantity of something to the seller of the
option at an agreed-upon price.
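The payoffs at expiration follow directly from those definitions. A small sketch, ignoring the premium paid for the option and using invented prices:

```python
# Payoff at expiration of a call and a put (premium ignored; prices invented).

def call_payoff(spot: float, strike: float) -> float:
    return max(spot - strike, 0.0)     # the right to buy at the strike

def put_payoff(spot: float, strike: float) -> float:
    return max(strike - spot, 0.0)     # the right to sell at the strike

strike = 100.0
for spot in (80.0, 100.0, 120.0):
    print(f"spot {spot:>5.0f}: call pays {call_payoff(spot, strike):>5.0f}, "
          f"put pays {put_payoff(spot, strike):>5.0f}")
```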
A third kind of
derivative is the interest-rate “swap,” which is effectively a bet between two
parties on the future path of interest rates. A pure interest-rate swap allows
two parties already receiving interest payments literally to swap them,
allowing someone receiving a variable rate of interest to exchange it for a
fixed rate, in case interest rates decline. A credit-default swap (C.D.S.),
meanwhile, offers protection against a company’s defaulting on its bonds. There was a
time when derivatives were standardized instruments traded on exchanges such as
the Chicago Board of Trade. Now, however, the vast proportion are custom-made
and sold “over the counter” (O.T.C.), often by banks, which charge attractive
commissions for their services, but also by insurance companies (notably
A.I.G.). According to the Bank for International Settlements, the total
notional amounts outstanding of O.T.C. derivative contracts—arranged on an ad
hoc basis between two parties—reached a staggering $596 trillion in December
2007, with a gross market value of just over $14.5 trillion.
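The cash flows of the plain interest-rate swap described above are simple to sketch. The notional amount, the fixed rate, and the path of the floating rate below are all invented for illustration; only the net difference between the two interest streams actually changes hands each period.

```python
# Minimal sketch of a plain-vanilla interest-rate swap (all terms hypothetical).
# One party pays a fixed rate and receives a floating rate on an agreed notional.

notional = 10_000_000
fixed_rate = 0.05
floating_path = [0.06, 0.05, 0.04, 0.03]     # assumed floating rate, year by year

for year, floating in enumerate(floating_path, start=1):
    net_to_fixed_payer = (floating - fixed_rate) * notional
    print(f"year {year}: floating {floating:.0%} -> "
          f"net to the fixed-rate payer: ${net_to_fixed_payer:+,.0f}")
# When the floating rate falls below 5 percent, the party who locked in fixed
# receipts comes out ahead, which is precisely the protection described above.
```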
But how exactly
do you price a derivative? What precisely is an option worth? The answers to
those questions required a revolution in financial theory. From an academic
point of view, what this revolution achieved was highly impressive. But the
events of the 1990s, as the rise of quantitative finance replaced preppies with
quants (quantitative analysts) all along Wall Street, revealed a new truth:
those whom the gods want to destroy they first teach math.
Working closely
with Fischer Black, of the consulting firm Arthur D. Little, M.I.T.’s Myron
Scholes invented a groundbreaking new theory of pricing options, to which his
colleague Robert Merton also contributed. (Scholes and Merton would share the
1997 Nobel Prize in economics.) They reasoned that a call option’s value
depended on six variables: the current market price of the stock (S),
the agreed future price at which the stock could be bought (L), the time
until the expiration date of the option (t), the risk-free rate of
return in the economy as a whole (r), the probability that the option
will be exercised (N), and—the crucial variable—the expected volatility
of the stock, i.e., the likely fluctuations of its price between the time of
purchase and the expiration date (σ). With wonderful mathematical wizardry, the
quants reduced the price of a call option to the Black-Scholes formula:
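$$
C \;=\; S\,N(d_1)\;-\;L\,e^{-rt}\,N(d_2),
\qquad
d_1 \;=\; \frac{\ln(S/L)+\left(r+\tfrac{\sigma^{2}}{2}\right)t}{\sigma\sqrt{t}},
\qquad
d_2 \;=\; d_1-\sigma\sqrt{t},
$$

where C is the price of the call and N(·) is the cumulative normal distribution. (This is the standard textbook form of the formula, written here in the notation of the preceding paragraph.)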
Feeling a bit
baffled? Can’t follow the algebra? That was just fine by the quants. To make
money from this magic formula, they needed markets to be full of people who
didn’t have a clue about how to price options but relied instead on their
(seldom accurate) gut instincts. They also needed a great deal of computing
power, a force which had been transforming the financial markets since the
early 1980s. Their final requirement was a partner with some market savvy in
order to make the leap from the faculty club to the trading floor. Black, who
would soon be struck down by cancer, could not be that partner. But John
Meriwether could. The former head of the bond-arbitrage group at Salomon
Brothers, Meriwether had made his first fortune in the wake of the S&L
meltdown of the late 1980s. The hedge fund he created with Scholes and Merton
in 1994 was called Long-Term Capital Management.
In its brief,
four-year life, Long-Term was the brightest star in the hedge-fund firmament,
generating mind-blowing returns for its elite club of investors and even more
money for its founders. Needless to say, the firm did more than just trade
options, though selling puts on the stock market became such a big part of its
business that it was nicknamed “the central bank of volatility” by banks buying
insurance against a big stock-market sell-off. In fact, the partners were
simultaneously pursuing multiple trading strategies, about 100 of them, with a
total of 7,600 positions. This conformed to a second key rule of the new
mathematical finance: the virtue of diversification, a principle that had been
formalized by Harry M. Markowitz, of the Rand Corporation. Diversification was
all about having a multitude of uncorrelated positions. One might go wrong, or
even two. But thousands just could not go wrong simultaneously.
The mathematics
were reassuring. According to the firm’s “Value at Risk” models, it would take
a 10-σ (in other words, 10-standard-deviation) event to cause the firm to lose
all its capital in a single year. But the probability of such an event,
according to the quants, was 1 in 10²⁴—or effectively zero. Indeed,
the models said the most Long-Term was likely to lose in a single day was $45
million. For that reason, the partners felt no compunction about leveraging
their trades. At the end of August 1997, the fund’s capital was $6.7 billion,
but the debt-financed assets on its balance sheet amounted to $126 billion, a
ratio of assets to capital of 19 to 1.
There is no
need to rehearse here the story of Long-Term’s downfall, which was precipitated
by a Russian debt default. Suffice it to say that on Friday, August 21, 1998,
the firm lost $550 million—15 percent of its entire capital, and vastly more
than its mathematical models had said was possible. The key point is to
appreciate why the quants were so wrong.
The problem lay
with the assumptions that underlie so much of mathematical finance. In order to
construct their models, the quants had to postulate a planet where the
inhabitants were omniscient and perfectly rational; where they instantly
absorbed all new information and used it to maximize profits; where they never
stopped trading; where markets were continuous, frictionless, and completely
liquid. Financial markets on this planet followed a “random walk,” meaning that
each day’s prices were quite unrelated to the previous day’s, but reflected no
more and no less than all the relevant information currently available. The
returns on this planet’s stock market were normally distributed along the bell
curve, with most years clustered closely around the mean, and two-thirds of
them within one standard deviation of the mean. On such a planet, a “six
standard deviation” sell-off would be about as common as a person shorter than
one foot in our world. It would happen
only once in four million years of trading.
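Both of those probabilities can be checked under the same bell-curve assumption. The sketch below uses Python's standard library; the figure of 252 trading days per year is an assumption of the sketch, not the article's. It reproduces the "once in four million years" figure for a six-standard-deviation day and shows how vanishingly rare a 10-standard-deviation loss would be on such a planet.

```python
from statistics import NormalDist

# Tail probabilities under the normality assumption the models relied on.
# 252 trading days per year is an assumed convention.

ten_sigma = NormalDist().cdf(-10)     # chance of a one-day 10-standard-deviation loss
six_sigma = NormalDist().cdf(-6)      # chance of a one-day 6-standard-deviation loss

print(f"P(10-sigma loss) = {ten_sigma:.1e}")                       # astronomically small
print(f"a 6-sigma day comes once every {1 / six_sigma / 252:,.0f} "
      f"years of trading")                                         # about four million
```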
But Long-Term
was not located on Planet Finance. It was based in Greenwich, Connecticut,
on Planet Earth, a place inhabited by emotional human beings, always capable of
flipping suddenly and en masse from greed to fear. In the case of Long-Term,
the herding problem was acute, because many other firms had begun trying to
copy Long-Term’s strategies in the hope of replicating its stellar performance.
When things began to go wrong, there was a truly bovine stampede for the exits.
The result was a massive, synchronized downturn in virtually all asset markets.
Diversification was no defense in such a crisis. As one leading London hedge-fund manager
later put it to Meriwether, “John, you were the correlation.”
There was,
however, another reason why Long-Term failed. The quants’ Value at Risk models
had implied that the loss the firm suffered in August 1998 was so unlikely that
it ought never to have happened in the entire life of the universe. But that
was because the models were working with just five years of data. If they had
gone back even 11 years, they would have captured the 1987 stock-market crash.
If they had gone back 80 years they would have captured the last great Russian
default, after the 1917 revolution. Meriwether himself, born in 1947, ruefully
observed, “If I had lived through the Depression, I would have been in a better
position to understand events.” To put it bluntly, the Nobel Prize winners knew
plenty of mathematics but not enough history.
One might
assume that, after the catastrophic failure of L.T.C.M., quantitative hedge
funds would have vanished from the financial scene, and derivatives such as
options would be sold a good deal more circumspectly. Yet the very reverse
happened. Far from declining, in the past 10 years hedge funds of every type
have exploded in number and in the volume of assets they manage, with
quantitative hedge funds such as Renaissance, Citadel, and D. E. Shaw emerging
as leading players. The growth of derivatives has also been spectacular—and it
has continued despite the onset of the credit crunch. Between December 2005 and
December 2007, the notional amounts outstanding for all derivatives increased
from $298 trillion to $596 trillion. Credit-default swaps quadrupled, from $14
trillion to $58 trillion.
An intimation
of the problems likely to arise came in September, when the government takeover
of Fannie and Freddie cast doubt on the status of derivative contracts
protecting the holders of more than $1.4 trillion of their bonds against
default. The consequences of the failure of Lehman Brothers were substantially
greater, because the firm was the counter-party in so many derivative
contracts. The big question is whether
those active in the market waited too long to set up some kind of clearing
mechanism. If, as seems inevitable, there is an upsurge in corporate defaults
as the U.S.
slides into recession, the whole system could completely seize up.
The China Syndrome
Just 10 years
ago, during the Asian crisis of 1997–98, it was conventional wisdom that
financial crises were more likely to happen on the periphery of the world
economy—in the so-called emerging markets of East Asia and Latin America. Yet
the biggest threats to the global financial system in this new century have
come not from the periphery but from the core. The explanation for this strange
role reversal may in fact lie in the way emerging markets changed their
behavior after 1998.
For many
decades it was assumed that poor countries could become rich only by borrowing
capital from wealthy countries. Recurrent debt crises and currency crises
associated with sudden withdrawals of Western money led to a rethinking,
inspired largely by the Chinese example. When the Chinese wanted to attract foreign
capital, they insisted that it take the form of direct investment. That meant
that instead of borrowing from Western banks to finance its industrial
development, as many emerging markets did, China got foreigners to build
factories in Chinese enterprise zones—large, lumpy assets that could not easily
be withdrawn in a crisis.
The crucial
point, though, is that the bulk of Chinese investment has been financed from China’s own
savings. Cautious after years of instability and unused to the panoply of
credit facilities we have in the West, Chinese households save a high
proportion of their rising incomes, in marked contrast to Americans, who in
recent years have saved almost none at all. Chinese corporations save an even
larger proportion of their soaring profits. The remarkable thing is that a
growing share of that savings surplus has ended up being lent to the United States.
In effect, the People’s Republic of China has become banker to the United States of America.
The Chinese
have not been acting out of altruism. Until very recently, the best way for China to employ its vast population was by
exporting manufactured goods to the spendthrift U.S. consumer. To ensure that those
exports were irresistibly cheap, China had to fight the tendency for
its currency to strengthen against the dollar by buying literally billions of
dollars on world markets. In 2006, Chinese holdings of dollars reached $700
billion. Other Asian and Middle Eastern economies adopted much the same
strategy.
The benefits
for the United States
were manifold. Asian imports kept down U.S. inflation. Asian labor kept
down U.S.
wage costs. Above all, Asian savings kept down U.S. interest rates. But there was
a catch. The more Asia was willing to lend to the United States, the more Americans
were willing to borrow. The Asian savings glut was thus the underlying cause of
the surge in bank lending, bond issuance, and new derivative contracts that
Planet Finance witnessed after 2000. It was the underlying cause of the
hedge-fund population explosion. It was the underlying reason why
private-equity partnerships were able to borrow money left, right, and center
to finance leveraged buyouts. And it was the underlying reason why the U.S.
mortgage market was so awash with cash by 2006 that you could get a 100 percent
mortgage with no income, no job, and no assets.
Whether or not China is now sufficiently “decoupled” from the United
States that it can insulate itself from our credit crunch remains to be seen.
At the time of writing, however, it looks very doubtful.
Back to Reality
The modern
financial system is the product of centuries of economic evolution. Banks
transformed money from metal coins into accounts, allowing ever larger
aggregations of borrowing and lending. From the Renaissance on, government
bonds introduced the securitization of streams of interest payments. From the
17th century on, equity in corporations could be bought and sold in public
stock markets. From the 18th century on, central banks slowly learned how to
moderate or exacerbate the business cycle. From the 19th century on, insurance
was supplemented by futures, the first derivatives. And from the 20th century
on, households were encouraged by government to skew their portfolios in favor
of real estate.
Economies that combined
all these institutional innovations performed better over the long run than
those that did not, because financial intermediation generally permits a more
efficient allocation of resources than, say, feudalism or central planning. For
this reason, it is not wholly surprising that the Western financial model
tended to spread around the world, first in the guise of imperialism, then in
the guise of globalization.
Yet money’s
ascent has not been, and can never be, a smooth one. On the contrary, financial
history is a roller-coaster ride of ups and downs, bubbles and busts, manias
and panics, shocks and crashes. The excesses of the Age of Leverage—the deluge
of paper money, the asset-price inflation, the explosion of consumer and bank
debt, and the hypertrophic growth of derivatives—were bound sooner or later to
produce a really big crisis.
It remains
unclear whether this crisis will have economic and social effects as disastrous
as those of the Great Depression, or whether the monetary and fiscal authorities
will succeed in achieving a Great Repression, averting a 1930s-style “great
contraction” of credit and output by transferring the as yet unquantifiable
losses from banks to taxpayers. Either
way, Planet Finance has now returned to Planet Earth with a bang. The key
figures of the Age of Leverage—the lax central bankers, the reckless investment
bankers, the hubristic quants—are now feeling the full force of this planet’s
gravity.
But what about
the rest of us, the rank-and-file members of the deluded crowd? Well, we shall
now have to question some of our most deeply rooted assumptions—not only about
the benefits of paper money but also about the rationale of the property-owning
democracy itself. On Planet Finance it
may have made sense to borrow billions of dollars to finance a massive
speculation on the future prices of American houses, and then to erect on the
back of this trade a vast inverted pyramid of incomprehensible securities and
derivatives. But back here on Planet Earth it suddenly seems like an
extraordinary popular delusion.