copic faults.  And the faults in bad
software can be so subtle as to be practically theological.

     If you want a mechanical system to do something
new, then you must travel to where it is, and pull pieces
out
of it, and wire in new pieces.  This costs money.  However,
if you want a chip to do something new, all you have to do
is change its software, which is easy, fast and dirt-cheap.
You don't even have to see the chip to change its program.
Even if you did see the chip, it wouldn't look like much.  A
chip with program X doesn't look one whit different from a
chip with program Y.

     With the proper codes and sequences, and access to
specialized phone-lines, you can change electronic
switching systems all over America from anywhere you
please.

     And so can other people.  If they know how, and if
they want to, they can sneak into a  microchip via the
special phonelines and diddle with it, leaving no physical
trace at all.  If they broke into the operator's station and
held Leticia at gunpoint, that would be very obvious.  If
they broke into a telco building and went after an
electromechanical switch with a toolbelt, that would at
least leave many traces.  But people can do all manner of
amazing things to computer switches just by typing on a
keyboard, and keyboards are everywhere today.  The
extent of this vulnerability is deep, dark, broad, almost
mind-boggling, and yet this is a basic, primal fact of life
about any computer on a network.

     Security experts over the past twenty years have
insisted, with growing urgency, that this basic
vulnerability
of computers represents an entirely new level of risk, of
unknown but obviously dire potential to society.   And they
are right.

     An electronic switching station does pretty much
everything Leticia did, except in nanoseconds and on a
much larger scale.  Compared to Miss Luthor's ten
thousand jacks, even a primitive 1ESS switching computer,
60s vintage,  has 128,000 lines.   And the current AT&T
system of choice is the monstrous fifth-generation 5ESS.

      An Electronic Switching Station can scan every line
on its "board" in a tenth of a second, and it does this over
and over, tirelessly, around the clock.  Instead of eyes, it
uses "ferrod scanners" to check the condition of local lines
and trunks.  Instead of hands, it has "signal distributors,"
"central pulse distributors," "magnetic latching relays,"
and "reed switches," which complete and break the calls.
Instead of a brain, it has a "central processor."   Instead
of
an instruction manual, it has a program.   Instead of a
handwritten logbook for recording and billing calls, it has
magnetic tapes. And it never has to talk to anybody.
Everything a customer might say to it is done by punching
the direct-dial tone buttons on your subset.

     Although an Electronic Switching Station can't talk, it
does need an interface, some way to relate to its, er,
employers.   This interface is known as the "master control
center."  (This interface might be better known simply as
"the interface," since it doesn't actually "control" phone
calls directly.  However, a term like "Master Control
Center" is just the kind of rhetoric that telco maintenance
engineers  -- and hackers -- find particularly satisfying.)

     Using the master control center, a phone engineer
can test local and trunk lines for malfunctions.  He (rarely
she) can check various alarm displays, measure traffic on
the lines, examine the records of telephone usage and the
charges for those calls, and change the programming.

     And, of course, anybody else who gets into the master
control center by remote control can also do these things,
if he (rarely she) has managed to figure them out, or, more
likely, has somehow swiped the knowledge from people
who already know.

     In 1989 and 1990, one particular RBOC, BellSouth,
which felt particularly troubled, spent a purported $1.2
million on computer security.   Some think it spent as
much as two million, if you count all the associated costs.
Two million dollars is still very little compared to the
great
cost-saving utility of telephonic computer systems.

     Unfortunately, computers are also stupid.  Unlike
human beings, computers  possess the truly profound
stupidity of the inanimate.

      In the 1960s, in the first shocks of spreading
computerization, there was much easy talk about the
stupidity of computers -- how they could "only follow the
program" and were rigidly required to do "only what they
were told."   There has been rather less talk about the
stupidity of computers since they began to achieve
grandmaster status in chess tournaments, and to manifest
many other impressive forms of apparent cleverness.

       Nevertheless, computers *still* are profoundly
brittle and stupid; they are simply vastly more subtle in
their stupidity and brittleness.   The computers of the
1990s are much more reliable in their components than
earlier computer systems, but they are also called upon to
do far more complex things, under far more challenging
conditions.

     On a basic mathematical level, every single line of a
software program offers a chance for some possible
screwup.   Software does not sit still when it works; it
"runs,"
it interacts with itself and with its own inputs and
outputs.
By analogy, it stretches like putty into millions of
possible
shapes and conditions, so many shapes that they can
never all be successfully tested, not even in the lifespan
of
the universe.  Sometimes the putty snaps.

     The stuff we call "software" is not like anything that
human society is used to thinking about.  Software is
something like a machine, and something like
mathematics, and something like language, and
something like thought, and art, and information....  but
software is not in fact any of those other things.   The
protean quality of software is one of the great sources of
its
fascination.  It also makes software very powerful, very
subtle, very unpredictable, and very risky.

     Some software is bad and buggy.  Some is "robust,"
even "bulletproof."  The best software is that which has
been tested by thousands of users under thousands of
different conditions, over years.  It is then known as
"stable."   This does *not* mean that the software is now
flawless, free of bugs.  It generally means that there are
plenty of bugs in it, but the bugs are well-identified and
fairly well understood.

      There is simply no way to assure that software is free
of flaws.  Though software is mathematical in nature, it
cannot be "proven" like a mathematical theorem; software
is more like language, with inherent ambiguities, with
different definitions, different assumptions, different
levels of meaning that can conflict.

      Human beings can manage, more or less, with
human language because we can catch the gist of it.

     Computers, despite years of effort in "artificial
intelligence," have proven spectacularly bad in "catching
the gist" of anything at all.  The tiniest bit of semantic
grit
may still bring the mightiest computer tumbling down.
One of the most hazardous things you can do to a
computer program is try to improve it -- to try to make it
safer.  Software "patches" represent new, untried un-
"stable" software, which is by definition riskier.

     The modern telephone system has come to depend,
utterly and irretrievably, upon software.  And the System
Crash of January 15, 1990, was caused by an
*improvement* in software.  Or rather, an *attempted*
improvement.

     As it happened, the problem itself -- the problem per
se  --  took this form.  A piece of telco software had been
written in C language, a standard language of the telco
field.  Within the C software was a long "do... while"
construct.  The "do... while" construct contained a "switch"
statement.  The "switch" statement contained an "if"
clause.  The "if" clause contained a "break."  The "break"
was *supposed* to "break" the "if clause."  Instead, the
"break" broke the "switch" statement.

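     What follows is not AT&T's code.  It is a minimal sketch in
C, with entirely invented names, and it shows nothing more than
the language rule involved: a "break" written inside an "if"
clause that sits inside a "switch" does not end the "if" -- it
ends the whole "switch," and whatever comes next inside that
"switch" is silently skipped.

  /* Illustrative sketch only -- hypothetical names and messages,
     not the actual 4ESS code. */
  #include <stdio.h>

  static void handle_message(int msg_type, int buffer_busy)
  {
      do {
          switch (msg_type) {
          case 1:                     /* an incoming status message */
              if (buffer_busy) {
                  printf("note the message for later\n");
                  break;              /* meant to end the "if" clause... */
              } else {
                  printf("update the status map\n");
              }
              printf("further work that the busy case skips\n");
              break;
          default:
              printf("some other kind of message\n");
              break;
          }
          /* ...but the "break" above lands here instead, having
             quietly abandoned the "further work" line. */
          msg_type = 0;               /* run the do...while only once */
      } while (msg_type != 0);
  }

  int main(void)
  {
      handle_message(1, 1);   /* busy case: the extra work never runs */
      handle_message(1, 0);   /* quiet case: everything runs in order */
      return 0;
  }
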
     That was the problem, the actual reason why people
picking up phones on January 15, 1990, could not talk to
one another.

     Or at least, that was the subtle, abstract,
cyberspatial
seed of the problem.  This is how the problem manifested
itself from the realm of programming into the realm of
real life.

     The System 7 software for AT&T's 4ESS switching
station, the "Generic 44E14 Central Office Switch
Software," had been extensively tested, and was
considered very stable.   By the end of 1989, eighty of
AT&T's switching systems nationwide had been
programmed with the new software.  Cautiously, thirty-
four stations were left to run the slower, less-capable
System 6, because AT&T suspected there might be
shakedown problems with the new and unprecedentedly
sophisticated System 7 network.

     The stations with System 7 were programmed to
switch over to a backup net in case of any problems.  In
mid-December 1989, however, a new high-velocity, high-
security software patch was distributed to each of the 4ESS
switches that would enable them to switch over even more
quickly, making the System 7 network that much more
secure.

     Unfortunately, every one of these 4ESS switches was
now in possession of a small but deadly flaw.

      In order to maintain the network, switches must
monitor the condition of other switches -- whether they are
up and running, whether they have temporarily shut down,
whether they are overloaded and in need of assistance,
and so forth.  The new software helped control this
bookkeeping function by monitoring the status calls from
other switches.

     It only takes four to six seconds for a troubled 4ESS
switch to rid itself of all its calls, drop everything
temporarily, and re-boot its software from scratch.
Starting over from scratch will generally rid the switch of
any software problems that may have developed in the
course of running the system.   Bugs that arise will be
simply wiped out by this process.  It is a clever idea.
This
process of automatically re-booting from scratch is known
as the "normal fault recovery routine."   Since AT&T's
software is in fact exceptionally stable, systems rarely
have
to go into "fault recovery" in the first place;  but AT&T
has
always boasted of its "real world" reliability, and this
tactic
is a belt-and-suspenders routine.

     The 4ESS switch used its new software to monitor its
fellow switches as they recovered from faults.   As other
switches came back on line after recovery, they would
send their "OK" signals to the switch.   The switch would
make a little note to that effect in its "status map,"
recognizing that the fellow switch was back and ready to
go, and should be sent some calls and put back to regular
work.

     Unfortunately, while it was busy bookkeeping with
the status map, the tiny flaw in the brand-new software
came into play.  The flaw caused the 4ESS switch to
interact, subtly but drastically, with incoming telephone
calls from human users.  If -- and only if -- two incoming
phone-calls happened to hit the switch within a hundredth
of a second,  then a small patch of data would be garbled
by the flaw.

     But the switch had been programmed to monitor
itself constantly for any possible damage to its data.
When the switch perceived that its data had been
somehow  garbled, then it too would go down, for swift
repairs to its software.  It would signal its fellow
switches
not to send any more work.  It would go into the fault-
recovery mode for four to six seconds.  And then the switch
would be fine again, and would send out its "OK, ready for
work" signal.

     However, the "OK, ready for work" signal was the
*very thing that had caused the   switch to go down in the
first place.*  And *all* the System 7 switches had the same
flaw in their status-map software.  As soon as they stopped
to make  the bookkeeping note that their fellow switch was
"OK," then they too would become vulnerable to the slight
chance that two phone-calls would hit them within a
hundredth of a second.

     At approximately 2:25 p.m. EST on Monday, January
15, one of AT&T's 4ESS toll switching systems in New York
City had an actual, legitimate, minor problem.  It went into
fault recovery routines, announced "I'm going down," then
announced, "I'm back, I'm OK."   And this cheery message
then blasted throughout the network to many of its fellow
4ESS switches.

     Many of the switches, at first, completely escaped
trouble.  These lucky switches were not hit by the
coincidence of two phone calls within a hundredth of a
second.   Their software did not fail -- at first.  But
three
switches -- in Atlanta, St. Louis, and Detroit --  were
unlucky, and were caught with their hands full.  And they
went down.  And they came back up, almost immediately.
And they too began to broadcast the lethal message that
they, too, were "OK" again, activating the lurking software
bug in yet other switches.

     As more and more switches did have that bit of bad
luck and collapsed, the call-traffic became more and more
densely packed in the remaining switches, which were
groaning to keep up with the load.   And of course, as the
calls became more densely packed, the switches were
*much more likely* to be hit twice within a hundredth of a
second.
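
     The feedback loop itself is simple enough to sketch.  The
toy model below is not AT&T's network -- just a hundred imaginary
switches and invented traffic figures -- but it shows how such a
collapse feeds on itself: each switch that drops out packs the
survivors tighter, which makes the fatal coincidence more likely,
which drops still more switches.

  /* Toy model only -- every constant here is invented for the
     illustration.  It demonstrates the positive-feedback loop
     described above, nothing more. */
  #include <stdio.h>
  #include <stdlib.h>

  #define SWITCHES 100
  #define TICKS    40
  #define TRAFFIC  400.0          /* arbitrary units of call traffic */

  int main(void)
  {
      int down[SWITCHES] = {0};   /* ticks left in fault recovery */
      srand(1990);

      for (int t = 0; t < TICKS; t++) {
          int up = 0;
          for (int i = 0; i < SWITCHES; i++)
              if (down[i] == 0)
                  up++;
          if (up == 0) {
              printf("tick %d: every switch is rebooting\n", t);
              break;
          }

          /* The survivors carry all the traffic; the denser the
             traffic per switch, the likelier the two-calls-in-a-
             hundredth-of-a-second coincidence. */
          double per_switch = TRAFFIC / up;
          double risk = per_switch / 20.0;
          if (risk > 1.0)
              risk = 1.0;

          int newly_down = 0;
          for (int i = 0; i < SWITCHES; i++) {
              if (down[i] > 0) {    /* still in fault recovery */
                  down[i]--;
                  continue;
              }
              if ((double)rand() / RAND_MAX < risk) {
                  down[i] = 2;      /* the four-to-six-second reboot */
                  newly_down++;
              }
          }
          printf("tick %2d: %3d up, %3d newly down, %.1f traffic each\n",
                 t, up, newly_down, per_switch);
      }
      return 0;
  }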

     It only took four seconds for a switch to get well.
There was no *physical* damage of any kind to the
switches, after all.   Physically, they were working
perfectly.
This situation was "only" a software problem.

     But the 4ESS switches were leaping up and down
every four to six seconds, in a virulent spreading wave all
over America,  in utter, manic, mechanical stupidity.  They
kept *knocking*  one another down with their contagious
"OK" messages.

     It took about ten minutes for the chain reaction to
cripple the network.  Even then, switches would
periodically luck-out and manage to resume their normal
work.  Many calls -- millions of them -- were managing to
get through.  But millions weren't.

     The switching stations that used System 6 were not
directly affected.  Thanks to these old-fashioned switches,
AT&T's national system avoided complete collapse.  This
fact also made it clear to engineers that System 7 was at
fault.

     Bell Labs engineers, working feverishly in New
Jersey, Illinois, and Ohio, first tried their entire
repertoire
of standard network remedies on the malfunctioning
System 7.  None of the remedies worked, of course,
because nothing like this had ever happened to any
phone system before.

     By cutting out the backup safety network entirely,
they were able to reduce the frenzy of "OK" messages by
about half.  The system then began to recover, as the
chain reaction slowed.   By 11:30 pm on Monday January
15, sweating engineers on the midnight shift breathed a
sigh of relief as the last switch cleared-up.

     By Tuesday they were pulling all the brand-new 4ESS
software and replacing it with an earlier version of System
7.

     If these had been human operators, rather than
computers at work, someone would simply have
eventually stopped screaming.  It would have been
*obvious* that the situation was not "OK," and common
sense would have kicked in.   Humans possess common
sense -- at least to some extent.   Computers simply don't.

     On the other hand, computers can handle hundreds
of calls per second.  Humans simply can't.   If every single
human being in America worked for the phone company,
we couldn't match the performance of digital switches:
direct-dialling, three-way calling, speed-calling, call-
waiting, Caller ID, all the rest of the cornucopia of
digital
bounty.   Replacing computers with operators is simply not
an option any more.

     And yet we still, anachronistically,  expect humans to
be running our phone system.   It is hard for us to
understand that we have sacrificed huge amounts of
initiative and control to senseless yet powerful machines.
When the phones fail, we want somebody to be
responsible.  We want somebody to blame.

     When the Crash of January 15 happened, the
American populace was simply not prepared to
understand that enormous landslides in cyberspace, like
the Crash itself, can happen, and can be nobody's fault in
particular.   It was easier to believe, maybe even in some
odd way more reassuring to believe, that some evil person,
or evil group, had done this to us.  "Hackers" had done it.
With a virus.   A trojan horse.  A software bomb.  A dirty
plot of some kind.   People believed this, responsible
people.  In 1990, they were looking hard for evidence to
confirm their heartfelt suspicions.

     And they would look in a lot of places.

     Come 1991, however, the outlines of an apparent new
reality would begin to emerge from the fog.

     On July 1 and 2, 1991, computer-software collapses in
telephone switching stations disrupted service in
Washington DC, Pittsburgh, Los Angeles and San
Francisco.   Once again, seemingly minor maintenance
problems had crippled the digital System 7.  About twelve
million people were affected in the Crash of July 1, 1991.

     Said the New York Times Service:  "Telephone
company executives and federal regulators said they were
not ruling out the possibility of sabotage by computer
hackers, but most seemed to think the problems stemmed
from some unknown defect in the software running the
networks."

     And sure enough, within the week, a red-faced
software company, DSC Communications Corporation of
Plano, Texas, owned up to "glitches" in the "signal transfer
point" software that DSC had designed for Bell Atlantic
and Pacific Bell.  The immediate cause of the July 1 Crash
was a single mistyped character:  one tiny typographical
flaw in one single line of the software.  One mistyped
letter, in one single line, had deprived the nation's
capital
of phone service.  It was not particularly surprising that
this tiny flaw had escaped attention: a typical System 7
station requires *ten million* lines of code.

     On Tuesday, September 17, 1991, came the most
spectacular outage yet.   This case had nothing to do with
software failures -- at least, not directly.  Instead, a
group
of AT&T's switching stations in New York City had simply
run out of electrical power and shut down cold.  Their
back-up batteries had failed.  Automatic warning systems
were supposed to warn of the loss of battery power, but
those automatic systems had failed as well.

     This time, Kennedy, La Guardia, and Newark airports
all had their voice and data communications cut.   This
horrifying event was particularly ironic, as attacks on
airport computers by hackers had long been a standard
nightmare scenario, much trumpeted by computer-
security experts who feared the computer underground.
There had even been a Hollywood thriller about sinister
hackers ruining airport computers -- *Die Hard II.*

     Now AT&T itself had crippled airports with computer
malfunctions  -- not just one airport, but three at once,
some of the busiest in the world.

     Air traffic came to a standstill throughout the Greater
New York area, causing more than 500 flights to be
cancelled, in a spreading wave all over America and even
into Europe.  Another 500 or so flights were delayed,
affecting, all in all, about 85,000 passengers.  (One of
these
passengers was the chairman of the Federal
Communications Commission.)

     Stranded passengers in New York and New Jersey
were further infuriated to discover that they could not
even manage to make a long distance phone call, to
explain their delay to loved ones or business associates.
Thanks to the crash, about four and a half million
domestic calls, and half a million international calls,
failed
to get through.

     The September 17 NYC Crash, unlike the previous
ones, involved not a whisper of "hacker" misdeeds.  On the
contrary,  by 1991, AT&T itself was suffering much of the
vilification that had formerly been directed at hackers.
Congressmen were grumbling.  So were state and federal
regulators.  And so was the press.

     For their part, ancient rival MCI took out snide full-
page newspaper ads in New York, offering their own long-
distance services for the "next time that AT&T goes down."

     "You wouldn't find a classy company like AT&T using
such advertising," protested AT&T Chairman Robert
Allen, unconvincingly.  Once again, out came the full-page
AT&T apologies in newspapers, apologies for "an
inexcusable culmination of both human and mechanical
failure."   (This time, however, AT&T offered no discount
on later calls.  Unkind critics suggested that AT&T were
worried about setting any precedent for refunding the
financial losses caused by telephone crashes.)

     Industry journals asked  publicly if AT&T was "asleep
at the switch."   The telephone network, America's
purported marvel of high-tech reliability,  had gone down
three times in 18 months.  *Fortune* magazine listed the
Crash of September 17 among the "Biggest Business
Goofs of 1991,"  cruelly parodying AT&T's ad campaign in
an article entitled "AT&T Wants You Back (Safely On the
Ground, God Willing)."

     Why had those New York switching systems simply
run out of power?  Because no human being had attended
to the alarm system.  Why did the alarm systems blare
automatically, without any human being noticing?
Because the three telco technicians who *should* have
been listening were absent from their stations in the
power-room, on another floor of the building -- attending a
training class.  A training class about the alarm systems
for
the power room!

     "Crashing the System" was no longer
"unprecedented" by late 1991.   On the contrary, it no
longer even seemed an oddity.   By 1991, it was clear that
all the policemen in the world could no longer "protect"
the phone system from crashes.   By far the worst crashes
the system had ever had, had been inflicted, by the
system, upon *itself.*  And this time nobody was making
cocksure statements that this was an anomaly, something
that would never happen again.   By 1991 the System's
defenders had met their nebulous Enemy, and the Enemy
was -- the System.


        PART TWO:  THE DIGITAL UNDERGROUND


     The date was May 9, 1990.  The Pope was touring
Mexico City.   Hustlers from the Medellin Cartel were
trying to buy black-market Stinger missiles in Florida.  On
the comics page, Doonesbury character Andy was dying of
AIDS.   And then.... a highly unusual item whose novelty
and calculated rhetoric won it headscratching attention in
newspapers all over America.

     The US Attorney's office in Phoenix, Arizona, had
issued a press release announcing a nationwide law
enforcement crackdown against "illegal computer hacking
activities."  The sweep was officially known as "Operation
Sundevil."

     Eight paragraphs in the press release gave the bare
facts:  twenty-seven search warrants carried out on May 8,
with three arrests, and a hundred and fifty agents on the
prowl in "twelve" cities across America.   (Different counts
in local press reports yielded "thirteen," "fourteen," and
"sixteen" cities.)   Officials estimated that criminal
losses
of revenue to telephone companies "may run into millions
of dollars."   Credit for the Sundevil investigations was
taken by the US Secret Service, Assistant US Attorney Tim
Holtzen of Phoenix, and the Assistant Attorney General of
Arizona,  Gail Thackeray.

       The prepared remarks of Garry M. Jenkins,
appearing in a U.S. Department of Justice press release,
were of particular interest.  Mr. Jenkins was the Assistant
Director of the US Secret Service, and the highest-ranking
federal official to take any direct public role in  the
hacker
crackdown of 1990.

      "Today, the Secret Service is sending a clear message
to those computer hackers who have decided to violate
the laws of this nation in the mistaken belief that they can
successfully avoid detection by hiding behind the relative
anonymity of their computer terminals.(...)
     "Underground groups have been formed for the
purpose of exchanging information relevant to their
criminal activities.  These groups often communicate with
each other through message systems between computers
called 'bulletin boards.'
     "Our experience shows that many computer hacker
suspects are no longer misguided teenagers,
mischievously playing games with their computers in their
bedrooms.  Some are now high tech computer operators
using computers to engage in unlawful conduct."

     Who were these "underground groups" and "high-
tech operators?"  Where had they come from?  What did
they want?  Who *were*   they?  Were they
"mischievous?"  Were they dangerous?  How had
"misguided teenagers" managed to alarm the United
States Secret Service?  And just how widespread was this
sort of thing?

     Of all the major players in the Hacker Crackdown:
the phone companies, law enforcement, the civil
libertarians, and the "hackers" themselves -- the "hackers"
are by far the most mysterious, by far the hardest to
understand, by far the *weirdest.*

      Not only are "hackers"  novel in their activities, but
they come in a variety of odd subcultures, with a variety of
languages, motives and values.

     The earliest proto-hackers were probably those
unsung mischievous telegraph boys who were summarily
fired by the Bell Company in 1878.

     Legitimate "hackers," those computer enthusiasts
who are independent-minded but law-abiding, generally
trace their spiritual ancestry to  elite technical
universities,
especially M.I.T. and Stanford, in the 1960s.

     But the genuine roots of the modern hacker
*underground* can probably be traced most successfully
to a now much-obscured hippie anarchist movement
known as the Yippies.   The  Yippies, who took their name
from the largely fictional "Youth International Party,"
carried out a loud and lively policy of surrealistic
subversion and outrageous political mischief.  Their basic
tenets were flagrant sexual promiscuity, open and copious
drug use, the political overthrow of any powermonger over
thirty years of age, and an immediate end to the war in
Vietnam, by any means necessary, including the psychic
levitation of the Pentagon.

     The two most visible Yippies were Abbie Hoffman
and Jerry Rubin.  Rubin eventually  became a Wall Street
broker.  Hoffman, ardently sought by federal authorities,
went into hiding for seven years, in Mexico, France, and
the United States.   While on the lam, Hoffman continued
to write and publish, with help from sympathizers in the
American anarcho-leftist underground.   Mostly, Hoffman
survived through false ID and odd jobs.  Eventually he
underwent facial plastic surgery and adopted an entirely
new identity as one "Barry Freed."   After surrendering
himself to authorities in 1980, Hoffman  spent a year in
prison on a cocaine conviction.

     Hoffman's worldview grew much darker as the glory
days of the 1960s faded.  In 1989, he purportedly
committed suicide, under odd and, to some, rather
suspicious circumstances.

     Abbie Hoffman is said to have caused the Federal
Bureau of Investigation to amass the single largest
investigation file ever opened on an individual American
citizen.  (If this is true, it is still questionable whether
the
FBI regarded Abbie Hoffman a serious public threat  --
quite possibly, his file was enormous simply because
Hoffman left colorful legendry wherever he went).   He
was a gifted publicist, who regarded electronic media as
both playground and weapon.  He actively enjoyed
manipulating network TV and other gullible, image-
hungry media,  with various weird lies, mindboggling
rumors, impersonation scams, and other sinister
distortions, all absolutely guaranteed to upset cops,
Presidential candidates, and federal judges.    Hoffman's
most famous work was a book self-reflexively known as
*Steal This Book,* which publicized a number of methods
by which young, penniless hippie agitators might live off
the fat of a system supported by humorless drones.  *Steal
This Book,* whose title urged readers to damage the very
means of distribution which had put it into their hands,
might be described as a spiritual ancestor of a computer
virus.

     Hoffman, like many a later conspirator, made
extensive use of pay-phones for his agitation work -- in his
case, generally through the use of cheap brass washers as
coin-slugs.

     During the Vietnam War, there was a federal surtax
imposed on telephone service; Hoffman and his cohorts
could, and did,  argue that in systematically stealing
phone service they were engaging in civil disobedience:
virtuously denying tax funds to an illegal and immoral war.

      But this thin veil of decency was soon dropped
entirely.  Ripping-off the System  found its own
justification in deep alienation and a basic outlaw
contempt for  conventional bourgeois values.  Ingenious,
vaguely politicized varieties of rip-off, which might be
described as "anarchy by convenience," became very
popular in Yippie circles, and because rip-off was so
useful, it was to survive the Yippie movement itself.

     In the early 1970s, it required fairly limited
expertise
and ingenuity to cheat payphones, to divert "free"
electricity and gas service, or to rob vending machines and
parking meters for handy pocket change.   It also required
a conspiracy to spread this knowledge, and the gall and
nerve actually to commit petty theft, but the Yippies had
these qualifications in plenty.  In June 1971, Abbie
Hoffman and a telephone enthusiast sarcastically known
as "Al Bell"  began publishing a newsletter called *Youth
International Party Line.*  This newsletter was dedicated
to collating and spreading Yippie rip-off techniques,
especially of phones, to the joy of the freewheeling
underground and the insensate rage of all straight people.

     As a political tactic, phone-service theft ensured that
Yippie advocates would always have ready access to the
long-distance telephone as a medium, despite the Yippies'
chronic lack of organization, discipline, money, or even a
steady home address.

     *Party Line* was run out of Greenwich Village for a
couple of years, then "Al Bell" more or less defected from
the faltering ranks of Yippiedom, changing the
newsletter's name to *TAP* or *Technical Assistance
Program.*  After the Vietnam War ended, the steam
began leaking rapidly out of American radical dissent.
But  by this time, "Bell" and his dozen or so core
contributors  had the bit between their teeth, and had
begun to derive tremendous gut-level satisfaction from
the sensation of pure *technical power.*

     *TAP* articles, once highly politicized, became
pitilessly jargonized and technical, in homage or parody to
the Bell System's own technical documents, which *TAP*
studied closely, gutted, and reproduced without
permission.   The *TAP* elite revelled in gloating
possession of the specialized knowledge necessary to beat
the system.

        "Al Bell" dropped out of the game by the late 70s,
and "Tom Edison" took over; TAP  readers (some 1400 of
them, all told) now began to show more interest in telex
switches and the growing phenomenon of computer
systems.

     In 1983, "Tom Edison" had his computer stolen and
his house set on fire by an arsonist.  This was an
eventually
mortal blow to *TAP* (though the legendary name was to
be resurrected in 1990 by a young Kentuckian computer-
outlaw named "Predat0r.")

                         #


     Ever since telephones began to make money, there
have been people willing to rob and defraud phone
companies.   The legions of petty phone thieves vastly
outnumber those "phone phreaks" who  "explore the
system" for the sake of the intellectual challenge.   The
New York metropolitan area  (long in the vanguard of
American crime) claims over 150,000 physical attacks on
pay telephones every year!  Studied carefully, a modern
payphone reveals itself as a little fortress, carefully
designed and redesigned over generations,  to resist coin-
slugs, zaps of electricity, chunks of coin-shaped ice,
prybars, magnets, lockpicks, blasting caps.  Public pay-
phones must survive in a world of unfriendly, greedy
people,  and a modern payphone is as exquisitely evolved
as a cactus.

     Because the phone network pre-dates the computer
network, the scofflaws known as "phone phreaks" pre-date
the scofflaws known as "computer hackers."   In practice,
today, the line between "phreaking" and "hacking" is very
blurred, just as the distinction between telephones and
computers has blurred.  The phone system has been
digitized, and computers have learned to "talk" over
phone-lines.   What's worse -- and this was the point made by
Mr. Jenkins of the Secret Service -- some hackers have
learned to steal, and some thieves have learned to hack.

     Despite the blurring, one can still draw a few useful
behavioral distinctions between "phreaks" and "hackers."
Hackers are intensely interested in the "system" per se,
and enjoy relating to machines.  "Phreaks" are more
social,  manipulating the system in a rough-and-ready
fashion in order to get through to other human beings,
fast, cheap and under the table.

     Phone phreaks love nothing so much as "bridges,"
illegal conference calls of ten or twelve chatting
conspirators, seaboard to seaboard, lasting for many hours
-- and running, of course, on somebody else's tab,
preferably a large corporation's.

     As phone-phreak conferences wear on, people drop
out (or simply leave the phone off the hook, while they
sashay off to work or school or babysitting), and new
people are phoned up and invited to join in, from some
other continent, if possible.  Technical trivia, boasts,
brags,
lies, head-trip deceptions, weird rumors, and cruel gossip
are all freely exchanged.

     The lowest rung of phone-phreaking is the theft of
telephone access codes.   Charging a phone call to
somebody else's stolen number is, of course, a pig-easy
way of stealing phone service, requiring practically no
technical expertise.  This practice has been very
widespread, especially among lonely people without much
money who are far from home.  Code theft has flourished
especially in college dorms, military bases, and,
notoriously, among roadies for rock bands.   Of late, code
theft has spread very rapidly among Third Worlders in the
US, who pile up enormous unpaid long-distance bills to
the Caribbean, South America, and Pakistan.

     The simplest way to steal phone-codes is simply to
look over a victim's shoulder as he punches-in his own
code-number on a public payphone.  This technique is
known as "shoulder-surfing," and is especially common in
airports, bus terminals, and train stations.  The code is
then sold by the thief for a few dollars.  The buyer abusing
the code has no computer expertise, but calls his Mom in
New York,  Kingston or Caracas and runs up a huge bill
with impunity.  The losses from this primitive phreaking
activity are far, far greater than the monetary losses
caused by computer-intruding hackers.

     In the mid-to-late 1980s, until the introduction of
sterner telco security measures, *computerized* code
theft worked like a charm, and was virtually omnipresent
throughout the digital underground, among phreaks and
hackers alike.   This was accomplished through
programming one's computer to try random code
numbers over the telephone until one of them worked.
Simple programs to do this were widely available in the
underground; a computer running all night was likely to
come up with a dozen or so useful hits.  This could be
repeated week after week until one had a large library of
stolen codes.

     Nowadays, the computerized dialling of hundreds of
numbers can be detected within hours and swiftly traced.
If a stolen code is repeatedly abused, this too can be
detected within a few hours.  But for years in the 1980s,
the
publication of stolen codes was a kind of elementary
etiquette for fledgling hackers.   The simplest way to
establish your bona-fides as a raider was to steal a code
through repeated random dialling and offer it to the
"community" for use.   Codes could be both stolen, and
used, simply and easily from the safety of one's own
bedroom, with very little fear of detection or punishment.

     Before computers and their phone-line modems
entered American homes in gigantic numbers, phone
phreaks had their own special telecommunications
hardware gadget, the famous "blue box."  This fraud
device (now rendered increasingly useless by the digital
evolution of the phone system) could trick switching
systems into granting free access to long-distance lines.
It
did this by mimicking the system's own signal, a tone of
2600 hertz.

     Steven Jobs and Steve Wozniak, the founders of
Apple Computer, Inc., once dabbled in selling blue-boxes
in college dorms in California.  For many, in the early days
of phreaking, blue-boxing was scarcely perceived as
"theft," but rather as a fun (if sneaky) way to use excess
phone capacity harmlessly.  After all, the long-distance
lines were *just sitting there*....   Whom did it hurt,
really?
If you're not *damaging* the system, and  you're not
*using up any tangible resource,* and if nobody *finds
out* what you did, then what real harm have you done?
What exactly *have* you "stolen," anyway?   If a tree falls
in the forest and nobody hears it, how much is the noise
worth?  Even now this remains a rather dicey question.

     Blue-boxing was no joke to the phone companies,
however.  Indeed, when *Ramparts* magazine, a radical
publication in California, printed the wiring schematics
necessary to create a  mute box in June 1972, the
magazine was seized by police and Pacific Bell phone-
company officials.   The mute box, a blue-box variant,
allowed its user to receive long-distance calls free of
charge to the caller.  This device was closely described in
a
*Ramparts* article wryly titled "Regulating the Phone
Company In Your Home."  Publication of this article was
held to be in violation of California State Penal Code
section 502.7, which outlaws ownership of wire-fraud
devices and the selling of "plans or instructions for any
instrument, apparatus, or device intended to avoid
telephone toll charges."

     Issues of *Ramparts* were recalled or seized on the
newsstands, and the resultant loss of income helped put
the magazine out of business.  This was an ominous
precedent for free-expression issues, but the telco's
crushing of a radical-fringe magazine passed without
serious challenge at the time.  Even in the freewheeling
California 1970s, it was widely felt that there was
something sacrosanct about what the phone company
knew; that the telco had a legal and moral right to protect
itself by shutting off the flow of such illicit information.
Most telco information was so "specialized" that it would
scarcely be understood by any honest member of the
public.   If not published, it would not be missed.   To
print
such material did not seem part of the legitimate role of a
free press.

     In 1990 there would be a similar telco-inspired attack
on the electronic phreak/hacking "magazine" *Phrack.*
The *Phrack* legal case became a central issue in the
Hacker Crackdown, and gave rise to great controversy.
*Phrack* would also be shut down, for a  time, at least, but
this time both the telcos and their law-enforcement allies
would pay a much larger price for their actions.  The
*Phrack* case will be examined in detail, later.

     Phone-phreaking as a social practice is still very
much alive at this moment.  Today, phone-phreaking is
thriving much more vigorously than the better-known and
worse-feared practice of "computer hacking."  New forms
of phreaking are spreading rapidly, following new
vulnerabilities in sophisticated phone services.

     Cellular phones are especially vulnerable; their chips
can be re-programmed to present a false caller ID and
avoid billing.   Doing so also avoids police tapping, making
cellular-phone abuse a favorite among drug-dealers.
"Call-sell operations" using pirate cellular phones can, and
have, been run right out of the backs of cars, which move
from "cell" to "cell" in the local phone system, retailing
stolen long-distance service, like some kind of demented
electronic version of the neighborhood ice-cream truck.

      Private branch-exchange phone systems in large
corporations can be penetrated; phreaks dial-up a local
company, enter its internal phone-system, hack it, then
use the company's own PBX system to dial back out over
the public network, causing the company to be stuck with
the resulting long-distance bill.  This technique is known
as "diverting."  "Diverting"  can be very costly, especially
because phreaks tend to travel in packs and never stop
talking.   Perhaps the worst by-product of this "PBX fraud"
is that victim companies and telcos have sued one another
over the financial responsibility fo