The Dot-Coms Were Better Than Facebook

Twenty years ago, another high-profile tech executive testified before Congress. It was a more innocent time.

Bill Gates and Mark Zuckerberg testify before Senate committees, 1998 and 2018 (AP / Joe Marquette / Reuters / Leah Millis)

Twenty years and a month ago, Bill Gates, then chairman and CEO of Microsoft, made his first appearance before Congress. In testimony before the Senate Judiciary Committee, Gates defended his company against the accusation that it was a monopoly.

Antitrust investigations into the company had been ongoing for almost a decade by then, dating back to the George H.W. Bush administration. The ubiquity of Microsoft’s operating systems had raised the initial concern, and Internet Explorer, Microsoft’s entry into the web-browser market, had stoked further worry. Given Microsoft Windows’ 90 percent market penetration at the time, government and industry worried that the company would use that position to charge fees for, or otherwise control, access to the internet, an important new tool for personal and business use.

The hearings, hopelessly distant in technological time, are worth revisiting today. Utah Senator Orrin Hatch chaired the committee at the time of the Microsoft testimony; two decades later, Hatch interrogated Mark Zuckerberg about Facebook’s business model, eliciting the CEO’s smirking answer, “Senator, we run ads.”

Much has changed since 1998. Then, what was under scrutiny was the monopoly power of a computer-software giant, not a global notice board that let loose enough personal data to undermine democracy. At issue back then wasn’t just reining in Microsoft on antitrust grounds, but also ensuring that computing’s future couldn’t be governed by one corporation, particularly when it came to how the internet would shape ordinary life.

But the role of computing has changed since Gates’s testimony. The computer ceased to be a servant of human life and began to be the purpose for which that life is conducted. That’s the heart of the problem with the technology industry today, and it’s a problem that data-privacy regulation alone has no hope of fixing.


It wasn’t just members of Congress who were upset with Microsoft back then. The heads of competing tech firms also took issue with the company’s size. Netscape Communications, maker of the first popular web browser, was among those asserting Microsoft’s monopoly, as was Sun Microsystems, a computer-server company that developed the Java platform in the early 1990s. Sun’s CEO, Scott McNealy, offered this testimony:

We think, left unchecked, Microsoft has a monopoly position that they could use to leverage their way into banking, newspapers, cable, and broadcasting, Internet service providers, applications, databases, browsers. You name it.

McNealy was right to worry about consolidation in tech, but wrong to peg Microsoft as its perpetrator. Fast forward twenty years, and Zuckerberg was asked similar questions: Is Facebook a media company? A financial services company? An application company? Google, which was still just a Stanford lab experiment when Gates testified, provides broadband services and owns YouTube, a new kind of broadcaster. Sun, by contrast, no longer exists. It was absorbed into Oracle in 2009. Facebook now occupies the site of its former campus.

But in the ’90s, it was hard to foresee that the dominance McNealy feared Microsoft might entrench would be realized instead by a few start-ups that didn’t exist yet. In 1994, the Justice Department demanded that Microsoft not use its dominant position in operating systems to quash competition. Some opponents considered the details a wrist-slap, but that didn’t stop the government from intervening in Microsoft’s plans to bundle internet-service software, including the Internet Explorer browser, in versions of Windows. After years of rulings and appeals in the antitrust case—including a failed attempt to break up the company—the government and Microsoft settled in 2001, although it took until 2004 for various state appeals to run their course.

Something else happened during those years. The period between 1994 and 2001 witnessed the rise of the commercial internet. Netscape Navigator was released in December 1994. By 1999, hundreds of tech companies had gone public, some enjoying sevenfold increases on their first day of trading. By March of 2000, the bubble had started to burst, and by the end of the year, the sector had lost $1.7 trillion in value. Then came 9/11, followed by the Enron and WorldCom scandals. By September 2002, the NASDAQ, where most technology stocks were traded, was down almost 77 percent from its 2000 high.

A few long-term survivors scraped through, among them Amazon, eBay, and Yahoo! (until recently, anyway). But this period is most often associated with excess and folly. Companies like Pets.com and eToys.com raised and spent huge sums of money to sell commodity goods online. Content portals like Lycos, GeoCities, and Broadcast.com enjoyed multibillion-dollar acquisitions. Convenience sites like Webvan and Kozmo.com offered delivery of groceries and snacks within the hour—Kozmo would even deliver a single ice-cream bar for free. Dot-com profligacy included multimillion-dollar parties, decadent offices, and expensive Super Bowl advertisements, all paid for by enormous net operating losses.


But despite its infamous excess, the dot-com boom was a wellspring of mundane success, too. During these years, businesses, organizations, governments, and communities went online. E-commerce is old hat now, but the idea of shopping on computers found its feet during this period. Information about commercial, cultural, or civic activities became accessible from home. Signing up for cable service or paying a utility bill became possible on the web, along with booking plane tickets and hotel rooms. For all this to happen, big institutions with complex legacy systems had to be mated to the new server and browser infrastructure that ran the web.

For those of us who worked on such solutions during this period, the effort was profoundly different from building and maintaining an app or website today. Most of the work involved figuring out how to make the internet work with, and for, the people and systems that preceded it.

Before the publishing workflows converged, content-management software for newspapers and magazines had to work with older systems for copy and layout, which couldn’t be disrupted for some newfangled website. Reservation, warranty, and warehousing systems had to interface with services on mainframes, some of which ran software that was decades old. Even simple “brochure-ware” websites required translating the standards of print and broadcast communication and marketing to the capacities of the much more limited web.
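
The details varied by client, but a toy sketch suggests the flavor of that translation work. Everything below is hypothetical: the fixed-width record layout, the field names, and the use of Python rather than whatever languages and middleware a real project of the era would have inherited.

    # Hypothetical example: exposing a fixed-width mainframe inventory record
    # to a web front end. Layouts like this were typically dictated by old
    # batch systems; the web team's job was to translate them, not redesign them.
    from dataclasses import dataclass

    # Assumed record layout (positions invented for illustration):
    #   cols 0-9   part number
    #   cols 10-39 description
    #   cols 40-45 quantity on hand, zero-padded
    #   cols 46-53 unit price in cents, zero-padded

    @dataclass
    class InventoryItem:
        part_number: str
        description: str
        quantity: int
        unit_price_dollars: float

    def parse_legacy_record(line: str) -> InventoryItem:
        """Translate one fixed-width legacy record into a structure a web
        application can validate and serialize."""
        return InventoryItem(
            part_number=line[0:10].strip(),
            description=line[10:40].strip(),
            quantity=int(line[40:46]),
            unit_price_dollars=int(line[46:54]) / 100,
        )

    # A line from a nightly batch export might look like this:
    record = (
        "AB-1029".ljust(10)
        + "WIDGET, CHROME, 3IN".ljust(30)
        + "000142"
        + "00001995"
    )
    print(parse_legacy_record(record))

The point of such code was less the parsing than the posture: the web layer adapted itself to the legacy format, because the mainframe and its record layouts were not going to change.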

When web services went deeper than the page, they became more like technology consulting operations than dot-com fever-dreams. Database schemas had to be negotiated with in-house administrators; their designs might work well for low-volume, internal use but not for real-time, high-load situations. Financial systems had to be retrofitted for secure and reliable access. Clueless executives, middle managers, regional operators, and customers had to be assuaged and serviced. Web consultancies like Razorfish were just as profligate as Pets.com, but those firms really did solve problems for businesses, governments, and organizations. Their job was to make computers work with and for the world’s existing infrastructure. Despite the occasional parties and the even more occasional stock options, internet work was largely service work.

The Y2K affair, which overlapped with the dot-com era, reinforced that value. On the past several New Year’s Eves, Twitter users have exchanged sneers over an old Best Buy warning sticker placed on new computers sold in 1999. “Remember: Turn off your computer before midnight,” it read. The whole thing seems idiotic today. But in the years leading up to the Year 2000 Problem, thousands of software developers, many of them veterans recruited on the strength of their COBOL experience, worked to make sure that critical systems in almost every organization didn’t fail. If they succeeded, none of that work would ever be seen or heard from again—and it wasn’t. Was Y2K a racket to boost the tech-consulting industry? Maybe, but not entirely. Either way, the way people responded to it—with serious planning, investment, and concern—couldn’t be more different from how people tend to think about global-technology issues today, including the way Facebook handled personal data on its platform. If it happened now, Y2K would be seen as an opportunity to discard the stupid chaff of the past, not to retrofit it for ongoing service and security.
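
For readers who never saw the bug up close, here is a minimal sketch of the arithmetic those programmers were chasing, written in Python rather than the COBOL of the era and with invented names. The two-digit year storage and the “windowing” fix are real techniques; everything else is illustrative.

    # Legacy records often stored only the last two digits of the year
    # ("99" for 1999), so naive subtraction breaks at the century rollover.
    def years_elapsed_two_digit(start_yy: int, current_yy: int) -> int:
        return current_yy - start_yy

    # A customer account opened in 1997, checked in 1999 and then in 2000:
    print(years_elapsed_two_digit(97, 99))  # 2, correct
    print(years_elapsed_two_digit(97, 0))   # -97, the Y2K bug: "00" reads as less than "97"

    # Much of the remediation amounted to widening fields or "windowing" the century:
    def expand_two_digit_year(yy: int, pivot: int = 50) -> int:
        # Interpret 00-49 as 2000-2049 and 50-99 as 1950-1999.
        return 2000 + yy if yy < pivot else 1900 + yy

    print(expand_two_digit_year(97))  # 1997
    print(expand_two_digit_year(0))   # 2000

Multiply a fix like that across millions of records and thousands of interdependent systems, and both the scale of the remediation effort and the reason it went unnoticed when it worked come into focus.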

All this work still takes place, of course. Your bank and electric company and airline all still have to get their internal systems working together with websites and apps. Old mainframes still have to be updated to work with new systems. But that kind of labor is not at the center of technological progress, esteem, aspiration, or wealth. Instead, old ways seek disruption and replacement through new approaches, in which the technology largely serves itself. Facebook and Google and Instagram and Uber and the like are big, complex offerings that require huge effort to build and keep running. But they are also organizations that strive to decouple from or reinvent the world rather than operating in collaboration with it. Facebook’s current scandal is a crisis of data trust and integrity. But more broadly, it is a crisis of provincialism. The company didn’t bother to evaluate how it might be used in the world beyond its own expectations, nor to design its systems accordingly and defensively.


When old-timers lament the loss of the web of the 1990s and early 2000s, they often cite the openness, freedom, and individualism that the internet’s decentralized infrastructure was supposed to bring to civic life. No gatekeepers! No middlemen! Anybody with a web server could become a captain of their chosen industry. Even Gates parroted this idea in his 1998 Senate testimony: “The openness of the internet is inherent in its architecture.”

Those ideals were always gravely flawed: the moment anyone can reach anyone else directly, danger, deceit, and exploitation are sure to follow. To some extent, that mistake led to Facebook’s current crisis, since an organization that thinks it can only do good by “connecting people” can never imagine that its intentions might lead to harm.

But it might be even more important to lament the loss of computation as a servant, rather than a master, of the human condition. That’s the big thing that changed between Gates’s appearance before Orrin Hatch and the other members of the Senate Judiciary Committee in 1998 and Zuckerberg’s in 2018. Back then, computers were already partly entangled with worldly deeds, enough that it was clear they would become even more important as those activities moved online. Clearing the road for innovation by ensuring competition wasn’t a bad idea at the time, although Microsoft endured little impact from the antitrust settlements, and the innovators that emerged afterward became more dangerous and just as anti-competitive as Microsoft anyway. The tech world still relies on Microsoft, but it also sees the company as an old-time software and business-services supplier whose best days are behind it.

Meanwhile, many of the old dot-com failures—including the most comical ones—were reanimated as successes in the decades that followed. Instacart took over where Webvan had failed. Kozmo got swapped out for Uber Eats and its ilk. Google, of course, replaced Lycos and Inktomi and AltaVista and even Yahoo!. Spotify and Pandora and the rest reinvigorated the idea behind Broadcast.com, the internet-radio service that made Mark Cuban a dot-com billionaire. MySpace and Friendster found their stride in Facebook and Twitter.

Looking back at those precursors, some rue that they simply came too early. To be sure, the scale and infrastructure required to operate such services were harder to come by then, maybe impossibly so. But even so, to conclude that any given human activity is destined to be replaced by one mediated mostly or entirely by computers gets things backward. It presumes that everything melts down into exchanges with computers eventually, and that human effort has no option but to cede ground to them. If “software is eating the world,” as Netscape co-founder and venture capitalist Marc Andreessen put it, that engorgement is considered a good thing, a marker of wealth and power and therefore progress.

Even Congress seems resigned to the matter. The Zuckerberg hearings indicate that lawmakers might ponder, at most, consumer-data privacy regulation for companies like Facebook and Google. That’s a good idea that the United States is shamefully behind in realizing. But it also assumes that Facebook is already the right answer for citizens of the United States or the world—an organization too big to fail, whose best option is ever further growth and entrenchment. Who can even think of life without Facebook, without Google, without Amazon, without Uber? Computing no longer hopes to integrate the processes of organizations with the goals of people by means of networks that connect the two. Now it aims to set the terms on which those goals and processes can be conducted. To replace the world with computers.

The irony is that Facebook, and Google too, are still in the business of integrating the tidy, online world with the messy, offline one. But now they do so mostly by giving ordinary folks a tiny perch from which to compete hopelessly against technology, and one another. As Zuckerberg put it, they sell ads. They sell them to millions of businesses and organizations trying to make a go of it out beyond the walled gardens, beyond the app stores and the server farms, where people scramble to make their daily lives operate. Even in the shadow of the technology industry and its wealth, the busy lot of families and offices and cities and airports and all the rest still whirrs on, coupling to endless materials well beyond the phones and servers used to buy them, sell them, or exchange symbols about them.

We’ve long since stopped calling that sphere “the real world,” as if it were somehow separate from the virtual, computational one that intersects with and undergirds it. Maybe that was a mistake.

Ian Bogost is a contributing writer at The Atlantic.