EPISODE 1: SETTING THE STAGE

Segments:

Guests:

In this four-part podcast miniseries we explore the alternative histories of open source software through the voices of many of the people who lived through its rise. The central question is “Was open source inevitable?” Not necessarily in the particulars but in the macro.

We’ll let Chris Aniszczyk, VP of Developer Relations at the Linux Foundation, sum up the case for inevitability.

[0:59]

Open source inherently, I think, was definitely inevitable because there really is no better way to collaborate and build software. And it’s not like it’s a new thing. What I mean is even before open source was a thing, companies and organizations always found ways to collaborate. In the early days, the SHARE days, people would share code and other things. Academics have forever been doing this: sharing and collaborating with others to innovate on what you're working on. Open source just really codified that for software and kinda put guardrails on how to do it in many different ways.

What’s interesting is open source definitely targeted and sorta started with the tech-savvy companies, and what we're seeing now is completely new industries embracing open source, because maybe they see that the value of what they're working on and maintaining is not actually what the company is selling or making money from.

[2:34]

But in this series we consider the possibility of timelines in which open source software plays a much different or much reduced role. Given different events or individual actions, could open source have failed to become the engine for collaboration and innovation that it is today?

We’re going to probe at the “what ifs.” So this is perhaps more accurately about counterfactual history. Whatever you call it, though, it’s a look at points of divergence where the past could have forked, which may in turn help us discern how similar patterns could play out in the future.

If this isn’t simply going to be a parlor game, the what ifs have to be plausible against the broader background of history and technology.

Some listeners may be familiar with Jared Diamond’s Guns, Germs, and Steel: The Fates of Human Societies. Diamond argues that forces as fixed from a human perspective as mountain ranges and which plants and animals could be domesticated preordained that civilization would first emerge on the Hilly Flanks of today’s Middle East, and that from that head start the role of the West in human history was set. Others dispute that history was quite so inevitable, but it’s one view.

With respect to computer technology and open source software, it seems hard to argue that the integrated circuit, CMOS process scaling, and binary logic wouldn’t have arisen. Other aspects of computer history are perhaps less certain but the closer we get to major technological and economic departures, the harder it gets to entertain a plausible counterfactual.

At the other extreme, you have the hero or the great man idea. The theory is primarily attributed to the Scottish philosopher and essayist Thomas Carlyle, who gave a series of lectures on heroism in 1840, later published as On Heroes, Hero-Worship, and The Heroic in History. Carlyle stated that "The history of the world is but the biography of great men", reflecting his belief that heroes shape history through both their personal attributes and divine inspiration.

We’ll consider some what ifs in this vein. There are some individuals who had outsized impacts on the history of open source. But you’ll also hear from guests skeptical that the course of open source could have been significantly perturbed as easily as Linus Torvalds deciding that computer science wasn’t his thing, and therefore never writing Linux.

Let’s now talk about some of the things that were inevitable and therefore serve as the backdrop against which open source software has played out. These may seem more like simplifying assumptions, lest the scope of this series get out of hand. However, for the most part, even if one can plausibly imagine differences in the details or the timeframes, it’s hard to imagine this overall backdrop taking a wholly different form.

First are the great arcs of technology, perhaps most of all Moore’s Law, which has underpinned so many of the technology advances in the computer industry and elsewhere over the past few decades. Moore’s Law is as much a statement about economics as technology. But it served as something of a self-fulfilling prophecy for the continual advancement of the chips powering computer systems.
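To make the compounding concrete, here’s a back-of-the-envelope illustration, using the commonly cited two-year doubling period as a rule of thumb rather than a precise law:

    doubling every 2 years:  2^5  = 32x per decade
    over 20 years:           2^10 = 1,024x (call it 1,000x)
    over 40 years:           2^20 ≈ 1,000,000x

A roughly million-fold improvement over four decades is the sort of compounding that reshapes entire industry structures, not just chip specifications.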

Moore’s Law was a major force in transforming the industry from vertical silos to a more horizontal structure in the mid-1990s. Here’s former Intel CEO Andy Grove at a 1996 MIT lecture discussing the 10X force that came about because technology was now permitting what had previously been the function of many chips to fit on a single chip in a much smaller area at much lower cost.

[6:16]

I called it a 10x force. A 10 times increase relative to the other forces. What happens when one of the forces that determine the well-being of a business really grows out of bounds, so big that it can distort the picture?

And what tends to happen at times like that is that the very nature of the industry changes. Not the business, not the competitive well-being of the business alone, but the whole business model changes. And I give you an example that comes right from our own industry on this one. On the left of the diagram you would have maybe the business model of the computing industry as it used to be in the '60s, '70s, '80s. Computers were sold as vertically integrated devices, proprietary platforms, proprietary operating systems, applications developed for that proprietary operating system, and a whole stack of things sold by a sales force that was unique to that enterprise.

Then came the personal computer, which was in fact a genuine almost quantitatively correctly describable 10x force in that its cost effectiveness relative to mainframes or mini-computers was about 10-fold right from the inception on. And it was clearly a toy. And it was clearly unreliable. And it couldn't do the software that was pertinent to business processes. But it was so cost effective that it represented a 10x force.

And in front of our eyes in the course of the decade of the '80s, the entire structure of the industry changed from the vertical towers of computer products competing with each other to a completely horizontal industry in which microprocessors compete with other microprocessors, operating systems compete with other operating systems, applications software in a packaged form, like a bunch of commodity goods, like a bunch of detergents, compete for shelf space with other application software. And direct sales goes out as a means of distribution.

[9:01]

Another key element that probably needed to be in place for open source to thrive was the Internet. While collaboration of various types has taken place throughout human history, the Internet (along with, subsequently, the Web) was largely created to enable better data sharing. Here’s Irving Wladawsky-Berger, who led IBM’s Internet initiative in the 1990s.

[9:27]

I strongly believe that if it hadn't been for the internet, I'm not sure if IBM, or the world for that matter, would have been as aggressive in embracing open source. Just to give you an example. In 1995 or so IBM had developed its own HTTP stack, which is the underlying software supporting web servers. But then we realized that Apache, an open source project developing an HTTP stack, was actually quite a bit better. So we abandoned our own proprietary HTTP stack and joined the Apache community. And over the years, Apache became a fundamental part of one of IBM's most successful internet products, WebSphere, the web application server. And there were a number of other internet-based offerings that IBM was developing with the open source community.

Remember, the internet started as a research project, to allow researchers in government labs, in universities, and even in industry labs, to communicate with each other, and to exchange information with each other. And the whole notion of sharing information, communicating... That's really quite common in the research community; the research community has always been wide open. People publish papers that everybody can read. Of course, they get patents and so on and so on. The Internet came out of the research community and was developed within that community for a couple of decades. And then don't forget the World Wide Web came out of the same research community, developed by Tim Berners-Lee when he was at CERN, the European high energy physics lab, to help physicists share information with each other. And so I feel that given how valuable Linux, email, and the web were seen to be for the research community, I think it was inevitable they would jump over and be embraced by the commercial world. I think so.

[12:49]

As Irving suggests, lots of sharing went on in research communities. But was that necessarily inevitable? It’s hard to see how not. With respect to software specifically, the sharing of human-readable source code was widespread in the early days of computing. A lot of computer development took place at universities and in corporate research departments like AT&T’s Bell Labs where Unix was created. They had long-established traditions of openness and collaboration, with the result that even when code wasn’t formally placed into the public domain, it was widely shared.

My Red Hat colleague Harish Pillay, who heads community architecture and leadership in our Singapore office, argues that sharing is the default.

[13:38]

It's not so much that it was inevitable. It was the default at the start. It became proprietary because of the business constraints of a very large, giant company that felt threatened because they were giving away their software for their mainframes. And when competitive mainframe hardware was made available, that same software ran on that machine, and so they decided, oops, this is no good, we need to close it up. And so that became a challenge. But it was the default. And then the trajectory of it went proprietary to a large extent, but then again, because of the creative juices of people... people don't write code because they're going to be paid for it. If you draw an arc from 1950, for example, to 2050, the proprietary software portion of it will be a small glitch. The rest of it will just continue to be open source, or free software for that matter. I prefer free software to open source because I think it adds to the additional value of the conversation. So, yeah, it's not so much that it was inevitable; it was there from the beginning. It was a sidetrack, a fork of the free software idea, to become proprietary, and now we have corrected that fork and we're back to being normal. So we have been normalized. So we have won.

[15:12]

Another colleague, Jan Wildeboer, describes how sharing software came along naturally in Europe.

[15:19]

I live in Germany; I was born Dutch. We have kind of a pragmatic approach to what to do with software. So sharing software and working together on this stuff, not only in the academic field but also in the business field, is more or less quite natural. Hardcore competitive business, of course, also exists. But there was a deep feeling of solidarity and sharing, that it would simply help. Then we had the universities, who were quite well connected. So there was a breeding ground for this to evolve. And I remember very well, when the whole Linux story started, I was studying computer science at the University of Paderborn. And we were sitting in front of Sun SPARCstations, where we could download the first web browsers, and we started downloading source code for kernels and all that kind of stuff. And it was kind of a feeling like, yeah, you know, this is the way we should do it.

And then we had a thing that was extremely helpful. I think it was a group of six students in Brussels who decided in the year 2000, you know what, all of these open source and free software people, why not get them together? So they asked a professor at the university in Brussels, hey, can we have a few rooms and, you know, just invite some people, and the professor was like, yeah, sure, why not? This is now called FOSDEM. It attracts around about 7,000 developers and is now going into its 20th year. And fortunately this year it could still happen; it was before the coronavirus broke out. And that also created a kind of, you know, attraction for people to join, talk, and do stuff.

So, from my perspective, being introduced to it when I was quite young, it didn't feel that special. It was like, yeah, of course, this is how we do software. And what has grown from that, what you can see in the political field, is of course also heavily influenced by things like the European Union, where we're always looking for projects to share across Europe. In that sense, open source and free software also made a lot of sense very early on. A lot of people in the political field saw the potential, and maybe didn't really understand the philosophy or the ultimate goals that many people had. But the practical use of sharing code in a productive way was obvious to a lot of people. So that's why it was growing. And it's growing until today.

[17:49]

In the second half of this episode, let’s take a little time to briefly delve into the history of open source. This may be old hat to some of you but we wanted to offer a quick primer for everyone so as to have some context for the “what ifs” we’re going to dive into in upcoming episodes.

Early on, a lot of computer development took place at universities and in corporate research departments like AT&T’s Bell Labs. As we heard from Harish Pillay, they had long-established traditions of openness and collaboration—although some of that would admittedly later break down.

Indeed, as we’ll get into later, it wasn’t even clear early on whether software was protected intellectual property, and users often had to write or modify the software they needed. For example, the first operating system for the IBM 704 computer was written in the mid-1950s by programmers at GM and North American Aviation.

In addition to the Internet, which was almost certainly a critical catalyst and enabler for what we now call open source, another important technology thread was the aforementioned Unix.

It came out of Bell Labs and allowed users to “port” it to different types of computers. Of course, to make the needed modifications, you needed the source code—the text file with the original programmer’s instructions to the computer—which AT&T was willing to supply.
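For readers of this transcript who haven’t seen source code, here’s a minimal, purely illustrative sketch in C, the language Unix would be rewritten in (a stock textbook example, not actual Unix code):

    /* hello.c -- an illustrative example of what "source code" means */
    #include <stdio.h>

    int main(void)
    {
        /* This same human-readable text can be compiled, unchanged or
           lightly modified, on any machine with a C compiler. That is
           what made software "portable". */
        printf("hello, world\n");
        return 0;
    }

Building it is a one-liner with any C compiler, for example cc hello.c -o hello. Porting amounted to recompiling (and adapting where needed) text like this for a new machine, rather than rewriting everything from scratch for each hardware platform.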

AT&T was also very open, at the time, to shipping source code for another reason. After the Sixth Edition was released in 1975, AT&T began licensing Unix to universities and commercial firms, as well as the United States government. But the licenses did not include any support or bug fixes, because to do so would have been “pursuing software as a business,” which AT&T did not believe it had the right to do under the terms of the agreement by which it operated as a regulated telephone service monopoly. Instead, providing the source code let licensees make their own fixes and “port” Unix to new systems.

However, in 1982, AT&T entered into a consent decree with the US Department of Justice providing for the spin-off of the regional Bell operating companies. Among other things, the decree freed AT&T to enter the computer industry. And shortly thereafter, AT&T began development of a commercial version of Unix.

This would lead, over the course of about the next decade, to the messy “Unix Wars” as AT&T Unix licensees developed and shipped proprietary Unix versions that were all incompatible with each other to greater or lesser degrees. Let’s just say it was an extremely complicated and multi-threaded history, and leave it at that.

Part of this complicated history took place at the Computer Systems Research Group at the University of California, Berkeley, one of AT&T’s educational licensees. Over time, it modified and added features to its licensed version of Unix and, in 1978, began shipping those add-ons as BSD, the Berkeley Software Distribution. Over time it added significant features, involving the outright re-architecting and re-writing of many key subsystems and the addition of many wholly new components. As a result of these extensive changes and improvements, BSD was increasingly seen as an entirely new, even better, strain of Unix; many AT&T licensees would end up incorporating significant amounts of BSD code into their own Unix versions. Berkeley continued developing BSD to incrementally replace most of the standard Unix utilities that were still under AT&T licenses. This eventually culminated in the June 1991 release of Net/2, a nearly complete operating system that was ostensibly freely redistributable under what we now call a permissive license, which places minimal constraints on the subsequent use and redistribution of the code.

AT&T promptly got into a legal fight over this version of Unix, which it mostly lost in a 1994 settlement. Shortly thereafter, BSD development at Berkeley ended, but a number of variants maintained by others spun out.

In the context of open source, both the legal uncertainties associated with the AT&T and Berkeley legal fight and the subsequent fragmentation of BSD loom large.

Meanwhile, on the other side of the country at MIT’s AI Lab in Cambridge, Massachusetts, another drama was playing out.

AI was commercializing, especially in the form of machines designed to efficiently run LISP, the programming workhorse of AI research. One member of the AI Lab, Richard Stallman, liked the associated splintering of the Lab community not one bit. He found other aspects of increasingly proprietary software equally unpleasant. In a widely told story about Stallman’s genesis as a free software advocate, he was refused access to the source code for the software of a newly installed laser printer, the Xerox 9700, which kept him from modifying the software to send him notifications as he had done with the Lab’s previous laser printer.

Stallman responded by starting to write an operating system based on the Unix model, which is to say it was to consist of modular components, like utilities and a C language compiler, that together would build a working system. The project began in 1984. Although he never completed the operating system kernel, the program at the core that controls everything else in the system, he did complete many other components. These included, critically, the parts needed to build a functioning operating system from source code and to perform fundamental system tasks from the command line.

However, equally important from the perspective of open source’s origins was the GNU—that’s G-N-U, a recursive acronym for GNU’s Not Unix—Manifesto that followed in 1985, the Free Software Definition in 1986, and the GNU General Public License, the GPL, in 1989, which formalized principles to prevent software distributors from restricting the freedoms that, for Stallman, define free software. The license stipulated that if you pass a program on—whether by selling it or otherwise—you have to provide the source code as well, whether or not you make any changes. This so-called “copyleft” approach contrasted with more permissive licenses like Berkeley’s and reflected Stallman’s belief in the free sharing of software while giving enhancements back to the commons.
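To make the copyleft-versus-permissive distinction concrete, here’s roughly how the two approaches show up at the top of a source file. These are paraphrased notices for illustration, not the actual legal texts:

    /* Copyleft (GPL-style), paraphrased:
     * You may run, copy, modify, and redistribute this program, but if
     * you distribute it, with or without changes, you must also make
     * the corresponding source code available under these same terms. */

    /* Permissive (BSD/MIT-style), paraphrased:
     * You may use, copy, modify, and redistribute this software for any
     * purpose, provided the copyright notice is retained; closed,
     * proprietary derivatives are allowed. */

The practical difference is where enhancements end up: under copyleft they must flow back to the commons whenever the software is passed on; under permissive terms they may, but need not.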

Also, at roughly the same time—1991 to be precise—a Finnish university student by the name of Linus Torvalds posted in a Usenet newsgroup that he was starting to work on a free operating system in the Unix mold as a hobby. This OS would come to be called Linux and be licensed under the GPL. It made use of Stallman’s GNU components, including his C compiler—which was necessary to build the system from source code. For a kernel, he didn’t start from scratch. Rather, he worked on, and was inspired by, MINIX, a version of Unix created by Andrew Tanenbaum and licensed only for educational purposes at the time.

Linux is a member of the Unix-like family of operating systems. The distinction between Unix and Unix-like is complicated, unclear, and, frankly, not very interesting. Hence, all of these threads tie together in a highly interrelated way. That’s what makes unraveling the inevitability (or not) of open source—a term coined by Christine Peterson in 1998—such a fascinating puzzle.

By the close of the 1990s, Linux and open source more generally were not yet the dominant force and influence that they are today. But the market share of Linux was already eclipsing Unix on x86 servers. It was running some of the largest supercomputers in the world on the TOP500 list. It was the basis for many of the infrastructure products like “server appliances” sold during the dot-com boom. And even just by the year 2000, it had attracted thousands of developers from all over the world. The open source model was working, both for users and developers.

The following two decades made it a key part of the fabric of the software world. Not everywhere, but even historical opponents began to participate in open source projects, some more than others, some grudgingly, some enthusiastically. Today, it underpins much of computing and heavily influences how software development happens.

But did it have to happen this way?

In the next three episodes of this series, we’ll look at six possible points of departure.

You’ve heard how wrapped around each other the histories of Unix and open source are. Could events of the 1970s or 1980s have sent that history off in a different direction?

How important were the ideological underpinnings that Richard Stallman and the Free Software Foundation brought to what would later be called open source software?

Open source is, in important ways, a creature of the legal system within which it exists. What differences can we reasonably imagine?

Today, open source pervades ever broader categories of software. But what if that Finnish university student hadn’t written Linux? Would we have had to invent it anyway, or would the timeline diverge?

For a time, many viewed Microsoft’s victory in the OS wars and elsewhere as an inevitability. Microsoft CEO Steve Ballmer called Linux a cancer. Could Microsoft have played a different game or could Microsoft have otherwise “won” in a different timeline?

Finally, was open source pre-ordained to be commercially interesting? What if IBM hadn’t embraced Linux as strategic in 2000, one of several factors, along with the rise of enterprise Linux distributions, that led to the commercialization of Linux and open source generally?

Thus concludes the first episode of “Was open source inevitable?” Check out the next episode as a wide range of guests tackles the counterfactuals. Did open source need to develop the way it did and what does that mean for the future?

[29:58]


EPISODE 2: THE FOUNDATIONS OF A MOVEMENT

Segments:

Guests:

In the last episode, we considered some of the aspects of open source’s rise that were probably inevitable. Open source has been shaped by certain large technology and economic tides, such as the advance of microprocessors, that seem likely to have been inevitable even if the exact timing and details were not. Similarly, some level of sharing and collaboration has happened throughout human history and carried forward to the development of computers even before there was an industry as such. Finally, the Internet was the product of developments happening in many different places, especially in the US and Europe. While there were some specific seminal events and inventions, and we could debate some technical and governance specifics, it’s hard to see a worldwide network for data transmission not coming into existence.

We also heard the barest sketch of a complex history of Unix and open source software that can fill (and indeed has filled) books.

But let’s get to the counterfactuals—starting with Unix. Was this operating system that’s so intertwined with the history of open source inevitable? Or at least something like it—whether from AT&T’s Bell Labs (where Unix was birthed) or elsewhere?

One relevant question to ask is where Unix fit within the development of operating systems generally. Was it an outlier or a logical progression?

Here’s what Steven Weber wrote in his book, The Success of Open Source:

If a computer could multitask for one user, there is no reason why it could not multitask for many users. This realization was the genesis of time-sharing. The concept was simple, though writing the software to implement it was not. The MIT computation center developed one of the first such systems, CTSS, in the early 1960s. CTSS could host about 30 users at one time, each connected to the computer by modem. In 1964, researchers at MIT began a joint project with colleagues at Bell Labs and General Electric to build a second-generation time-sharing system called Multics. The Multics project was too ambitious for the state of technology at the time. It did not help that the three project collaborators had different goals for the system, and they were hamstrung by an awkward structure for decision making.

The project quickly ran into severe problems.

In the spring of 1969, Bell Labs announced it was withdrawing from Multics. Bell Labs had spent substantial resources on the project, and its failure left behind a strong management bias against research on operating systems. Ken Thompson and Dennis Ritchie, two ambitious Bell researchers on the Multics project, were left with little direction and even less enthusiasm in the organization for their work. But Ritchie and Thompson, along with several colleagues, felt they had learned important lessons from the Multics experiment, lessons that could be used in architecting a new and simpler operating system.

Weber goes on to describe how, in the summer of 1969, Ken Thompson took advantage of a quiet four weeks while his wife took their new baby to visit the grandparents in California to write an operating system for a PDP-7, one of a new class of relatively low-cost computers. The PDP line was developed by Digital Equipment Corporation, colloquially DEC. Thompson named his rudimentary operating system U-N-I-C-S (uniplexed information and computing services), an intentional pun on Multics. This would later morph into U-N-I-X, Unix. Because these minicomputers were less powerful than the big systems of the time (and because Thompson was working quickly), he had to, as Dennis Ritchie, the creator of the C programming language, subsequently told it, “build small neat things instead of grandiose ones.”

Thus, we can perhaps sum things up by observing that, while Unix had its roots in pre-existing operating system research, it was born of a different design philosophy. Furthermore, as we heard previously, Unix was rewritten in the early 1970s in Ritchie’s C programming language.  This new version of Unix, created by Thompson, Ritchie, and others, could be modified to work on other machines relatively easily; that is, it was “portable.” This was truly unusual for a time when the norm was to write a new operating system and set of supporting utilities and application software for each new hardware platform. It was thus, in important ways, an operating system for a new era.

It should also be evident from the Unix origin story that its creation at that specific place and time was at least somewhat serendipitous, a skunkworks project of sorts at a rather unique organization, Bell Labs. Had something broken the seemingly fragile chain of events that led to Unix, might something like Unix have never come into existence—or come into existence much later?

My colleague and co-author, Red Hat Distinguished Engineer William Henry argues that an operating system “born for the network” would have happened anyway.

[6:23]

It gets back to that spirit of collaboration and understanding. Now you could argue whether it was going to be Unix or something like it or not. But I think that the people involved back then all understood the inevitability of a more networked environment for a start. And that, therefore, security would need to come into it.

And I think that's where, as I said to you before, I felt like Windows failed, in that Windows was a personal computer operating system that everybody tried to tag networking onto, with things like NT and all that sort of stuff. And so it became this sort of horrible mess of a personal operating system trying to be networked, dealing with security flaws and issues and blue screens of death and all that sort of stuff.

Whereas Unix was coming out of this... We understood there was a network coming, like a universal one, something that's big, where we're going to be sharing with people that we don't always collaborate with every day. And that was happening in the educational community. So I think that something was inevitable because there was so much collaboration. It was the idea that almost everyone was a hacker, right? They were all hackers back then. Everyone was trying to get things done. Everybody was trying to work together. People were trying to collaborate. Yes, there might have been forces on the commercial side that were trying to keep things siloed and all that, but there was the pressure of the people involved, the types of minds that were being attracted to the environment: people that wanted to hack more, to share more, to show off more maybe, and say, hey, look what I've done. Oh, wow, you've done that. Well, let me add to that. And so in some ways that was going to bubble up into something anyway. Again, it may not have been Unix or Linux. We were just fortunate that that was ours... maybe we will never know what the alternative was. And maybe it would have been better. But it certainly wasn't Windows. The point is that I think that spirit of international collaboration was already happening at the time. It was great minds looking at this all over the world, people jumping on it. The new network was there, and security was going to be there as an issue once you bring that network into it. And so for me... I always assumed, I remember even back then thinking, gosh, isn't it great? Open source hadn’t even been invented. But isn't it great how much collaboration is done and how I can get this stuff from those guys.

Dave Neary of the Red Hat Open Source Program Office agrees. He argues that something in the Unix vein would have come together in some form or other.

[9:06]

Was Unix inevitable? I would say that at some point, some kind of academia-initiated, collaborative, open source operating system was inevitable. That was something that had been happening already since the 60s.

[9:30]

It’s possible to imagine that a minicomputer company like DEC could have filled a void left by an absent Unix by opening up one of their operating systems—perhaps a legacy one—in some form and to some degree. But anyone deeply familiar with the history and the culture of the minicomputer would have trouble making that leap. The minicomputer was too siloed, too proprietary, and too apart from the proto-standards emerging on the ARPANET, the Internet’s predecessor. Consider how, during this same era, Data General—the minicomputer maker that was the subject of Tracy Kidder’s The Soul of a New Machine—lost a US Court of Appeals ruling for refusing to even license one of its, by then older, operating systems to run on a competitor’s hardware. It’s hard to see DEC or Data General or one of their competitors not viewing open source, even in the most diluted form, as several bridges too far.

But let’s stipulate that there’s a Unix or something that looks a lot like Unix in important ways—including adherence to open standards. We tend to think of open source and open standards as joined at the hip. Today we often even talk about taking an open source code first approach to standards. But it doesn’t have to be that way.

As my colleague Red Hat Chief Security Architect Mike Bursell puts it:

[11:05]

Standards are easier if you think of the world in open source, but you don't have to do it that way. You can have closed source around standards.

And even the availability of system code doesn’t automatically lead to any sort of a free software movement. Which brings us to our next query: How important were Richard Stallman and the free software movement to where open source is today?

[11:33]

In the 1980s, the computer industry was commercializing in ways that were increasingly eroding, in many places, the sharing ethos that had pervaded the field since the beginning. As we learned in the last episode, one of those places was the MIT AI Lab, where the formation of two commercial Lisp companies, Symbolics and Lisp Machines Inc., had ended up as a messy and acrimonious process that led to much reduced open collaboration and widespread departures from the Lab.

Richard Stallman was part of the lab community and had previously written the widely used Emacs editing program. Stallman’s experiences with the effects of proprietary code in the Symbolics versus LMI war led him to decide to develop a free and portable operating system. His work on the GNU project would be instrumental in Linus Torvalds’s ability to assemble a complete operating system around the kernel that he wrote. But equally important were the principles—and perhaps the license—that Stallman wrapped around the code.

From Dave Neary’s perspective, today’s open source success can be directly traced back in part to Stallman’s actions.

[13:02]

What if Richard Stallman had not been at the AI lab? Or, you know, what if the particular printer software that is the kind of kickoff point for the free software movement had not been proprietary at that time? Five years later, would things have been different? That's one of the ones that I think is particularly interesting to think about.

There is a general tendency to collaborate on software, particularly in the academic area. But I think of it as a political rather than as a technical movement. The idea of copyleft and the GPL really, for me, made the collaborative creation of software almost a political act. It turned it from something which was very, very niche into a small but growing movement. And I think, absent the GPL, I don't know if any of the BSD projects would have gone mainstream in the 1990s the way Linux did. I don’t know.

And it is worth thinking that, you know, absent GNU and the Free Software Foundation, we had sendmail with the University of Washington license, which was not free software, but which was, you know, clearly very, very popular. We had Pine; we had DNS, BIND. All of the backbone software of the internet was, you know, free software, open source. So, there was certainly a place for open source without the GPL. My question is, would it have grown to the commercial success that we see today? And I'm not sure.

[14:56]

Richard Fontana, a lawyer at Red Hat who specializes in open source legal issues, also argues that Stallman and the FSF likely influenced how open source played out, especially insofar as copyleft and GNU were something of political acts whose absence might have resulted in a rather different landscape.

[15:21]

One of the key developments historically was Richard Stallman's decision to start the GNU project, and his invention of the copyleft policy, as implemented in the GPL, has had obviously a far-reaching effect on the development of open source and eventually Linux and open source licensing.

Of course, this decision, this invention of copyleft, to some degree was based on another development in the law, which was that, shortly after, it became clear that software was copyrightable. There were court cases that concluded, in the US and I assume in other countries as well, that object code was copyrightable. So prior to that time, there was some doubt about whether, even if source code was copyrightable, you could actually enforce a copyright against a transformation of source code into object code. That was kind of settled by the early 1980s.

In the US, that was just around the same time as Stallman starts the GNU project and a couple of years before he kind of formulates copyleft. So, if he hadn't come around and done that... I think we already had the beginnings of what we call permissive licenses by that time, or shortly after that time, in the 1980s. In a world without the GPL, a world where Richard Stallman doesn't get involved in starting the GNU project and doesn't think about licenses, there's no Free Software Foundation; perhaps he doesn't set up a Free Software Foundation if none of those things happen. We still very well might have permissive licenses, because we already had them. And they might have become what open source licenses are, to the exclusion of copyleft licenses.

I think, in that case, there would have been perhaps some sort of political void, if you will, or cultural void that would have been filled by something else. Or perhaps we would have seen a more vibrant community develop in the 1980s and 1990s around software licenses that prohibited commercial use. There were always licenses like that; in the 1980s and 1990s, there were small communities formed around software under licenses that prohibited commercial use. I understand that in portions of the gaming community there were sort of software-sharing communities built around non-commercial licenses. So it's kind of easy to imagine some of the energy that ended up going in the direction of supporting GNU projects like GCC, and then later on Linux, when Linus Torvalds decides to adopt the GPL for Linux... maybe some of that energy is redirected to communities with anti-commercial sorts of attitudes. Because there was a kind of anti-commercial sentiment that Stallman was responding to, and that the early enthusiasts around GPL-licensed software were responding to, despite the fact that Stallman himself made clear that free software was compatible with commercialization. So that kind of positive view of commercialization of free software maybe was not inevitable, or would have been limited to the permissive license context. We might have had more of a mixed permissive-license-and-proprietary ecosystem corresponding to what we actually see today.

And maybe that's not so different from where we ended up today. You know, given that there is this view that the GPL has become somewhat obsolete because of its model of a kind of object code distribution trigger. In a world where Software as a Service and cloud and web are the dominant modes of interaction and software delivery, a kind of object-code-distribution-based model is not as pertinent. And that's one of the reasons why people have spoken of there being kind of a decline of copyleft, or declining interest in copyleft, over the past eight or nine years. So maybe that's relevant as well.

[20:08]

We’ll return to the topic of commercialization in a couple episodes, but for this one we’ll stay on the theme of how the legal system and open source have intersected over time.

Before we take leave of Richard Stallman and the Free Software Foundation, it’s worth re-emphasizing that it was really Stallman who elevated licenses to be an essential component of what would become open source software. It wasn’t that there weren’t permissive licenses; BSD had one. But they weren’t rigorous or political.

Consider how the X Consortium or X11 license (which would become the MIT license—another popular permissive license) came about. Project Athena was a joint project of MIT, DEC, and IBM to produce a campus-wide distributed computing environment for educational use. Launched in 1983, it gave rise to important software that would end up being used broadly, including the X Window System and Kerberos. X was originally under a proprietary license but, according to Keith Packard, one of the project’s participants, what we would now call an open source license was added to X version 6 in 1985. According to another participant, Jim Gettys, "Distributing X under license became enough of a pain that I argued we should just give it away." However, it turned out that just placing it into the public domain wasn't an option. "IBM would not touch public domain code (anything without a specific license). We went to the MIT lawyers to craft text to explicitly make it available for any purpose. I think Jerry Saltzer probably did the text with them. I remember approving of the result," Gettys added. There's some ambiguity about when exactly the early license language stabilized; as Gettys writes, "we weren't very consistent on wording."

While the GPL went through changes over time as well, it was always intended to embody a specific set of principles implemented through a license. It wasn’t created just because some license was needed. The focus the GPL brought to using licenses as a tool still has ramifications today, which we see, for example, in ongoing arguments about using ethical licenses to restrict who may use a given open source project.

Open source lawyer and co-founder of Tidelift, Luis Villa:

[23:20]

If Richard doesn't hate his printer... He rolls out of bed that day and doesn't need to print anything. I wonder if some of this ethical stuff doesn't come up earlier, right? Because forget about the day he rolled out of bed and said he didn't like his printer. What about the day that he rolled out of bed and said, actually, ethical restrictions on software are bad? Some of this discussion might have happened a lot earlier. Somebody pointed out to me this morning, and it was a great insight, that for years we said the licenses were the only acceptable way to legislate behavior. We said that we didn't like codes of conduct. We said we didn't like kicking people out of our communities. And so we left ourselves with only licensing as the tool. And I think to some extent that's an artifact of Richard and the FSF and, as whoever pointed that out to me this morning said, it's not necessarily a healthy one, right? We have really lashed ourselves to the mast of licensing. And maybe we wouldn't be having so many of these discussions today if we'd said, hey, codes of conduct are also important, and how we behave with each other as peers and friends and human beings. If we'd extracted that from the licenses earlier, maybe we wouldn't be having some of the arguments that we're having today.

[24:40]

But let’s step back. Software licenses rest on an assumption that software is copyrightable, that it can be owned. This is something that we take for granted today, but there was a period during which this wasn’t at all clear. The question was only settled when the US Congress explicitly granted copyright protection to computer programs in 1980. Various court decisions provided additional clarification.

Richard Fontana again on how copyright developed and how things might be different for open source had software never become subject to copyright.

[25:26]

Software as a kind of product of human ingenuity got going during a time period when the legal status of software was very unclear. If there was any sort of consensus, it was that software was a thing that wasn't susceptible to being owned. It wasn't a form of property. Although the reality is, almost from the beginning, the situation was more confusing than that may sound. It was just not very clear. There were differing viewpoints about the legal situation and about whether it would be good policy or not for the legal system to allow software to be something that you can own. And, for business-related reasons, the question was, for a long time, not super critical, because there wasn't a software industry as such. And part of the complexity also is that it was changes in the law, particularly around copyright law, which to some degree enabled development of a software industry that was separate from a hardware industry. So there's sort of a chicken-and-egg issue here, as with some of the other issues around this overall question.

But the key development was that in the late 1970s, certainly by 1980, it became clear in the United States and other countries that software source code, at least, is something that you can have copyright on. And therefore, if you develop software, it's something you can own, and if you can own it, then you can license it, and if you can license it, you can make money from controlling the scarcity. And that happens after a long period of time when there was collaboration among communities of programmers. Not 1990s, let alone 2000s, era-style collaboration, but collaboration nonetheless across different institutions, universities, industry labs, and so forth. And so, we often think of open source as coming out of this kind of conflict that arose when the legal system recognized as copyrightable something that had been treated by software professionals as outside the scope of property or ownership. And that conflict, among other things, led to the invention of what we'd now call open source licenses, in all their different varieties.

So, if you can imagine a world where software never became susceptible to copyright, I think it's interesting to ask whether we'd nonetheless have something that looks like open source and open source development today. I think we might, and that's because I think in the end, in a world without software copyright, what is the legal obstacle to collaboration? You know, we could talk about patents being a potential obstacle. But leaving aside patents... Open source licensing is, as my colleague, Scott Peterson likes to say, something that helps get copyright out of the way to make collaboration possible. If copyright isn't in the way to begin with, you don't need anything that looks like open source licenses.

Maybe open source licenses develop anyway, to promote certain other types of policies. It's not clear how effective they would be. Maybe they'd be more like what we now have in open source projects around codes of conduct, maybe sort of aspirational policy statements that don't have much binding effect. And, in the end, they aren’t really necessary to promote collaboration.

This wraps up episode 2 of “Was open source inevitable?”

We’ll continue with our counterfactuals in the next and penultimate episode of this series.

We’ll first consider what might have happened had Linus Torvalds decided to take up ice sculpture instead of writing Linux.

Then, we’ll consider whether Microsoft could reasonably have played its hand differently to parlay its dominance on the desktop, together with Windows NT, into a position that stopped or co-opted open source.

[30:30]


EPISODE 3: OPERATING SYSTEMS FOR A HORIZONTAL STACK

Segments:

Guests:

In the last episode, we started to examine some of the counterfactuals, the what-might-have-beens, associated with open source. Did the Unix operating system, from which so much of the history of open source in our timeline derived, have to happen? How important were Richard Stallman and the Free Software Foundation, which made free software—which many would later call open source—a movement that went beyond collegial academic sharing? What other changes around licensing and copyright law could have altered open source’s trajectory?

In this penultimate episode of the series, we’ll play out the great Linux vs. Windows rivalry of the 1990s and 2000s.

We’ll first consider what might have happened had Linus Torvalds decided to take up ice sculpture instead of writing Linux.

Then we’ll take a look at whether Microsoft could reasonably have played its hand differently to parlay its dominance on the desktop together with Windows NT into a position that stopped or co-opted open source.

In our timeline, a Finnish university student by the name of Linus Torvalds made a 1991 post to a Usenet newsgroup saying that he was starting to work on a free operating system in the Unix mold as a hobby. This OS would come to be called Linux and be licensed under the GPL. It made use of Richard Stallman’s GNU components, including his C compiler—which was necessary to build the system from source code. For a kernel, he didn’t start from scratch. Rather, he worked on, and was inspired by, MINIX, a version of Unix created by Andrew Tanenbaum and licensed only for educational purposes at the time.

But what if Linus Torvalds had decided that computer science wasn’t his thing and taken up ice sculpture instead? That was the question I posed to long-time technology journalist and CBS Interactive contributing editor Steven Vaughan-Nichols and former Sun engineer and now co-founder of Oxide Computer, Bryan Cantrill. The whole debate is lively and informative. It’s published on this podcast and the link will be in the show notes. But here’s how they kicked things off.

[3:13]

[Steven Vaughan-Nichols]

Okay, so Linus has decided that he is going to take his chisel off into the far north and start carving ice sculptures. Fine. What happens from here as far as the operating system world is concerned? That's a darn good question. I think that the Internet takes a different path. And it's going to be based primarily on the BSD Unixes and SunOS. And yes, I know there's some argument that really aren't you talking about all the same thing, but we won't get into that right now. I think it's going to be a much slower process–the internet, that is, getting off the ground. I think it's possible that here we are in 2020, almost thirty years later, and we will be running, God help us, Windows 2020 as both our desktop and our server operating system. Open source has stalled out, doesn't exist at all really. We still have free software thanks to RMS, but I don't see that ever really engaging the business world. Same thing with the BSDs and the BSD license. They are around. They're important probably on the Internet as a server operating system. But it's a very different and very proprietary world, and I think open source in general as a concept really never catches fire. Very different world indeed.

[Bryan Cantrill]

I think that's nuts. I think that is absolutely insane. So, alright, let's take us from the beginning. First of all, if Torvalds doesn't do Linux, the rise of the Internet is unaffected. Let's just get that out there. The rise of the Internet happens really starting in 93. Certainly the rise of HTML is far more important to the rise of the internet than Linux, which had logical equivalents in other systems. And having been a... I graduated from university in 1996, so this is definitely bullseye for me. And the rise of the Internet through 93, 94, 95. The rise of Java in 95 is very important. And then the internet explodes in 95, 96, 97. Yahoo didn't run on Linux; it ran on FreeBSD.

It was really the workstation companies that exploded. And part of the reason I went to work for Sun Microsystems in 1996 is the other historic Unix workstation companies or server companies had all decided that Windows was the future. It was only Sun that had decided to stand by Unix, and it was Unix that exploded with the Internet. But the Internet was going to explode. I mean, that had, I think, nothing to do really with Linux. I think that what ended up happening over the late 90s and into the 2000s... Linux does not become really truly deeply relevant until the microprocessor that it was welded to, the x86, begins to surpass the RISC microprocessors. If you want the fastest microprocessor on the planet in 2000, 2001, 2002, that is increasingly an x86-based part and not a POWER-based part, a SPARC-based part, a MIPS-based part, or a PA-RISC-based part. And then I think that another variable in terms of the rise of Linux, one of the things that's very important, is the commoditization of the x86 and the bust, frankly. So the bust from 2000, 2001, 2002; by the time you hit 2002 to 2003, we're in a nuclear winter of an economic bust. And so it is the economics of x86 and the de facto Unix on x86, which was Linux. So I've got a totally different read on history. I think if it's not Linux, one of the BSD variants would have been the de facto Unix on x86. But it is the rise of the Internet, and the rise of SMP to a lesser degree, and then the rise of commodity microprocessors as the highest performing microprocessors, that Linux grabbed a ride on. It drafted on those economic mega trends but did not really contribute to them.

[7:59]

This captures the fundamental debate nicely. On the one hand, the BSDs, while popular in Internet infrastructure, hadn’t really taken off—at least for some definitions of taken off. Microsoft, a topic we’ll return to in a bit, was poised, in the minds of many, to eclipse all. In this view, a critical catalyst like Linux really was needed in the mid-to-late 1990s to produce the open source world we see today.

On the other hand, there was nothing especially new and innovative about Linux. It was the product of the great economic and technological forces which we heard about at the very beginning of this podcast series. If Linux hadn’t existed, the collective we would have had to invent it.

The BSD question is an interesting (and possibly pivotal) one. BSD’s ultimate, at least relative, failure is also one that doesn’t lend itself to a clear historical understanding even with the benefit of hindsight.

Lingering concerns stemming from the AT&T lawsuit we covered in the first episode of this series may have been one factor.

Brian Proffitt of Red Hat’s Open Source Program Office identifies the lack of a strong developer community and fragmentation as other possible factors. Certainly these were serious issues in the proprietary Unix world and could arguably have played out in a similar vein with open source Unixes.

[9:39]

I think a few things held BSD back. There never seemed to be a lot of developer interest in it. There wasn't a lot of interest in BSD from either the user side and certainly not from the developer side. And we can go, and I'm not gonna, but we can go into political arguments about why it was held back, that there are personalities in the community that sort of held BSD back. So I'm a big believer in conservation of energy. I have a background in physics. And so I'm kind of a believer that there's only so much developer energy or attention going around, and something's gonna take most of that energy and use it, and I think that in this case, it was Linux. And there just wasn't enough developer interest or passion around BSD to let that get started. And we almost had that problem with all the different disparate Linux distributions, where developers didn't know which one to pick. And we were very lucky in that regard that, regardless of all the fantastic distributions that are out there, really developer interest has settled on one of three distros: either the Red Hat universe, the Debian universe, or the SUSE universe, and a little bit Slackware. (But that's kind of faded a little bit.) So I would say we got lucky, because if that had continued, and all that developer energy and attention had been spread across all of the many distros at the developer level, I think we would have had a real problem with getting anything done.

[11:43]

But in spite of the problems with the BSDs, they were being adopted and there was certainly an assumption in at least some quarters that adoption would continue. Bryan Cantrill again:

[11:55]

The BSDs were available for x86, and especially once the lawsuit was cleared in the early 90s... I can just tell you that the feeling among my peers in the late 90s was, well, if we had to use x86... and at the time, we had the operating system; Solaris was ported to x86. But the assumption would be that it would just be FreeBSD. And I think that was the assumption among the companies that were building things on x86 as well in the late 90s, early 2000s. And, I mean, ultimately Linux clearly does surpass the BSDs in terms of adoption. And I think that there are some interesting questions in terms of why Linux versus the BSDs. But again, I think, in both cases they're the tail on the dog, the dog being the much larger economic trends.

[12:49]

And, even if the BSDs in their then-current organizational form were flawed—even perhaps fatally so from the perspective of widespread adoption—we might have seen adaptation in the absence of Linus Torvalds and Linux.

Rob Hirschfeld, CEO and co-founder of RackN:

[13:11]

I think the market needed the type of thing we got from Linux. I do think that there was a need for somebody who was willing to be the benevolent dictator, and sort of guard and protect the kernel. And he gets a lot of criticism for this too, because it's a very hard job and he doesn't always do it in a politic way, or in a way that rubs people the right way. And there are concerns and challenges with that. At the same time, during this run-up where he had to make a whole bunch of technical decisions and yes/nos, it mattered that we had somebody doing that. I don't think that's unique to Linus. He's done a good job of it, but there's mistakes he made. There's design decisions that were made, there were compromises, there's always balances. So, I think that there's a market force that would have driven us to this. And the market picked somebody who was able to actually shepherd the project well. There's a ton of Linux distros, there's no shortage of would-be kings in this case, and that's the beauty of how open source works. It's not because there was one inevitable truth. What it turned out was we had enough people running that the best of that group could surface.

And best is strictly a subjective term, because there's some Linux distros that are still better for some things and others that are better for others. We got sort of a good middle ground and then we had a couple of companies rush in behind it to prove it commercially. It wouldn't have happened without Red Hat either.

[14:54]

Perhaps if Linux hadn’t come into existence in the form, and with the person, we know today, the market would have naturally led to something similar. Perhaps from the BSDs. Perhaps from an earlier OpenSolaris, though that seems unlikely. Or perhaps from another university student building off MINIX.

In the last segment, we heard Steven Vaughn-Nichols ponder whether, in the absence of Linux, Microsoft Windows might have played a much larger role on the server. This series hasn’t talked much about Microsoft so far, which would probably greatly puzzle a time traveler from the mid to late 90s given Microsoft’s dominance at the time.

We didn’t even cover Microsoft in the history segment in Episode 1. Let’s remedy that before getting to the question at hand.

In the mid-1980s, Microsoft decided to build on its desktop PC domination to similarly dominate servers.

Microsoft’s initial foray into a next-generation operating system ended poorly. IBM and Microsoft signed a Joint Development Agreement in August 1985 to develop what would later become OS/2. However, especially after Windows 3.0 became a success on desktop PCs in 1990, the two companies increasingly couldn’t square their technical and cultural differences.

Meanwhile, Microsoft had started to work in parallel on a re-architected version of Windows. CEO Bill Gates hired Dave Cutler in 1988. Cutler had led the team that created the VMS operating system for Digital Equipment’s VAX computer line, along with other Digital operating systems.

Cutler had a low opinion of OS/2. By some accounts, he also had a low opinion of Unix as something designed by a committee of Ph.D.s.

In any case, Cutler undertook the design of a new operating system that would be named Windows NT upon its release in 1993. IBM continued to work on OS/2 by itself, but it failed to attract application developers, was never a success, and was eventually discontinued. This was an early example of the growing importance of developers and developer mindshare, a trend that Bill Gates and Microsoft had long recognized and played to considerable advantage.

Windows NT on Intel was a breakout product. Indeed, Microsoft and Intel became so successful and dominant that the term Wintel was increasingly used to refer to the dominant type of system in the entire industry. By the mid-1990s, Unix was in decline, as were other operating systems such as Novell’s NetWare.

Windows NT was mostly capturing share from Unix on smaller servers but many thought they saw a future in which Wintel was everywhere. Unix system vendors, with the notable exception of Sun Microsystems under combative CEO Scott McNealy, started to place side bets on Windows NT. There was a sense of inevitability in many circles.

Unix might still have been the operating system of choice for large systems with many processors; Windows NT was initially optimized for smaller systems. But it was easy to see that Windows NT was fully capable of scaling up; Cutler had architected it to serve as a Unix replacement. Once it got there, it was going to be very difficult not to rally around something that had become an industry standard, just as Intel’s x86 processor line had. Products selling in large volume have lower unit costs and find it far easier to establish partnerships and integrations up and down the new stack, with its horizontal layers rather than the vertical silos of the minicomputer and proprietary Unix vendors.

So what happened? Surely almost the entire industry couldn’t have been wrong. Bryan Cantrill again:

[19:26]

Yeah, the industry was wrong. And the industry's been wrong many times before, so this should not be earth shattering or a newsflash. But the companies that embraced Windows had very serious, deep structural problems. It was an act of capitulation. And it was not forward thinking at all. From all of them, right: from DEC, from HP, from IBM. And probably the most pathetic one is SGI, just because SGI absolutely should have been an independent thinker but felt that it needed to forfeit its future to Windows. I mean, you can kind of see some of that fear in Larry McVoy's Sourceware paper from 1993, which captures the fear as it existed in the industry. But there was just... Microsoft was a monopolistic competitor. They were vicious. They had a fearsome reputation. They had certainly conquered all personal computer operating systems. And it just felt to a bunch in the industry that they were going to conquer everything. I felt at the time, and I very much voted with my career, because I felt strongly as a 22 year old that that was not the case. And I went to go work for the only computer company that agreed with my point of view. And what I saw was the rise of symmetric multiprocessing and the rise of the Internet as things that Microsoft didn't get at all. And I just didn't see them participating in that, and I saw that not just the Unix based systems but other operating systems were in a much better position.

[21:19]

In this view, the inevitability of Microsoft was a shared industry delusion that was never going to come to pass. Not because an almost unimaginably different Microsoft couldn’t have played a different hand but because the Microsoft under Bill Gates really didn’t get the Internet and Internet standards until very late in the game. The Microsoft subsequently led by Steve Ballmer saw Linux and open source as a “cancer” to be wiped out. But maybe Microsoft could have adapted.

Brian Proffitt again:

[21:56]

I really think Microsoft could have played the long game better, back in the late 90s and even all the way up through the 2000s, before the late 2000s when they kind of got their heads on right. They were too aggressive. We all know the personality stories about Steve Ballmer and, you know, Craig Mundie and other people in Microsoft who were hyper aggressive about Linux.

I think that hyper aggressiveness didn't help them, because it certainly polarized the business. So now it became: we're either going to use Linux or we're not going to use Linux. So you had a lot of shops back in those days, IT shops, that were basically like, okay, we're all in for Windows, or all in for Linux. And there wasn't a lot of back and forth in between. And I'm referring mostly to the software on the server platforms, because on the desktop, I'm sorry, Linux never really was able to take off. But I think that if Microsoft had been softer around that, and if they had figured out shared source earlier, rather than only after they went out and ran very aggressive negative campaigns against Linux and open source, I think that we would have had a far different landscape, because Microsoft has pretty much got it right now.

Because you look at polls that come out around who's a leader in open source and who has the most open source contributions. There are a lot of independent marketing surveys that are saying, hey, Microsoft's doing very well on that. And old people like me are looking at that and going, what the actual heck is going on with that? You know, because now they've finally figured it out. And I think they're being sincere with their open source. I don't think it's a marketing game anymore. But if they had figured this out 10 or 15 years ago, and not been so harsh against Linux, I think we'd be looking at a far different landscape at this point. I think that a lot of open source development would be centered around Windows based platforms, not so much Linux. Again, going back to conservation of energy, I think a lot of the air in the room would have been sucked out and drawn more towards Microsoft. Because if I'm a developer and I'm looking at server distribution and desktop distribution, if Microsoft has all of the desktops or most of them, and if they have, hypothetically, more of the server space, it's gonna make more sense for me as a developer to go over and develop on Windows. Especially if they're doing some kind of open source that agrees with my personal politics or whatever.

[25:18]

Current Microsoft CEO Satya Nadella has demonstrated that Microsoft the company had the ability to course-correct with respect to open source under the right leadership. It had, and has, critical assets, not least of which is its historical strength in connecting to developers. Balanced against that, it’s perhaps hard to see that happening much earlier than it did absent a generational changing of the guard.

In our final Episode 4, we’ll consider the following question:

Even if we stipulate that Linux and open source were inevitable at some level, could they have just fallen into niches, as some industry analysts were predicting in the late 1990s? Successful in their way but not commercially important.

We’ll then close by summing up what we think we’ve learned and think about what insights we might extract about where open source is and where it is going.

[27:03]


EPISODE 4: HOW OPEN SOURCE WON--OR DID IT?

Segments:

Guests:

Welcome to the final episode of our series “Was open source inevitable?”

Our final scenario deals with commercialization. Is there a timeline where Linux succeeds in relative niches—network infrastructure and supercomputing, say—but never gets over the hump of enterprise IT skepticism that still existed around 2000, as it needed to do in order to become a mainstream commercial success?

One pivotal event in particular gets widely cited as getting Linux over the hump: IBM’s announcement in 2000 that it was embracing Linux as strategic to its systems strategy. The following year, then-IBM CEO Lou Gerstner said the company would spend $1 billion on Linux over the coming year.

There are two questions to ask about these decisions. First, given Linux, was it inevitable that IBM would embrace it? And second, was that embrace really of crucial importance?

For the first question, we turn to retired IBM executive Irving Wladawsky-Berger who led Internet and then Linux strategy for IBM at the crucial turn-of-the-century juncture.

[1:59]

By the late 90s, it was clear that Linux was becoming more and more important. And we formed a major task force to see to what extent IBM should embrace Linux and this happened in 1999. And the task force came back and said, we absolutely should embrace Linux, that it was going to be an incredibly important part of computing, that we should embrace Linux across all of IBM's offerings. And that IBM should become a major supporter of Linux.

And I still remember very well, in December of ‘99, I called Sam Palmisano, the head of IBM Systems Group. And I said, Sam, the task force recommends that we should embrace Linux. And Sam said, okay, Irving, we will do that. But you have to now come over and run an IBM Linux initiative. And I said to Sam, okay; we were pretty much done with our Internet strategy, so I was no longer needed to run the Internet division. And I said to Sam, when do you want to announce it? And Sam said, how about now? And I said, Sam, it's the Christmas holidays. Maybe we should wait until the new year. And in the second week of January of 2000, we made a major announcement saying that IBM would embrace Linux across all of its offerings. And in fact, later that month in January of 2000, I gave a keynote at LinuxWorld, which was taking place in the Javits Center in New York City, about IBM’s Linux initiative.

At some level, the rest is history.

[4:17]

Thus, from the perspective of someone in the best position to know, IBM’s Linux embrace was something of an inevitability.

But how important was that endorsement really? Was it just a natural response to how Linux, and other elements of open source such as Apache, were already becoming widespread? Was widespread enterprise adoption just a matter of time?

However, some view IBM’s endorsement as important or even a game-changer.

Here’s Matt Asay of Amazon Web Services:

[4:55]

One of the biggest things that happened for open source was IBM’s billion dollar commitment to Linux. And it was mostly marketing dollars that they were committing. And it was, again, self interested, because they wanted to build a business around Linux. But I was living through it, and I remember the time before it happened and the time after it happened. Before it happened, we would struggle. We were selling Linux to these different manufacturers of personal digital assistants, and we'd walk in with Linux and they'd say, I've heard of this GPL thing. It's radioactive. No way do I want that. And then IBM comes out, again, I think it was 2001, and says, and I mean this actually with profound respect, we're a big, boring enterprise company, and we're gonna put a billion dollars into Linux. And almost overnight, the conversations changed. And so that, I think, was a seminal moment for open source generally, certainly for commercial open source.

[6:01]

Dave Neary of the Red Hat Open Source Program Office:

The thing that people point to is IBM betting $1 billion on Linux. That is an inflection point.

[6:17]

And Steven Vaughn-Nichols of CBS Interactive:

What the IBM acceptance does is it gives an official Fortune 50 blessing to an operating system which previously was still seen as this thing that only really nerdy academic sorts were going to do anything with. And yes, it could be useful for little companies who can’t afford to buy an IBM mini or mainframe or, I suppose, a Sun SPARCstation. But now, after their blessing, all these businesses that otherwise might not at this point have even heard about Linux yet are waking up and saying, well, what is this anyway? Why are they putting these odd advertisements on primetime television with this little kid named Linux who's going to do all these wonderful things? So as people who are primarily technologists, I don't think it made that much of a difference to us. But for the business world, the greater economic world, I think that made an enormous difference. Whenever I write stories about Linux history, I always credit that as being the development which turned Linux from being this odd techie background thing to something that all businesses at least would be familiar with. And then of course, as time goes on, more and more of them adopt it.

[7:48]

On the other hand, if IBM’s endorsement was truly as inevitable as we heard, doesn’t that suggest that Linux and open source already had a great deal of momentum? Perhaps it was Linux pulling IBM along rather than the other way around. Certainly, open source was already extremely important to what we’ve been calling niches, such as Internet infrastructure and supercomputing—but these weren’t really just niches by the 2000 timeframe. Still, the IBM investment and endorsement may well have accelerated adoption by enterprise IT.

And accelerated the corporatization of open source, which Diane Mueller, Director of Community Development at Red Hat, argues was needed for its eventual success:

[8:43]

I think open source itself was probably inevitable. There would be a point where we switched from just sharing best practices and lessons learned in simple scripts for doing things. I think there would have been a juncture where we would have jumped over that hurdle and started sharing more of the code. I think that was inevitable. And then once people got addicted to the idea of collaborating on things, across business lines, across global regions, then I think we would have always seen this growth in the open source landscape and the number of projects. And in some ways, as much as it sounds like a terrible thing, the corporatization of open source may have saved it. It might have died, it might have had this big arc and then we might have been in the trough of disillusionment that Gartner trots out, for open source, had corporations not started backing it and realizing that they needed these people to be working full time on these projects, because these projects became the linchpins of their product offerings, or their service offerings, or their hosting service, or the many things that depend on open source. So it's a bit of a conundrum, because I really like the early days, going back to those DECUS user groups and, you know, things where we were, in the very, very early days, starting to share knowledge in an open way and come together around a platform. And it seemed, in those prior times, there was much more of an academic flavor to it, an individual flavor to open source.

I think as soon as companies like Red Hat started backing Linux, and offering support for it, then the corporatization became inevitable. If no company like Red Hat had stepped into the fray, to start offering services... But I think that was inevitable. Even the hobbyists needed help, even the, you know, hobbyists who wanted to use it in small businesses. It feels a little bit like a science fiction movie, if we really think about it: it might have stayed at the hobbyist level of services and support, like small 5-10 person companies that were supporting it.

But as soon as we flipped the switch, and companies like Red Hat started backing it in a big way, that was when the corporatization became inevitable to me, looking back in hindsight. And had other people not also stepped into the fray and started putting engineering resources on these projects, making sure that it was fully supported, open source might not have made it. It might not be what it is today. It was, I think, the realization that you could make money supporting open source projects and doing the technical support and release management and all of the goodness that's there. And had that not happened, we might still be downloading from a very different internet. We might still be in gopher and Veronica land.

A lot of the innovation we see was driven by people’s thirst to productize and to create new things that they could make and monetize. So I think we’d see a very different landscape now had those initial companies not stepped into the fray.

[12:20]

In this series, we’ve gone through six scenarios, six counterfactuals, six possible inflection points where the timeline leading to open source as we know it today could have plausibly diverged. We took as given broad technology and economic trends such as commodity microprocessors and Moore’s Law. We also assumed an interconnected network at least passably resembling the Internet—and the inevitable sharing that took place over the network and in other ways.

Was Unix inevitable? An important question given that Linux—but really much of open source more broadly—is so tightly entwined with the Unix tree. The specific chain of events that led to the creation of Unix at Bell Labs looks fragile. But William Henry makes a convincing argument that some network-centric operating system for less expensive, less powerful hardware would have inevitably emerged from the widespread collaboration and sharing going on in academia and elsewhere. Furthermore, given how mainstream modern operating systems have generally converged around a process-centric design, rather than, say, dataflow architectures, it seems likely that not-Unix would look more like Unix than not.

What if Richard Stallman had not brought the Free Software Foundation into being and established principles for free software including a copyleft license—making sharing of software an overt political act? Dave Neary thinks that open source would have been successful without the copyleft GPL but maybe not as commercially successful. Richard Fontana notes that Stallman himself made it clear that free software was compatible with commercialization. But absent Stallman, an ideological void might have been filled by activists far less friendly to profit and corporate use.

Richard Fontana reminds us that open source licenses are rooted in copyright law and, were software not copyrightable, it’s not clear that you’d have open source as we know it today. (But you also maybe wouldn’t need it.) Coming back to Stallman, Luis Villa observes that it was Stallman’s focus on controlling behavior through licenses that led licenses to become the often solitary blunt tool available; see the ongoing debate over ethical open source licenses today.

Perhaps the most contentious topic was the importance of Linux. Bryan Cantrill argues that one of the BSD Unixes that came out of Berkeley would have filled the void absent Linus Torvalds. They were already in use at companies like Yahoo and were popular in Internet infrastructure while Linux was still relatively immature. Rob Hirschfeld thinks you did need someone like a Linus Torvalds to make an open source operating system project a coherent entity, but if that person weren’t Linus, it would have been someone else. But Brian Proffitt worries that the fragmentation of the BSD communities might have continued unabated, as it did in the proprietary Unix world, with a corresponding, perhaps fatal, splintering of developer mindshare. And Steven Vaughn-Nichols wonders whether, without Linus Torvalds, open source might have stalled out and we might all be running Windows2020 on our servers today. Linux perhaps comes closer than anything to “for want of a nail the war was lost.”

Speaking of Microsoft, how did it blow its seemingly irresistible rise in the 1990s? To Bryan Cantrill, Microsoft’s position was always an illusion and a serious miscalculation by much of the industry. That said, according to Brian Proffitt, Microsoft could have played the long game better and parlayed its strength with developers into a better long-term position had it abandoned its most hardball tactics. However, doing so may not have been plausible with its first generation of leadership.

The endorsement of Linux by IBM was seemingly inevitable given the circumstances of the time according to Irving Wladawsky-Berger. Matt Asay, Steven Vaughn-Nichols, and Brian Proffitt all argue that endorsement helped accelerate Linux. But what was the cause and effect? Diane Mueller argues that, whatever the cause, the eventual corporatization of open source was probably necessary for its eventual success.

Perhaps your takeaway from our little jaunt through the history of open source is that the forces leading to open source were too powerful to allow the timeline to diverge in significant ways. As Bryan Cantrill puts it:

[17:52]

I think that there were a lot of open source contributors in terms of software bodies out there. And I think the idea of Torvalds as creator of heaven and earth... I definitely think is a misread of history. I think history would have unfolded in not wholly dissimilar ways. And that's probably true of any given individual. It's very hard for single individuals, unless they're going to be assassins of Archdukes, to really shape the course of history. The economic forces at play are too great.

[18:31]

But as you consider whether you agree or disagree with what you’ve heard, you may also want to consider the timing. Because that can be important. Even if you assume that open source in some form was ultimately an inevitability, the relationship of important open source-related events to macro factors like the dot-com bust is both important and unpredictable. As is its relationship to the leadership at key companies.

Mike Bursell, Chief Security Architect at Red Hat:

[19:06]

I don't think it was inevitable at any particular time. I think maybe we could say that, in the fullness of time, it maybe would be inevitable. But I can see we could have gone a lot further without it being inevitable, without it happening yet, if that makes sense. I could see us being sort of around now, or 5-10 years ago, with open source becoming a thing. But I'm not convinced it was inevitable right from the get go.

I think there were some lucky breaks. And maybe they could have happened earlier in some ways. But I don't think the timing was inevitable. I would like to think that human nature is such that we would have got there in the end. But that's not the same as saying we'd have it now.

[19:52]

Many companies could have done different things in this story: Microsoft, Sun, maybe DEC, IBM. It would have required overcoming great institutional inertia, but it wasn’t impossible. Which may be a good lesson in why it’s important to overcome organizational hurdles.

We’ve also tended to paint some things as black and white when they’re really not: the commercial success of open source, the broad victory of collaboration as the key to innovation. It’s not that simple.

For example, Matt Asay of Amazon Web Services argues that while participation in open source today is deep and wide, it took a fairly long period to incubate open source as a commercially interesting way to collaborate:

[20:46]

I think it's the commercialization of open source and the rampant self interest, corporate self interest. It's the thing that gets called out: is open source sustainable? Oh, we have these developers, how are they going to make a living? How are we going to ensure that this project persists? And I don't want to be too Adam Smith on this, but I think that the key to it all, the key to making open source really thrive, is the fact that it no longer requires any of that collegiality. We have that, we definitely have that. You see it if you go to OSCON, or you see it in the message lists as well: people working at different companies getting along just fine. And they think of themselves first and foremost as a Linux developer, oh, and by the way, I work for IBM. Or I'm a Kubernetes developer, oh, by the way, I work for VMware, or whatever. But they're a developer on a certain project first. So there is some collegiality there, but the thing that really makes it work, the thing that gives me the most hope that it's sustainable, that it's going to continue to thrive, is precisely that we don't really have to rely on people's good intentions anymore. And I think there was a time in open source, or free software, whatever, when you did. And as much as I think people are inherently good, and I tend to trust people to do the right thing, I feel safer with open source knowing that I don't have to trust people to want to do the right thing all the time.

[22:30]

Furthermore, while open source is now well-established as an approach to developing software, Rob Hirschfeld of RackN points out that aspects of the collaboration remain fragile:

On the surface, open source, the way we envision it working, is very fragile. And when I say open source, what I mean is a shared code base where there's true multi-vendor collaboration. And the word vendors is really important to me in open source, because it's people who have a commercial interest in the success of the codebase. And so if you're looking at a case where multiple vendors are sharing a common good, Tragedy of the Commons is a very real thing. There's sharing and collaborating around a common set of shared value components, and that's a very hard thing to maintain, especially with loose governance and loose rules, which is sort of inherent in open source. The idea that we're gonna have multiple people profiting from a shared code base is very, very hard to sustain in a real way. And very few things have done it. We've seen open source succeed as a single vendor, or single vendor dominated, component where that vendor sort of shepherds it. And we've seen very few projects really succeed at a big scale where they have a real community-sustaining model.

[23:59]

Certainly sustainability is an ongoing challenge, especially for projects that don’t have major corporate backing. Patreon donations are not in general a sustainability model. Chris Aniszczyk, Developer Relations at the Linux Foundation:

I have a lot of concerns around developers going this donation based approach. It's something that kind of bothers me a little bit personally, because donations have never worked well for starving artists, for aeons, I think, throughout history. And what's even worse is I think it even enables what I essentially call a developer, open source focused gig economy, where people are expecting donations and they're not making enough money. I've actually done a lot of research on this: there's very few developers out there actually being sustained by donations. I think it's a poor model. Instead, we should be teaching them how to find jobs or build businesses around the cool stuff they've done, so they can actually sustain themselves with a great business or with a salary, with benefits and all that goodness that we've come to expect. You know, it's just interesting, because we've had this recent trend of GitHub sponsors and people accepting donations, which is kind of nice. But I don't want developers to be confused that this is actually a sustainable way to do things. It's just something that I think we could do better, and it's almost wrong to spread the idea that this is actually a possibility for most folks.

[25:25]

So even if you come away from this series convinced that there was indeed a certain inevitability to open source, we hope that you’ll also reflect on how its current state may not have been pre-ordained in every detail. Some dice rolls might have gone differently. It might have come later or with less impact. It’s not hard to find ongoing challenges even today.

Open source isn’t going away, of course, and some form of it probably never was. But it should never be taken for granted.

[26:38]