Regulating Cyberspace: A Case Study in SPAM - Phase Three

Lydia Pallas Loren
Northwestern School of Law of Lewis and Clark College


 

PHASE THREE: Alternatives: Regulating From The Bottom Up

While regulation by governments is a standard way to shape behavior, there are other mechanisms that influence human interaction. Social pressures can play an important role in how one behaves in a given situation. This kind of pressure can be thought of as a type of "bottom-up" regulation. It is not dictated from on high by a government with authority over the individuals. Rather, it is the people themselves attempting to "regulate" actions through informal social control. The bottom-up approach to shaping behavior can take many forms, from social norms and mores, which when violated bring shame, dishonor, or shunning, to grass-roots boycotts of products in an attempt to reform a company's practices through monetary pressure.

 

Netiquette

In cyberspace, there is a loose sense of what is acceptable and what is unacceptable. Sometimes referred to as "netiquette," this unwritten code of conduct does influence the actions of individuals on the Internet. Netiquette, however, can only influence behavior: if an individual behaves in a way that is inconsistent with accepted netiquette, there are no legal consequences that can be brought to bear on that individual. Netiquette has played a role in at least one decision relating to UCE, but only because the general rules of netiquette were expressly incorporated into the contract at issue. See Ontario Inc. v. Nexx Online Inc., [1999] O.J. No. 2246 (finding that sending spam is "contrary to the emerging principles of 'netiquette'"). Unless a contract incorporating netiquette exists, violators of netiquette are subject instead to the cyberspace equivalents of shunning (violators are ignored) and shaming (violators are "flamed"), along with other technical responses to consistent violators.

 

Technical Boycotts

While no legal consequences can be forced on those who behave in ways netizens find unacceptable, there are technical consequences that can be implemented. Technical retaliation in the form of email bombs or viruses is one response that may shape future behavior, but these approaches are, for the most part, subject to their own legal prohibitions. See, e.g., 18 U.S.C. § 1030. Another technical solution is filtering, discussed in Note 2 of phase two of this module, although filtering does not shape the behavior of the senders of UCE, except to the extent that senders modify their future actions to avoid the filters! An additional technical consequence that has been implemented to shape behavior relating to the sending of UCE can be seen as filtering on a much larger scale. One example of this approach to shaping behavior is the MAPS (Mail Abuse Prevention System) RBL (Realtime Blackhole List).
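To see why filtering redirects, rather than deters, sender behavior, consider a toy sketch in Python (the blocked phrases and the evasion are invented for illustration):

    # A toy content filter; the rules are invented, not from any real product.
    BLOCKED_PHRASES = ["make money fast", "free!!!"]

    def is_filtered(subject: str) -> bool:
        """Return True if the subject line contains a blocked phrase."""
        lowered = subject.lower()
        return any(phrase in lowered for phrase in BLOCKED_PHRASES)

    print(is_filtered("MAKE MONEY FAST"))     # True: caught by the filter
    print(is_filtered("M-A-K-E M0NEY FAST"))  # False: trivially evaded

The second call shows the escape hatch: a filter blocks only what its rules anticipate, so senders adapt their messages rather than stop sending.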

As stated on its homepage, "the MAPS-RBL is a system for creating intentional network outages for the purpose of limiting the transport of known-to-be-unwanted mass e-mail." Boiled down to its basics, the MAPS-RBL keeps a list of those network providers that are spam friendly. Other network providers can choose to subscribe to the MAPS-RBL, which means that they will block all email coming from any account on the networks listed on the RBL. All mail from the listed network providers is blocked, not just mail from accounts known for sending UCE. The RBL thus establishes a technically implemented boycott of the listed network providers. Only network providers that subscribe to the RBL participate in the boycott; email sent to non-subscribing network providers is unaffected. As with any boycott, the RBL is only as effective as the number of subscribing network providers allows. In this way the RBL seeks to shape behavior by blocking messages to large numbers of recipients, making the sending of UCE far less profitable.
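Mechanically, the RBL is distributed as a DNS zone that subscribing mail servers query for each inbound connection. The sketch below, in Python, shows the conventional DNSBL lookup; the zone name is the one MAPS published, while the sample IP address and the surrounding plumbing are hypothetical.

    # A minimal sketch of a DNSBL-style lookup, assuming the RBL's
    # published DNS query convention; the sample IP is hypothetical.
    import socket

    def is_blackholed(sender_ip: str, zone: str = "rbl.maps.vix.com") -> bool:
        """Return True if sender_ip appears in the DNS blackhole zone."""
        # DNSBLs are queried by reversing the IP's octets and prepending
        # them to the zone: 1.2.3.4 becomes 4.3.2.1.rbl.maps.vix.com.
        query = ".".join(reversed(sender_ip.split("."))) + "." + zone
        try:
            socket.gethostbyname(query)  # any answer means "listed"
            return True
        except socket.gaierror:
            return False                 # lookup failure means "not listed"

    # A subscribing provider would run this check on each inbound SMTP
    # connection and refuse the mail whenever it returns True.
    if is_blackholed("192.0.2.7"):
        print("550 refused; see http://maps.vix.com/rbl/")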

However, as discussed earlier in this module, the senders of UCE have been known to relay their messages through other innocent servers so as to disguise the true origin of their messages. Relaying permits someone who does not have an account with a particular server to, nonetheless, send a message "through" that server, making it look to the receiving computer as if the message came from the server through which it was relayed. Relaying avoids filters that have been programmed to block the email of the true sender of the UCE. This practice of relaying can result in problems for the server through which the email is relayed. In addition to the problems of misdirected email bombs, the server through which the UCE is relayed may also find itself subject to filtering as people associate that server with UCE.
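The effect of relaying is visible in a message's trace headers. In the hypothetical message below (every host and address is invented), the topmost Received line -- the only hop the recipient's server observed directly -- names the relay, which is why the relay, rather than the true origin, attracts the bounces and the filtering.

    # Parsing the trace headers of a hypothetical relayed message; all
    # hosts and addresses here are invented for illustration.
    from email.parser import Parser

    raw = (
        "Received: from relay.university.example by mail.recipient.example;"
        " Mon, 1 Mar 1999 10:00:00 -0500\n"
        "Received: from dialup.spammer.example by relay.university.example;"
        " Mon, 1 Mar 1999 09:59:00 -0500\n"
        "From: anyone@forged.example\n"
        "Subject: MAKE MONEY FAST\n"
        "\n"
        "...body...\n"
    )

    msg = Parser().parsestr(raw)
    # The recipient's server can vouch only for the first hop it recorded,
    # so blame naturally lands on the relay named there.
    for hop in msg.get_all("Received"):
        print(hop.split(";")[0].strip())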

Relaying is only possible if a server has been configured to permit "open relay." If open relaying is permitted, then the server is a prime target for senders of UCE. Just as relaying avoids various filters, the practice also avoids the boycott of the RBL, at least initially. As soon as a network that permits open relay is identified to the RBL, that network provider will be listed on the RBL until the network is reconfigured to disallow open relay. By blocking all messages from networks that are UCE "neutral," the RBL hopes that the subscribers of such networks will pressure them to modify their technical configurations concerning relaying. For more information on the RBL's position on relaying, see <http://maps.vix.com/rbl/rationale.html#RelaySpam>.
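Whether a server permits open relay can be tested with a short SMTP exchange: ask the server to accept mail from a non-local sender to a non-local recipient. The following sketch performs that conventional probe (all hostnames and addresses are placeholders); a 250 reply to the RCPT command means the server will relay for anyone.

    # A sketch of the conventional open-relay probe; every hostname and
    # address below is a placeholder, not a real system.
    import smtplib

    def permits_open_relay(host: str) -> bool:
        """True if host accepts mail neither from nor for its own users."""
        with smtplib.SMTP(host, 25, timeout=10) as smtp:
            smtp.helo("probe.example")
            smtp.mail("outsider@elsewhere.example")             # non-local sender
            code, _ = smtp.rcpt("stranger@thirdparty.example")  # non-local recipient
            return code == 250   # accepted: the server relays for anyone

    # print(permits_open_relay("mail.university.example"))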

The pros and cons of these types of bottom-up approaches to rule making in cyberspace are discussed in the following article by Professor David Post presented at the Yale Information Society Project Conference on "Private Censorship/Perfect Choice," April 9, 1999. The following excerpts are taken from his March 1, 1999 version, available at <http://www.temple.edu/lawschool/dpost/blackhole.html> (many footnotes have been omitted without indication of omission –ed.).

 

 

Of Horses, Black Holes, and Decentralized Law-Making in Cyberspace, David G. Post, Temple University School of Law, Postd@erols.com

There are, among the (rapidly-growing) community of people who spend their time thinking about law and policy in cyberspace, some rather interesting debates currently taking place. Though they are not always characterized in these terms, they reflect conflicts between fundamentally different visions of law-making and, even, of the nature of "order" and "disorder" in social systems. This conflict and these debates are by no means new; indeed, they have long and distinguished pedigrees. But they are reflected and refined in some rather fascinating ways in the Internet context -- or so, at least, I hope to persuade you in what follows.

Several years ago, Judge Frank Easterbrook posed the following challenge:

"When he was dean of [the University of Chicago law school], Gerhard Casper was proud that the University of Chicago did not offer a course in the "The Law of the Horse." He did not mean by this that Illinois specializes in grain rather than livestock. His point, rather, was "Law and . . ." courses should be limited to subjects that could illuminate the entire law. . . . that the best way to learn the law applicable to specialized endeavors is to study general rules. Lots of cases deal with sales of horses; others deal with people kicked by horses; still more deal with the licensing and racing of horses, or with the care veterinarians give to horses, or with prizes at horse shows. Any effort to collect these strands into a course on "the Law of the Horse" is doomed to be shallow and to miss unifying principles." (2)

Like others present at Judge Easterbrook's presentation, I have thought long and hard about his meaning. Legal theory and legal doctrine, Easterbrook suggested, are (or should be) sufficient to illuminate the "entire law"; we should be looking not for an intellectual property law of cyberspace, but for a general theory of intellectual property law to apply in cyberspace. It certainly seems reasonable enough to presume(4) that no new legal theory or legal doctrine is necessary to explain and illuminate the transactions taking place in cyberspace, and that the burden of persuasion should be placed on those who assert that for some reason existing theory or doctrine is not sufficient to explain (or provide rules for) conduct on the global net.

It is a presumption one hears more and more these days. I would like, at this conference on "private censorship," to approach it from below, as it were, by relating a small incident that may illuminate one small corner of this large canvas. In a discussion group in which I participate, a member of the group -- let's call him Professor X -- posted the following message:
 

 

To all:
Assuming that this message isn't screened out by [the server hosting this discussion group], you might be interested in a "small" problem [my university] faces. A few weeks ago, someone "bounced" some spam off our server. It somehow corrupted our email system, and . . . I am beginning to get messages like this:

The message that you sent was undeliverable to the following: [name@host.com]

Transcript of session follows:
MAIL FROM: [professorX@university.edu] refused; see http://maps.vix.com/rbl/

I hope it never happens to you. Meanwhile, any ideas about how to deal with it?
Cheers,

There were, as it turned out, lots of ideas about how to deal with it -- and therein hangs my tale. But first, the facts. The message that professorX@university.edu attempted to send to name@host.com was deemed "undeliverable" because it had fallen prey to something called the "Realtime Blackhole List" (RBL). Boiled down to its essentials, the RBL works as follows. Paul Vixie, an Internet veteran, compiles a list of Internet Service Providers -- call them "Listed ISPs" -- who, in his view, encourage the operation of "spammers" (i.e., commercial bulk e-mail operations) by permitting use of something called an "open mail relay system." University.edu was, unbeknownst to Professor X, apparently one of these Listed ISPs. Vixie makes his list available to the world at large at his website -- the location of which is, helpfully, appended to the message announcing the "refusal" to handle Professor X's email ("http://maps.vix.com/rbl/"). Other ISPs can, if they choose, configure their systems so that their customers cannot exchange e-mail with the customers of the Listed ISPs, in the hope of persuading the Listed ISPs (like university.edu) to alter their policies. [Call the ISPs choosing this course of action -- and there appear to be a fair number of them(8) -- the "Subscribing ISPs."] The email account of the intended recipient of Professor X's message -- name@host.com -- apparently resides on the system belonging to one of these Subscribing ISPs. As a result, Professor X's mail bounced back -- with a message from host.com saying, in effect, "No thanks, we don't take any mail from, nor do we forward any email from our subscribers to, systems like yours that encourage spammers (as that has been determined by Paul Vixie and the other operators of the RBL site)."
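(To make the refusal mechanics concrete: the following is a speculative sketch of what a Subscribing ISP's server does, written against the modern Python aiosmtpd library. It is not what host.com actually ran, the listed network below is invented, and a real subscriber would consult the RBL itself rather than a local list. –ed.)

    # Speculative sketch of a Subscribing ISP's mail server (aiosmtpd).
    from aiosmtpd.controller import Controller

    LISTED_PREFIXES = {"192.0.2."}  # invented stand-in for an RBL lookup

    class RBLHandler:
        async def handle_MAIL(self, server, session, envelope, address, mail_options):
            client_ip = session.peer[0]
            if any(client_ip.startswith(p) for p in LISTED_PREFIXES):
                # This refusal is what comes back to Professor X as a bounce:
                return "550 MAIL FROM refused; see http://maps.vix.com/rbl/"
            envelope.mail_from = address  # accept; let the session continue
            return "250 OK"

    controller = Controller(RBLHandler(), hostname="127.0.0.1", port=8025)
    # controller.start()  # would begin refusing mail from listed networks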

The underlying problem that the RBL (and other similar anti-spam "boycott" or "blacklisting" operations) is attempting to address -- the proliferation of unsolicited mass e-mailing operations -- is, we might all agree, not a trivial one. At just the moment that e-mail has become an indispensable form of communication, of incalculable commercial and non-commercial importance for a significant and ever-growing segment of the world community, its utility is threatened by this barrage of unwanted and unsolicited communications. Get-rich-quick schemes, unsolicited stock tips, invitations to view pornographic websites -- the noise is drowning out the speech that many would rather engage in, making many places on the Net uninhabitable. Some have suggested -- plausibly -- that the viability and even the existence of many open discussion forums (in particular, many Usenet newsgroups) -- one of the Internet's earliest and most remarkable innovations -- is being fatally undermined by the explosion of mass mailings of this kind.

But is the RBL a good way to deal with this problem? And, as Judge Easterbrook (or even Dean Casper) might put it: is our answer to this question affected in any fundamental way by the fact that this is all happening in cyberspace?

Views on the first question appear to be rather sharply divided. On the one hand, legal scholars have, in recent years, discovered -- or re-discovered -- the significance of "informal social control" in shaping behavior in social systems. Systems of rules and sanctions created and administered outside of any formal State-managed or State-sanctioned process -- "norms" -- can, it is clear, be both powerful and welfare-enhancing determinants of behavior in many contexts.

And what is the RBL if not part of just such a system of informal social control? Here, after all, is what Vixie is up to. He has proposed a definition of behavior that he finds unacceptable -- allowing open mail relay systems. He encourages you -- or, more accurately, your system administrator -- to sanction those who behave unacceptably (according to his definition). The sanction he proposes is a kind of electronic shunning; those applying the sanction simply cease all (electronic) communication with the offenders. He offers to serve as your agent for determining the identity of those (the "Listed ISPs") who are behaving in this unacceptable manner. He provides you with means to accomplish the shunning, i.e., to configure your system so that all e-mail addressed from your system to a Listed ISP system, and all e-mail addressed from a Listed ISP system to yours, is returned as undelivered.

Most significantly, if you do not agree with Vixie's particular definition of unacceptable behavior, or his choice of sanction, or the means he has chosen to implement that sanction, or his method of detecting violators subject to the sanction, you are entirely free to ignore them (or, if you'd like, to propose your own). Not that his behavior doesn't exercise a constraint on yours; but it does so only to the extent (and precisely to the extent) that others share his views on the definition of wrongdoing, the choice of appropriate sanction, the identity of the wrongdoers, etc. He can persuade, and cajole, and beg the hundreds of thousands of ISPs out there to join his group of Subscribing ISPs -- but he cannot force them to do so in any meaningful sense of that term. It is a near-perfect preference-revealing device, it would seem, for uncovering shared definitions of unacceptable conduct; the likelihood that Professor X will feel the sting of Vixie's sanction is perfectly calibrated to the number of people who share Vixie's views in these matters. If a substantial number of people share his view of unacceptable behavior, it may become a governing norm on the net; and if a substantial number of people share his view of what constitutes unacceptable behavior, who is to say that that view is not the "correct" one?

Many people, apparently. Surprisingly (to me), many people describe institutions like the RBL in very different terms than I have used: not as a "respectable" form of norm-creation or decentralized rule-making but rather as a species of vigilantism or vandalism, a wasteful and inefficient "arms race" between spammers and anti-spammers that must be brought under control by some more formal rule-making process (such as an anti-spam statute). Prof. Lessig has forcefully articulated this view:
 

"[T]hese battles [between spammers and anti-spammers] will not go away. The power of the vigilantes will no doubt increase, as they hold out the ever-more-appealing promise of a world without spam. But the conflicts with these vigilantes will increase as well. Network service providers will struggle with antispam activists even as activists struggle with spam.

"There's something wrong with this picture. This policy question will fundamentally affect the architecture of e-mail. The ideal solution would involve a mix of rules about spam and code to implement the rules. . . . Certainly, spam is an issue. But the real problem is that vigilantes and network service providers are deciding fundamental policy questions about how the Net will work -- each group from its own perspective. This is policy-making by the 'invisible hand.' It's not that policy is not being made, but that those making the policy are unaccountable. . . . Is this how network policy should be made? The answer is obvious, even if the solution is not."(13)

This view -- not only that we should not rely on the interplay [a misnomer, perhaps] between spammers and anti-spammers to make network policy, but that it is "obvious" that we should not do so -- seems to be widely shared.(14) I find this somewhat perplexing. Both processes -- the bottoms-up one of which the RBL is a part, and the top-down legislative process that produced, e.g., the State of Virginia's anti-spam statute -- are capable of producing rules governing this conduct (though they may be different rules); both implement those rules via sanctions (though the sanctions are different); both sets of rulemakers are "accountable" (though in very different ways); and both processes have a degree of transparency (though to different people). How can we tell which process -- Vixie's or Lessig's -- is likely to produce better rules?

Whatever you think may be the answer to this question -- and I will get to my own views in a moment -- it is of central theoretical and practical importance in cyberspace. Lessig is surely correct; "these battles" -- and therefore this question -- "will not go away." A conflict between bottoms-up and top-down rule-making processes is at the heart of most, if not all, of the important and challenging cyberspace policy debates.

Consider, for example, the current turmoil in the domain name allocation system. The Internet exists as a single entity because of an informal and unspoken global consensus among ISPs to consult a single source -- the so-called "rootserver databases" -- to match domain names with their corresponding machine addresses so that Internet messages are routed to their correct destinations. A self-appointed body, the Internet Assigned Numbers Authority (IANA), and later a private firm (NSI), managed this system from its inception almost 20 years ago, funded (in part) under a contract with the U.S. government. When that contract expired last year, the Commerce Department made the eminently sensible decision that it should no longer continue to spend taxpayer dollars on a system that clearly could be self-funding. But it could not bring itself simply to walk away from that relationship at contract termination; it would, the Department stated, be "irresponsible to withdraw from its existing management role without taking steps to ensure the stability of the Internet."(16) Instead, it embarked on a process of "transferring authority" for setting domain name rules and policy to a newly-created non-profit corporation (ICANN). The possibility that the system could sustain itself on a self-ordered basis -- an "authority-free" basis, the one on which, one might suggest, it was originally built(17) -- was not seriously considered.(18) Whatever one thinks of this decision, the creation of a single government-authorized entity to control these critical system resources will inevitably have deep implications for the Internet as a whole.

And while this conflict between top-down and bottoms up approaches is a pervasive feature of cyberspace policy debates, we possess no real analytic vocabulary or framework for comparing the rules generated by the two processes. We cannot evaluate ex ante the rules produced by decentralized processes like the RBL because of inherent, irreducible, and almost complete uncertainty about what they might be. No one can say what kind of anti-spam rules will emerge from the RBL process, or how the domain name allocation system will operate if the U.S. government steps aside, because that information does not exist until the process itself generates it. No one can say whether Vixie's initiative will, or will not, cause open mail relay systems to disappear, because that depends upon the response of hundreds of thousands of individual system administrators; no one can say whether alternative and as yet untried and perhaps unthought-of means of deterring spammers will prove more popular than Vixie's; no one can say how spammers will react to the absence of open mail relay (or to these other alternatives) or how the anti-spammers will react to those reactions, etc.(20)

Thus, we cannot lay the rules that might emerge from these bottoms up systems side-by-side with their top-down alternatives for purposes of analysis, deliberation, and debate. Though we often talk as if we can evaluate "network policy" by this kind of cool, rational calculation of the costs and benefits of the available alternative courses of action, that is necessarily something of a charade; one side of the equation invariably returns the message "Variables Undefined."

As a result, the policymaking deck is more than a trifle stacked against the bottoms up mode; because we can't see, or imagine, or debate the pros and cons of where the RBL might take us -- the rule(s) of spamming that the RBL and its variants could produce -- all we're left with is the bad news: mail that doesn't reach its intended destination, disruptions of all kinds, all of the disordered and aggravating messiness of bottoms-up processes. Why in heaven's name should we put up with it?

It certainly makes for an apparently simple policy choice: order versus chaos, the "stability of the Internet" versus disorder and instability. But it is not simple. It is precisely this disordered and aggravating messiness that ultimately is the source of the immense power of decentralized decision-making systems: repeated trial-and-error, and the pull-and-tug of competing rules and counter-rules, can generate answers to complex problems, configurations of the individual elements of complex systems, that can be found no other way.(21) There is an immense literature -- far too rich for me even to summarize in this brief essay -- describing this phenomenon in mathematical, physical, and biological systems, and I know of no plausible argument that suggests that problems defined in social systems are different in this regard.

But if you are uncomfortable with taking, say, the theory of evolution as evidence for the power of bottoms up algorithms to solve otherwise insoluble problems of coordination and design, consider the Internet itself as an example of the way in which decentralized, trial-and-error, consensual processes can build stable structures of unimaginable complexity and power. The rise of cyberspace, it is worth noting, took virtually everyone by surprise;(22) how could something as ridiculously complex as a single global communications network be built without identifiable direction, without some "authority" in charge of bringing it into being? A seemingly impossible coordination problem -- constructing, and getting large numbers of people to adopt, a single global language(24) -- had to be solved before the Internet could exist. And it was solved (in a remarkably short period of time).(25) Like the natural languages it so closely resembles, it emerged (and could only emerge) from a process that was at its core unplanned and undirected. The direction of its evolution was at any and all times unpredictable, determined by a bottoms-up process of developing consensus among larger and larger numbers of geographically-dispersed individuals. Though we can point ex post to many individuals and institutions who played particularly important roles in its emergence, no one "created" the set of rules we now know as the Internet because no one was or could have been in the position to do so, any more than anyone is in a position to create a new set of rules for English syntax. Emergent institutions like the Internet Engineering Task Force (whose motto, "We reject Kings, Presidents, and voting; we seek rough consensus and working code," aptly captures its decentralized orientation), the World Wide Web Consortium, the Internet Assigned Numbers Authority, and the like -- institutions with no authority whatsoever to act on anyone's behalf, no fixed address or membership, and no formal legal existence -- somehow got hundreds of millions of individuals across the globe to agree on a common syntax for their electronic conversations.

Here, then, is my first response to Judge Easterbrook's challenge. Perhaps our existing theories of self-ordering and coordination are less powerful than we might have thought. Perhaps cyberspace -- unlike horses -- can at the very least inform our understanding of these very basic questions. If nothing else, perhaps the very existence of the net should caution us against dismissing too quickly the notion that there are some problems that are best solved by these messy, disordered, semi-chaotic, unplanned, decentralized systems, and that the costs that necessarily accompany such unplanned disorder may sometimes be worth bearing.

But there is another sense in which the "cyberspace" in "cyberspace law" matters (or should matter) for our thinking about these problems: having emerged from decentralized disorder -- from the primordial ooze of the Internet Engineering Task Force -- cyberspace has created conditions that favor the growth of powerful centralizing forces. There is, first, what we might call cyberspace's "jurisdictional conundrum" -- the difficulties inherent in mapping territorial legal regimes onto a medium in which physical location is of virtually no significance. The State of Virginia will soon discover that its anti-spam statute has little effect on the amount of spam that its citizens receive, because while spam originating anywhere on the network can easily make its way into Virginia, spam originating elsewhere -- i.e., outside of Virginia's borders -- is largely immune to Virginia's control for both "doctrinal" and practical reasons. The same will be true in regard to a federal anti-spam statute (if such a statute is enacted), just on a grander scale. We can already write the headline in the New York Times: "Use of Offshore E-Mail Servers Hinders Enforcement of Federal Spam Statute; Government Calls for International Cooperation to Solve 'Serious Problem.'" We will, inevitably (and, since we're on Internet Time, sooner than we think), hear calls for "international harmonization" of spam regulation, replicating the pattern currently spreading across the cyberspace legal spectrum.

Second, in cyberspace, the "code" in which the law is inscribed will increasingly come to mean the "code" inscribed in the software, the protocols, and the underlying architecture of the place itself -- "rules that require a password upon entry into a system; or that require a filename no longer than thirty characters; or that require a verified return address on a particular e-mail message; or that allow the places one's Web browser has visited to be reported to another Web browser."(30) One characteristic of this "code of the code" is that it is brutally efficient; the marginal cost of enforcement for these code/rules is zero, and they brook no argument and yield to no persuasion in their application. "One cannot flout the password requirement," and one "cannot 'almost' be on America Online -- you are either transmitting AOL-compliant messages or you are not."
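(A few lines of Python, with an invented rule, make the point: the gate is either open or it is not, whatever one's excuse. –ed.)

    # An invented rule, enforced by code: zero marginal cost, no appeal.
    def admit(attempt: str, required: str = "open-sesame") -> bool:
        return attempt == required  # no argument, no persuasion

    print(admit("open-sesame"))  # True
    print(admit("open sesame"))  # False -- "almost" does not exist here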

Neither top-down rule-making from the very pinnacle of the international pyramid, nor rule-making inscribed in the code, is inherently malevolent. One can hardly deny that there are circumstances where it is in the common interest to stop those who would lob grenades over our borders from outside, just as there are circumstances where the brutal effectiveness of code-based rules is a welcome enhancement to the enforcement arsenal.

But this can easily become, I fear, a recipe for a policy-making disaster. If the confluence of these forces produces a single rule about the nature of "unsolicited bulk commercial email," a rule enforced unambiguously by perfectly discriminating code operating on a global basis, we will certainly have "ensure[d] the stability of the Internet." But at the extreme, a "stable" Internet is one locked in place, incapable of generating innovative responses to the very problems that the Internet itself is bringing into existence. No one really wants such a world, with its Orwellian overtones. But these forces will push -- are pushing -- us hard towards that extreme; I suggest that we will need a deeper understanding of, and appreciation for, the ways in which the chaos of the RBL can produce lawful order if we are to push back.

ENDNOTES:
2. Frank H. Easterbrook, "Cyberspace and the Law of the Horse," 1996 U. Chi. Legal F. 207.

4. I take it that Judge Easterbrook meant it as a presumption. It would be an odd form of hubris to suggest that there could never be some phenomenon that would cause us to revise or even discard pre-existing theory or doctrine, and I read Easterbrook, more reasonably, to be suggesting simply that he had not seen sufficient evidence to suggest that cyberspace was such a phenomenon.

8. [Vixie's site lists over 80 network subscribers, and a number of software packages that incorporate "anti-relay" features. The precise number of subscribers is probably impossible to know with certainty]

13. Lessig, "Spam Wars" [cite]. (italics supplied).

14. In the course of the most enlightening discussion of these questions on the "Cyberprof" listserv, skepticism about bottoms-up processes in general, and certainly about the RBL, was widespread. E.g.,

"These private blacklists - however virtuous the maintainers might be - are a perfect example, imho, of where bottom up doesn't work. The externality from this boycott is huge. Yet there is no body that can reckon that externality. "

"[My company] fell victim to Vixie and his list during last summer. Given the nature of our proprietary architecture, making the fixes they wanted wasn't an option. While they eventually were forced to acknowledge this, we were blackholed for an unacceptable period of time while we tried to make them understand why we couldn't comply. The lack of formal process on their end seriously hampered our ability to get them to understand. Many of our customers had major problems arise during that time period because they couldn't use our service to get mail out to users on ISP who subscribed to the Vixie list."

"The average RBL'd site with an open mail relay is like a neighbor who allows members of the public open access to his yard, whence they deposit all sorts of trash into *my* yard. . . . Why can't I allow access to my yard without fear that some members of the public will abuse it to litter both mine and my neighbors' yards? Moreover, I wonder how many generations of locks and lock pickers we have yet to endure. Something is amiss in this let-it-all-hang-out picture."

Professor X himself, it might be noted, agreed with Lessig:

"I regard email as a tool, not a career. I appreciate that some are otherwise inclined, but neither I nor many other people are interested in its history and arcana. My point was and remains: Public policy should not require them to delve deeply to send a simple message and avoid what amounts to vandalism and vigilante responses thereto."

16. Department of Commerce White Paper (emphasis added).

17. Evidence that the DNS has, in fact, been functioning under a form of bottoms up management comes, ironically enough, from the difficulty that the government has had identifying with any precision the source of the authority that it proposes to transfer to ICANN. See [ ].

18. To be fair, it was considered. See [Department of Commerce "Green Paper."] It appears, however, to have been abandoned. See [Department of Commerce, "White Paper."]

20. The precise analytic vocabulary of economics is of no use to us here, even within the relatively restricted but important sphere of determining whether rules are or are not welfare-maximizing. It cannot help us assess the efficiency of the norms that would be produced by the system of which the RBL is a part because, at bottom, economists, and economic models, are no better at predicting the future than anyone or anything else. The analytic models of mainstream economics routinely assume away the "endogenous variables" -- the responses and counter-responses of individual system components, the pull and tug between competing parts of the whole -- that are at the heart of any bottoms up process.

21. In the language of complexity theory, decentralized bottoms up decision-making processes are powerful algorithms for finding solutions -- "high points on the fitness landscape" -- to problems defined over complex interdependent spaces. See [Cite Post & Johnson, "Chaos Prevailing"]

22. See the Economist, "The Death of Distance," [cite], for what is probably the best general description of the striking inability of politicians, social theorists, and even some very savvy players within the computer industry itself, to predict ex ante the emergence and growth of this medium.

24. The Internet is, at bottom, that language, the set of grammatical rules (the "Internet protocols" and related transmission and communication standards) that allow machines to exchange information with one another.[Cite Lessig Chicago-Kent talk]

25. The passive voice in the foregoing sentence is intentional. The Internet, like all such radically decentralized problem-solving systems, is a phenomenon of the passive voice; problems are solved, institutions are created, without an identifiable actor other than the system itself.

30. Lessig, "Reading the Constitution in Cyberspace," 45 Emory L.J. at 896. Lessig, see [ ], Joel Reidenberg, ["Lex Informatica: The Formulation of Information Policy Rules Through Technology," 76 Texas L. Rev. 553 (1998)], and Ethan Katsh ["Software Worlds and the First Amendment: Virtual Doorkeepers in Cyberspace,"1996 U. Chi. Legal F. (discussing cyberspace as a "software world" where "code is the law")] have written extensively and trenchantly about this transformation in the law. See also [Post, "Anarchy, State and the Internet," J. Online Law at [ ] (software protocols have a "competitive advantage" over other forms of social control in cyberspace because code can "more precisely demarcate" the line between permissible and impermissible behavior)]; Johnson & Post, ["Law & Borders," 48 Stan. L. Rev. at 1395-97 (describing ease with which 'software boundaries' can delineate separate territories in cyberspace)].

 

Notes, Comments, and Questions:

1. Technical nature of the RBL "boycott." As Prof. Post points out, the unforgiving nature of computer code makes enforcing rules that can be embodied in code very effective: one cannot ignore a password requirement. Prof. Post raises this point in relation to the potential threat of centralized rule making in cyberspace. If governments can embody their laws in computer code, it is extremely difficult for anyone to "break" the law. Doesn't the unforgiving nature of computer code, however, also transform the nature of boycotts like the RBL? If an ISP decides to subscribe to the RBL, i.e., to participate in the boycott, Paul Vixie and the others who run the RBL dictate what messages will and will not be delivered to that ISP. The ISP cannot "sort of" participate in the boycott by making its own determination of whether a particular server is spam friendly. Consider a comparison to real space: when people are encouraged to boycott tuna producers that are not dolphin friendly, it is left to each individual to decide how strictly to adhere to that boycott. Does the code nature of the cyberspace boycott change how we view the boycott as a form of decentralized rule making?

At least one domain name owner has been able to enlist the help of the courts to prevent, albeit temporarily, the listing of its domain on the RBL. See Patricia Odell & Richard H. Levey, Yesmail Gets Restraining Order Against MAPS Blacklist, Direct Newsline, July 17, 2000, <http://www.directmag.com/content/newsline/2000/2000071701.htm>.

2. Top-down or Bottom-up. Those supportive of bottom-up rulemaking argue that its flexibility and its recognition of individual preferences make it superior to state regulation. Those more inclined to favor a top-down system argue that bottom-up rulemaking is rife with market failures, information asymmetries, and externalities, whereas top-down rulemaking has the benefit of deliberative and representative decision-making. The negative externalities of bottom-up rulemaking often lead to calls for state intervention in the form of laws that will either eliminate the externality-producing behavior or internalize its cost. Do the arguments concerning top-down versus bottom-up regulation differ when cyberspace is the realm of regulation?

3. The Role of Internet Service Providers. A fundamental characteristic of cyberspace is that no one entity or government controls the internet. This characteristic poses huge challenges to the enforcement of any "rules" in cyberspace. In his article, Prof. Post urges the reader to consider decentralized, bottom-up rule making as a more viable and credible means of shaping behavior than many lawyers and government officials are willing to admit. The RBL as a model of bottom-up rule making, however, may not be a good example. After all, the decision as to whether a particular e-mail account is going to receive messages that originate from RBL-listed servers is not made by the account holder; that decision is made by the service provider for that account. Are internet service providers good agents for representing their constituents' (i.e., their subscribers') interests?

If a service provider has decided to participate in the RBL boycott, an individual account holder cannot decide that he or she does not wish to be a part of that boycott, except by switching service providers. The ability to switch service providers, however, may be a powerful option, one that can influence the decisions of service providers. In the United States we sometimes talk of the "laboratories" of the states. If individuals do not agree with the laws and regulations of one state, they can choose to move to another state. Of course, the costs associated with such "voting with your feet" can be enormous. In cyberspace, if internet service providers are enforcing policies through their code and their email architecture, individuals can "vote with their wallets" -- they can take their business to a different service provider. Is this an apt comparison?

The internet service provider, because it can act as a "gatekeeper," is often looked to as a possible point of enforcement for government-created rules. Internet service providers have been in the middle of controversies relating to defamation in cyberspace and controversies relating to copyright and trademark infringement on the internet. In the case of defamation, ISPs have managed to obtain an exemption from liability under federal law in the United States. 47 U.S.C. § 230(c). In connection with copyright infringement, ISPs have managed to obtain certain "safe harbor" exemptions if they assist copyright owners in removing allegedly infringing material. 17 U.S.C. § 512.

Are internet service providers the right actors on which to place responsibility for creating policy for cyberspace? Are they the right actors on which to place responsibility for enforcing centrally created laws? How about rules created in a decentralized rule making process?

4. The Law of Cyberspace. Judge Easterbrook's reference to the "law of the horse" story raises important questions concerning the nature of a developing body of "cyberspace law." Is cyberspace law a different category of law, such that it should be thought of as a separate discipline? Should law school classes on "cyberspace law" even exist, or would your legal education be better served by integrating into all of your law school classes the challenges that the digital world brings to law?
