
Communications policy and information technology

Ubiquity, Volume 2003 Issue January, January 1 - January 30, 2003 | BY Lorrie Faith Cranor, Shane Greenstein


Policy analysis involves tempering enthusiasm with a sobering dose of reality.


As Langdon Winner writes, it is not until "after the bulldozer has rolled over us" that we "pick ourselves up and carefully measure the tread marks." [1] That is, a technologist's forecast may not be scrutinized until well after a new technology has been deployed. To be sure, new technology is often developed in an air of optimism, as an attempt to solve problems. But, more often than not, new technologies actually create many new problems, fall far short of their predicted abilities, or bring with them a myriad of unintended consequences. When all effects are considered together, technology has an arguably positive impact on the quality of life. Yet the emphasis should be on "arguably."

In the communications policy arena, the optimistic technologist perennially forecasts that new communications technologies have the ability to directly or indirectly address the most intractable problems. Communications technology promotes the dissemination of ideas, the free flow of information, and more direct political participation. Internet connections give remote communities access to medical information, libraries, or even university courses. In the most utopian views, universal access to advanced communications technologies has a role to play in feeding the hungry, curing the sick, educating the illiterate, improving the overall standard of living around the world, and ultimately bringing about world peace.

Policy analysis often involves tempering this enthusiasm with a dose of sobering reality. While communications technologies probably have a role to play in making the world a better place, the impact of any specific technical advance is likely to be modest. Technologies often turn out to have limitations that are not immediately apparent -- they don't hold up to everyday use in the real world, they don't scale, or they have side effects. In addition, there are limitations in their benefits. The limits may not be inherent in the technological capability, but rather due to the regulatory institutions or the economic constraints that govern the deployment of a commercial implementation. While the technology may exist to deliver any information anywhere in the world, many people lack the money to pay for it, the equipment to access it, the skills to use it, or even the knowledge that any of this might be useful to them in the first place. Indeed, there may not be a viable business model for delivering basic services associated with a new technology at all.

Many of the papers presented at TPRC 2001 [the Telecommunications Policy Research Conference] examine the impacts of new communications technologies and their associated institutions. Despite the novelty of the technologies and the optimism that first surrounded their deployment, these examinations echo recurring themes. Technologies and institutions are often slow to deliver on their promises, many kinks remain to be worked out, and many questions remain to be answered. The tension between promising prospects and vexing conundrums arises in every paper in this volume.


Regulatory Conundrums and the Internet

As the Internet gained popularity in the 1990s, it was predicted to be a tool that could break down national borders and their associated troublesome institutions. The Internet was heralded as both an enabler of direct digital democracy and the incubator of a utopian society that had no need for government. But it has not worked out that way. As discussed in this book, Internet governance has emerged, but not smoothly. The vision of digital democracy has yet to materialize, and papers in this book suggest that many obstacles remain.

"Governments of the Industrial World . . . . You have no sovereignty where we gather," wrote Electronic Frontier Foundation co-founder John Perry Barlow in his 1996 Declaration of the Independence of Cyberspace [2]. "We have no elected government, nor are we likely to have one . . . . You have no moral right to rule us nor do you possess any methods of enforcement we have true reason to fear." Five years later, governments throughout the world are attempting -- and in many cases succeeding -- to enforce their laws in cyberspace, and an elected government for the Internet has emerged in the form of ICANN. But the imposition of government and governance on cyberspace has not gone smoothly. While Barlow and others have maintained that "legal concepts of property, expression, identity, movement, and context do not apply" to the Internet, courts around the world have found that they do apply, although not always in consistent or predictable ways. And while Barlow foresaw that "from ethics, enlightened self-interest, and the commonweal, our governance will emerge," the governance that has emerged has been widely criticized for its lack of commonweal or enlightenment, and for frequently acting only in its self-interest and not for the benefit of the larger Internet community. Experience has shown that the Internet can be regulated, but that doing so is not easy. In this book we present three chapters that offer insights into some of the difficulties associated with Internet regulation: a case study in ICANN decision making, an analysis of the policy issues associated with attempts to establish a universal addressing system, and a discussion of how legal jurisdiction should be determined in Internet-related cases.

In Chapter 1, Jonathan Weinberg tells the story leading to ICANN's selection of seven new Internet top level domains in November 2000. ICANN staff, in setting the ground rules for considering new TLDs, emphasized that only a few applicants would be allowed in, and imposed strict threshold requirements. Staff determined that the Board should pick TLDs by looking at all relevant aspects of every proposal, and deciding which ones presented the best overall combination of a variety of incommensurable factors. As Weinberg explains, aspects of the resulting process were predictable: Anyone familiar with the FCC comparative hearing process for broadcast licenses can attest that this sort of ad hoc comparison is necessarily subjective, lending itself to arbitrariness and biased application. Yet the process had advantages that appealed to ICANN decision-makers. The Board members would be free to take their best shots, in a situationally sensitive manner, at advancing the policies they thought important. The approach allowed ICANN to maintain the greatest degree of control. The end result, though, was a process that Weinberg describes as "stunning in its arbitrariness, a bad parody of fact-bound, situationally sensitive (rather than rules-based) decision-making."

As Robert Cannon explains in Chapter 2, ENUM marks either the convergence or collision of the public telephone network with the Internet. ENUM is an innovation in the domain name system (DNS). It starts with numerical domain names that are used to query DNS name servers. The servers respond with address information found in DNS records, which can include telephone numbers, email addresses, or other contact information. The concept is to use a single number to obtain a plethora of contact information. By convention, the Internet Engineering Task Force (IETF) ENUM Working Group determined that an ENUM number would be the same numerical string as a telephone number, and that the assignee of an ENUM number would be the assignee of that telephone number. But ENUM could work with any numerical string or, in fact, any domain name. ENUM creates multiple policy problems. What impact does ENUM have upon the public telephone network and the telephone numbering resource? For example, does it create a solution or a problem for number portability? If ENUM truly is a DNS innovation, how does it square with the classic difficulties experienced with DNS and ICANN? Is ENUM, while presenting a convergence solution, also encumbered with the policy problems of both the DNS and telephony worlds?
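
To make the naming convention concrete, the following minimal Python sketch shows how an E.164 telephone number maps to an ENUM domain under the e164.arpa scheme the IETF working group adopted; the optional NAPTR lookup noted in the comments assumes a third-party resolver library such as dnspython and is illustrative only.

```python
# Minimal sketch of the ENUM mapping convention: an E.164 telephone number is
# stripped to its digits, reversed, dot-separated, and suffixed with
# "e164.arpa" to form a domain name whose NAPTR records hold contact URIs.
def enum_domain(e164_number: str) -> str:
    digits = [c for c in e164_number if c.isdigit()]   # drop "+", spaces, dashes
    return ".".join(reversed(digits)) + ".e164.arpa"

print(enum_domain("+1-202-555-0123"))
# -> 3.2.1.0.5.5.5.2.0.2.1.e164.arpa
#
# A resolver library such as dnspython (assumed available) could then query
# the NAPTR records for this name to retrieve SIP, email, or other URIs, e.g.:
#   import dns.resolver
#   answers = dns.resolver.resolve(enum_domain("+1-202-555-0123"), "NAPTR")
```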

A unique challenge presented by the Internet is that compliance with local laws is rarely sufficient to assure a business that it has limited its exposure to legal risk. In Chapter 3, Michael Geist identifies why the challenge of adequately accounting for the legal risk arising from Internet jurisdiction has been aggravated in recent years by the adoption of the Zippo legal framework, commonly referred to as the passive versus active test. The test provides parties with only limited guidance and often results in detrimental judicial decisions from a policy perspective. Given the inadequacies of the Zippo passive versus active test, Geist argues that it is now fitting to identify a more effective standard for determining when it is appropriate to assert jurisdiction in cases involving predominantly Internet-based contacts. The solution Geist suggests is to move toward a targeting-based analysis. Unlike the Zippo approach, a targeting analysis would seek to identify the intentions of the parties and to assess the steps taken to either enter or avoid a particular jurisdiction. Targeting would also lessen the reliance on effects-based analysis, the source of considerable uncertainty since Internet-based activity can ordinarily be said to create some effects in most jurisdictions. To identify the appropriate criteria for a targeting test, Geist recommends returning to the core jurisdictional principle -- foreseeability. Foreseeability in the targeting context depends on three factors -- contracts, technology, and actual or implied knowledge.


Digital Democracy: Prospects and Possibilities

As the popularity of the Internet has increased, so too has enthusiasm for "digital democracy." Many are excited about the prospects of citizens participating in virtual town halls, voting from home in their pajamas, and using the Internet to better inform themselves about candidates and ballot issues. Indeed, since the Reform Party held the first (partially) online election to nominate a candidate for US President in 1996, there has been a flurry of electronic voting experiments, online government initiatives, and other forays into digital democracy. The 2000 Arizona Democratic primary, which allowed voters the option of casting their ballots over the Internet, was either a great success or a dismal failure, depending on whom you ask. While digital democracy enthusiasts have applauded each step forward, critics have raised concerns about digital divide issues and security considerations, and even questioned whether digital democracy can deliver on its promise of increasing political participation. In this book we address three areas where the Internet may play a role in political participation in the US and elsewhere: voting, petition signing, and dissemination of political information.

In the aftermath of the 2000 US Presidential election, many states are considering changes to their voting systems in the hopes of avoiding the kinds of problems that occurred in Florida. Electronic voting machines and Internet voting are often cited as possible solutions. In Chapter 4, Aviel Rubin examines the security issues associated with running national governmental elections remotely over the Internet. He focuses on the limitations of the current deployed infrastructure in terms of the security of the personal computers people would use as voting machines, and the security and reliability of the Internet itself. He concludes that at present, our infrastructure is inadequate for remote Internet voting.

In many states people must collect thousands of signatures in order to qualify candidates or initiatives to appear on a ballot. Collecting these signatures can be very expensive. A recent California initiative would authorize use of encrypted digital signatures over the Internet to qualify candidates, initiatives, and other ballot measures. Proponents of Internet signature gathering say it will significantly lower the cost of qualifying initiatives and thereby reduce the influence of organized, well-financed interest groups. They also believe it will increase both public participation in the political process and public understanding about specific measures. However, opponents question whether Internet security is adequate to prevent widespread abuse and argue that the measure would create disadvantages for those who lack access to the Internet. Beyond issues of security, cost, and access lie larger questions about the effects of Internet signature gathering on direct democracy. Would it encourage greater and more informed public participation in the political process? Or would it flood voters with ballot measures and generally worsen current problems with the initiative process itself? In Chapter 5, Walter Baer explores these and other issues related to Internet petition signing.
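
The chapter does not specify the cryptographic scheme the California proposal contemplates; as a generic illustration only, the sketch below shows how a digital signature over a petition entry might be created and verified, using an Ed25519 key pair and the third-party Python cryptography package (an assumption for this example).

```python
# Generic illustration of signing and verifying a petition entry.
# The voter signs the entry with a private key; the election authority
# verifies it with the corresponding public key.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

petition_entry = b"Initiative #123 | Jane Q. Voter | 2001-05-01"   # hypothetical entry

signer_key = Ed25519PrivateKey.generate()        # held by the voter
signature = signer_key.sign(petition_entry)      # attached to the submitted entry

public_key = signer_key.public_key()             # published or registered in advance
try:
    public_key.verify(signature, petition_entry)
    print("signature valid")
except InvalidSignature:
    print("signature rejected")
```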

It has been suggested that increased access to the Internet and other new media should lead to a greater availability of political information and a more informed electorate. However, studies have shown that while the availability of political information has increased, citizens' levels of political knowledge have, at best, remained stagnant. In Chapter 6, Markus Prior explains why this increased availability of information has not led to a more informed electorate. He hypothesizes that because the availability of entertainment content has increased with the availability of political information, people who prefer entertainment to news may in fact be exposed to less political information than they used to be. Prior analyzes existing NES and Pew survey data to build a measure of Relative Entertainment Preference that describes the degree of preference individuals have for entertainment versus news content. He finds that people who prefer entertainment to news and have access to cable television and the Internet are less politically knowledgeable and less likely to vote than people who prefer news or have less media access.
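
As a rough, hypothetical illustration of the kind of measure involved (not necessarily Prior's exact construction from the NES and Pew items), a Relative Entertainment Preference score could be computed as the entertainment share of a respondent's stated program preferences:

```python
# Hypothetical Relative Entertainment Preference (REP) score:
# the fraction of a respondent's preferred programs that are entertainment
# rather than news. 0 = all news, 1 = all entertainment.
def relative_entertainment_preference(entertainment_picks, news_picks):
    total = entertainment_picks + news_picks
    return entertainment_picks / total if total else 0.5  # neutral if no picks

print(relative_entertainment_preference(entertainment_picks=4, news_picks=1))  # 0.8
```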


Monopoly and Competition in Communications Markets

The decade of the 1990s was full of optimism for new communications services. New technologies were supposed to give users choices among a variety of alternatives where only one had previously existed. Competitive pressures were supposed to lead slumbering monopolists to innovate, cut prices, and offer new services. The passage of the 1996 Telecom Act heralded this new era: the impending end of monopoly provision of communication services and the obsolescence of old regulatory institutions. Actual events have tempered these optimistic forecasts or recast them in new light. Competitive markets have emerged only haltingly or in pockets, but also in some places where they were not expected.

One major goal of the Telecommunications Act of 1996 was to promote competition in both the local exchange and long distance wireline markets. In section 271 Congress permitted the Bell Operating Companies (BOCs) to enter the long distance market only if they demonstrate to the FCC that they have complied with the market-opening requirements of section 251. Many have questioned the logic behind section 271. Was it a reasonable means of achieving increased competition in both the local and long distance markets? What type of regulatory structure suits the technical characteristics of the industry and the legal and informational constraints on regulators who must ensure compliance?

Daniel Shiman and Jessica Rosenworcel examine a variety of schemes for ensuring BOC compliance that Congress could have used. Given the characteristics of the industry and the limitations on regulators' ability to observe BOCs' efforts, they determine that the use of a prize such as BOC entry into long distance is a superior incentive mechanism. They further argue that conditioning a BOC's long distance entry on its demonstrating compliance with section 251 was a logical method of protecting the long distance market against a BOC discriminating against long distance competitors once it has gained entry. They also provide an update on the extent of competitive entry in the local exchange market five years after enactment of the Act. They argue that statistical evidence -- primarily ILEC lines sold to CLECs for basic telephone services -- appears to confirm that section 271 has thus far been effective in ensuring compliance.

Federico Mini also examines firm behavior after the 1996 Telecommunications Act. He focuses on the requirement that all incumbent local telephone companies cooperate with local entrants. Section 271 of the Act provides the Bell companies -- but not GTE -- additional incentives to cooperate. Using an original data set, he compares the negotiations of AT&T, as a local entrant, with GTE and with the Bell companies in states where both operate. His results suggest that the differential incentives matter: The Bells accommodate entry more than does GTE, as evidenced in quicker agreements, less litigation, and more favorable prices offered for network access. Consistent with this, there is more entry into Bell territories.

Sean Ennis takes an original approach to understanding the links between competitive structure and conduct. He examines the relationship between changes in telecommunications provider concentration on international long distance routes and changes in prices on those routes. Overall, he finds that decreased concentration is associated with significantly lower prices to consumers of long distance services. However, the relationship between concentration and price varies according to the type of long distance plan considered. For the international flagship plans frequently selected by more price-conscious consumers of international long distance, increased competition on a route is associated with lower prices. In contrast, for the basic international plans that are the default selection for consumers, increased competition on a route is actually associated with higher prices. Thus, somewhat surprisingly, price dispersion appears to increase as competition increases.
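
The summary above does not say which concentration measure the chapter uses; one standard possibility is the Herfindahl-Hirschman Index (HHI), the sum of squared market shares, sketched below as an assumption-laden illustration of how "decreased concentration" on a route can be scored.

```python
# Herfindahl-Hirschman Index: sum of squared market shares (shares as
# fractions, so the maximum is 1.0 for a monopoly route). Lower values
# indicate a less concentrated route.
def hhi(market_shares):
    return sum(s ** 2 for s in market_shares)

print(hhi([0.5, 0.3, 0.2]))   # 0.38 -- more concentrated route
print(hhi([0.25] * 4))        # 0.25 -- less concentrated route
```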

This section finishes with a close look at the newspaper market, where local concentration exists in many cities. Student paper award winner Lisa George examines the effect of ownership concentration on product position, product variety, and readership in markets for daily newspapers. Most analysts presume that mergers reduce the amount and diversity of content available to consumers. However, the effects of consolidation in differentiated product markets cannot be determined solely from theory. Because multi-product firms internalize business stealing, mergers may encourage firms to reposition products, leading to more, not less, variety. Using data on reporter assignments from 1993-1999, George tests how newspaper variety varies with city newspaper concentration. The results show that variety increases as the number of newspapers in a market declines. Moreover, there is evidence that additional variety increases readership, suggesting that concentration benefits consumers.


The Future of Wireless Communications

New wireless markets have begun developing, but also await co-development of a variety of institutional features. By all accounts there are opportunities to create enormous value, but the regulatory issues are quite challenging. Lee McKnight, William Lehr, and Raymond Linsenmayer compare two models for delivering broadband wireless services: best effort and Quality-of-Service (QoS) guaranteed services. The 'best effort' services are more commonly known as unlicensed wireless services, while the 'Quality of Service guaranteed' services are more commonly referred to as traditional landline telephony, as well as cellular telephone services of either the second or third generation. This paper highlights the differing 'market' versus 'engineering' philosophies implicit in alternative wireless service architectures.

Douglas Webbink examines a problem of interest to many consumers. Increasingly, wireline and wireless services, including those provided by terrestrial and satellite systems, are considered to be substitutes and sometimes complements, regardless of the laws and regulations applicable to them. At the same time, many writers and even government agencies (such as the FCC) have suggested that users of the spectrum should be given more property-like rights in the use of the spectrum and, at a minimum, should be given much more flexibility in how they may use the spectrum. Two recent developments have important implications with respect to spectrum property rights and flexible use of the spectrum. The first involves several proposals to provide terrestrial wireless services within spectrum that is already in use, where such services may interfere with existing satellite services. The second is the passage of the 2000 ORBIT Act, which specifically forbids the use of license auctions to select among mutually exclusive applicants to provide international or global satellite communications service. This paper discusses some of the questions raised by these two developments, but does not necessarily provide definitive answers or solutions.

Expanding the Understanding of Universal Service

New technologies also have not eliminated concerns about equitable availability and use of communications technologies. Communications infrastructure remains expensive and there are no easy quick-fix solutions to the lack of available infrastructure or the lack of adoption of Internet technology. It is also quite unclear whether American definitions for universal service are portable to other countries.

Kyle Nicholas focuses on how state-level policy and access patterns work to structure Internet access within rural communities. Combining quantitative and qualitative data, he examines the role of geo-policy barriers in Texas, one of the largest and most rural states in the nation. Expanded Area Service (EAS) policies are state policies under which phone customers can expand their local calling area. Because useful Internet access requires a flat-price connection, EAS policies can play a crucial role in connecting citizens to one another. EAS policies (including Texas') tend to vary along five dimensions: community of interest, customer scope, directionality, pricing mechanism, and policy scope. He shows that EAS policies that rely on regulated market boundaries for definition can generate gross inequities in rural Internet access. Interviews with Internet Service Providers in a case study of 25 rural communities reveal that LATA and exchange boundaries, along with geographically restricted infrastructure investments, curtail service provision in remote areas. A statistical analysis of 1,300 telephone exchanges, including 208 rural telephone exchanges in Texas, reveals that the farther a community lies from a metropolitan area, the less likely it is to have reliable Internet access.

In the same spirit, Sharon Strover, Michael Oden, Nobuya Inagaki, and Jeremy Gustafson investigate the relationship between telecommunications infrastructure, economic conditions, and federal and state policies and initiatives. They present a detailed look at the telecommunications environment of the Appalachian region, particularly focusing on broadband technologies. A strong, positive association exists between telecommunications infrastructure and economic status. They examine the effects of federal and state universal service policies, as well as some of the ways states have leveraged their own infrastructure to improve telecommunications capabilities in their region.

Student paper award winner Martha Fuentes-Bautista surveys the universal service policies in Argentina, Brazil, Chile, Mexico, Peru, and Venezuela. This study explores the evolution of the concept of "Universal Service" during the rollout of telecommunication reform in these six Latin American countries over the last decade. Country profiles and a set of universal service indicators provide a frame for discussing issues of accessibility and affordability of telephone service in the region. She finds that the reconfiguration of national networks fostered by liberalization policies offered both risks and opportunities for achieving universal service goals. The diversification of access points and services enhanced users' choices, but price rebalancing and the lack of Universal Service Obligations (USO) targeting groups with special needs depressed demand and threatened to exclude significant parts of the population. The situation requires reformulating USOs to incorporate all technological solutions existing in the market, as well as consumer-demand factors that account for the urban-rural continuum and different social and economic strata. This study identifies the emergence of a second generation of USOs targeting some of these needs. However, she recommends that more competition and special tariff plans for the poor be incorporated into the options available in the market.

Finally, Michelle Kosimidis examines a key dimension in which the Internet is changing the way people around the world communicate, learn, and work. As has been noted, one way to address the "digital divide" is to ensure Internet access to all schools from an early age. While both the United States and the European Union have embraced the promotion of Internet access to schools, the two have decided to finance it differently. This paper presents a variety of data on how different countries are promoting Internet access to schools. She argues that the main costs of Internet access to schools are not communications-related (telecommunications and Internet services) but rather non-communications-related (hardware, educational training, software). The paper goes on to discuss whether and how the identified costs should be financed: Should they be funded by the telecommunications industry and its users, or out of a general government budget, such as the educational budget?


Epilogue

Communications policy analysis often looks like a refereed contest between naïve hope and jaded experience. Optimistic entrepreneurs embark on commercial ventures founded on near-utopian technology forecasts, casting aside the doubts of more chastened observers. Regulatory institutions slow these commercial processes down, spending enormous energy preventing monopoly bottlenecks from interfering, trying to ensure equitable provision of services, or injecting public interest politics into every facet of decision making. Both sides complain about the lack of closure.

And, yet, it never ends. While regulatory institutions complicate matters to no end, prophets for new communications technology keep arriving. The prophets declare a business revolution in communications activities -- such as broadcasting, entertainment, retail marketing, or wireless communications. These same prophets proclaim that this year's technology novelties dilute standard lessons from the past. Because the technology contains so many unique features, they argue, it is ushering in a new commercial era that operates according to new rules. Jaded observers look on skeptically, labeling such prophesying as self-serving or misguided. Many voices stand in opposition, representing users, political interests, or commercial adversaries. It is a wonder that anything gets done at all.

More recently, this contest takes place against a backdrop of institutional change. We are entering a millennium where technical developments, market events, and unceasing regulatory restructuring will place considerable tension on long-standing legal foundations and slow policy discussions. Legacy regulatory decisions had previously specified how commercial firms transact with the regulated public switched network. Until recently, the pace of technical change in most communications services was presumed to be slow and easily monitored from centralized administrative agencies at the state and federal level. It is well known that such a presumption is dated, but it is unclear what conceptual paradigm should replace it.

The scope of the problems is vexing. Do these legacy institutions act in society's interest, and do they foster experimentation in technically intensive activities? To put it simply, do the existing regulations enhance the variety of approaches to new commercial opportunities or retard such developments? Going forward, it is unclear whether these legacy institutions are still appropriate for other basic staples of communications policies.

In this spirit, this book presents a series of policy papers. To be sure, there is probably a grain of truth to the declarations coming from all parties. Every new technology holds the promise of a better future if it addresses an actual problem. Every new technology holds the prospect of unforeseen dangers if it contains no protection from unintended consequences.

Yet the momentary euphoria associated with commercializing new technology does not, nor should it, justify too simplistic a view of what actually happens or of the issues policy makers face. With that in mind, we ask the reader to read the analyses contained herein and consider whether hope or experience will triumph in the future.


Notes

1. Winner, L. (1986) The Whale and the Reactor: A Search for Limits in an Age of High Technology, University of Chicago Press, Chicago, p. 10.

2. Barlow, J. (February 1996) A Declaration of the Independence of Cyberspace. http://www.eff.org/~barlow/Declaration-Final.html

Reprinted with permission from the introduction to COMMUNICATIONS POLICY AND INFORMATION TECHNOLOGY, edited by Lorrie Faith Cranor and Shane Greenstein, MIT Press, October 2002, ISBN 0-262-03300-3. http://mitpress.mit.edu/0262033003
