[Economic Report of the President (1999)]
[Administration of William J. Clinton]
[Online through the Government Printing Office, www.gpo.gov]





CHAPTER 5
Regulation and Innovation

Because innovation--the development and adoption of new technology--
is essential to U.S. economic performance over time, regulation that
interferes with innovation, however justifiable on other grounds, comes
at a cost. Therefore, in such areas as competition policy, environmental
regulation, and electric power restructuring, the Administration has
worked to ensure that regulation not only does not interfere with
innovation, but indeed fosters beneficial technological change and
adapts itself to such change as well.
Appropriately designed regulation can achieve desirable outcomes
that unconstrained commercial activity would not produce. Historically,
regulation in the United States has been selectively applied both to
certain types of undesirable economic behavior and to certain effects of
that behavior. Antitrust laws, for example, promote competition and
prohibit anticompetitive actions that interfere with market performance.
Industry-specific economic regulation has traditionally constrained the
exercise of market power by natural monopolies such as telephone
companies and electric utilities. Environmental regulation, for its
part, has targeted the side effects of economic activity on the health
of people and of the environment.
Although regulation, when wisely applied, can prevent economic harm
and protect economic benefits, real productivity gains over time depend
on innovation--on the steady flow of new ideas, products, and processes.
Over the past 50 years, more than half of all productivity gains in the
U.S. economy, as measured by output per labor hour, have come from
innovation and technical change. Innovation thus boosts all sectors of
the economy; it is important for agriculture just as it is for
semiconductors. Those industries that fall under the rubric of high
technology--including aerospace, telecommunications, biotechnology, and
computers--provide particularly dramatic examples of growth through
innovation: their combined share of manufacturing output has increased
by more than half since 1980. Indeed, high-technology products have
become an increasingly important part of everyday life for American
consumers. The spread of Internet use in the past 6 years, from a few
specialized applications to a routine tool for tens of millions of
Americans, is one notable illustration. But it is through innovative
effort economy-wide, both public and private, that the United States has
succeeded in strengthening its position as the world leader in research
and development (R&D) (Box 5-1). To take just one measure,
the number of patents granted in the United States grew to more than
140,000 in 1998, after passing the 100,000 mark for the first time in
1994.
Given the economic importance of innovation, public policy can
achieve greater good when it extends its perspective beyond the
immediate goals of particular regulatory programs and takes into account
the effects of regulation on the development and adoption of new
technology.

Box 5-1.--The Scope of Government Support of R&D

The Federal Government supports innovative activity in both direct
and indirect ways. And it does so in no small measure: data from 1997
show that U.S. Government agencies provide about 30 percent of all funds
spent on R&D in the United States. The government's share of funds for
basic research (research that advances scientific knowledge but has no
immediate commercial objectives) is higher still, at about 57 percent.
The National Institutes of Health (NIH), for example, are a principal
source of funding for biomedical research. NIH programs provide
resources for such projects as AIDS/HIV treatment, cancer research, and
the Human Genome Project. The government has also taken a direct role in
R&D and scientific education through the National Science Foundation and
other agencies such as the Department of Energy, which oversees the
large complex of Federal laboratories. Federally funded research has
been responsible for major developments in space technology, defense
systems, energy, medicine, and agriculture, to list just a sample.
Federal agencies face the continuous challenge of matching their
missions to the technological needs of an evolving world.
Industry provides most of the remaining 70 percent of R&D funding
in the United States. Indeed, its proportion has grown steadily in the
past decade, to about two-thirds of the total. But government plays a
role--an indirect one--in this effort as well, for example through tax
incentives that encourage innovation. The research and experimentation
tax credit, which allows firms to reduce their tax obligations by 20
percent of qualifying R&D expenditure, was recently extended until June
1999. The government also supports basic research that underlies many
applied advances in private industry, and it engages in partnerships
with institutions such as universities to share the risk of long-term
R&D efforts that have the potential to create widespread benefits.

This chapter first addresses how U.S. antitrust policy, beyond its
conventional focus on the price and output benefits of competition, has
incorporated consideration of the long-run benefits of innovation. The
chapter then examines how alternative ways of implementing environmental
regulation affect the innovation and diffusion of new technology.
Finally, the restructuring of the electric power industry is presented
as an illustration of how technological change affects the desired form
of regulation, and how regulatory changes in turn affect the pace and
direction of new technological and market developments.

COMPETITION POLICY AND INNOVATION

Innovation makes enormous contributions to the Nation's economic
growth, not just in the large and growing high-technology sector but
across all sectors of the economy. The impact of new technologies goes
beyond expanding the range of choices for consumers and lowering prices;
often, new ideas have significant consequences for the very structure
and performance of markets. In turn, one firm's competitive strategy and
market behavior can affect the incentive and the ability of all firms in
an industry to produce innovative goods and services, sometimes for the
worse. The reciprocal effects of technological innovation on markets,
and of markets on innovation, pose ongoing challenges for antitrust
policy. The antitrust authorities have not shied from these challenges:
1998 saw the continued application of the antitrust laws in
technologically complex industries, and renewed attention to the
economic benefits of innovation in assessing the health of these vital
markets.

MERGER REVIEW AND INNOVATION

Corporate merger activity continues at a swift pace: in fiscal 1998
over 4,000 merger notifications were filed with the Antitrust Division
of the Justice Department and the Federal Trade Commission, the two
Federal agencies concerned with antitrust. About 7,000 additional
mergers were valued at less than $10 million, the level at which
premerger notification is required. The total value of all mergers in
1998 is estimated at over $1.6 trillion. The scope of merger activity in
1998 is comparable, depending on the measure used, to that experienced
at the turn of the century and in the late 1980s. Although, as in other
years, most of these mergers were small, the recent wave of economic
consolidation has been distinguished by the number of very large mergers
and by the number of mergers in such highly innovative sectors as
telecommunications, aerospace, and biotechnology. These transactions, in
addition to simply creating bigger firms, sometimes create measurably
more concentrated markets. Given the importance of these advanced
industrial sectors for future growth, a pressing question for antitrust
authorities has been how such changes in market concentration and firm
size affect innovative activity.
The United States has a decades-long history of enforcing its
antitrust laws to ensure that mergers, acquisitions, and other
structural changes in firms and markets do not unduly empower the
resulting enterprises to raise prices or restrict output. The use of
antitrust policy as a framework for preserving and encouraging
innovation, however, is a more recent development, on which there is
less consensus. The relationship between an industry's market structure
and the amount of innovative activity in that industry may differ from
the relationship between market concentration and short-term price
competition, the conventional focus of antitrust. Whereas concentration
nearly always weakens price competition, its effects on innovation are
less clear-cut. Antitrust authorities investigating today's mergers thus
confront a difficult task: they must not only assess the likely effects
of consolidation on prices and output in the relevant product market,
but also account for a merger's potential impact on innovation and the
benefits it promises to consumers in the long run.

DO BIGGER FIRMS HELP OR HURT INNOVATION?

Several recent mergers are notable for their sheer size. In the last
few years the financial services, telecommunications, and petroleum
industries have all seen mergers or proposed mergers valued in the tens
of billions of dollars. Antitrust policy in the United States does not,
however, generally treat firm size per se as important for determining
the strength of competition. Market share, which does not necessarily
correlate with size, is understood to be the more relevant determinant
of whether prices and quantities are set competitively.
There has been greater debate, however, about the relevance of firm
size for innovation. Indeed, one could make perhaps as strong a
theoretical case that bigness is good for innovation as that it is
harmful or makes no difference. Some commentators, following the economist Joseph
Schumpeter, have praised large enterprises for their superior ability to
attract the financial and human capital, bear the risk, and recoup the
investment required for sustained research and development (R&D)
activities. Small firms, on the other hand, have been touted as more
creative and more nimble in adapting to changes and opportunities than
their larger, more bureaucratic counterparts.
Empirical studies have consistently found that big enterprises are
more likely than small ones to undertake at least some R&D. In addition,
among those firms that do undertake R&D, bigger firms tend to make
larger R&D investments. Beyond a threshold level of size, however, it is
less evident that larger firms' R&D investments are proportionately
greater than those made by smaller firms. Most recent research supports
the consensus view that, in general, R&D rises only proportionately with
firm size.
Data matching R&D investment with the number of patents generated
have shown that smaller firms produce more innovations per
R&D dollar than do large firms. But these results do not necessarily
imply that large firms are less desirable from an innovation standpoint.
First, not all patents are equivalent in value, and not all successful
R&D is patented. So simply counting patents is an imperfect measure of
innovative productivity.
Second, there may be diminishing returns to R&D. Big firms, because
of their greater resources and ability to diversify, may simply be more
willing to risk investing in projects that appear to have less prospect
of success. Some of these projects do succeed, making discoveries that
smaller firms might have missed.
Finally, large firms may earn higher returns on their R&D than small
ones because they can deploy innovations across a broader array of
products, or take advantage of process cost savings over a larger
production volume. This may explain why large firms continue to invest
in R&D even after their proportionate patent yield drops below that of
smaller firms.
In short, although available data and research do call into question
the conjecture that large firms are superior innovators, they do not
necessarily support the contrary view that large firms are bad for
technological progress and economic growth. The evidence suggests that
the large firms created by some recent mergers will have no special
tendency--but likewise no special reluctance--to engage in innovation.

MARKET CONCENTRATION, COMPETITION, AND INNOVATION

The focus on market share in U.S. competition policy fits logically
with antitrust's basic premise that economic performance improves with
competition. Of course, exception is made for industries that are
natural monopolies, in which costs per unit of output decline as a
firm's production increases, to the point that it is most efficient to
have just one firm produce all output. In such markets, which
historically have included railroads, electric power, and
telecommunications, monopoly may actually be better for consumers, so
long as the monopolist can be prevented from abusing its power to raise
prices or stifle innovation by potential competitors. Competition in
such cases would require wasteful duplication of facilities--parallel
sets of railroad tracks, or duplicate sets of wires connecting houses to
the electric power grid or the telephone network. For this reason
natural monopolies have generally been allowed to operate but subjected
to strict regulation. In most industries, however, economic theory and
antitrust policy have long seen more rather than less competition as
best serving the purpose of lowering prices, expanding output, and
making consumers better off.
The presumption in favor of greater competition becomes less
universal when the policy goal is not just lower prices for a given set
of goods produced under a fixed set of technologies, but also the
preservation of efficient innovative activity by firms over time. As a
theoretical
matter, depending on various conditions, either monopoly power or
competition may yield the greater amount of innovation. On the one hand,
rivalry over market share gives competitive firms an incentive to
develop new products and processes that will help them improve or defend
their market position. On the other hand, competitive firms face greater
risk in their investments in innovation than do those with market power.
Even if a firm does make a potentially profitable discovery, and even if
it can establish intellectual property rights over that discovery that
give it a temporary monopoly, rivals may soon develop similar or better
advances that diminish or negate its value. The risk that a competing
firm's successful innovations will trump one's own grows with the number
of competitors, and the expected return to innovation may fall to the
point where it does not justify the cost.
Firms in competition also face more-binding financial constraints. A
monopolist or other firm with market power probably has, or can raise,
more cash for R&D and has a better chance of recouping its R&D
investment. Large, established firms might be particularly adept at
marshaling resources for incremental innovation or for helping to bring
a small firm's invention to market.
Even a monopolist--especially an unregulated one--has an incentive
to engage in cost-reducing innovations. But because a monopolist already
has the market share for which competitive firms strive, it may have
less incentive to pursue product innovations and improvements than do
firms facing competition. Further, a monopolist will have an incentive
to innovate strategically to protect its monopoly by excluding rivals
and by avoiding cannibalization of its existing business. This may lead
it to delay implementation of those innovations it does develop. A
monopolist might therefore be a qualitatively inferior innovator from
the perspective of consumers and overall economic welfare. A dominant
firm may also have an incentive to deter others from engaging in
innovative activity that threatens its market power. The result could be
a shift in the industry-wide pattern of innovation that makes everyone
except the dominant firm worse off.
The findings of empirical studies do not resolve this ambiguous
theoretical relationship between competition and innovation. Some
studies find innovation to be most intense among firms in oligopoly
markets that provide a mix of competitive incentives and above-
competitive returns. Other studies find no such correlation. To the
extent there is consensus, it is that neither the presence of many
competitors nor pure monopoly correlates systematically with optimal
levels of innovation. But even in such polar cases, predictions about
R&D activity are hard to make. The determination requires looking at the
facts in each case, because market factors other than concentration, as
well as a firm's regulatory status and the nature of its products and
technologies, also affect innovation.
In some industries, fierce competition yields substantial R&D:
dozens of firms today are racing to develop new antiobesity drugs, for
example. But monopolies can be energetic innovators, too: during AT&T's
decades of dominance of the telecommunications industry, its Bell
Laboratories research arm developed a steady stream of new technologies.
In each case factors independent of market structure made the
difference. The market for antiobesity drugs is new, the rewards for
successful R&D are huge--future sales could reach an estimated $5
billion per year--and the efficient level of R&D investment could be
quite high. In the case of AT&T, although innovation in
telecommunications might have been greater under competition, consumer
demand for increased capabilities in the telephone system, opportunities
to enter new markets, and the guarantee of steady, regulated returns
that could help fund risky R&D made complacency undesirable even for an
established monopolist.
In addressing innovation, antitrust policy must therefore temper the
strong presumption in favor of competition that applies in conventional
analysis of short-run price and output levels. Although more rivalry
rather than less will often remain the rule of thumb, enforcement
authorities cannot as confidently presume as a matter of economic theory
that more competition is good or that market power is bad for R&D. When
the overall level and the future path of innovation are at issue, case-
by-case analysis of the economic facts is likely to be even more vital
than in conventional antitrust investigations.

MERGER POLICY IN HIGH-TECHNOLOGY MARKETS

The puzzles posed by the economics of innovation have not deterred
the antitrust authorities from investigating how mergers in several U.S.
industries would affect the flow of new ideas, products, and processes.
They have, however, taken a deliberate, measured approach to their
investigations. Recent enforcement decisions have taken into account
both the traditional presumptions about competition and the inability to
rely on those presumptions when it comes to promoting innovation. But
they also reflect careful consideration of the ambiguous effects that
firm size and market structure may have on innovation. Thus, although
the antitrust authorities have recognized the need for a dynamic
perspective on mergers and have not refrained from enforcement based on
concerns about innovation, they have brought such actions only where
changes in market concentration were extreme and, generally, where other
evidence of effects on innovation was present.

Early Cases

One of the first enforcement actions motivated by innovation
concerns occurred in 1990, when the Federal Trade Commission (FTC)
challenged the acquisition of Genentech, Inc., by the Swiss-based
company Roche Holdings, Ltd. Some of the issues raised in that case were
traditional questions about reduction of competition: for example, Roche
was on the verge of becoming a major challenger to Genentech's dominant
position in the market for products to treat human growth hormone
deficiency. But more central to the Commission's complaint was that
Roche and Genentech were actual--not just potential--competitors in the
development of some other important therapeutic innovations, especially
for the treatment of AIDS and HIV infection. Concerns about dynamic
effects on the market and on the pace of innovation, not about short-
term price or output levels, drove the enforcement decision.
The Justice Department's Antitrust Division first challenged a
merger on innovation grounds in 1993, when it investigated the proposed
acquisition of General Motors' Allison Transmission Division by ZF
Friedrichshafen, a German company. Allison and ZF together produced 85
percent of world output of heavy-duty automatic transmissions for trucks
and buses, but they actually competed head to head in only a few
geographic markets. The Justice Department nonetheless concluded that
even markets whose concentration would be unaffected by the merger would
be harmed by the combined company's reduced incentive to develop new
designs and products, and it therefore moved to block the transaction.
These two cases differ in important ways, and each establishes a
significant precedent for factoring innovation effects into competition
policy. In reaching its decision to challenge Roche's acquisition of
Genentech, the FTC did not have to predict that the resulting increased
concentration in the biotechnology industry would reduce innovation.
Rather, the increase in concentration was accompanied by concrete
evidence that Roche was at an advanced stage in developing a competing
human growth hormone treatment, and that Roche and Genentech were among
a small group of companies racing to develop certain AIDS/HIV
treatments. The merger would thus have concentrated actual, not merely
potential or speculative, R&D efforts.
The Justice Department's action in the ZF/Allison case was in one
respect bolder. There was no specific R&D effort that the Antitrust
Division found would be compromised by the acquisition. But the decision
indicates that where the consolidation is so great as to leave an
industry near monopoly and without other potential sources of new
developments, potential harm to the ``innovation market'' could justify
challenging the transaction. These two factors--very high levels of
concentration and evidence of parallel and competing innovation
efforts--have also formed the basis for several recent actions through
which the relationship between antitrust and innovation has further
developed.

Aerospace

The aerospace industry is one of the most innovative in the United
States. Its market is characterized by high concentration but also,
outside the defense sector, by international competition. In the past 2
years the FTC has approved one major aerospace merger, and the Justice
Department has blocked another. Innovation considerations are central to
explaining both these enforcement decisions.
In 1997 the FTC approved the merger of Boeing Co. and McDonnell
Douglas Corp., the two largest commercial aircraft manufacturers in the
United States. In that case, analysis of innovation in the aerospace
industry supported the merger, not because the transaction was expected
to increase R&D, but because the analysis showed that McDonnell Douglas
had fallen behind technologically and could no longer exert competitive
pressure on Boeing or its overseas rivals. Acquisition by Boeing would
therefore not reduce competition and would allow McDonnell Douglas'
assets to be put to better use by a more technologically advanced
enterprise.
Concerns about progress in aerospace innovation led to the opposite
conclusion in Lockheed Martin Corp.'s proposed acquisition of Northrop
Grumman Corp., first announced in 1997. The Justice Department's
challenge to the merger last year noted that Lockheed and Northrop were
two of the leading suppliers of aircraft and electronics systems to the
U.S. military. The Department concluded that the merger would give
Lockheed a monopoly in fiberoptic towed decoys and in systems for
airborne early warning radar, electro-optical missile warning, and
infrared countermeasures. In addition, the merger would reduce the
number of competitors in high-performance fixed-wing military airplanes,
on-board radiofrequency countermeasures, and stealth technology from
three to two. The agency contended that consolidation in these markets
would lead to higher prices, higher costs, and reduced innovation for
products and systems required by the U.S. military.
Although traditional competitive concerns about prices were an
important part of the challenge to this acquisition, concerns about
innovation were central. For example, the Justice Department noted that
both Lockheed and Northrop had launched R&D efforts in advanced airborne
early warning radar systems, and it concluded that consolidation of the
two efforts would harm future military procurement. The Department also
found evidence that competition is particularly important for
technological advances in high-performance military aircraft. It thus
concluded that ``competition is vital to maximize both the innovative
ideas associated with each military aircraft program, as well as the
quality of the processes used to turn innovative ideas into cost-
effective, technically sound, and efficiently produced aircraft.''
The antitrust authorities' linking of competition to innovation in
the Lockheed/Northrop case was a cautious one. Two factors weighed
heavily toward blocking the transaction. First, there was evidence that
Lockheed and Northrop either were actually conducting competing R&D on
relevant products or were the leading contenders to conduct such R&D in
the future. Second, there was evidence that their consolidation would
lead to either monopoly or substantial dominance in relevant product
markets, not just reducing but in large part eliminating competitive
pressure. Thus, a combination of market structure and the existence of
parallel innovation efforts pointed toward a likely reduction in
innovative activity if the merger were consummated.

Biotechnology and Pharmaceuticals

The FTC recently focused on innovation concerns in crafting a
consent agreement with two merging firms in the biotechnology and
pharmaceuticals industry. In 1996 Ciba-Geigy Ltd. and Sandoz Ltd., two
Swiss firms with substantial U.S. operations, announced plans to merge
into a new company, to be known as Novartis. The FTC raised several
objections to the merger. Some of the objections concerned traditional
antitrust matters: the FTC was concerned that the combination would give
the merged entity power to reduce competition and raise prices in the
market for herbicides used in growing corn and in that for flea-control
products for pets. The FTC accordingly ordered that one party divest its
businesses in those markets as a condition for its approval. The more
novel parts of the Commission's challenge, however, had to do with the
prospects for innovation in the market for gene therapy products, which
allow treatment of diseases and medical conditions by modifying genes in
patients' cells.
At the time of the FTC's investigation, in 1996 and 1997, no gene
therapy products were yet on the market; indeed, none had even been
approved by the Food and Drug Administration. Conventional antitrust
analysis therefore did not apply, because there was no product market in
which to analyze the merger's effects on prices and output. The
Commission instead adopted a dynamic perspective: looking to the future,
it found two reasons for long-run competitive concerns. First, the
market for gene therapy products is expected to grow rapidly, with
annual sales of $45 billion projected by 2010. Second, Ciba and Sandoz
were among a very few firms with the technological capability and rights
to intellectual property necessary to develop gene therapy products
commercially. Together they would control essential patents, know-how,
and proprietary commercial rights without which other firms, even if
they did eventually develop gene therapy products, would be unable to
commercialize them.
The FTC concluded that ``preserving long-run innovation in these
circumstances is critical.'' The Commission did not, however, block the
merger. Instead, it crafted a consent decree designed to correct those
aspects of the transaction that raised concerns for current and future
competition. As noted, the Commission required divestiture of certain
overlapping herbicide and flea-control businesses. More interestingly,
the Commission did not require divestiture of either firm's gene therapy
division. Instead, Ciba and Sandoz agreed to license technology and
patents sufficient to allow one of their rivals to compete against the
merged entity in the development of gene therapy products.
The Commission's remedy steered between the potentially conflicting
economic effects that a merger can have on R&D. On the one hand,
consolidating complementary capabilities can enhance innovation and
allow a combination of firms to achieve what the same firms could not
have achieved separately. On the other hand, concentrating markets to
near-monopoly levels can dampen the pressure to innovate and reduce the
enhanced probability of success that comes from multiple R&D efforts.
The Commission declined to order either Ciba or Sandoz to divest its
gene therapy subsidiary because it found that the R&D efforts of the
parent companies and their subsidiaries were closely coordinated, so
that divestiture would have been disruptive and counterproductive for
innovation. The decision instead to order compulsory licensing to a
capable competitor was designed to preserve both market competition and
the benefits of the merging parties' relationships with each other and
their respective gene therapy subsidiaries.
The market context in this case is significant. Ciba and Sandoz were
not merely two of several viable competitors in the relevant market;
their merger did not simply change the degree of competition within a
middling range of market concentration. Rather, their combination
concentrated virtually all innovation capability and essential inputs
for the commercialization of gene therapy under one corporate roof.
Innovation concerns became sufficient to motivate intervention because
the facts showed a combination of monopoly market structure and a
reduction in the number of potential innovation efforts. These provided
sound economic support for the use of competition policy to preserve the
impetus for technological progress. But the FTC's action also broke
important new ground: it expressly recognized that a current merger
could be challenged on grounds of future innovation and competition in a
product market that does not yet--but likely will--exist.

INTELLECTUAL PROPERTY AND ANTITRUST

As the above discussion of merger review demonstrates, the
incorporation of innovation concerns into antitrust enforcement often
involves intellectual property issues. The purpose of intellectual
property protection is to encourage people to bring inventions and other
creative works into the marketplace. In so doing it furthers, in the
words of the U.S. Constitution, ``the Progress of Science and useful
Arts, by securing for limited Times to Authors and Inventors the
exclusive Right to their
respective Writings and Discoveries.'' To be sure, not all inventors or
artists are motivated by economic gain. But in many cases the decision
to devote time and resources to risky, innovative projects or to invest
in publication will hinge on the ability to profit from success.
Patents in the United States accordingly confer limited rights to
exclude others, even those who have come up with the same idea
independently, from making, selling, or using a covered invention
without the patentholder's consent. Patenting allowed Eli Whitney to
capture the profits his cotton gin made possible, just as today it
allows an electrical engineer to secure her rights to the returns on an
advance in computer technology. Copyright statutes similarly provide
protection against unauthorized copying of original works in a variety
of media (including electronic media; see Box 5-2), even if the copying
is not literal or exact. Only Thelonious Monk (or the record company to
which he sold the rights) could freely record ``'Round Midnight''; only
a software developer (or a manufacturer to which the developer grants a
license) has exclusive rights to copy and sell its programs
commercially. Finally, trademark laws can be used to protect brand
recognition. One restaurant entrepreneur cannot misleadingly use another
restaurant's name for his own new business; a new soft drink's label
cannot look too much like the market leader's.
On the surface, a tension exists between intellectual property
protection and competition policy: one grants exclusive rights that
confer a limited, temporary monopoly; the other seeks to keep monopoly
at bay. But at a more basic level the two areas of policy have a common
goal: to enhance economic performance and consumer welfare. For that
reason patents, for example, are extended only to novel, nonobvious, and
useful inventions and are limited in duration to 20 years. Copyrights
are granted for the life of the author plus 70 years.
Once an innovative product has been developed, efficiency dictates
that it be produced competitively. So patents should not provide a
greater incentive to invent than is necessary to get the invention into
the stream of commerce. The limits on the duration, scope, and
availability of patents implicitly balance the benefits of preserving
incentives to innovate against the efficiency costs of granting
exclusive rights. A similar balance between innovation and competition
appears in U.S. antitrust policy, which recognizes that innovation
sometimes benefits from cooperation among competitors (Box 5-3). The
National Cooperative Research and Production Act, for example, reduces
potential antitrust liability for qualifying R&D and production joint
ventures. In fiscal 1998, 38 such joint ventures registered with the
Department of Justice and the FTC, bringing to over 750 the number of
registrations since the statute was passed in 1984.

Box 5-2.--Electronic Commerce and Digital Copyright Protection

More than 70 million Americans now have access to the Internet,
which they use in no small part for commercial activities, including the
purchase of music, video, software, text, and other information goods
that can now be sent directly from one computer to another. The volume
of this electronic commerce exceeded $10 billion in 1998 and is
predicted to reach $300 billion within a few years. Electronic commerce
provides unprecedented opportunity for firms and individuals to sell and
distribute such digital goods widely and quickly. But with these
benefits comes risk: the ease with which a recording company can deliver
a new song to buyers electronically is matched by that with which buyers
can illegally copy and resell it. For electronic commerce to reach its
potential, sellers must be sure that their products are legally
protected from such piracy.
New copyright legislation has taken steps to protect digital goods
and so encourage innovative commercial uses of electronic media. The
1998 Digital Millennium Copyright Act makes it a crime to break the
``digital wrappers'' that protect electronically encrypted intellectual
property, or to sell equipment designed to penetrate such encryption.
This increased protection of digital goods will help spur commerce and
innovation, but it may also unduly restrict legitimate uses of
copyrighted material. For example, the fair use doctrine allows free
access to copyrighted works for limited personal, educational, and
research purposes that do not compromise the work's commercial value.
What has traditionally been prohibited is not access to the copyrighted
work, but rather its indiscriminate copying and distribution. An
absolute ban on bypassing digital wrappers might allow publishers to
impose a per-use fee on publications in digital format. This would block
free access to such works and thus erode the fair use principle. The
1998 Digital Millennium Copyright Act attempts to balance the need to
preserve commercial incentives with the right to fair use by permitting
anyone who cannot get access to materials usually covered by the fair
use doctrine to petition the Librarian of Congress for an exemption from
the statute.
Similarly, the 1995 Antitrust Guidelines for the Licensing of
Intellectual Property acknowledge the exclusivity conferred by
intellectual property protection but recognize that patents do not
necessarily confer market power and that licensing of intellectual
property is
generally procompetitive. Licensing and other arrangements for
transferring patents or copyrights can help bring complementary factors
of production together and thus allow faster and more efficient use of
new inventions. This benefits consumers by reducing costs and
encouraging the introduction of new products.

Box 5-3.--Cooperative Innovation and the Y2K Problem

As explained in Chapter 2, many older computer programs encode
years using only the last two digits and will not properly interpret
``00'' as ``2000'' when the year 2000 arrives. This ``year 2000'' (Y2K)
problem may cause data to be lost and programs and systems to fail
worldwide. The risks are particularly acute in industries where
different firms' computer systems are highly interdependent.
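The mechanics are simple to exhibit. The following minimal sketch (an
illustration constructed for this discussion, not code from any actual
system) shows how date arithmetic that stores years as two digits
breaks once a date crosses into 2000, along with one common remedy:

    # Hypothetical sketch of the Y2K bug, for illustration only.
    def years_elapsed(start_yy, end_yy):
        # Buggy legacy logic: implicitly treats "98" as 1998 and "00"
        # as 1900, so the century rollover produces nonsense.
        return end_yy - start_yy

    print(years_elapsed(98, 0))    # prints -98, not the correct 2

    def years_elapsed_fixed(start_yy, end_yy, pivot=50):
        # A common remedy, "windowing": two-digit years below the
        # pivot are read as 20xx, the rest as 19xx.
        def expand(yy):
            return (2000 if yy < pivot else 1900) + yy
        return expand(end_yy) - expand(start_yy)

    print(years_elapsed_fixed(98, 0))    # prints 2
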
Accordingly, once the extent of the problem was recognized, a number of
manufacturing firms and securities firms proposed, through their trade
associations, to exchange information among themselves and their
computer services suppliers that would expedite resolution of the
problem in their industries. Participating firms would share information
gathered from manufacturers about efforts to make chips, other hardware,
and software compliant with Y2K demands, and would exchange the results
of product tests, successful remedies, and information about the sources
of various computer products.
The competitive concerns raised by the prospect of such
collaboration were multifaceted. For example, securities firms compete
with each other not just in the provision of financial services,
relevant information for which is stored in each company's computers,
but also in the procurement of computer systems. Exchange of information
about products and the results of various tests could potentially be
used by rivals as a vehicle for fostering and monitoring collusion in
both areas of competition. At the same time, computer hardware
manufacturers and software developers compete in the development of new
products and in innovating around
challenges like the Y2K problem. The proposed information exchange
could give these firms competitively valuable details about their
rivals' product developments or terms of sale to customers, undermining
competition and opening the door for collusion here as well.
Collaboration on the Y2K problem also offered clear benefits,
however. A joint effort would avoid duplicative equipment testing and
information gathering, allow more efficient identification of successful
remedies, and permit faster and more accurate responses to computer
system vendors about remaining problems. Manufacturers could devote
resources to product improvement that would otherwise have been devoted
to exchanging information.
The Justice Department stated in its letters reviewing the
proposed collaborations, issued July 1 and August 14, 1998, that it did
not foresee grounds for enforcement action, because the proposals
contained sufficient safeguards that the benefits of cooperation
outweighed the risks to competition. The firms agreed to cooperate
without exchanging price or customer information that could be used to
restrain competition. And computer manufacturers would receive test
information about their own products only, not those of their rivals.
Although the Justice Department recognized that the information
exchanges could still affect competitive strategy, it concluded that the
agreements were unlikely to lessen innovation or pricing rivalry among
vendors and offered real prospects for reducing the costs and increasing
the speed of a resolution to the Y2K problem.
Under the guidelines, the FTC and the Department of Justice balance
these benefits case by case against the risk that a particular
licensing arrangement could reduce competition in the product market or
in the development of new technologies. For example, in 1997 the
Justice Department concluded that an agreement to package certain
patents essential for advanced video-compression technology into a
single license was permissible because the patents were complements and
because the licenses, which would be granted on a nondiscriminatory
basis, were unlikely to facilitate collusion or the exercise of market
power. But in another action the FTC required rescission of an
agreement that pooled patents for laser systems used in eye surgery
because the partners in the deal were the only independent competitors
in the market for that equipment prior to the pooling arrangement.
Recently, the Justice Department successfully concluded its 1996
challenge to a license that granted a hospital access to software
necessary to repair medical imaging equipment only if the hospital
agreed not to compete with the licensor in providing repair services to
third parties. These cases reflect careful monitoring by the antitrust
authorities of the interaction among intellectual property protection,
competition, and innovation.

NETWORK COMPETITION AND INNOVATION

Antitrust policy in the United States has devoted substantial
attention in the past year to the relationship between competition and
innovation in what are today called network industries. Enforcement
actions in the credit card and software industries as well as consent
decrees in the telecommunications industry have highlighted the
challenges enforcement agencies face in balancing long-run encouragement
of innovation with short-run concerns about competition.
Networks are a familiar concept to Americans: we are linked to each
other by telephone networks, we increasingly shop and obtain information
through the web of linked computers we call the Internet, and we
confidently slide a card issued by one bank into an automatic teller
machine owned by another. The distinguishing characteristic of network
goods is that their value to each consumer increases the more they are
used by others. New telephone subscribers add to the number of people
that existing subscribers can call; their participation in the network
increases the system's value to current and future users. New buyers of
a word processing package are more people with whom earlier purchasers
can easily exchange documents. This additional value that new users add
to network goods is termed a ``network externality.''
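Although the chapter presents no formal model, the externality can be
illustrated numerically. Suppose, purely as an assumption, that each
pair of users who can reach one another generates a fixed value v; the
total value of the network then grows with the number of user pairs, so
each new subscriber confers a benefit on everyone already connected:

    # Illustrative sketch under an assumed Metcalfe-style valuation:
    # an n-user network is worth v * n * (n - 1) / 2, one unit of
    # value v per pair of users. The rule and the numbers are
    # assumptions, not figures from this Report.
    def network_value(n, v=1.0):
        return v * n * (n - 1) / 2

    for n in (10, 11, 100, 101):
        print(n, network_value(n))
    # Going from 10 to 11 users adds 10 units of value; from 100 to
    # 101 adds 100. The marginal subscriber's arrival benefits everyone
    # already on the network: the externality described above.
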
Network benefits are not limited to communications systems or to
systems in which communication is an element. A good whose usefulness
depends on the existence of complementary products--products used in
conjunction with the original good--may likewise increase in value to
users as more and more people adopt it. A widely used product may
attract greater investment in the provision of complements than one that
has few users. In the personal computer industry, for example, software
producers typically devote most of their efforts to writing programs
that will be compatible with the more widely used hardware platforms and
operating systems. (Achieving compatibility sometimes requires reverse
engineering of existing products; see Box 5-4). Over time more, better,
and cheaper software thus becomes available for more popular machines
than for others. Similarly, the best-selling video game platform will
attract more game developers, thus reinforcing the advantage of that
platform over competitors.
Because of network externalities, a product's popularity can be
self-reinforcing: new customers buy the more popular good because of the
larger externality, which then grows still further, making the product
yet more attractive to additional purchasers. This dynamic sometimes
makes network markets ``tip'' toward monopoly. A network monopoly has
benefits for consumers not generally found in conventional markets,
because its dominance can maximize the network externality. But network
dominance also poses hazards that compound conventional economic
concerns about monopoly.
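This self-reinforcing dynamic can be made concrete with a stylized
simulation (the adoption rule and all numbers are assumptions for
illustration, not findings of this Report): if most new buyers simply
join whichever of two incompatible networks has the larger installed
base, a modest initial lead compounds into near-total dominance.

    # Stylized tipping simulation; every parameter is an illustrative
    # assumption.
    import random

    def simulate(buyers=1000, base_a=55, base_b=45,
                 idiosyncratic=0.05, seed=0):
        # Two incompatible networks start with installed bases of 55
        # and 45. Each new buyer follows the larger base, except for a
        # small fraction who choose at random.
        random.seed(seed)
        a, b = base_a, base_b
        for _ in range(buyers):
            if random.random() > idiosyncratic:
                choose_a = a >= b
            else:
                choose_a = random.random() < 0.5
            if choose_a:
                a += 1
            else:
                b += 1
        return a, b

    print(simulate())    # roughly (1030, 70): the 55-45 edge snowballs
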
First, the product that becomes the network standard will not
necessarily be the most capable, most efficient, or highest-quality
product on the market. Because consumers want the good that will offer
the largest network externality, expectations about a product's success
can be at least as important to their purchase decisions as price and
quality. Consumers using products, even superior products, that have
lost the competitive battle receive a much smaller network benefit, and
may eventually have to incur the costs of switching to the dominant
product. These include not only the cost of purchasing the rival product
but the cost of learning to use it. By the same token, if an inferior
good gets a decisive lead in ``installed base'' among consumers, their
switching costs may be enough to keep them from moving to the superior
standard. And new customers may find that the greater network
externality available from the leader offsets the price or design
advantages of the contender.

Box 5-4.--Reverse Engineering and Compatibility

When competing network products are mutually compatible, consumers
benefit from the same network externality regardless of which product
they choose. If the value of a word processing package depends on the
number of people with whom documents can be shared, then a new entrant
can overcome its network disadvantage by enabling its product to
exchange files with the leading program. Similarly, if a new game
platform can play cartridges designed for rival systems, it gains value
from the increased availability of complementary goods. Translation
between systems is not always perfect, however, and a dominant firm
facing new rivals might try to reestablish its advantage by
reintroducing incompatibility in subsequent versions of its software.
Nevertheless, cross-compatibility remains an important competitive
strategy for entrants into network markets--and is beneficial for
consumers.
To achieve compatibility, a competitor may have to ``reverse
engineer'' the rival's product, to learn how to make it work together
with its own. For that reason, firms with a market edge might try to
protect their products against efforts to establish cross-compatibility
by restricting competitors' access to critical interfaces where
information is exchanged. One means of doing so is to enforce a
copyright on the particular lines of computer code that a rival would
have to use to make its product compatible. Courts, however, have been
increasingly reluctant to uphold copyright protection for such purely
functional aspects of computer programs. A leading producer may instead
try to encrypt or otherwise technologically protect the information to
which a rival seeking compatibility needs access. The Digital Millennium
Copyright Act of 1998 expressly permits software developers to
circumvent such protections. It thereby limits the extent to which a
program copyright can block competition by noninfringing programs or in
markets for complementary software. But to avoid undermining the
incentive to develop new software, the act allows circumvention only to
the extent necessary to achieve compatibility.
Second, these same switching costs can make network markets
particularly hard for new competitors to enter, especially if new
products cannot interconnect with those already in the market. This
potentially makes network monopolies quite stable and reduces the
dominant firm's incentives to introduce innovative products and
services. An example is the delay in the marketing of digital subscriber
line (DSL) technology for high-speed telecommunications. Although DSL
technology has been available since the 1980s, only recently did local
telephone
companies begin to offer DSL service to businesses and consumers seeking
low-cost options for high-speed telecommunications. The incumbents'
decision finally to offer DSL service followed closely the emergence of
competitive pressure from cable television networks delivering similar
high-speed services, and the entry of new direct competitors attempting
to use the local-competition provisions of the Telecommunications Act of
1996 to provide DSL over the incumbents' facilities.
Third, a network monopolist may have advantages in selling
complementary goods that allow it to extend its dominance from one
market to another. Advantages in complementary markets are not
necessarily anticompetitive. The provider of one good may be able to
exploit economies of scale and scope that make it a superior provider of
the complementary good. But a monopoly provider of one product may also
be able to tie or bundle a second product in a way that forecloses
competition in the second product market. For example, it may condition
sale of the monopoly good on whether the buyer also purchases the
complementary good.

The Challenge for Antitrust

In network markets as in others, antitrust law does not condemn
monopolies legitimately achieved. Incentives to innovate and compete
might diminish if dominance itself, honestly earned, could be second-
guessed by enforcement authorities. Instead, what antitrust proscribes
is anticompetitive conduct--predatory or exclusionary practices--that
creates or maintains monopoly power. The particular challenge of network
markets is that, because network effects can accrue rapidly and be
costly to reverse, there is a premium on being able to identify and stop
anticompetitive activity quickly. Once dominance is acquired, it may be
impractical or undesirable to use regulatory or antitrust remedies to
undo the outcome, even if an inferior standard prevails or if
anticompetitive tactics have been employed. To be sure, antitrust can
target unlawful conduct designed to preserve or extend those outcomes.
But once customers have adopted a standard, remedies that would reduce
the accrued network externality are costly, no matter how dominance was
achieved.
Identifying predatory or exclusionary practices early can be
difficult in the network context. Competitive strategies that would be
inherently suspect in a conventional goods market may be reasonable in
network markets, especially when competitors believe, rightly or
wrongly, that the winner will take all. For example, pricing below cost
is often a telltale sign of predation in conventional markets. But in
network markets it may be a matter of competitive necessity to price
below cost in order to penetrate the market quickly, gain a lead in
installed base, and raise expectations that a product will deliver a
large network benefit. Predatory pricing rules in Federal antitrust
policy do allow for
transitional circumstances and recognize that prices may not reflect
startup costs for new entrants. In applying those rules in network
markets, authorities must analyze, on the facts of each case, when
aggressive pricing constitutes a legitimate strategy that other
competitors would rationally pursue, and when it amounts to predatory
conduct that forecloses competition.
Similarly, when a network monopolist enters a market for
complementary products on terms that make it hard for competitors to
succeed, authorities must determine whether the monopolist's advantage
stems from genuine efficiencies or from anticompetitive arrangements.
Where efficiencies are identified that cannot be achieved in a manner
that has less effect on competition, enforcement agencies must balance
the welfare gains from those efficiencies against the welfare losses
from reduced competition. A good illustration of the problem comes from
the days before personal computing. Technological innovations adopted in
the 1970s made mainframe computer components sufficiently compact that
certain memory devices were for the first time built into the main
computer cabinet and hardwired into the central processing unit. IBM
Corp., the market leader, thus began to sell computers and memory
storage as an integrated unit. Independent manufacturers of IBM-
compatible memory devices sued, claiming IBM had leveraged its market
power in mainframe computer processors into the more competitive
peripherals market. In California Computer Products v. IBM, decided in
1979, the U.S. Court of Appeals ruled in IBM's favor after finding on
the facts that, in this particular case, integration was an efficient
and natural result of beneficial product innovation.
Several very recent enforcement actions demonstrate the complex
issues at stake in network competition and show how preserving both the
incentive and the opportunity for development of innovative products and
services has become an essential concern of competition policy. Among
these are actions in the credit card industry and in the markets for
Internet software and services.

Credit Cards

As use and acceptance of a particular brand of credit card grow,
that card becomes more valuable for both businesses and consumers. This
gives rise to a classic network externality, with all the benefits to
consumers--and the possible effects on competition and innovation--
already described. Concern over competition and innovation among
general-purpose credit card networks recently prompted the Department of
Justice to file an antitrust suit against the two largest networks, Visa
and MasterCard.
The credit card industry operates at two distinct levels. Consumers
and merchants are most directly involved in the downstream level, which
encompasses card issuance and card acceptance services. The players at
that level are banks and other institutions that issue cards
and compete for customers on the basis of interest rates, annual fees,
payment terms, customer service, and various enhancements or usage
bonuses. The Justice Department's challenge concerns the industry's
second level: the upstream level, encompassing the underlying card
networks themselves. These networks provide various services to card
issuers: they implement systems and technologies for card use and
clearance, develop card products, and promote the card brand. They also
set fees for participation in the card network.
The competitive dynamics of these two levels are very different. If
numerous institutions can join a network and issue cards, competition at
the downstream level--for consumers of card services and merchants
requiring acceptance services--will be strong. Competing at the network
level, however, is more difficult. Establishing brand name recognition,
developing processing and information systems, and building a sufficient
base of merchants and card users take enormous amounts of time and
money. Either a new entrant at the network level must attract potential
issuers from more established systems, or it must enter the market at
both levels itself, issuing cards and providing acceptance services as
well as providing network services. The difficulty of the undertaking
can be surmised from the fact that only one new network, Discover (now
Novus), has successfully entered the general-purpose credit card market
in the last 30 years.
Visa and MasterCard began as separate, competing networks owned and
governed by their card-issuing members. Each eventually accepted the
other's members into its network as participating owners. As a result,
the two networks now have substantially overlapping ownership and
governance. The Justice Department's case focuses primarily on the
innovation-reducing consequences of this arrangement. The Department
alleges that the corporate governors have stopped both networks from
introducing new products and services because improvements in one
network, although they would benefit consumers, would largely shift
profits from the other network rather than raise overall returns. And
with a combined 75 percent share of the credit card market by volume of
transactions, the governors of the two jointly controlled systems face
little pressure from outside competitors to implement new initiatives.
The Justice Department's complaint specifically identifies
innovations that it alleges were delayed by the two networks'
overlapping structure. One of these is ``smart card'' technology: the
use of integrated circuits in the cards themselves to store more data,
perform a greater array of functions, and better monitor fraud and
credit risk. According to the Department, when Visa indicated that it
did not want to introduce smart cards, MasterCard's board decided not to
continue their development. Whether the decision was anticompetitive or
driven by legitimate business judgment about the commercial viability of
smart card technology remains to be determined. But whatever the outcome,
the

[[Page 191]]

Justice Department's challenge represents an important application of
antitrust policy to the particular problems of competition and
innovation in network industries.

Telecommunications and the Internet

Network effects have been essential to the structure and regulation
of telecommunications. At the beginning of this century communities were
often served by competing telephone systems, with AT&T and an alliance
of independent companies each taking about half the market. Generally,
the competing systems refused to interconnect with each other and
exchange traffic, and so a customer could only call people who
subscribed to the same network. Eventually, AT&T was able to tip the
market in its favor by patenting superior long-distance technology to
which subscribers of competing telephone companies were denied access.
This gave consumers an incentive to switch to AT&T, and the company grew
into a nationwide monopoly.
In 1984 the Federal Government broke up AT&T's integrated monopoly
into a long-distance company and seven regional companies providing
local telephone service. Each of these seven companies still had a
monopoly over the local service network in its region. The
Telecommunications Act of 1996, however, opened the door to local
telephone competition by requiring the regional monopolies to, among
other things, interconnect and exchange traffic with new entrants into
the market on nondiscriminatory terms. From the standpoint of network
economics, this provision makes entry easier by allowing any new
telephone company, no matter how small, to offer consumers the same
network benefit as a larger carrier.
Preserving competition has also been a regulatory priority in
telecommunications networks other than the telephone system. Internet
``backbone'' providers transport information between the high-capacity
computer networks that make up the Internet. They sell their services to
businesses, institutions, and the Internet service providers (ISPs) that
offer Internet access directly to consumers. They also negotiate terms
for the exchange of traffic with each other to provide the universal
connectivity that defines the Internet. When MCI Communications Corp.
and WorldCom, Inc., which in addition to their other lines of business
were two leading backbone service providers, were merging in 1998, the
Justice Department required MCI to divest its Internet backbone business
to an independent competitor. Without the divestiture, the merged
company would have had substantial control over the transport of
Internet traffic, making it more tempting to reduce the services it
provided to rival networks with which it exchanged traffic. The
Department's enforcement action thus helped preserve competition in the
backbone market and ensure that no single company could dominate the
``network of networks'' that constitutes the Internet.

[[Page 192]]

In another part of the Internet market, the Justice Department has
challenged what it alleges are anticompetitive practices in the market
for browsers, software that consumers use to access the Internet from
their computers. All computers have operating systems that control and
allocate the hardware resources of the computer and allow it to run
various applications programs of the user's choosing, such as word
processors and browsers. The necessity for any new operating system to
be accompanied by a range of compatible applications creates a barrier
to entry into the operating system market. Operating systems are subject
to network effects because more programs will be developed to run on the
more widely used systems. As more programs are developed to run on a
particular operating system, that system becomes yet more popular to
consumers. The result is a market for operating systems that has a
propensity to tip to a dominant provider. Currently, Microsoft Corp.'s
Windows operating system dominates the market for systems that run on
IBM-compatible personal computers.
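
The tipping dynamic can be made concrete with a minimal sketch in
Python (an illustration, not part of this chapter's analysis; the
feedback exponent and starting share are hypothetical). Each period,
developers write applications in proportion to each platform's user
share, and users in turn favor the platform with more applications:

    # Market "tipping" under network effects: the exponent a > 1 sets
    # the strength of the positive feedback between users and programs.
    share = 0.52   # platform A starts with a slim majority of users
    a = 2.0        # hypothetical feedback strength
    for period in range(12):
        share = share**a / (share**a + (1 - share)**a)
        print(period, round(share, 4))
    # The slightly larger platform's share climbs toward 1.0: the
    # market tips to a dominant provider.
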
The Justice Department claims, among other charges, that Microsoft
has misused its dominance in the market for personal computer operating
systems to maintain power in that market and to attempt to gain
dominance in the complementary market for browsers. Microsoft, which
packages its browser with current versions of Windows, has allegedly
required computer manufacturers to agree, as a condition for receiving
licenses to install Windows on their products, not to remove Microsoft's
browser or to allow the more prominent display of a rival browser.
Because consumers demand that manufacturers preload Windows onto new
personal computers, manufacturers face heavy costs if they do not accept
Microsoft's terms. Similarly, the Department claims that Microsoft has
refused to display the icons of ISPs on the main Windows screen or list
them in its ISP referral service unless the ISPs agree, in turn, to
withhold information about non-Microsoft browsers from their subscribers.
The ISPs are also required, the Department alleges, to adopt proprietary
standards that make their services work better in conjunction with
Microsoft's browser than with others. Microsoft responds that
integrating its Internet browser makes its operating system more
functional and increases the features and uses of programs written for
that operating system, to the ultimate benefit of consumers. The company
also claims that the contractual arrangements with ISPs are nothing more
than cross-promotional agreements, which are common within the computer
industry.
The case against Microsoft reflects an effort by the Justice
Department to prevent perpetuation of monopoly by allegedly
anticompetitive means, to protect competition in the Internet browser
market, and to maintain incentives for the development of innovative
software by preventing anticompetitive actions against successful
products. The challenge for competition policymakers in this context is
to preserve competitive opportunities without punishing successful
competitors.

[[Page 193]]

At issue is where to draw the line. Is a successful company's use of
aggressive tactics legitimate, so that regulation might reduce future
innovation incentives and consumer welfare? Or do those tactics cross
the line into misuse of market position to engage in predatory or
exclusionary conduct that forecloses competition and innovation, to the
ultimate detriment of consumers? Striking the right balance is essential
for promoting innovation and protecting consumer welfare in the fast-
moving conditions of network competition.

ENVIRONMENTAL REGULATION AND INNOVATION

Environmental regulation addresses the problem of environmental
damage caused by pollution generated as a consequence of economic
activity. As long as polluters do not bear the full cost of the
environmental damage they impose on others, they will lack the incentive
to reduce emissions adequately. Unregulated markets therefore typically
generate too much pollution. Well-designed environmental regulation can
reduce pollution and increase the net value of economic activity, which
is the value of goods and services produced after deducting all costs of
production, including the social costs of environmental damage.
Environmental policy may have a significant impact on the pace and
direction of innovation, which over the longer term may be of greater
importance than the impact of policy on immediate environmental
outcomes. What follows examines the interaction of environmental
regulation and innovation: first the incentives to generate new
technologies under alternative forms of environmental regulation, then
the diffusion of existing technology among potential adopters and the
role of policy in modifying diffusion rates. Some of the major points
are illustrated in the context of policy on global climate change.
Finally, the long-run impact of environmental regulation on
productivity is assessed.

ENVIRONMENTAL POLICY AND INCENTIVES TO INNOVATE

Three Approaches to Environmental Regulation

Governments can implement environmental regulation in any of three
principal ways: by providing producers and consumers with economic
incentives to reduce their emissions, by enforcing limits on the rate of
pollution discharge, or by mandating technology that producers or
consumers must use to reduce pollution. This Administration's
environmental policy has increased the use of incentive-based
approaches. The preference for such approaches is often justified on
static cost-effectiveness grounds: an incentive-based approach can
achieve any environmental goal at lowest cost, given existing
technology, because it induces emitters to reduce emissions as
efficiently as they can with the

[[Page 194]]

technology at hand. But incentive-based approaches can also be justified
on dynamic grounds: under incentive-based regulation, sources of
emissions may be more inclined to develop new technology that reduces
pollution at lower cost than under alternative forms of regulation. In
this way, market forces ensure that innovation and creativity are used
to help improve the environment rather than devoted to finding ways to
escape the brunt of regulation.
Examples of incentive-based approaches include tradable permit
systems, emissions taxes, subsidies to reduce pollution, and liability
rules. Under a tradable permit system, the government issues permits
that allow emission of a given quantity of a pollutant; total emissions
are limited by the number of permits issued. Emissions without a permit
are banned. Although total emissions are thus capped, each source of
emissions can choose its own level of emissions by buying or selling
permits. The added flexibility afforded by permit trading allows sources
that find abatement expensive to buy permits from sources that can abate
at less cost. Thus, overall emissions are reduced at lower total cost.
In 1998, for example, the Environmental Protection Agency (EPA)
introduced regulations to reduce nitrogen oxides (NOx) emissions in 22
States and the District of Columbia, allowing for emissions trading
among electric utilities that are sources of NOx emissions. Sources
needing more permits than have been allocated to them can buy them from
sources that succeed in reducing emissions below their initial
allocation.
Under an emissions tax, sources of emissions are taxed on their
activities that cause environmental damage. If the tax is set to
approximate the social cost of the environmental damage caused by the
activity, sources face appropriate incentives to reduce emissions to an
economically efficient level, that is, the level at which the social
benefits deriving from additional pollution reductions just cover their
cost. Despite the theoretical appeal of emissions taxes, however, they
have rarely been used to regulate pollution in the United States.
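
The efficiency claim can be stated compactly in standard textbook
notation (a generic derivation, not a result from this chapter). A
source facing a tax t per unit of emissions chooses its emissions level
e to minimize production cost C(e) plus tax payments:

    \min_{e}\; C(e) + t\,e \qquad\Longrightarrow\qquad -C'(e) = t.

Because -C'(e) is the marginal cost of abatement, setting the tax equal
to marginal damage, t = D'(e), leads each source to abate exactly until
the marginal cost of abatement equals the marginal social benefit of
reduced pollution, which is the efficient level described above.
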
Subsidies, on the other hand, have been used occasionally to
encourage the use of more environmentally benign technologies. A system
of environmental subsidies mirrors that of an emissions tax: sources of
potential environmental benefits receive government payments to
encourage their beneficial activities. For example, under the Energy
Policy Act of 1992, electricity produced from wind and biomass fuels--
two environmentally benign sources of energy--receives a tax credit of
1.5 cents per kilowatt-hour generated.
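
The value of such a production credit scales directly with output. A
brief sketch in Python (the wind farm's size and capacity factor are
illustrative assumptions, not figures from this chapter):

    # Annual value of the 1.5-cent-per-kilowatt-hour credit for a
    # hypothetical 10-megawatt wind farm.
    capacity_kw = 10_000
    hours_per_year = 8760
    capacity_factor = 0.30   # assumed average output share of capacity
    kwh_generated = capacity_kw * hours_per_year * capacity_factor
    credit = kwh_generated * 0.015   # 1.5 cents per kWh generated
    print(f"${credit:,.0f} per year")   # $394,200 per year
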
Finally, liability rules impose financial responsibility on
emissions sources for any environmental damage they cause, thus
providing them with a direct incentive to reduce the adverse
environmental impacts of their activities. For example, the Oil
Pollution Act of 1990 makes firms liable for cleanup costs, natural
resource damages, and third-party damages caused by their oil spills
into surface waters.

[[Page 195]]

Similarly, the Clean Water Act makes parties liable for the costs of
cleaning up their spills of hazardous substances.
As noted at the outset, an economic advantage of incentive-based
approaches is their static cost-effectiveness: given existing
technology, they achieve a given environmental objective at lower cost.
For example, a system of tradable permits minimizes the cost of a given
amount of emissions reduction by ensuring that the reduction is
undertaken by those emissions sources, and only those sources, that can
do it most cheaply. This comes about because any source that can lower
emissions at a cost below the market price of permits will profit by
doing so, through the sale of its unneeded permits in the market.
Likewise, any source for which the cost of reduction exceeds the market
permit price will find it profitable to pollute beyond its allowance,
covering its excess emissions by buying additional permits in the
market.
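
The cost-minimizing logic can be illustrated with a small numerical
sketch in Python; the marginal-cost slopes and abatement target below
are hypothetical, chosen only for illustration:

    # Three sources with rising marginal abatement costs c[i]*q must
    # together abate Q tons. Trading equalizes marginal costs at the
    # permit price; a uniform standard does not.
    c = [2.0, 5.0, 10.0]   # $/ton per ton abated, sources A, B, C
    Q = 90.0               # total abatement required (tons)

    def total_cost(q):
        # abating q[i] tons costs the area under c[i]*x: c[i]*q[i]**2/2
        return sum(ci * qi**2 / 2 for ci, qi in zip(c, q))

    uniform = [Q / len(c)] * len(c)   # every source abates equally

    # Trading: each source abates until c[i]*q[i] equals the permit
    # price p; abatement summing to Q implies p = Q / sum(1/c[i]).
    p = Q / sum(1 / ci for ci in c)
    trading = [p / ci for ci in c]

    print(f"permit price: ${p:.2f} per ton")                      # $112.50
    print(f"uniform-standard cost: ${total_cost(uniform):,.0f}")  # $7,650
    print(f"cost with trading:     ${total_cost(trading):,.0f}")  # $5,062

The cheap abaters (here source A) do most of the work, and total cost
falls by about a third relative to the uniform standard.
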
It is not always feasible to monitor the contribution of individual
sources to environmental damage. In such cases it is impractical to
allocate emissions permits, levy taxes on emissions, or assign liability
for damage. Instead, incentive-based environmental regulation may take
the form of providing incentives for emissions sources to change their
production methods, rather than incentives to reduce pollution per se.
For example, fertilizer runoff from farmland causes nitrate pollution of
ground and surface waters, but it is difficult to measure the pollution
attributable to each of the many widely scattered (``non-point source'')
producers. In part because farmers contribute to non-point source
pollution, the Department of Agriculture pays up to 75 percent of the
costs of certain conservation practices that reduce environmental
damage, under the Environmental Quality Incentives Program of 1996.
In contrast to incentive-based approaches, technology standards
stipulate the equipment and methods that sources must employ to control
emissions. Performance standards, on the other hand, specify a limit on
the emissions allowed by each source but allow the source to choose how
best to meet this limit. Many environmental regulations combine elements
of both performance and technology standards. For example, the Clean
Water Act requires sources to meet an effluent performance standard for
conventional pollutants that is set according to what could be achieved
using the ``best conventional technology.'' Often this becomes a de
facto technology standard. Conversely, technology standards sometimes
allow sources to use technologies other than those specified if they can
demonstrate that the alternative technology will achieve the same amount
of pollution reduction.
In the context of environmental regulation, technology or
performance standards, in contrast to incentive-based approaches, may
not be cost-effective, because they provide no mechanism for
concentrating emissions reductions where they are cheapest. Of the two
types of standards, performance standards are preferred because they
allow emissions sources

[[Page 196]]

the flexibility to choose lower cost methods of abatement. Technology
standards may also lock in the use of pollution control technologies
that are unnecessarily costly in the face of changing conditions.

Incentives to Innovate Under the Three Approaches

Although incentive-based regulation may thus be preferable to
regulation by performance or technology standards from the perspective
of the short-term, static cost of achieving given environmental
objectives, evaluation of the relative cost-effectiveness of the three
approaches over longer horizons is more complex. Achieving ambitious
environmental goals in a growing economy will require advances in
technology (Box 5-5). The evolution of pollution control costs over time
is affected by innovation, and the three approaches differ in the
incentives they offer potential innovators. Innovation may be
particularly important when environmental regulation is relatively new,
because then there are often unexplored avenues of research and
significant learning-by-doing effects.
An important criticism of technology standards is that they may
provide little incentive to search for more cost-effective ways to
reduce emissions. A technology standard provides an incentive to develop
cheaper new technologies only if those technologies can meet mandated
targets and win regulatory approval. Performance standards, in contrast,
provide an incentive to find lower cost ways of reducing emissions, at
least to the level of the standard. However, they may give little
incentive to search for new methods to reduce emissions below the

Box 5-5.--Recent Trends in Air Quality

Environmental regulation has sharply reduced emissions of a number
of important pollutants over the past several decades. Emissions of five
of six major air pollutants (the exception being nitrogen oxides) have
fallen substantially since passage of the 1970 Clean Air Act Amendments
(Chart 5-1). The EPA's phaseout of lead additives in gasoline has been
largely responsible for the spectacular fall in lead emissions since the
1970s: lead emissions in 1997 were less than 2 percent of 1970
emissions.
These improvements occurred during a period of considerable
economic growth. From 1970 to 1997, real GDP expanded by 114 percent, so
that emissions per unit of GDP have fallen dramatically since 1970. In
certain sectors the reduction in pollution per unit of output has been
especially striking. Vehicular emissions of volatile organic compounds
per mile traveled have fallen by 81 percent, and emissions of carbon
monoxide by 73 percent, since 1970. These impressive reductions could
not have taken place without substantial innovation in new processes and
products as well as their widespread adoption.

[[Page 197]]


current standard, unless standards are expected to become tighter in the
future.
One way to increase the incentive to innovate under performance
standards is for regulators to commit to the implementation of a strict
standard in the future. Such strict, ``technology-forcing'' performance
standards raise the value of innovations that lower pollution control
costs. Whereas requiring emissions sources to meet a stringent standard
immediately with existing technology may impose large costs, announcing
the same stringent emissions targets well in advance provides an
incentive to innovate, as well as time to develop the infrastructure and
make other investments necessary to adopt and implement new
technologies. This can reduce compliance costs significantly. For
example, in 1970 the California Air Resources Board adopted stringent
air emissions standards for new cars, which took effect in 1975. Many at
the time did not believe the standards could be met at a reasonable
cost. Yet the stringent standards contributed to the development of an
emerging
technology, the catalytic converter, which cut automobile emissions
dramatically and is widely used today. There is a downside, however, to
the technology-forcing approach. Innovative activity is risky:
investments in R&D may or may not pay off in new discoveries. If they do
not, compliance costs may fall by less than anticipated, and the
ambitious environmental goal may prove extremely costly to meet. And
relaxing the goal at a later date in the face of high compliance costs,
thereby rewarding failure, has its own drawbacks.

[[Page 198]]

In contrast to both performance and technology standards, incentive-
based approaches reward emissions sources for developing methods that
reduce emissions, regardless of their current level. For example, under
a system of tradable permits, any technology that reduces emissions
allows a source to profit from higher permit sales (or lower permit
purchases). Similarly, under emissions taxes, subsidies to reduce
pollution, or liability rules, innovations are rewarded through lower
costs, higher subsidies, or lower liability payments, respectively.
Because incentive-based approaches provide rewards for reducing
emissions at all pollution levels, rather than just to a given standard,
they offer incentives for innovation that are superior to those under
either technology or performance standards.

The Impact of Alternative Regulatory Policies on Reducing Sulfur
Dioxide Emissions

Regulation of sulfur dioxide (SO2) emissions from coal-
fired electric generating plants illustrates the importance of
environmental regulatory structure for cost savings and innovation. The
1977 Clean Air Act Amendments required new fossil fuel-fired electrical
generating plants to remove 90 percent of SO2 from their
smokestack emissions (70 percent if the plants use low-sulfur coal).
This policy effectively mandated the use of scrubbers, devices that
remove SO2 from the exhaust gases produced by burning coal.
Title IV of the 1990 Clean Air Act Amendments established a tradable
permit program for SO2 emissions. In phase I of the program,
which began in 1995, permits were allocated to 110 electric utility
plants around the country. In phase II, which begins in 2000, the
program will be extended to cover virtually all fossil-fuel-burning
electric generating plants and is ultimately expected to reduce
SO2 emissions to 50 percent of 1980 levels. Under the
tradable permit program, plants that can reduce emissions cheaply, by
switching to low-sulfur coal, for example, can sell permits to plants
for which emissions reduction is more expensive. Estimates of cost
savings just from allowing trading range from 25 to 43 percent.
Changing the SO2 regulatory system to a tradable permit
system may also spur innovation that results in additional cost savings.
Original compliance cost estimates will be overstated when they do not
adequately take technological advances into account. (Box 5-6 explores
whether there is a systematic tendency for preimplementation cost
estimates to exceed costs actually achieved.)
In fact, estimates of the cost of reducing SO2 emissions
in 2010 have fallen substantially over time. In 1990 the EPA forecast
that the total annual compliance cost for SO2 emissions
reduction in 2010 would be in the range of $2.6 billion to $6.1 billion
(in 1995 dollars). In contrast, a 1998 study projected annual compliance
costs in 2010 at just over $1 billion (again in 1995 dollars). Factors
other than technological change

[[Page 199]]

Box 5-6.--Comparing Estimates of Environmental Compliance Costs Before
and After Regulation

In part because of the recent experience with SO2
regulation, some environmentalists have voiced concern that estimates of
compliance costs made before regulation is implemented systematically
overstate the likely costs. A recent study reviewed the limited number
of cases, from 1972 through the early 1990s, where both pre- and
postimplementation cost estimates exist, to determine whether the former
routinely overestimated compliance costs. The study found both cases of
overestimation and cases of underestimation. Prior to 1981, compliance
costs for nearly all new regulations were apparently overestimated.
Since then, however, the accuracy of estimates has improved and the
balance has been more equal.
Preparing accurate estimates of compliance costs involves many
challenges. When estimating costs in advance of implementation, analysts
must inevitably base their forecasts on the policies actually proposed.
But policies are often changed or relaxed in the process of
implementation, so that comparison of these early estimates with actual
implementation costs often ends up comparing apples and oranges.
Furthermore, cost estimates prepared before implementation typically
assume 100 percent compliance. But not all firms may comply, and those
that do not are often those with the highest compliance costs. Cost
estimates after implementation are inevitably based on data covering
only those firms in compliance, and hence they tend to be lower than
estimates based on perfect compliance. On the other hand, to the extent
that cost estimates are not sufficiently optimistic about future
technological advances, the costs of compliance will be overstated.
also help explain the dramatic decline in expected compliance costs. For
example, certain aspects of the program that effectively loosened the
limit on total emissions were not included in the original forecast.
Perhaps the single most important factor, however, was the decline
in railroad freight rates as a result of railroad deregulation. Coal
from the Powder River Basin in Montana and Wyoming has the lowest
production cost and lowest sulfur content of any coal in the United
States. Lower railroad rates reduced the cost of transporting low-sulfur
Powder River Basin coal to Midwestern utilities. Coal-fired electric
generating plants already dependent on coal transported from distant
locations gained direct cost savings. Other plants found they could
reduce emissions at lower cost by switching to low-sulfur coal rather
than investing in scrubbers.

[[Page 200]]

The SO2 experience reveals several advantages of relying
on incentive-based approaches to environmental regulation. First, even
with a given technology, allowing trading lowered compliance costs.
Second, tradable permits provided added incentives to innovate. Third,
tradable permits allowed sources the flexibility to adapt to changing
circumstances rather than be locked into a prescribed method. The
Administration has recently adopted rules to allow trading of
NOx emissions and is a strong proponent of establishing an
effective international permit trading system to meet the reductions in
greenhouse gas emissions agreed to in the 1997 Kyoto agreement on
climate change.

Getting Innovation Incentives Right

It is widely recognized that the volume of R&D activity undertaken
in a market economy may fall short of what would best serve society's
interest. The market failures that produce this outcome apply broadly
throughout the economy but may be particularly acute in the area of
environmental technology.
One critical reason why private R&D activity may be less than what
is socially ideal is that the economic and social benefits of a
promising new technology may exceed what the innovating firm can capture
for itself. This appropriability problem can emerge where patent
protection is incomplete, so that rival firms can quickly and freely
imitate an innovation, or where basic research leads to advances in
knowledge that are difficult to patent. Even where patenting is secure,
there are often important knowledge spillovers from one firm to another.
Innovations in one field may spawn ideas that lead to innovations in
others. Empirical evidence supports the notion of appropriability
effects: such evidence strongly indicates that the social rate of return
from R&D greatly exceeds the private rate of return. Therefore, a strong
case for public support for R&D can be made, to better align the private
returns with the social.
Two additional concerns relating to the private provision of R&D are
of specific importance to environmental policy. First, environmental
regulation itself may aggravate the appropriability problem. As noted
above, under technology and performance standards, emissions sources do
not receive credit for the value of environmental improvements they
introduce. As a result, beyond the usual appropriability problems facing
innovators, there may be too little incentive for firms to generate
environmental innovations.
Second, inappropriate incentives for innovation may also result when
environmental regulation, even when incentive-based, is either too lax
or too stringent. When regulation is too lax, emissions sources may have
insufficient incentive to innovate to reduce emissions or to lower
costs; when it is too strict, they may spend more on devising

[[Page 201]]

innovations than the resulting reduction in emissions is worth.
Abstracting from the appropriability concerns common to all R&D,
incentive-based approaches generate efficient innovation incentives only
when they succeed in ``getting prices right''--that is, when they ensure
that the prices of tradable emissions permits or the taxes levied on
emissions fully reflect the actual damages resulting from pollution.
Only under these conditions will potential innovators appropriately
weigh the cost of innovations against the expected benefits, including
both expected reductions in compliance costs and the benefits from
reduced pollution.
Thus, although private sector incentives to innovate are typically
insufficient, more R&D activity is not always better. Like other
investments, investment in R&D activity is justified only when the
expected benefits exceed the costs. Of course, it is difficult at the
outset to predict the success of an R&D venture, because the returns are
inherently uncertain. As Albert Einstein put it, if we knew what we were
doing, it wouldn't be research.
Even when regulation succeeds in ``pricing'' environmental damage
appropriately, a strong case can usually be made for government support
of environmental research because of the large gap that likely exists
between social and private returns, particularly in the area of basic
research. The Federal Government funds environmental research to
identify environmental threats and find solutions to those threats.
Basic research into environmentally friendly technologies can provide
the knowledge base for the development of cheaper means of controlling
the environmental impact of economic activity. In 1994, direct Federal
investment, amounting to $5.1 billion, accounted for around 50 percent
of all U.S. environmental R&D expenditures. The greater part of the
government's environmental R&D investment is carried out through its
system of research laboratories and competitive grants to universities
and researchers. Research is also undertaken through public-private
research partnerships such as the Partnership for a New Generation of
Vehicles (Box 5-7).

ENVIRONMENTAL POLICY AND THE DIFFUSION OF TECHNOLOGY

Although innovation is a necessary precondition for improved
environmental technology, better environmental performance will not be
realized unless that new technology is adopted. Regulatory,
informational, and other hurdles may block or delay the adoption of new,
more environmentally friendly technologies. Policy may play a useful
role in encouraging the diffusion of new technology if consumers or
firms do not adopt new technologies as fully or as rapidly as is best
for society.

[[Page 202]]

Box 5-7.--The Partnership for a New Generation of Vehicles

The Federal Government can play a particularly vital role in
promoting R&D in situations where the private sector's incentive to
pursue innovations with environmental payoffs is distorted. For example,
low gasoline prices have made consumers less concerned about fuel
efficiency, dampening the automobile industry's interest in developing
more-fuel-efficient vehicles. Yet vehicles are a major source of
greenhouse gases and other pollutants, and therefore such
efforts would produce clear benefits to society.
In response, the Partnership for a New Generation of Vehicles was
established in 1993 between the Federal Government and the major
domestic automakers, with the aim of dramatically increasing the fuel
efficiency of vehicles while maintaining performance and price. A goal
of the program is to develop, by about 2004, a production prototype of a
midsized sedan that would achieve 80 miles per gallon. The R&D needed to
reach that goal ranges from basic research into lightweight materials
and alternative power sources to applied engineering of new
manufacturing processes. To entice firms to join the research endeavor,
the government co-funds both basic and more applied research and
provides access to the extensive Federal laboratory system and its
experts. To date, several new technologies have been developed that are
bringing this goal closer to reality.

Patterns and Incentives in Technological Diffusion

The diffusion of a new technology often follows a well-established
pattern. Initially, the new technology is adopted by only a few. Over
time the pace of adoption increases, slowly at first and then more
rapidly. The pace of adoption finally reaches a peak and then begins to
fall as the market approaches saturation. The trendline of cumulative
adoption thus follows an S-shaped curve. The spread of information among
potential adopters seems to explain this pattern. A few pioneers are the
first to become aware of the new technology and make the decision to
adopt. Word of the new technology then spreads to those in contact with
the pioneers, and each new user informs several others, so that
adoptions begin to pick up momentum. Finally, after the bulk of the
population of potential adopters has learned about the new technology,
the rate of new adoption slows.
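
This word-of-mouth account corresponds to the standard logistic model
of diffusion. A minimal sketch in Python (the population size and
contact rate are hypothetical):

    # Logistic (S-curve) diffusion: adoptions per period are
    # proportional to contacts between the A current adopters and the
    # (N - A) remaining potential adopters.
    N = 1000.0   # potential adopters
    beta = 0.5   # contact-and-conversion rate per period
    A = 1.0      # the pioneers

    for period in range(25):
        A += beta * A * (N - A) / N
        print(period, round(A))
    # Cumulative adoption traces the S-shape described above: slow
    # start, rapid middle, saturation near N.
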
This pattern of diffusion provides important insights into the rate
of adoption, but it does not answer the policy question of whether that
rate is efficient. Failure to adopt technology may be appropriate--the
costs of adoption may simply exceed the benefits. But market failures
may also impede adoption, even when the benefits outweigh the costs.

[[Page 203]]

For policy purposes it is important to distinguish between these two
situations. Only in the second can policy play a constructive role in
promoting the adoption of new technology. Like the incentives for
innovation, the incentives for adoption of new technologies will be
inadequate when market prices fail to reflect the full environmental
impact of pollution. For example, if energy prices do not reflect the
full environmental consequences of energy use, consumers will have an
inadequate incentive to purchase energy-efficient products. An obvious
solution to this problem is to ``get prices right''--to adjust energy
prices so that consumers face the true costs of their decisions.
A different problem arises when potential adopters lack complete
information about potentially useful new technologies. In making their
decisions about what products to buy, consumers may need to acquire
information. As long as consumers both pay all the costs of acquiring
information and reap all the benefits of making a more informed
decision, their lack of complete information does not constitute a
market failure. But in fact they do not reap all the benefits: in the
course of adopting a new technology, one person often spreads
information about that technology to others, through conversation or by
observation. This sharing of information confers a benefit on those who
receive it, but because the first adopter does not profit from that
benefit, he or she will not account for it in deciding whether to adopt.
If this problem results in too little sharing of information, and
therefore too little adoption of worthy new technologies, the solution
may be for the government to provide information, or to require others
to provide it. The government can also lower the cost of acquiring
information by providing a credible source of objective information. The
Energy Policy and Conservation Act of 1975, for example, requires many
appliances to carry energy labels showing the product's energy
efficiency rating and an estimate of its annual energy costs. The EPA
and the Department of Energy also operate the Energy Star program, in
which products are assessed for their energy efficiency, and efficient
products are allowed to display the Energy Star label.
Another approach when consumers lack full information is to regulate
technology directly. For example, the Department of Energy has
implemented energy-efficiency standards for appliances. This approach
may be preferred when providing information is costly.

Residential Energy Conservation: The Energy Paradox

Studies have found that many consumers are unwilling to invest in
energy-efficient products such as compact fluorescent light bulbs,
improved insulation materials, and energy-efficient appliances, even
though they would save money by doing so. Their failure to make these
energy-saving and apparently cost-saving investments is sometimes called
the ``energy paradox.''

[[Page 204]]

Consumers' investment in energy efficiency, whether in installing
better insulation or buying more energy-efficient appliances, typically
involves, like most investments, an initial cost followed by future
benefits from lower energy bills. Studies have calculated the rate of
return on a variety of investments in energy efficiency and found that
it often exceeds typical financing costs. Thus, consumers could expect
net economic savings over time.
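
The present-value arithmetic behind this claim is easily sketched in
Python, with hypothetical figures:

    # A $400 efficiency investment saves $100 a year for 10 years,
    # discounted at an 8 percent financing cost.
    price, saving, years, r = 400.0, 100.0, 10, 0.08

    npv = -price + sum(saving / (1 + r)**t for t in range(1, years + 1))
    print(round(npv, 2))   # 271.01: positive, so the investment pays
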
One possible explanation for the energy paradox is that many
consumers are not in a position to capture the promised savings and
therefore have little or no incentive to invest in energy efficiency.
For example, renters may not make energy-efficient investments if their
rent includes a fixed amount for utility costs, so that they do not
directly reap the benefits from conservation. Consumers might also lack
information about energy-efficient alternatives. For instance, there is
some evidence that providing free information increases adoption rates
for energy-efficient lighting. Or consumers may simply be myopic,
influenced more by the immediate cash expense than by the promise of
future savings. Policies that lower the initial cost of purchase may
therefore be the most effective in encouraging adoption.
Some analysts think the energy paradox may be an illusion, an
artifact of flawed data or logic. The engineering data used to estimate
energy-efficiency gains may be too optimistic: the gains achievable in a
laboratory setting may be far greater than what a typical consumer in a
typical home would realize. Consumers may fail to install insulation or
other energy-saving investments correctly, for example. The costs of
investing in energy efficiency may be underestimated as well. The time
and resources consumers devote to learning about energy-efficient
investments are not usually factored into the analysis. For some
consumers, these costs may exceed any possible savings. Energy-efficient
products may also have other features or other effects that consumers do
not like. Improved insulation may raise indoor air pollution by reducing
ventilation; fluorescent light bulbs may not fit existing light
fixtures. Finally, given uncertainty about the future price of a new
technology, delay may be rational. Even if immediate adoption would save
money, consumers who wait may get a better price and thus save even
more. Because adoption can take place at any time, analyses that ignore
this ``option value'' of waiting may overstate the value of current
adoption.
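
A stylized two-period example (all figures hypothetical) makes the
option-value point concrete:

    # Adopting today costs $400 and yields $100 a year in savings,
    # valued as a perpetuity at r = 8 percent.
    r = 0.08
    pv_savings = 100 / r             # $1,250
    npv_now = -400 + pv_savings      # $850

    # Waiting one year forgoes a year of savings, but with probability
    # 0.5 the purchase price falls to $200.
    npv_wait = (0.5 * (pv_savings - 200)
                + 0.5 * (pv_savings - 400)) / (1 + r)
    print(npv_now, round(npv_wait, 2))   # 850.0 vs. 879.63

Here waiting has the higher expected value even though immediate
adoption is profitable, so an analysis that ignored the option to wait
would overstate the net benefit of adopting now.
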
A conclusive answer to the energy paradox has yet to be found. In
any case, recent low energy prices combined with implementation of
energy efficiency standards for appliances and various informational
programs seem to have reduced the opportunities for investments that
save both energy and money.

[[Page 205]]

INNOVATION AND DIFFUSION: AN APPLICATION TO CLIMATE CHANGE POLICY

Climate change is a problem that will be with us for a long time:
policies to address the threat will require the abatement of greenhouse
gas emissions over decades, even centuries. Given this long horizon,
innovation in technologies that can reduce greenhouse gas emissions must
play a role, and therefore the impact of climate change regulation on
incentives to innovate cannot be ignored. The ultimate cost of global
efforts to address this environmental challenge will depend importantly
on the pace at which such innovation takes place. The Administration's
efforts to deal with climate change therefore incorporate many of the
principles discussed above, to create appropriate incentives that
promote both innovation and the speedy diffusion of new technology.
These efforts are reflected both in achievements in international
negotiations and in domestic actions.
Emissions of greenhouse gases, primarily from the burning of fossil
fuels and deforestation, have led to a 30 percent increase in the
atmospheric concentration of these gases (primarily carbon dioxide,
methane, and nitrous oxide) from levels prevailing prior to the
industrial revolution. If emissions continue along their projected,
``business as usual'' path, a doubling of carbon dioxide concentrations
from their levels before the industrial revolution is likely midway
through the next century. According to the best climate models, this
could lead to global warming of the atmosphere of between 1.8 and 6.3
degrees Fahrenheit by 2100. The potential adverse impacts of such a
change are many: a rise in sea level, greater frequency of severe
weather events, shifts in growing conditions due to changing weather
patterns, changes in the availability of fresh water, threats to human
health from increased range and incidence of disease, and damage to
ecosystems and biodiversity.
To address the risks of climate change, the member countries of the
United Nations have participated in a series of international
negotiations, including conferences in Rio de Janeiro in 1992, in Kyoto
in 1997, and most recently in Buenos Aires in 1998. Building on the 1992
United Nations Framework Convention on Climate Change, the Kyoto climate
change agreement places binding limits on emissions of greenhouse gases
by the industrial countries over the period from 2008 to 2012. The
agreement contains several features that promote the cost-effective
reduction of these gases. For example, its proposed emissions trading
program grants sources the flexibility to trade emissions allowances
with sources in other industrial countries. Further, the agreement
provides industrial countries with the flexibility to implement policies
that promote trading across different types of greenhouse gases. Sources
in industrial countries will have opportunities to invest, through the
agreement's Clean Development Mechanism, in

[[Page 206]]

clean-energy projects in developing countries, and thereby generate
emissions credits for use at home.
The emphasis on emissions trading in the Kyoto agreement embodies
the Administration's preference for incentive-based environmental
regulation. For the reasons explained above, an incentive-based approach
should give firms strong incentives to find low-cost methods of reducing
or sequestering greenhouse gas emissions. By pricing greenhouse gas
emissions, this approach also stimulates the diffusion of existing
technologies and provides private sector incentives for R&D into the
next generation of technologies. In addition, announcing emissions
targets well in advance may produce payoffs akin to those of a
technology-forcing standard. Such an approach provides incentives for
firms to innovate, while also allowing them time to adjust by replacing
depreciating plants with equipment incorporating new technology, thereby
further lowering the cost of emissions reduction. In conjunction with
the international trading system proposed under the Kyoto agreement, the
Administration supports developing a domestic greenhouse gas emissions
trading program starting in the 2008-12 commitment period. This would
allow U.S. firms to participate in international trading of greenhouse
gas emissions, as part of an efficient, low-cost national abatement
strategy.
Because 82 percent of domestic greenhouse gas emissions come from
the burning of fossil fuels, achieving climate change policy goals will
require improving the energy efficiency of the economy. The rate of
energy efficiency improvement (EEI) across the economy can be thought of
as the sum of three factors: market-induced, policy-induced, and
autonomous EEI. Market-induced EEI reflects the effect of changes in
energy prices on consumption decisions. Policy-induced EEI reflects the
effects of policies on energy consumption. The autonomous component of
EEI is that which would take place even in the absence of policy and
market price changes. The gradual structural shift in the U.S. economy
toward services and away from manufacturing and agriculture may explain
some of this component. Changes in energy efficiency over recent decades
are summarized in Box 5-8.
Policies can provide incentives to invest in energy-efficient
technologies and increase the rate of EEI through price changes. For
example, the Administration's economic analysis on climate change found
that a tradable permit program that results in permit prices of $23 per
ton of carbon would increase the annual rate of EEI approximately 25
percent above the level projected in the absence of such a policy.
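
Because EEI compounds, even a modest policy-induced increase in the
annual rate accumulates over time. A brief sketch in Python (the 1.0
percent baseline rate is hypothetical; the 25 percent increase is the
relationship cited above):

    # Energy use per dollar of GDP, normalized to 1.0 today, after
    # compounding an annual rate of energy efficiency improvement.
    baseline_eei = 0.010                 # hypothetical: 1.0% per year
    policy_eei = baseline_eei * 1.25     # 25 percent faster improvement

    def intensity(years, eei):
        return (1 - eei) ** years

    print(round(intensity(20, baseline_eei), 3))   # 0.818
    print(round(intensity(20, policy_eei), 3))     # 0.778
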
In addition to policies affecting energy prices directly, the
Administration believes that a strong argument can be made for policies
to stimulate innovation and diffusion through R&D and appropriate fiscal
incentives. The President's 2000 budget includes continued funding for
the Climate Change Technology Initiative (CCTI), a program

[[Page 207]]

Box 5-8.--Energy Efficiency Since the 1970s

Energy efficiency in the United States is now much greater than it
was at the time of the first oil shock just over 25 years ago.
Nevertheless, because of growth in the economy, the United States today
consumes more energy than it did in 1973. The ratio of energy use to
GDP, a measure of the energy intensity of output, fell rapidly in the
1970s and early 1980s but stopped declining in the late 1980s. More
recently it has again begun to decline (Chart 5-2). Yet despite these
efficiency gains, total energy use rose by 27 percent between 1973 and
1997 (Chart 5-3), stimulated by population growth and rising GDP per
capita. Virtually the entire increase came after 1986, a year that
ushered in a period of relatively low energy prices. Before 1986,
relatively high energy prices had kept energy use flat.
One of the most dramatic increases in energy use has been that of
motor vehicles: their annual fuel consumption rose 54 percent between
1970 and 1996. Although the average fuel efficiency of new passenger
cars more than doubled between 1973 and 1996, from 14.2 to 28.5 miles
per gallon, the fuel efficiency of the Nation's vehicle fleet has not
increased as much, because of a shift toward light-duty trucks and
sport-utility vehicles. The efficiency gains were also partly offset by
an increase in miles traveled per vehicle and a large increase in the
number of vehicles. The net effect of these changes has been a small
decline in fuel use per vehicle but a large increase in total energy
consumption (Chart 5-4).
Energy use in homes, in contrast, was about the same in the early
1990s as it was in the 1970s, as efficiency gains have kept pace with
increases in the number of households, in average house size, and in the
average number of appliances per household. For example, the efficiency
of the average new refrigerator improved 192 percent from 1972 to 1996.
Energy use per household declined rapidly in the late 1970s and early
1980s but has been stable since.
designed to spur the development and adoption of new energy- and carbon-
saving technologies through tax incentives and R&D investments. Many of
the efforts within the CCTI reflect recommendations made in a 1997
report by the President's Committee of Advisors on Science and
Technology. The Committee found that ``the inadequacy of current energy
R&D is especially acute in relation to the challenge of responding
prudently and cost-effectively to the risk of global climatic change
from society's greenhouse gas emissions.'' By providing public support
for energy R&D through the CCTI, the government is likely to raise the
level of innovation, offsetting in part the appropriability problems
associated with this type of R&D.

[[Page 208]]



[[Page 209]]


The proposed CCTI package for fiscal 2000 contains $3.6 billion over
the 1999-2004 period in tax credits for energy-efficient purchases and
renewable energy. These include tax credits of $1,000 to $4,000 for
consumers who purchase highly fuel-efficient vehicles, a 15 percent
credit (to a maximum of $2,000) for purchases of rooftop solar
equipment, a 10 to 20 percent credit (also subject to a cap) for
purchases of energy-efficient building equipment, a credit of $1,000 to
$2,000 for purchasing energy-efficient new homes, an extension of the
wind and biomass tax credit and an expansion of eligible biomass
sources, and an investment credit for the purchase of combined heat and
power systems. The package also contains $1.4 billion for fiscal 2000
for additional R&D investments covering the four major sources of carbon
emissions in the economy--buildings, industry, transportation, and
electric power--and investments in carbon removal and sequestration. The
proposal builds on the fiscal 1999 budget, which included more than $1
billion in CCTI funding for R&D. The funding in that budget represented
a 25 percent increase over fiscal 1998 appropriations for climate change
R&D.
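
Most of these credits are a percentage of the purchase price subject
to a cap, so their value rises with spending only up to a point. A
short sketch in Python, using the proposed rooftop solar credit (15
percent, up to $2,000) as the example:

    # A capped percentage credit.
    def solar_credit(price):
        return min(0.15 * price, 2000.0)

    print(solar_credit(8_000))    # 1200.0: below the cap
    print(solar_credit(20_000))   # 2000.0: the cap binds
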
Complementing these fiscal measures, the Federal Government can
undertake other actions to promote the diffusion of climate-friendly
technology. In October 1997 the President called for a series of steps
to reduce energy use by Federal buildings, vehicle fleets, and other new
equipment, and to promote the use of renewable energy sources. As the
Nation's largest single energy user, the Federal Government spends
nearly $8 billion each year for power to operate facilities, vehicles,
and

[[Page 210]]

equipment, and more than 90 percent of this energy comes from fossil
fuels. The Federal Government plans to expand its procurement of
renewable and less carbon-intensive fuels. These efforts will accelerate
the diffusion of new energy-efficient and carbon-lean technologies.
Further, the Federal Government's experience with these technologies
should speed their diffusion through the rest of the economy, by
demonstrating their applicability and feasibility for other users.

THE LONG-RUN COSTS OF ENVIRONMENTAL REGULATION

The policies just described are based on the conviction that the
development of new technology, and the widespread adoption and diffusion
of already existing technology, can make environmental protection less
expensive, and that over the long run it is possible to have both
economic growth and a sounder environment. Yet some analysts make a much
bolder claim: they argue that further environmental protection can be
achieved at little or no economic cost. The energy paradox, described
above, perhaps provides some evidence for this claim. If stricter
environmental regulation is costless, then implementing such regulation
is unambiguously desirable, because it would mean that real
environmental benefits can effectively be had for free. Although it is a
difficult proposition to test, the weight of the evidence suggests that
stricter environmental regulation would impose an additional cost, but a
modest one.
There are several ways in which stricter environmental regulation,
by conferring benefits on regulated firms and the economy as a whole,
might pay for itself. First, environmental regulation might force firms
to reconsider their methods of production, which could lead them to
discover new methods that simultaneously lower both emissions and cost.
For example, in direct response to environmental regulations requiring
the phaseout of chlorofluorocarbons, a new method was found for cleaning
electronic circuit boards that not only eliminated the use of these
chemicals but increased product quality and lowered operating costs as
well. Second, firms that become subject to strict environmental
regulation before their rivals do may gain a first-mover advantage by
developing new products and
technologies for which demand may later become widespread. For example,
Scandinavian pulp and paper equipment suppliers increased their exports
after more environmentally friendly production processes were introduced
in Scandinavia. Third, if there are significant spillover effects from
R&D, all firms may benefit from additional R&D activity that comes in
response to environmental regulation, even though each firm individually
might not have expanded its R&D efforts without the spur from
regulation.
Many would dispute the proposition that environmental benefits can
be obtained at no net cost. After all, if opportunities for profitable

[[Page 211]]

investment are there for the taking, why should firms need prodding by
regulators to seize them? Profit-maximizing firms gain by cutting costs
and seizing strategic advantages. The profit motive itself should ensure
that no large cost savings go unrealized, or first-mover advantages
untapped. This critique, however, does not take into account the benefit
of additional R&D in the presence of spillover effects. Moreover,
difficulties in internal organization may prevent a firm from operating
in a manner fully consistent with profit maximization. However, it is
not clear that government policies can be designed to overcome these
internal organizational problems.
Resolving the debate about whether environmental regulations impose
long-run costs will require solid empirical evidence. Although it is
difficult to test the proposition directly with existing data, some
evidence concerning the long-run productivity consequences of
environmental regulation is available. (Some intriguing evidence also
exists on the environmental regulatory consequences of increased
productivity; see Box 5-9.) The bulk of this evidence indicates that
increasing the stringency of environmental regulation does entail a
modest reduction in long-run productivity.

REGULATION AND INNOVATION: THE CASE OF THE ELECTRIC POWER INDUSTRY

This chapter has discussed the interplay between regulation and
innovation, showing how innovation often necessitates regulatory change,
and in turn how regulatory change can affect the pace and direction of
innovation. Here we illustrate these themes with a discussion of the
ongoing deregulation and restructuring of the electric power industry,
one in which technological and organizational innovation has changed the
appropriate form of regulation. The electric power industry provides an
appropriate case study both because of recent initiatives to introduce
competition in electric power generation and because of the potential
environmental impacts of power generation.
Although other industries (air travel, trucking, and
telecommunications, for example) have been opened to competition over
the past few decades, the electric power industry, with sales of $212
billion in 1996, is among the largest yet to be targeted for
deregulation. Competition has already been introduced at the wholesale
level (electric power generation), but retail electricity markets (the
sale of electricity to final consumers) are still, for the most part,
regulated monopolies. In 1998 the Administration proposed legislation to
remove many of the remaining barriers to competition and encourage
States to implement retail competition. The goal of the Administration's
Comprehensive Electricity Competition Plan is to provide consumers
access to the wholesale power market while maintaining regulation of
transmission and

[[Page 212]]

Box 5-9.--Is There an Environmental Kuznets Curve?

We have so far examined the question of whether environmental
regulation affects productivity. But could there be an effect in the
opposite direction? Some have suggested that higher productivity might
lead to increased demand for environmental protection, by way of an
increase in income per capita.
In an empirical analysis, the economist Simon Kuznets found that
income inequality rose with income per capita at low levels of income,
but fell with income per capita at higher levels. The inverted-U
relationship thus described has come to be known as the Kuznets curve.
Several analyses of patterns of emissions of air and water pollutants
across countries have shown a similar relationship to income per capita:
emissions seem to increase with income at low incomes, and fall with
income at high incomes--an environmental Kuznets curve. If the familiar
inverted-U relationship in fact holds in this domain as well (a more
recent study, using the latest available data, failed to find it),
countries that reach a certain level of development should experience
declining pollution with economic growth, because of increased demand
for environmental protection with higher income. In other words, growth
is not necessarily an enemy of the environment.
Just where the turning point in the relationship between
development and environmental quality occurs, if it occurs, is important
for predicting whether global emissions of any pollutant are likely to
increase or decrease in the near future. If peak pollution levels occur
at relatively low levels of income per capita, global emissions should
soon begin to fall as more countries pass the peak. However, a
substantially higher peak would mean that pollution will likely get
worse before it gets better. One study found that sulfur dioxide
concentrations peak at income per capita levels around $5,760, roughly
that of a middle-income country like Chile. A second study using
slightly different data and methods found that emissions per capita of
sulfur dioxide, particulate matter, nitrogen oxides, and carbon monoxide
peaked at higher income levels.
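The arithmetic behind such turning-point estimates can be sketched
under the quadratic specification commonly fitted in this literature
(a minimal sketch; the studies cited here may use other functional
forms, and the coefficients below are left symbolic):

    % Inverted-U (environmental Kuznets curve) specification, where
    % E is emissions per capita and y is income per capita; an
    % inverted U requires \beta_1 > 0 and \beta_2 < 0.
    \[ E(y) = \beta_0 + \beta_1 y + \beta_2 y^2 \]
    % Setting dE/dy = 0 locates the income level at which emissions
    % peak:
    \[ y^* = -\frac{\beta_1}{2\beta_2} \]

An estimate such as the $5,760 figure above is obtained by
substituting fitted coefficients into this expression.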
Unlike air and water pollutants, which have primarily local
effects, greenhouse gas emissions seem to increase with income at all
income levels. This should not be surprising. Because greenhouse gas
emissions contribute to changes in the global atmosphere but do not have
visible local effects, national governments, even in the richer
countries, come under less pressure from their citizens to regulate
their national emissions alone. Without international agreements to
limit greenhouse gas emissions, achieving a more prosperous world may
entail ever-increasing emissions.

[[Page 213]]


distribution systems, which will probably remain natural monopolies.
Just as telephone deregulation has allowed consumers to choose their
long-distance company, so deregulation of the electric power industry
will soon allow them to choose their source of electricity. The plan has
five main objectives: to encourage States to implement retail
competition; to protect consumers by promoting competitive markets; to
ensure access to and the reliability of the power transmission system;
to promote and preserve public benefits (for example, through assistance
to low-income customers and consumer education); and to amend existing
Federal statutes to clarify Federal and State authority with respect to
the industry. The Administration's proposed deregulation plan provides
an excellent example of how an enlightened regulatory approach can
remove barriers to private innovation, resulting in both economic and
environmental benefits. The competitive incentive to produce electricity
more efficiently is expected to translate into lower fuel consumption
and less pollution.

From Innovation to Deregulation and Competition

The electric power industry has been regulated since the early
1900s, when States first began to grant electric companies exclusive
service areas. Electric utilities were overseen by public utility
commissions (PUCs) and guaranteed a ``reasonable'' rate of return on
their investments, provided they set reasonable rates and met various
social objectives such as universal access.
Regulation was justified on the grounds that it was less costly to
have one electric utility provide service than to have competing
utilities. Firms faced enormous startup costs in installing generating
units, transmission and distribution lines, and individual connections.
Duplication of transmission and distribution networks by competing firms
would have caused unnecessary expense. With the support of the privately
owned utilities, States restricted competition by granting utilities
monopoly status to encourage them to make the necessary investments and
avoid wasteful duplication. As demand for electricity grew rapidly,
developments in generating technology also supported the notion that
electricity supply was a natural monopoly. By the 1970s, coal-fired
and nuclear plants generally needed to be very large, exceeding 500
megawatts of capacity, to exploit economies of scale. The capital demands
for such a large plant needed to be spread over a large consumer base
for the utility to recoup its investment. Since then, technological and
organizational innovations in electric power generation have blunted its
natural monopoly characteristics and reduced the need to restrain
competition in the generation of electricity. Deregulation in the
natural gas industry and the increased availability of gas caused gas
prices to fall. The cheaper fuel source spurred innovation in electric
power generation and made combined-cycle gas turbine plants, which today
can be as small as 100 megawatts, competitive with much

[[Page 214]]

larger coal plants. By 1994 these technologies had helped bring the
average size of new fossil-fuel generating plants down 35 percent
relative to that of existing plants. These changes mean that large users
can threaten to generate their own electricity if their utilities do not
offer lower rates. Technologies on the horizon promise further
reductions in the efficient size of electricity generation, to the point
where even residential users may some day find it economical to generate
their own power (Box 5-10).
The development of an interconnected electricity system, and an
improved understanding of how to operate generating plants and the
transmission grid independently of each other, have made competition
feasible. As the market for electric power grew, individual systems
began to interconnect, making it physically possible for consumers in
one utility's service area to receive electricity from generators in
another. To maintain the integrity of the electric power grid, the
quantity of electricity supplied must always match the quantity
demanded. With quantities demanded fluctuating constantly, the output of
generators supplying power to the grid must be closely coordinated.
Until recently, this was taken to mean that generation, transmission,
and distribution services needed to be jointly owned. Recent
technological and institutional innovations, however, such as
computerized controls and independent system operators (ISOs), offer
ways to coordinate unaffiliated generators and provide fair, open access
to transmission lines while maintaining their integrity.
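To make the coordination problem concrete, the stylized sketch below
(in Python) dispatches offers from unaffiliated generators in merit
order, cheapest first, until supply matches demand. It illustrates the
principle only, not any actual ISO's procedures; all plant names,
capacities, and offer prices are hypothetical.

    # Stylized merit-order dispatch: an independent system operator
    # accepts offers from unaffiliated generators and dispatches the
    # cheapest capacity first until supply matches demand.
    def dispatch(offers, demand_mw):
        """Return (schedule, clearing_price) for offers given as
        (name, capacity_mw, price_per_mwh) tuples."""
        schedule = []
        remaining = demand_mw
        clearing_price = None
        for name, capacity, price in sorted(offers, key=lambda o: o[2]):
            if remaining <= 0:
                break
            taken = min(capacity, remaining)
            schedule.append((name, taken, price))
            clearing_price = price  # the marginal unit sets the price
            remaining -= taken
        if remaining > 0:
            raise ValueError("offers are insufficient to meet demand")
        return schedule, clearing_price

    # Hypothetical offers: (name, capacity in MW, offer in $/MWh).
    offers = [("coal_plant", 500, 18.0),
              ("gas_turbine", 100, 25.0),
              ("cogenerator", 50, 22.0)]
    schedule, price = dispatch(offers, demand_mw=560)
    for name, mw, p in schedule:
        print(f"{name}: {mw} MW at ${p}/MWh")
    print(f"market-clearing price: ${price}/MWh")

In this example the coal plant and the cogenerator run at capacity,
the gas turbine supplies the last 10 megawatts, and its offer sets the
market-clearing price.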
Today the electric power industry is governed by a mix of State and
Federal regulation. But a series of Federal actions beginning in 1978
has begun to introduce competition at the wholesale level. The Public
Utility Regulatory Policies Act of 1978 (PURPA) first opened the door by
requiring public utilities to purchase power from renewable sources and
from sources using cogeneration (see Box 5-10). The price of this
``qualified power'' was determined by State regulators and tended to be
greater than the utility's average cost of generation. Although this
requirement saddled some utilities with high-cost, long-term contracts,
it also demonstrated that generators not owned by the public utility
could be integrated into the electric power system, and it helped spur
the development of smaller scale generating technologies. The Energy
Policy Act of 1992 went further, creating a new class of independent
generating companies that could sell power directly to utilities. In
April 1996 the Federal Energy Regulatory Commission (FERC) issued Order
888, requiring public utilities to provide access to their transmission
lines at reasonable, nondiscriminatory rates.
At the State level, to further these policies and reap the benefits
of competition, many utilities are collaborating to create regional or
statewide ISOs to manage their transmission grids. ISOs set transmission
prices and can contract for network services (to provide backup power,
for example). There are currently four ISOs in operation

[[Page 215]]

Box 5-10.--The Trend Toward Decentralized Power Generation

The trend toward smaller, cleaner, and quieter generating plants,
combined with certain aspects of the physics of electricity transmission
and generation, has led some to claim that the days of centralized
electric power are numbered. Generating electricity from a fuel source
is never perfectly efficient; some of the energy in the fuel source is
inevitably lost in the transformation process. This energy typically
takes the form of heat, which can be captured and used in industrial
processes, or for space heating if the generator is physically close
enough to consumers who need it. An electric power plant thus
produces two potentially valuable products--electricity and heat--for
the price of one. The exploitation of these potential economies is
called cogeneration.
Once generated, electricity typically goes through many steps
before reaching the end user. It may be transmitted over high-voltage
wires for long distances, after which it must be transformed into lower
voltage to be distributed, and finally transformed again before being
delivered to consumers. On average, some 7.5 percent of the
electricity generated is lost along this chain. On-site electricity
generation avoids the greater part of these
losses, thus increasing efficiency and lowering costs.
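A back-of-the-envelope calculation shows what the 7.5 percent figure
implies; the 1,000-kilowatt-hour load below is hypothetical.

    # If about 7.5 percent of generated electricity is lost between
    # the plant and the end user, delivering a given load requires
    # more generation than the load itself.
    loss_rate = 0.075
    delivered_kwh = 1000.0  # hypothetical load

    generated_kwh = delivered_kwh / (1.0 - loss_rate)  # about 1,081 kWh
    avoided_kwh = generated_kwh - delivered_kwh        # about 81 kWh
    print(f"central generation required: {generated_kwh:.0f} kWh")
    print(f"losses avoidable on-site:    {avoided_kwh:.0f} kWh")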
In the past, economies of scale in electricity generation and the
nuisance of locating loud and polluting plants near homes and businesses
outweighed this incentive for small-scale local generation. This
situation has begun to change, however, as very small-scale plants
become more competitive with large-scale generation and as plants
become quieter and less polluting.
These changes do not necessarily imply the total demise of
centralized power. An electric power grid remains an efficient way of
allowing generating plants with different production characteristics to
serve consumers with different load profiles. For example, electricity
demand from many businesses peaks during the day, whereas residential
demand is concentrated during the mornings and evenings. If each of
these groups generated its own electricity, not only would each need to
have its own facilities, but each facility would spend many hours per
day with slack capacity. A single large generating plant can supply the
same customers with less total generating capacity. Unless
distribution losses are large or the excess heat is especially
valuable, it would be wasteful to have two separate plants, one at the
office and another at home, when one plant could serve both loads.
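This capacity argument can be made concrete with a short sketch:
because the two groups peak at different hours, the peak of the
combined load is smaller than the sum of the two separate peaks. The
hourly load profiles below are invented for illustration.

    # Hypothetical hourly loads (MW) over one day. Business demand
    # peaks at midday; residential demand peaks morning and evening.
    business    = [1, 1, 1, 1, 1, 2, 4, 7, 9, 10, 10, 10,
                   10, 10, 9, 8, 6, 4, 2, 1, 1, 1, 1, 1]
    residential = [2, 2, 2, 2, 2, 3, 6, 8, 6, 4, 3, 3,
                   3, 3, 3, 4, 6, 8, 9, 8, 6, 4, 3, 2]

    # Separate plants must each cover their own peak; a shared plant
    # need only cover the peak of the combined load.
    separate_mw = max(business) + max(residential)                 # 19
    shared_mw = max(b + r for b, r in zip(business, residential))  # 15
    print(f"two separate plants: {separate_mw} MW of capacity")
    print(f"one shared plant:    {shared_mw} MW of capacity")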

[[Page 216]]


around the country, and seven others are in the planning stages.
Still other utilities intend to form power exchanges or pools to help
create efficient spot power markets.
States throughout the country are going further, expanding consumer
choice by introducing retail competition into electricity markets.
Eighteen States have passed legislation or issued regulations toward
this end. Many States and utilities across the country have implemented
pilot programs, and statewide retail competition is, to various degrees,
already being offered in California, Massachusetts, Montana,
Pennsylvania, and Rhode Island.
Although States are thus moving forward, several Federal laws and
regulations still hamper full competition in retail markets. For
example, the Public Utility Holding Company Act of 1935 makes it hard
for utilities to cross State lines to compete in each other's markets.
PURPA requires public utilities to purchase expensive ``qualified
power'' but does not impose such costs on new competitors. The
Administration's electricity competition plan would remove these and
other barriers to competition. It would also modernize the institutions
that protect the reliability of the electricity supply system, enabling
them to function more effectively in emerging competitive markets.

The Benefits of Deregulation

The traditional means of regulating monopolies through rate setting
did not provide strong incentives for utilities to improve their
efficiency or offer new services--things that would happen naturally in
a competitive market. By allowing companies to compete to provide
electricity to consumers, deregulation forces companies to search for
more efficient means of producing and delivering electricity, as well as
new means of providing the energy services desired by customers. In a
$212 billion industry, even small efficiency gains from competition can
have large benefits.
Above and beyond the direct efficiency gains in the production and
delivery of electricity, retail competition can encourage firms to offer
new products and find innovative ways to reduce overall energy costs.
Time-of-day metering can encourage consumers to shift their purchases
away from peak periods and thereby reduce capacity requirements. As
already discussed, there appear to be barriers in the markets for
energy-efficient products. Utility commissions have therefore stepped in
to force public utilities to invest in energy efficiency. In the move
toward a competitive industry, utilities are now rethinking such
investments. There is no way for a utility to force consumers to keep
buying its power once the utility has made an efficiency investment
(buying insulation for a consumer's house, for example). New structures
will develop in a more competitive market to allow firms to pay for and
install energy-efficient equipment in return for a share of the
subsequent savings. Restructuring, by making it easier to bundle
efficiency services with

[[Page 217]]

the provision of electricity, could provide incentives for increased
growth of energy service companies (ESCOs). The potential role for ESCOs
is illustrated by the experience in California under deregulation, where
many supply contracts for commercial and industrial customers include an
energy management component.
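The economics of such shared-savings arrangements can be sketched
simply; every figure below is invented for illustration.

    # Illustrative shared-savings contract: an energy service company
    # (ESCO) pays for an efficiency investment up front and is repaid
    # out of the customer's subsequent energy savings.
    investment = 5000.0      # ESCO's up-front cost ($), hypothetical
    annual_savings = 1500.0  # customer's reduced energy bill ($/year)
    esco_share = 0.60        # fraction of savings paid to the ESCO

    esco_payment = esco_share * annual_savings           # $900 per year
    payback_years = investment / esco_payment            # about 5.6 years
    customer_gain = (1.0 - esco_share) * annual_savings  # $600 per year
    print(f"ESCO recovers its outlay in about {payback_years:.1f} years;")
    print(f"the customer keeps ${customer_gain:.0f} per year meanwhile.")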
Competition may also permit customers to express, through their
purchases, their preferences for environmentally sound electricity.
``Green'' power marketers have sprung up in many of the States now
offering retail competition and in those with pilot programs. For a
premium, these marketers sell electricity that is generated with a
greater proportion of renewable sources than the current mix. If
enough consumers are willing to pay a sufficient premium for green
power, the resulting profit opportunity will encourage the future
development of such resources.

THE CHALLENGES OF A COMPETITIVE MARKET: ENVIRONMENTAL AND SOCIAL
OBJECTIVES

Regulatory changes bring with them a host of challenges, as old ways
of meeting various objectives must be rethought. In the past, PUCs had
direct oversight over utilities. In some States they sought to include
environmental considerations in their approval criteria for new
generating assets. This encouraged the construction of generating
plants that were less polluting than would have been the case had
utilities been allowed to ignore the issue. With competition, however,
PUCs lose their ability to influence the composition of electricity
supply. If a utility is required to buy more expensive clean energy,
its rates will have to reflect the higher costs, and consumers will
then be able to buy power from other providers whose costs are lower
because they are not subject to the same requirements.
In a competitive market, unless these environmental spillovers are
internalized through other means (such as existing environmental
regulations), the government must find new ways to pursue its
environmental objectives. For example, as already noted, PURPA
requires utilities to buy power from ``qualified'' clean generators.
In support of the same goals, the Administration's proposal includes
establishing a tradable renewable portfolio standard to promote more
environmentally friendly power production. This approach would require
each electricity seller to cover a specified fraction of its sales
with power from renewable sources (not including hydroelectric power).
A seller that did not generate enough renewable power itself could
purchase credits from companies that exceeded their requirement.
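A short sketch shows how such a credit market nets out, assuming
(purely for illustration) a 10 percent renewable requirement and two
hypothetical sellers.

    # Stylized tradable renewable portfolio standard: each seller
    # must cover a required fraction of its sales with renewable
    # generation; surpluses are sold as credits to sellers in deficit.
    required_fraction = 0.10  # hypothetical requirement
    sellers = {
        # name: (total sales in MWh, renewable generation in MWh)
        "seller_a": (1000000, 50000),  # falls short of its obligation
        "seller_b": (400000, 90000),   # exceeds its obligation
    }

    for name, (sales, renewable) in sellers.items():
        obligation = required_fraction * sales
        position = renewable - obligation  # positive: credits to sell
        side = "sells" if position >= 0 else "buys"
        print(f"{name}: obligation {obligation:,.0f} MWh; "
              f"{side} {abs(position):,.0f} MWh of credits")

Here seller_a's 50,000-megawatt-hour deficit exactly matches
seller_b's surplus, so the credit market clears with no change in the
total renewable generation required.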
Similarly, under competition, other social objectives cannot be
pursued by placing requirements on only one set of actors--the
utilities. Therefore, the Administration's competition plan would
establish a ``public benefits fund'' to support affordable electricity
service to low-income customers, invest in energy efficiency measures,
and promote

[[Page 218]]

other social goals. The fund would be supported by a surcharge on all
electric power transmission.
Deregulation relies on the forces of competition to keep prices
reasonable for consumers. The benefits of deregulation, therefore,
depend on the extent of competition in each market. The
Administration's plan would enhance FERC's authority to block
anticompetitive mergers and to promote competition through divestiture
and other means.