Interesting People mailing list archives

Is the UHF Frequency Shortage a Self Made Problem? by Paul Baran


From: David Farber <farber () central cis upenn edu>
Date: Fri, 7 Jul 1995 10:55:27 -0400

MARCONI CENTENNIAL SYMPOSIUM
Bologna, Italy
June 23, 1995


Is the UHF Frequency Shortage a Self Made Problem?
Paul Baran
Atherton, California, USA




THE FIRST HUNDRED YEARS
Today on this occasion of the Marconi Centennial, we celebrate the
accomplishments of the first hundred years of radio. The history of radio
has been one of rapid and continuing progress. No letup is in sight. Each
generation sees major advances over the previous art. Yet throughout the
history of radio, scarcity of spectrum has been a fact of life. Lack of
spectrum limits progress in creating new applications.


The constant shortage of spectrum space is not a new issue. One of the very
first questions asked of young Marconi about his nascent technology was
whether it would ever be possible to operate more than one transmitter at a
time. ... New demand is like a constant vacuum, sucking up frequencies as
they become available. Even today, with over 30,000 times more spectrum at
our disposal than in Marconi's day, entrepreneurs wishing to implement new
services encounter the same perpetual shortage of frequencies.




REGULATION
Although it's been called ether, the radio spectrum really is a lot like
money. No matter how much some people seem to have, they always want more.
Enough never seems to be enough. Given the limited spectrum available
combined with the growing demands of potential users, it became necessary
early in the game to devise some means of rationing the spectrum resource.
National and international regulatory structures evolved over time primarily
to restrict access to the spectrum to specific, allocated uses, which were
then further limited to chosen institutional entities called licensees.


Today, we have new technologies that could potentially ameliorate this
perennial shortage. But such technologies cannot be fully utilized because
the very regulatory system initially set up to address the frequency
shortage of the past stands in the way of the present and the future.


History has shown that there are very few mechanisms as effective at
maintaining the status quo as a set of institutionalized regulations. Once
set in regulatory concrete, reconsideration of the basic underlying
assumptions is very difficult. While it will be an uphill fight to
re-examine the basic underlying assumptions of any law or administrative
rule, it is clearly not impossible. It will simply take longer than it would
if the rules were not so well institutionalized.


So on this day, as we celebrate the first one hundred years of radio, let
us take this occasion to review some of our basic assumptions about
spectrum utilization and to consider a possible alternative approach to
frequency regulation in the future.


My words this afternoon focus on the UHF spectrum, 300 to 3,000 Megahertz,
the most desired part of the radio spectrum for high data rate communications
and local area data devices. These are the frequencies preferred
by designers seeking to use low cost electronic products to deliver new
services. Today, the UHF band is the carrier of the bulk of terrestrial
radio services -- cellular telephony, broadcast television, cordless
telephones, etc. And, it is used for low altitude satellites as well.




A PARADOX
To suggest that there really is no fundamental reason for a shortage of UHF
spectrum is to violate the common wisdom. Tune a spectrum analyzer across a
band of UHF frequencies and you encounter a few strong signals. Most of the
band at any instant is primarily silence, or a background of weaker
signals.


The spectrum analyzer connected to an antenna reveals that much of the
radio band is empty much of the time! This unused spectrum might be
available for transmission if we could take measurements and know exactly
when and where to send the signal.


In part, the frequency shortage is caused by thinking solely in terms of
dumb transmitters and dumb receivers. With today's smart electronics, even
occupied frequencies could potentially be used.




DIGITAL VERSUS ANALOG
To the modern communications engineer, a lack of strong signals anywhere,
no matter how distributed, represents a theoretically unused capacity that
is available to be utilized with the proper signal processing. With
advanced signal processing techniques, transmission of signals on top of
undesired signals received at lower levels represents a potential source of
usable transmission capacity. There is a caveat here. We are assuming
digital signals that are able to operate with lower signal to noise ratios
than analog signals. That means if the desired signal is but slightly
stronger than an interfering signal, it can theoretically be received
without error. This game doesn't work with old-fashioned analog-modulated
signals, such as analog broadcast TV signals, where even weak interference
40 dB below the picture is visible. 40 dB is a power ratio of 10,000 to 1;
an interfering signal just 1/10,000th as strong as the analog TV signal will
be visible in the received image.


Compare this situation to the case of a digitally modulated signal able to
operate at a 20 dB signal to noise ratio. 20 dB is a power ratio of 100 to 1,
a tolerance to interference 100 times as great as in the analog TV case. With
the addition of error correction codes, some digital systems can operate at a
10 dB level, a noise tolerance 1,000 times as great as in our analog TV
example.
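

To make these figures concrete, here is a small illustrative Python sketch
(the 40, 20 and 10 dB values are the ones quoted above) converting decibels
into power ratios:

    # Decibels express power ratios: ratio = 10 ** (dB / 10).
    def db_to_power_ratio(db):
        return 10 ** (db / 10)

    print(db_to_power_ratio(40))   # 10,000 -- analog TV's required margin
    print(db_to_power_ratio(20))   # 100    -- a typical digital requirement
    print(db_to_power_ratio(10))   # 10     -- digital with error correction

    # Interference tolerance gained relative to the 40 dB analog case:
    for digital_db in (20, 10):
        factor = db_to_power_ratio(40 - digital_db)
        print(f"{digital_db} dB digital tolerates {factor:.0f}x stronger interference")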


How much new usable capacity could we gain by using digital transmission?
Think in terms of a curve of energy versus frequency at the receiver. The
potentially available bandwidth can be visualized by inverting this
received energy versus frequency curve and then adding a second curve above
the first curve separated by an amount equal to the required signal to
noise ratio. This new curve suggests the amount of potential spectrum
actually available for reuse using the improved modulation. ...
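

As a rough sketch of this inverted-curve idea, the following Python fragment
counts reusable frequency bins; the spectrum analyzer readings, the required
signal to noise ratio and the deliverable power ceiling are all hypothetical
numbers chosen for illustration:

    required_snr_db = 20      # assumed requirement for a digital signal
    power_ceiling_dbm = -40   # assumed deliverable level at the receiver

    # Hypothetical spectrum analyzer readings, dBm per 1 MHz bin.
    background_dbm = [-95, -92, -50, -97, -90, -45, -94, -96]

    # A bin is reusable if the deliverable signal clears the background
    # by at least the required signal to noise ratio.
    reusable = sum(1 for level in background_dbm
                   if power_ceiling_dbm - (level + required_snr_db) > 0)
    print(f"{reusable} of {len(background_dbm)} bins reusable")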


In reality, the major spectrum hog is analog broadcast TV transmission. In
the US and to an extent in other countries a spectrum analyzer will find
much of the allocated VHF and UHF TV spectrum unused, even in big cities.
The UHF television band is punctured with vast empty holes called "taboo
channels". These channels are left unoccupied because of the frequency
selectivity limitations of early era television receivers. Today we know how
to build far better receivers than we could when this early rule was adopted
and those frequencies were set aside.


We should never forget that any transmission capacity not used is wasted
forever, like water over the dam. And, there has been water pouring here
for many, many years, even during an endless spectrum drought.




LINK VERSUS SYSTEM THINKING
We have briefly considered digital's greater tolerance to interference and
how this can be translated into better spectrum usage. But, this direction
offers a relatively minor improvement as compared to other possibilities.
To better capture these additional potential savings, it is necessary to
think in terms of networked systems rather than single links.


Our understanding of the issues is most highly developed when considering
how to make best use of a single communications channel or a single link.
...


The subject area that is most ripe for advancement is the focus of this
paper -- learning how to optimize the overall interests of a multiplicity
of competing heterogeneous users, each with different requirements, and all
sharing a common block of frequencies.


The challenge is how to make best use of a common shared spectrum to handle
disparate users, modulation methods and applications in a world of different
system technologies, different system owners and different needs and
objectives.


The argument will be made that instead of today's regimented,
channel-by-channel, highly centralized form of regulation, an alternative approach
requiring only a minimal measure of cooperation would work to the maximum
benefit of all.




A LINK IS NOT A SYSTEM
When wireless first started, a system comprised a transmitter and a
receiver. Just as the telephone has evolved from being a pair of
instruments connected by a pair of wires to a switching network, so too has
radio moved from the transmitter and the receiver pair to becoming part of
a larger switched system.


Communications networks are designed by choosing and joining together a
combination of different media links and switches, as no single
communications medium is ideal in all situations. If the link requirement
is for long distance transmission, then optical fiber may be used. If the
requirement is to address many users located hundreds of meters apart from
one another, then coaxial cable or twisted pair may be the preferred
medium, depending on the data rates for that part of the network.


One example of such a composite network is the cellular telephone network,
composed in part of radio links integrally attached to the switched
telephone network. Such networks are owned by a single entity and tend to
be reasonably well optimized, with the economic factors considered as a
whole by the network designers.


As we build more of these kinds of networks in the future, we are likely to
find that wireless will increasingly become the preferred medium for the
tails of the network, allowing "tetherless" operation. This composite
arrangement provides freedom and flexibility to the user while combining
access to the more cost effective longer distance transmission media.




RANGE REDUCTION
When we combine the radio tails with wired portions of the network, we face
a tradeoff as to how much of each medium is used. In the UHF band the number
of geographically dispersed users that can be simultaneously accommodated by
a fixed spectrum varies as the inverse square of the transmission distance.
Cut the range in half, and four times as many users can be supported. Cut
the range by a factor of ten, and 100 times as many users can be served.


Reduce the power further, and essentially any number of users can fit
into the exact same spectrum presently tied up in supporting a few
longer distance, higher power users. In other words, a mixture of
terrestrial links plus shorter range radio links has the effect of
increasing by orders of magnitude the usable frequency spectrum.


... By authorizing high power to support a few radio users to reach
slightly longer distances, we deprive ourselves of the opportunity to
better serve the many. Automatic power reduction increases the number that
can use a shared spectrum.
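

A tiny Python illustration of this inverse square relationship (the range
reduction factors are arbitrary examples):

    # Users supportable in a fixed band scale as the inverse square of range.
    def relative_capacity(range_reduction):
        return range_reduction ** 2

    for factor in (2, 10, 100):
        print(f"range cut {factor}x -> {relative_capacity(factor):,}x as many users")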




COMPOSITE PATH
How realistic is it to reduce the range of transmission for the relatively
few to allow a greater number to benefit?


Consider today's millions of short range cordless telephones all sharing a
minuscule slice of the radio spectrum, while a small number of licensed
users occupy most of the spectrum. Most of these could be served by shorter
range radios plus a telephone or fiber line to provide the longer distances
sought. While the resulting path is not all wireless, neither are today's
cellular systems. The advantage of tetherless operation is retained for the
user's convenience, and very little is given up.


In the US, for example, and increasingly in other countries, there is
underutilized transmission capacity in already installed TV coaxial cable
and the telephone twisted pair plant. Assuming a move in this direction, a
vast communications capability to homes and businesses can be created,
supporting a far greater number of users with greater bandwidths than is
feasible today.




MOVING RADIO BASED TV TO CABLE
We could significantly increase the available UHF bandwidth by giving each
TV viewer "free" access to a TV cable carrying the few off-the-air signals
they now receive. Let's look at the economics. In the United States, TV
cable passes about 94% of the households, with 63% already connected. How
much would it cost to connect everyone to cable so that the released
bandwidth could be recycled?


How Much Will it Cost?
Since TV cable systems are laid out without knowing in advance which houses
will take TV service, taps to serve each potential subscriber are already in
place. No additional power is required. The incremental cost of running a
drop cable to each house is on the order of $40.


How Much Would be Saved?
Almost 500 MHz of spectrum is presently assigned for over-the-air TV
transmission. In the US the Federal Communications Commission recently
raised about $8 Billion selling access to about 70 MHz of UHF spectrum for
Personal Communications System [PCS] services. This is about $80 per US
household, or about $1.14 per household per Megahertz of bandwidth. And this
covers only the cost of the license paid to the US Government - before any
actual equipment is deployed!


If we assume that the TV band occupies about 480 MHz (80 channels) of
spectrum, the value of this TV spectrum asset if sold on a comparable
basis, would be worth about $547 per household, or about 13.7 times the
installation cost of the new drop cables.
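

These figures can be checked with a few lines of Python. The household count
of roughly 100 million is an assumption implied by the $80 per household
figure; the other amounts are the 1995 values quoted above, so the result
lands within rounding of the $547 and 13.7x figures:

    pcs_auction_usd = 8e9      # FCC PCS auction proceeds
    pcs_bandwidth_mhz = 70     # UHF spectrum sold for PCS
    us_households = 100e6      # assumed, consistent with ~$80 per household
    tv_band_mhz = 480          # 80 channels x 6 MHz
    drop_cable_usd = 40        # incremental drop cable cost per house

    per_household = pcs_auction_usd / us_households   # ~$80
    per_mhz = per_household / pcs_bandwidth_mhz       # ~$1.14
    tv_value = per_mhz * tv_band_mhz                  # ~$548 before rounding
    print(f"${per_household:.0f}/household, ${per_mhz:.2f}/MHz")
    print(f"TV spectrum ~${tv_value:.0f}/household, "
          f"{tv_value / drop_cable_usd:.1f}x the drop cable cost")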


What Does This Mean to the Cable Operator?
The cable operator would, of course, lose some revenue if each house were
given the off-the-air signals free. But the number of cable subscribers who
presently pay for off-the-air signals alone is small, and the revenue
received is not large. The loss of revenue to the cable operator is minor
compared to the potential revenue from higher value services made possible
when the cable enters the homes of the one third of the population not
presently reached by TV cable. And any shortfall might be covered by the
alternative revenues received from freeing up the UHF spectrum.


What Does This Mean to the Broadcast Station?
The TV broadcast station would now be able to reach all of its present
viewers and would no longer have to pay TV transmitter costs. "Owning" the
TV license places the broadcast station in a position to make far more money
leasing frequencies than operating a TV transmitting station.


Of course, who really owns the spectrum is an interesting question of public
policy. We shall ignore it other than to note that, if economics are
considered most broadly, TV broadcasting is probably not the most economical
use of the spectrum. ...




EVOLUTION TO DIGITAL
With the movement of TV to cable, digital modulation is likely to be
increasingly used. Digital modulation is already being used in early trials
on TV cable systems.


Digital in lieu of the present analog modulation allows ten times as many
TV signals to be sent over an existing TV cable. For example, the TV cable
currently carrying 50 analog channels would be able to carry 500 TV
channels.




SMART TRANSMITTERS
We don't have to wait for the eventual transfer of the UHF TV spectrum to
cable. The existing spectrum can be used more efficiently through smart
receivers and transmitters.


Inexpensive microcontrollers would first listen and then automatically
choose preferred frequencies to avoid other signals in the band. It is
really a matter of being a good neighbor. The smart transmitter reduces its
power level to that needed to produce an error free signal and no more. A
pristine slice of spectrum is not required for error-free performance when
using digital modulation. Digital logic on a chip implements error
correcting codes that use a small amount of redundancy in transmission to
enable even highly corrupted signals to be cleaned up and emerge error free.
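

A minimal Python sketch of this listen-then-transmit behavior; the power
levels and the channel sensing function are hypothetical placeholders, not a
description of any real device:

    import random

    POWER_LEVELS_DBM = [0, 6, 12, 18]   # assumed selectable transmit powers

    def sense(channel):
        # Stand-in for a real measurement of background level on a channel.
        return random.uniform(-100, -40)

    def pick_channel(channels):
        # Listen first, then choose the quietest channel.
        readings = {ch: sense(ch) for ch in channels}
        return min(readings, key=readings.get)

    def pick_power(link_is_error_free):
        # Use the lowest power that still yields an error free link, no more.
        for power in POWER_LEVELS_DBM:
            if link_is_error_free(power):
                return power
        return POWER_LEVELS_DBM[-1]

    channel = pick_channel(range(8))
    power = pick_power(lambda p: p >= 6)   # stand-in link quality test
    print(f"transmit on channel {channel} at {power} dBm")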




SPREAD SPECTRUM
One particularly interesting variant of digital transmission, spread
spectrum modulation, can allow more users to share a common band of
frequencies.
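

As a toy Python illustration of the principle, two users can transmit at the
same time in the same band and still be separated by their spreading codes
(the codes and data bits below are arbitrary examples, not any real system):

    import random

    def spread(bits, code):
        # Replace each data bit (+/-1) with the bit times the chip sequence.
        return [b * c for b in bits for c in code]

    def despread(signal, code):
        # Correlate each code-length block against the code to recover bits.
        n = len(code)
        return [1 if sum(s * c for s, c in zip(signal[i:i + n], code)) > 0
                else -1
                for i in range(0, len(signal), n)]

    random.seed(1)
    code_a = [random.choice((-1, 1)) for _ in range(63)]
    code_b = [random.choice((-1, 1)) for _ in range(63)]
    data_a, data_b = [1, -1, 1], [-1, -1, 1]

    # Both transmissions overlap completely in time and frequency.
    channel = [x + y for x, y in zip(spread(data_a, code_a),
                                     spread(data_b, code_b))]

    print(despread(channel, code_a))   # recovers user A's bits
    print(despread(channel, code_b))   # recovers user B's bits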


But there is a regulatory lag in encouraging the fullest use of such
technology, because spread spectrum seems to require more spectrum space.
The idea of signals taking more bandwidth is at variance with the mind set
of government regulators, whose objective has historically been to minimize
occupied bandwidth. And this takes us to our next topic, Regulation.




REGULATORY HISTORY
Given the history of the shortage of spectrum leading to the necessity of
rationing, it is understandable how national and international regulatory
structures evolved, concerned in major measure with doling out bandwidth.
The assumption of shortages is so institutionalized in regulatory policy
that the basic assumptions that got us here rarely get re-examined. And when
they are, changes tend to occur at glacial speed.


We have a wide range of sophisticated but under-utilized technology with
which to address this problem, but there is a roadblock: our
institutionalized regulatory structure, with its implicit assumption that
spectrum is a scarce commodity like real estate, leading to a zero-sum game.
While this view of the spectrum may have been valid once upon a time, it is
less so today, and will likely not be true tomorrow.


But, the rules of the regulatory game are set by governments, while the
issues are primarily technical.


Government agencies tend to be staffed by lawyers who view a frequency as a
unique property right. If I own a frequency, then you can't use it. It's
mine, exclusively mine.


Yet communications engineers know that statistical averaging over larger
blocks of frequencies can allow better usage. That is what cellular radio is
all about. Regulatory delay held up cellular telephony in the US for well
over a decade. In fairness, newer thinking is increasingly being
incorporated into regulatory decisions. But from the point of view of a
technologist, the process is agonizingly slow and in need of
rationalization.




ALTERNATIVE DIRECTIONS
Given the technological options described above, the assumption that there
will always be a shortage of UHF frequencies deserves reconsideration. If
our present regulatory approach is lacking, how can we do better?


It is my belief that public policy might be better served if we moved to an
environment of near zero regulation.


In such an environment anyone and everyone would be allowed to use the
spectrum, without the barriers to entry that keep out the true innovators.


Of course, there will be some minimal rules necessary, such as maximum
allowable transmitted power and power densities. But the micro-managed
regulatory approach of today, dictating who can use any single frequency, is
neither necessary nor desirable. If the hypothesis is correct that a limited
amount of spectrum has the potential to carry all the traffic imaginable
(assuming that the power and range of the transmitters are limited), then
the purpose of regulation would no longer be primarily keeping potential
users away from the spectrum.




CHAOS?
Would this laissez faire form of regulation lead to chaos? Possibly, but
most likely not. Consider the many millions of cordless telephones, burglar
alarms, wireless house controllers and other appliances now operating
within a minuscule portion of the spectrum and with limited interference to
one another. These early units are very low power 'dumb devices' compared to
the equipment now being developed, which is able to change frequencies and
minimize radiated power to better avoid interference to itself and to
others.


Of course, that means enough frequency spectrum will have to be set aside
for such devices. But once that is done, we would have created a
communications environment able to handle orders of magnitude more
communications than today.




REGULATION FOR THE FUTURE -- THE INTERNET MODEL
The Internet provides an instructive model for the future of
telecommunications regulation. The Internet allows worldwide communications
at a far lower cost than any alternative, serving data users inexpensively
and opening access to the world's information to a greater number of people
than ever initially imagined.


In the Internet, there is no central node, and only a minimal centralized
management structure, limited to a few housekeeping functions such as
standards setting. Local decisions essentially control the network. The
independent pieces of the network operate in a coordinated manner with a
minimum of restrictions. This lack of a limiting centralized structure has
permitted the Internet to be responsive to a very large unregulated
constituency, allowing explosive growth and increasing usefulness to its
users. Probably the closest parallel structure to the Internet is
the free market economy. We know that works. Will it work for regulating
the radio spectrum?


The Internet is an organization of users sharing a common resource, a model
appropriate to the sharing of a common band of frequencies by all comers.
The Internet model for regulation would be similar to the data network, in
which each user follows a simple set of commonly observed rules. Which
frequency to use and when, or which form of modulation to use, would be left
to each user. The Internet model has many of the characteristics of a
desirable communications regulatory approach for the future.


Such a direction does require a big evolution in the thinking of the
current communications regulatory agencies. The present regulatory
mentality tends to think in terms of a centralized control structure,
altogether too reminiscent of the old Soviet economy. As we know today,
that particular form of centralized system didn't work all that well in
practice and, in fact, ultimately broke down. The emphasis in that structure
was on limiting distribution, rather than on maximizing the creation of
goods and services. Some say that this old highly centralized model of
economic control remains alive and well today -- not in Moscow but, rather,
within our own radio regulatory agencies.

