Toward the end of the 1980's, the Internet community became increasingly aware of two approaching problems that threatened to choke off further growth if not dealt with in time: rapid depletion of free addresses and growth in the size of routing tables. At first sight, it seems surprising that the 4,294,967,296 addresses permitted by a 32-bit address field (2^32) are in danger of running out. This rather large number appears more than adequate, even when the explosive growth in the Internet's popularity is taken into account. It may not be quite enough to give every living person on the globe an IP address, but it is surely enough to handle every "living" computer, if not all the world's digital devices. But address space is much more limited in practice than it appears. The reason for this is not bad engineering or bad organization, but follows from the intrinsic properties of addresses. Furthermore, what appear to be separate problems -- address depletion and growth of routing tables -- are closely related and need to be addressed together.
While a stock of 4,294,967,296 numbers might be sufficient to issue globally unique serial numbers to all the world's digital devices, it is not sufficient for addressing. An address differs from a serial number in that it bears a direct relation to real-world physical objects and their location in space. It carries "topological" significance: some internal structure that shows the way to the owner's location in the topology of the system. The structure of an address is normally a hierarchy, with each level comprising a set of lower-level units and itself being part of a set of higher-level units. This can be seen clearly in the way mailing addresses are organized, where the topology maps to locations on the earth's surface and the hierarchy identifies successively more precise locations. For example, a typical American mailing address could consist of as many as eight layers: 1) name, 2) apartment number, 3) building number, 4) street, 5) city, 6) state, 7) zip code, and 8) country. It should be noted that these eight layers may have quite different membership sizes. There are about 180 independent countries in the world. There are exactly 50 US states. A state can contain many hundreds of cities and towns, and so on. The number of layers, moreover, varies from place to place. A three-layer address is not unusual. This writer once had a perfectly functional, globally unique address consisting of: Jack Aubert, Joal, Senegal.
Because addresses are related to topology, they cannot achieve the same allocation "efficiency" as random numbers. It will probably be necessary to "waste" many addresses. In a centrally managed address system such as the US zip code system or the national telephone system, it may be possible to minimize "wastage" by structuring both the number of hierarchical levels and their relative sizes in such a way as to preserve a balance throughout the system. Zip code boundaries can be drawn (and if necessary redrawn) to contain roughly the same number of people. All zip code numbers can be used in sequence without having to skip over any, with sequential blocks identifying contiguous areas on the map. But in a system that is not managed centrally, like the Internet, where there is no overall control of the topological meaning of addresses, it is difficult to achieve both efficient use of address space and efficient routing. As a consequence, there will be many taken-but-unassigned numbers wherever a particular instance of a hierarchical domain is not fully populated -- for example, a telephone exchange with fewer than its full 10,000 subscribers, or an Internet site with fewer hosts than the smallest network assignment provides for.
Some degree of topological hierarchy is a necessity for routing tables in anything beyond a miniature system. If addresses were assigned as serial numbers, with neither waste nor topological meaning, there could be no division of labor: all routing information would be pushed into a single giant top-level flat table. The size of the routing table bears an inverse relation to the hierarchical organization of the address space and the consequent "waste" of addresses that the hierarchy entails. It follows that "solving" the problem of address space depletion by simply increasing the number of bits, without including some provision for addressing hierarchy, will exacerbate the routing table problem.
Internet IP addressing is decentralized, but not unstructured. Addresses are doled out by the Internet Network Information Center (InterNIC) in the US and by its sister organizations, the RIPE NCC (Europe) and APNIC (Asia), and are organized into a minimum two-layer hierarchy consisting of "network" and "host." Networks are free to organize their own subhierarchy according to their own topology. The total address space of over 4 billion is aggregated into only around two million networks. This reduces the top-level routing problem by roughly three orders of magnitude, but the choice of three fixed-size network classes (16,777,216 addresses in an A network, 65,536 in a B, and only 256 in a C) was not the optimal division. There were too few B's and too many C's, both in terms of demand and in terms of routing efficiency. This was due to what has been called the Goldilocks effect.
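The imbalance is easy to see from the arithmetic of the classful division. As a rough illustration (a sketch of my own, not drawn from any of the RFCs discussed here), the following Python fragment computes the network and address counts implied by each class:

    # Classful IPv4 division: (network bits, host bits) per class.
    # Counts are raw powers of two; in practice a few values in each
    # range are reserved and unusable.
    classes = {"A": (7, 24), "B": (14, 16), "C": (21, 8)}

    for name, (net_bits, host_bits) in classes.items():
        print(f"Class {name}: {2**net_bits:>9,} networks of {2**host_bits:>10,} addresses each")

    # Class A:       128 networks of 16,777,216 addresses each
    # Class B:    16,384 networks of     65,536 addresses each
    # Class C: 2,097,152 networks of        256 addresses each

A site too big for the 256 addresses of a C had little choice but to request one of only 16,384 possible B's, even if it would use only a small fraction of a B's 65,536 addresses.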
By the late 1980's there was still plenty of room for assignment of Class C addresses, but demand for Class B addresses was growing by 20% a year in 1990, with projections for exhaustion by 1994 if nothing was done. When it became clear that demand for Class B network numbers was showing no signs of subsiding, the InterNIC instituted a new policy of being more discriminating when assigning numbers. Simultaneously, a Routing and Addressing (ROAD) Working Group was formed in November 1991 to come up with proposals for short- and long-term solutions to the address depletion problem. After meeting several times, the ROAD group presented a recommendation for use of Classless Inter-Domain Routing (CIDR) as a short-term solution.2
The CIDR scheme, which was formally adopted in RFC-1519, did nothing to increase total address space, but it did improve the ability of the system to match address assignment to topology. It did so by doing away with the traditional boundaries separating IP addresses into network and host portions, in effect making the two-tier hierarchy adjustable. CIDR address assignment permits consolidation of arbitrary blocks of contiguous Class C addresses into a single network, thereby creating hybrid address classes intermediate between the 256-node C space and the 65,536-node B space. This is accomplished by applying a network mask so that, for example, a router providing connectivity for the network numbers 203.234.0 through 203.234.7 would announce to its neighboring routers a single route to 203.234.0/21 (in human notation). The number after the '/' character indicates that the network bit mask consists of 21 bits. Internally this is represented by a 32-bit field consisting of 8+8+5 ones followed by 3+8 zeros; ANDing it with an address yields the network portion, leaving the low 3 bits of the third octet plus the final octet as the host portion. Neighboring routers would therefore become aware of a single route for what previously would have been a block of 8 C addresses.
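The mask arithmetic can be sketched in a few lines of Python (the 203.234.0.0/21 block is the hypothetical example used above, not an assigned network):

    # A /21 mask: 21 one-bits followed by 11 zero-bits.
    import ipaddress

    supernet = ipaddress.ip_network("203.234.0.0/21")
    print(supernet.netmask)                                            # 255.255.248.0
    print(int(supernet.netmask) == ((0xFFFFFFFF << 11) & 0xFFFFFFFF))  # True

    # ANDing any address in the block with the mask yields the same network
    # number, so a neighboring router needs one table entry instead of eight.
    for third_octet in range(8):
        addr = ipaddress.ip_address(f"203.234.{third_octet}.25")
        print(addr, "->", ipaddress.ip_address(int(addr) & int(supernet.netmask)))

Every one of the eight former Class C networks maps to the single announced route 203.234.0.0.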
CIDR address allocation is now being used by the InterNIC and has taken some of the pressure off the B address space -- at least in those cases where a C is too small and a full B is clearly too large. RFC 1519 also included a proposal to reserve some of the unallocated upper part of the Class A address space (sometimes called the "A# space") for CIDR use by subdividing it into smaller blocks. This proposal has not been implemented, but could be if the need becomes acute. Meanwhile, there was a call to the Internet community to return unused address space; this has reportedly yielded some results.
These were stop-gap measures. Consolidation of some of the Class C addresses into larger groups allows increased efficiency in address allocation, but a long-term solution will have to include an expansion of the total address space, as well as a more efficient way of organizing the address hierarchy. In March 1992, the Internet Engineering Task Force (IETF) called for proposals to explore new approaches for bigger addresses. Around the same time, the Internet Architecture Board called for the IETF to prepare a plan to use the ISO's Connectionless Network Protocol (CLNP) as the basis for the next generation. The IETF, however, rejected this recommendation and instead issued a call for proposals in which CLNP might be explored along with other avenues. The call went out in July 1992, and a number of working groups were formed in response.
During the course of 1992, working groups developed, refined and merged a number of different approaches. At the beginning of the year, there were four separate proposals: CNAT, IP Encaps, Nimrod and Simple CLNP. Three more followed before the year's end: the P Internet Protocol (PIP), the Simple Internet Protocol (SIP) and TP/IX. In March 1993, Simple CLNP became TCP and UDP with Bigger Addresses (TUBA). By November, IP Encaps had merged with SIP and PIP to form the Simple Internet Protocol Plus (SIPP), and TP/IX had changed its name to Common Architecture for the Internet (CATNIP). As these proposals were taking shape it became clear that a procedure for evaluating them would be needed. In July 1993, at an IETF meeting in Amsterdam, the process for deciding on a new protocol was set in motion, and an IPng (IP next generation) "Area" group was formed with a mandate to develop criteria. In December 1993, the IPng Area issued RFC 1550, calling for White Papers on topics related to IPng requirements and selection criteria. The 21 papers submitted in response to this call culminated in a list of 19 specific criteria for evaluating candidate proposals. There were three "finalist" proposals: TUBA, SIPP and CATNIP. Each of these three proposals is specified in detail in its own RFC.3
Although the criteria were adopted without formal prioritization, the question of total address space was a topic that came up frequently. There was a general consensus that it was important to avoid repeating past mistakes: IPng should have more address space than is likely to be needed in the foreseeable future. Given the need to allow room for growth, for variability in hierarchical levels and topology, and for unassigned addresses between assigned addresses, it is difficult to set a theoretical outer bound on the size of the address space needed. As various writers discussed estimates for the optimal size of an address space, there was a tendency to leapfrog each other's estimates, from the global number of households, to the global device count, to multiple interfaces per device. The general approach was essentially to pick the largest number that could conceivably be required... and then make it a lot bigger.
However, an interesting and instructive empirical approach to the issue was taken in RFC 1715, submitted by Christian Huitema:
         log (number of objects)
    H = -------------------------
             available bits
Using log base 10, as is commonly done for large numbers, H will vary between 0 and a theoretical maximum of 0.30103 (log base 10 of 2). We cannot expect to achieve this ratio in practice, but we can take an empirical approach and examine real-world systems. Huitema looked at several systems at "the point where the plans broke, i.e. when people were forced to add digits to phone numbers, or to add bits to computer addresses." Based on these examples, including DECNet, the French telephone system and the U.S. postal code, Huitema found that the efficiency ratio usually lies between 0.14 and 0.26. By applying these ratios to different address sizes, Huitema got the following maximum comfortable population counts.
Bits of address space and addresses for different H ratios6
                 Pessimistic (0.14)    Optimistic (0.26)
    32 bits          3 E+4 (!)             2 E+8
    64 bits          9 E+8                 4 E+16
    80 bits          1.6 E+11              6 E+20
    128 bits         8 E+17                2 E+33
It should be noted that the roughly 3.5 million addresses currently registered in the DNS put IPv4 at an H ratio of about 0.20.
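The figures above follow directly from Huitema's formula. As a quick check (a sketch only; the function names are mine, not Huitema's), the populations at each ratio and the 0.20 figure for the current DNS count can be reproduced in a few lines of Python:

    import math

    def h_ratio(num_objects, bits):
        """Huitema's assignment-efficiency ratio: log10(objects) / available bits."""
        return math.log10(num_objects) / bits

    def max_objects(bits, h):
        """Largest population sustainable at a given H ratio."""
        return 10 ** (h * bits)

    for bits in (32, 64, 80, 128):
        print(bits, f"{max_objects(bits, 0.14):.1e}", f"{max_objects(bits, 0.26):.1e}")

    print(f"{h_ratio(3.5e6, 32):.2f}")     # 0.20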
The three candidates for IPng took two basic approaches to the problem of address spaces: TUBA and CATNIP each proposed similar variations on the ISO's Connectionless Network Protocol (CLNP) variable address scheme, which generally uses an 80 bit address space. SIPP, on the other hand, proposed to retain a fixed-size address (as in IPv4) but to double it from 32 to 64 bits.
The TUBA proposal, despite its name, did not specify how big the big addresses would be: "for example CLNP allows addresses to be any integral number of bytes up to 20 bytes in length" [RFC 1347, page 8]7 (italics added). TUBA assumes a variable address field without any specific limit. CATNIP, for its part, was explicitly specified as a convergence protocol: it would integrate IP, IPX and CLNP, although its roots in the ISO's CLNP are much in evidence. It was not necessary for either CATNIP or TUBA to deal explicitly with address size or hierarchy issues, since a model hierarchy already exists based on the OSI's NSAPs.
SIPP, with its proposal to simply increase the existing address field from 32 to 64 bits, was the odd man out. Many reviewers found this address space to be insufficient. If Huitema's pessimistic estimate for topological efficiency is to be taken seriously, a 64 bit space would be inadequate. Furthermore, in addition to its dubious global address space allocation, "a number of reviewers felt that SIPP did not address the routing issue in any useful way. In particular, the SIPP proposal made no serious attempt to develop ways to abstract topology information or to aggregate information about areas of the network." 9
It is therefore surprising that, in spite of being found wanting in these critical areas, SIPP emerged as the winner. There are several explanations. First, SIPP had other strong points; address space and topology were not the only issues. Second, its competitors, particularly TUBA, apparently suffered from their ties to somebody else's standard (the ISO's CLNP). The TUBA defenders could not agree among themselves on the ability of the IETF to modify CLNP standards.10 Third, there was "a general consensus expressed in mailing list discussion in favor of a fixed length address." 11 Finally, and most importantly, the SIPP proposal was substantially changed before adoption by increasing the proposed 64 bits to 128 bits. The issue of hierarchical topology was to be handled subsequently. In any case, "SIPP" and "IPng" expired together and were reborn as "IPv6" -- the name officially adopted for the successor protocol. A series of RFCs giving specifications for IPv6 was issued in December 1995, including a sketch of how topological information will be handled. This is now being refined through discussion in mailing lists.
128 bits (2^128) is a very big number. In fact, it is a very, very, very big number. It works out to be: 340,282,366,920,938,463,463,374,607,431,768,211,456. Some intuitive appreciation for the order of magnitude can be grasped from the following comparison: if the earth's globe were made up entirely of fine beach sand, each grain could be given approximately 386,727 IPv6 addresses! (Assuming sand grains of 0.1 millimeter and an earth diameter of 7,926 miles.)
The IPv6 standard provides for three types of addresses: Unicast, Anycast and Multicast. A particular feature of IPv6 is that addresses are assigned to interfaces instead of nodes; it is assumed that a node may have more than one interface. A Unicast address identifies a single interface. An Anycast address is syntactically indistinguishable from a Unicast address but is assigned to multiple interfaces (typically belonging to different nodes); a packet sent to it is delivered to just one of the interfaces identified by that address, normally the nearest. A Multicast address identifies a set of interfaces, normally belonging to different nodes, and replaces "broadcast" addressing in IPv4. IPv6 addresses have a variable internal structure, of which the "address prefix" is explicitly defined. This address prefix, which has variable length, identifies the assignment of a block of address space to a specific category. Two of the three address types, Unicast and Multicast, can be identified by their prefixes, while Anycast addresses are taken from the Unicast address space. Space has also been reserved for other purposes such as IPX addresses and local-use addresses (analogous to the current use of the 10.0.0.0 Class A network address for private use). The following table [RFC 1884] shows the allocation of the address space:12
    Allocation                        Prefix          Fraction of
                                      (binary)        Address Space
    --------------------------------  -------------   -------------
    Reserved                          0000 0000       1/256
    Unassigned                        0000 0001       1/256
    Reserved for NSAP Allocation      0000 001        1/128
    Reserved for IPX Allocation       0000 010        1/128
    Unassigned                        0000 011        1/128
    Unassigned                        0000 1          1/32
    Unassigned                        0001            1/16
    Unassigned                        001             1/8
    Provider-Based Unicast Address    010             1/8
    Unassigned                        011             1/8
    Reserved for Geographic-
    Based Unicast Addresses           100             1/8
    Unassigned                        101             1/8
    Unassigned                        110             1/8
    Unassigned                        1110            1/16
    Unassigned                        1111 0          1/32
    Unassigned                        1111 10         1/64
    Unassigned                        1111 110        1/128
    Unassigned                        1111 1110 0     1/512
    Link Local Use Addresses          1111 1110 10    1/1024
    Site Local Use Addresses          1111 1110 11    1/1024
    Multicast Addresses               1111 1111       1/256
The assignment of variable-length address prefixes introduces a new element of topological hierarchy into IPv6 that was not spelled out in its ancestor, SIPP, but which is a logical extension of IPv4's CIDR technique. Of particular note are the two 1/8-of-total segments taken from the middle of the address space and reserved for the two types of Unicast addressing that will presumably constitute the bulk of "normal" IPv6 addressing: provider-based and geographic-based. Dividing the address space in this way and placing these two blocks in the middle means that the (presumably) most common address type (Unicast) can be identified by a router on examination of only the first three bits of the address.
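In other words, a router or host can classify an address simply by matching its leading bits against the allocation table. The following fragment is a rough sketch of that idea (the table is abridged and the classify function is illustrative, not taken from any RFC):

    # Longest-prefix classification against a few entries of the RFC 1884 table.
    PREFIXES = [
        ("1111 1111",    "Multicast"),
        ("1111 1110 10", "Link Local Use"),
        ("1111 1110 11", "Site Local Use"),
        ("010",          "Provider-Based Unicast"),
        ("100",          "Geographic-Based Unicast"),
        ("0000 0000",    "Reserved"),
    ]

    def classify(address):
        """Return the allocation whose prefix matches the 128-bit address."""
        bits = format(address, "0128b")
        for prefix, allocation in sorted(PREFIXES, key=lambda p: -len(p[0].replace(" ", ""))):
            if bits.startswith(prefix.replace(" ", "")):
                return allocation
        return "Unassigned"

    print(classify(0b010 << 125))    # Provider-Based Unicast
    print(classify(0xFF << 120))     # Multicast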
Much work remains to be done on developing more hierarchy for address allocation. RFC 1887, issued in December 1995, describes an architecture for the allocation of unicast addresses, in an attempt to reconcile the need for an efficient routing-structuring hierarchy with the Internet's decentralized control. This RFC takes the view that the Internet topology can generally be divided into "leaf routing domains" (essentially local sites) and "transit routing domains" (TRDs) (essentially providers and backbones).
The next step toward a hierarchical format for provider-based addresses is contained in a draft RFC, "An IPv6 Provider-based Unicast Format," circulated on April 9 and due to appear shortly as a standard. It proposes that address administration be organized into a three-level hierarchy -- registry, provider, and subscriber. Such an address would thus have a minimum of five levels of hierarchy, of which only the topmost (i.e., the provider-based unicast format prefix) is not directly related to topology. The proposal adds a second fixed-definition layer of hierarchy after the provider-based unicast prefix: a 5-bit Registry ID, enough to allow for 28 more registries in addition to the four that already exist. Each registry will handle address assignment in its own part of the world.
The proposed address format is:
    | 3 |  5 bits  |   n bits   |  56-n bits   |       64 bits      |
    +---+----------+------------+--------------+--------------------+
    |010|RegistryID| ProviderID | SubscriberID |  Intra-Subscriber  |
    +---+----------+------------+--------------+--------------------+
    Regional Registry          Value (binary)
    Multi-Regional (IANA)      10000
    RIPE NCC                   01000
    INTERNIC                   11000
    APNIC                      00100

All other values of the Registry ID are reserved by the IANA.14
64 bits are to be reserved for the Intra-Subscriber part of the address. It is anticipated that subscribers will use this space for autoconfiguration of devices, using the 48-bit IEEE 802 MAC (i.e., Ethernet) address as the interface ID and leaving 16 bits for a subnet field.
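Putting the pieces together, a provider-based unicast address under this draft could be assembled as in the sketch below. The 16/40 split of the 56 provider/subscriber bits, the field values and the helper function are all hypothetical, chosen only to make the bit arithmetic concrete:

    # Hypothetical packing of the draft provider-based unicast format.
    FORMAT_PREFIX = 0b010        # 3-bit format prefix (provider-based unicast)
    REGISTRY_RIPE = 0b01000      # 5-bit Registry ID for the RIPE NCC (table above)

    def provider_unicast(registry, provider, subscriber, subnet, mac48,
                         provider_bits=16, subscriber_bits=40):
        addr = FORMAT_PREFIX
        addr = (addr << 5) | registry
        addr = (addr << provider_bits) | provider
        addr = (addr << subscriber_bits) | subscriber
        addr = (addr << 16) | subnet     # intra-subscriber: subnet field
        addr = (addr << 48) | mac48      # intra-subscriber: IEEE 802 interface ID
        return addr                      # 3 + 5 + 16 + 40 + 16 + 48 = 128 bits

    example = provider_unicast(REGISTRY_RIPE, provider=7, subscriber=42,
                               subnet=1, mac48=0x00AA00123456)
    print(format(example, "032x"))       # the address as 32 hex digits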
The structure of the Provider ID and the Subscriber ID, however, including the dividing line between them, will be left up to local jurisdictions. National sub-registries could be created at the discretion of the regional registries. The RFC suggests that the 5-bit (regional) Registry ID be followed by an additional level of hierarchy consisting of a National-Registry ID field carved out of the Provider ID space. The size of this layer (if it exists) would be determined by the regional registry, based on the number of countries in the region and the number of providers expected in each country.
It is clear that the introduction of the gigantic address space defined by IPv6 will require much more hierarchy than the minimum two levels required by IPv4. Reconciling the need for topological significance and flexible hierarchical levels with the Internet's decentralized, self-organizing structure is a formidable challenge: one that the IPv6 group is well on the way to meeting.