Digital Subscriber Line (DSL) is a technology based on the idea that data transmitted over twisted-pair (POTS) lines does not need to be converted from digital to analog and then back to digital, which is how modems work. Instead, the data is transmitted across the lines as digital data, which allows the phone company to use a much wider bandwidth in transmitting the data.
The Digital Subscriber Line Access Multiplexer (DSLAM)
To interconnect multiple DSL users to a high-speed backbone network, the telephone company uses a Digital Subscriber Line Access Multiplexer (DSLAM). Typically, the DSLAM connects to an asynchronous transfer mode (ATM) network that can aggregate data transmission at gigabit data rates. At the other end of each transmission, a DSLAM demultiplexes the signals and forwards them to appropriate individual DSL connections.
Types of DSL
The variation called ADSL (Asymmetric Digital Subscriber Line) is the form of DSL that will become most familiar to home and small business users. ADSL is called "asymmetric" because most of its two-way or duplex bandwidth is devoted to the downstream direction, sending data to the user; only a small portion of bandwidth is available for upstream or user-interaction messages. However, most Internet traffic, and especially graphics- or multimedia-intensive Web data, needs lots of downstream bandwidth, while user requests and responses are small and require little upstream bandwidth. Using ADSL, up to 6.1 megabits per second of data can be sent downstream and up to 640 Kbps upstream. The high downstream bandwidth means that your telephone line will be able to bring motion video, audio, and 3-D images to your computer or hooked-in TV set. In addition, a small portion of the downstream bandwidth can be devoted to voice rather than data, so you can hold phone conversations without requiring a separate line.
To create multiple channels, ADSL modems divide the available bandwidth of a telephone line in one of two ways – Frequency Division Multiplexing (FDM) or Echo Cancellation. FDM assigns one band for upstream data and another band for downstream data. The downstream path is then divided by time division multiplexing into one or more high-speed channels and one or more low-speed channels. The upstream path is also multiplexed into corresponding low-speed channels. Echo Cancellation assigns the upstream band to overlap the downstream, and separates the two by means of local echo cancellation, a technique well known from V.32 and V.34 modems. With either technique, ADSL splits off a 4 kHz region for POTS at the DC end of the band.
An ADSL modem organizes the aggregate data stream created by multiplexing downstream channels, duplex channels, and maintenance channels together into blocks, and attaches an error correction code to each block. The receiver then corrects errors that occur during transmission, up to the limits implied by the code and the block length. The unit may, at the user's option, also create superblocks by interleaving data within subblocks; this allows the receiver to correct any combination of errors within a specific span of bits, and makes for effective transmission of both data and video signals alike.
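The interleaving idea can be sketched with a simple block interleaver (a toy model, not ADSL's actual interleaver): bytes are written row by row and read column by column, so a burst of line errors is spread across many FEC blocks, each of which sees only a correctable number of bad bytes. The function names here are illustrative.

```python
def interleave(data: bytes, depth: int) -> bytes:
    """Block-interleave: write row by row, read column by column."""
    # Pad so the data fills a depth x width rectangle.
    width = -(-len(data) // depth)
    padded = data.ljust(depth * width, b"\x00")
    rows = [padded[i * width:(i + 1) * width] for i in range(depth)]
    return bytes(rows[r][c] for c in range(width) for r in range(depth))

def deinterleave(data: bytes, depth: int) -> bytes:
    width = len(data) // depth
    cols = [data[i * depth:(i + 1) * depth] for i in range(width)]
    return bytes(cols[c][r] for r in range(depth) for c in range(width))

# A burst of consecutive errors on the line touches each row only once,
# so each FEC block has at most one corrupted byte to fix.
msg = bytes(range(16))
assert deinterleave(interleave(msg, 4), 4) == msg
```

A deeper interleave tolerates longer bursts, at the cost of added latency while the receiver waits to fill the rectangle.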
CDSL (Consumer DSL) is a trademarked version of DSL that is somewhat slower than ADSL (1 Mbps downstream, probably less upstream) but has the advantage that a "splitter" does not need to be installed at the user's end. Rockwell, which owns the technology and makes a chipset for it, believes that phone companies should be able to deliver it in the $40-45 a month price range. CDSL uses its own carrier technology rather than DMT or CAP ADSL technology.
G.Lite or DSL Lite
G.Lite (also known as DSL Lite, splitterless ADSL, and Universal ADSL) is essentially a slower ADSL that doesn't require splitting of the line at the user end but manages to split it for the user remotely at the telephone company. This saves the cost of what the phone companies call "the truck roll." G.Lite, officially ITU-T standard G.992.2, provides a data rate from 1.544 Mbps to 6 Mbps downstream and from 128 Kbps to 384 Kbps upstream. G.Lite is expected to become the most widely installed form of DSL.
The earliest variation of DSL to be widely used has been HDSL (High bit-rate DSL), which is used for wideband digital transmission within a corporate site and between the telephone company and a customer. The main characteristic of HDSL is that it is symmetrical: an equal amount of bandwidth is available in both directions. For this reason, the maximum data rate is lower than for ADSL. HDSL can carry as much on a single wire of twisted-pair as can be carried on a T1 line in North America (1.544 Mbps) or an E1 line in Europe (2.048 Mbps).
IDSL (ISDN DSL) is somewhat of a misnomer since it's really closer to ISDN data rates and service at 128 Kbps than to the much higher rates of ADSL. IDSL bonds both B channels and the single D channel of ISDN into a point-to-point style connection that provides 144 Kbps of bi-directional bandwidth.
RADSL (Rate-Adaptive DSL) is an ADSL technology from Westell in which software is able to determine the rate at which signals can be transmitted on a given customer phone line and adjust the delivery rate accordingly. Westell's FlexCap2 system uses RADSL to deliver from 640 Kbps to 2.2 Mbps downstream and from 272 Kbps to 1.088 Mbps upstream over an existing line.
SDSL (Single-line DSL) is apparently the same thing as HDSL with a single line, carrying 1.544 Mbps (U.S. and Canada) or 2.048 Mbps (Europe) each direction on a duplex line.
UDSL (Unidirectional DSL) is a proposal from a European company. It's a unidirectional version of HDSL.
VDSL (Very high data rate DSL) is a developing technology that promises much higher data rates over relatively short distances (between 51 and 55 Mbps over lines up to 1,000 feet or 300 meters in length). It's envisioned that VDSL may emerge somewhat after ADSL is widely deployed and co-exist with it. The transmission technology (CAP, DMT, or other) and its effectiveness in some environments is not yet determined. A number of standards organizations are working on it.
Factors Affecting the Experienced Data Rate
DSL has two factors that affect both its connectivity and bandwidth. Line quality is a measure of the signal-to-noise ratio on a given line: the amplitude of a signal on the line compared to the amplitude of the static present without a signal. The higher the ratio, the more bandwidth the line can carry. The other factor that affects signal transmission is distance from the central office. Given equal lines, the farther a customer is from the central office, the lower the bandwidth a given line can support. The maximum distance without repeaters that DSL can support is 18,000 feet. That is not to say that if you are beyond that distance you cannot have DSL; the telephone company can put the DSL signal onto optical connections to extend the distance.
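The link between signal-to-noise ratio, bandwidth, and achievable data rate can be illustrated with the Shannon capacity formula. The bandwidth and SNR figures below are illustrative assumptions, not measured ADSL values.

```python
import math

def shannon_capacity_bps(bandwidth_hz: float, snr_db: float) -> float:
    """Theoretical maximum bit rate: C = B * log2(1 + SNR)."""
    snr_linear = 10 ** (snr_db / 10)
    return bandwidth_hz * math.log2(1 + snr_linear)

# Illustrative figures only: roughly 1.1 MHz of usable ADSL spectrum.
# A short, clean loop might enjoy ~35 dB SNR; a long loop far from the
# central office might be down around ~15 dB.
print(shannon_capacity_bps(1.1e6, 35))  # a ceiling near 12.8 Mbps
print(shannon_capacity_bps(1.1e6, 15))  # a ceiling near 5.5 Mbps
```

The point is the shape of the relationship: halving the SNR in dB terms costs far more capacity than halving the bandwidth, which is why loop length dominates real-world DSL rates.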
DSL Type | Description | Data Rate Downstream/Upstream | Distance Limit
IDSL | ISDN Digital Subscriber Line | 128 Kbps | 18,000 feet on 24 gauge wire
CDSL | Consumer DSL from Rockwell | 1 Mbps downstream; less upstream | 18,000 feet on 24 gauge wire
DSL Lite (same as G.Lite) | "Splitterless" DSL without the "truck roll" | From 1.544 Mbps to 6 Mbps downstream, depending on the subscribed service | 18,000 feet on 24 gauge wire
G.Lite (same as DSL Lite) | "Splitterless" DSL without the "truck roll" | From 1.544 Mbps to 6 Mbps downstream, depending on the subscribed service | 18,000 feet on 24 gauge wire
HDSL | High bit-rate Digital Subscriber Line | 1.544 Mbps duplex on two twisted-pair lines; 2.048 Mbps duplex on three twisted-pair lines | 12,000 feet on 24 gauge wire
SDSL | Single-line Digital Subscriber Line | 1.544 Mbps duplex (U.S. and Canada); 2.048 Mbps duplex (Europe) on a single duplex line | 12,000 feet on 24 gauge wire
ADSL | Asymmetric Digital Subscriber Line | 1.544 to 6.1 Mbps downstream; 16 to 640 Kbps upstream | 1.544 Mbps at 18,000 feet; 2.048 Mbps at 16,000 feet; 6.312 Mbps at 12,000 feet; 8.448 Mbps at 9,000 feet
RADSL | Rate-Adaptive DSL from Westell | Adapted to the line: 640 Kbps to 2.2 Mbps downstream; 272 Kbps to 1.088 Mbps upstream | Not provided
UDSL | Unidirectional DSL proposed by a company in Europe | Not known | Not known
VDSL | Very high data rate Digital Subscriber Line | 12.9 to 52.8 Mbps downstream; 1.5 to 2.3 Mbps upstream | 4,500 feet at 12.96 Mbps; 3,000 feet at 25.82 Mbps; 1,000 feet at 51.84 Mbps
Integrated Services Digital Network (ISDN) (DS0) (BRI)
- Supports 3 digital channels across a local loop
- Two are digital 64Kbps bearer (B) channels
- One is a digital 16Kbps data (D) channel
- Always ready to use
- Terminal Adapters (TAs)
- Network-Termination Devices (NT1)
- Provides the physical interface between the four-wire local network that connects to devices and the two-wire local loop. Requires power.
- Line-Termination Equipment
- The physical component that terminates the line at the telephone company.
- Exchange-Termination Equipment
- The logical device that connects the line to the central office switch
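The BRI channel arithmetic is worth spelling out, since the 144 Kbps total is the same figure IDSL bonds into a single pipe:

```python
# ISDN BRI: two 64 Kbps bearer (B) channels plus one 16 Kbps data (D)
# channel on the local loop.
B_CHANNEL_KBPS = 64
D_CHANNEL_KBPS = 16

usable = 2 * B_CHANNEL_KBPS           # bearer capacity for voice/data
total = usable + D_CHANNEL_KBPS       # everything on the loop
print(usable, total)  # 128 144
```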
This refers to variations on implementation of the signaling protocols by different switch vendors.
Three switch types are commonly used in North America:
- National ISDN-1 (NI-1)
- AT&T 5ESS
- Nortel DMS-100
If the telephone company tells you the switch is a 5ESS or DMS-100, ask for the software type. If they say the switch uses National software, then use NI-1 as the switch type.
Centrex (central office exchange service) is a service from local telephone companies in the United States in which up-to-date phone facilities at the phone company's central (local) office are offered to business users so that they don't need to purchase their own facilities. The Centrex service effectively partitions part of its own centralized capabilities among its business customers. The customer is spared the expense of having to keep up with fast-moving technology changes (for example, having to continually update their private branch exchange infrastructure) and the phone company has a new set of services to sell.
In many cases, Centrex has now replaced the private branch exchange. Effectively, the central office has become a huge branch exchange for all of its local customers. In most cases, Centrex (which is sold by different names in different localities) provides customers with as much if not more control over the services they have than PBX did. In some cases, the phone company places Centrex equipment on the customer premises.
Typical Centrex service includes direct inward dialing (DID), sharing of the same system among multiple company locations, and self-managed line allocation and cost-accounting monitoring.
Call Connection Procedures:
T1/Primary Rate Interface (PRI) (DS1)
The T1 system was designed to carry 24 digitized telephone calls. Because a T1 line is full duplex (one twisted pair for each direction), its capacity can be viewed as 48 channels, 24 in each direction. The 24 channels of 64 Kbps each, plus an 8 Kbps framing channel, combine for a bandwidth of 1.544 Mbps. (On a PRI, one of the 24 channels becomes a 64 Kbps D channel for signaling, leaving 23 B channels.)
CSU/DSU (channel service unit/data service unit)
- Terminates the line from the telecommunications network. A 2 or 4 wire line is used for 56/64 Kbps service, while a 4 wire line is used for T1 service.
- Places the signal on the line and controls the strength of the transmission signal.
- Supports loopback tests.
- Provides timing or synchronizes the timing with the timing received on the line.
- Frames the T1 signal. (More on this in a bit)
A specialized DSU device is used to connect to a T3 line as a T3 uses coaxial cable for its transmission media.
Although we often think of these channels as flowing across the line together, the bits are actually transmitted across the line one at a time. One byte from the first call is sent, followed by one from the next, and so forth. This is done using time division multiplexing. Each call is assigned a 1-byte time slot. The transmitting device sends one byte for a channel each time the channel's time slot comes around.
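The round-robin byte interleaving above can be sketched in a few lines (hypothetical helper names; real T1 hardware does this in silicon, of course):

```python
def tdm_mux(channels: list[bytes]) -> bytes:
    """Byte-interleave: one byte per channel per time slot, round-robin."""
    return bytes(b for slot in zip(*channels) for b in slot)

def tdm_demux(stream: bytes, n: int) -> list[bytes]:
    """Recover each channel by taking every n-th byte of the stream."""
    return [stream[i::n] for i in range(n)]

calls = [b"AAAA", b"BBBB", b"CCCC"]   # three "calls", four samples each
line = tdm_mux(calls)
print(line)                            # b'ABCABCABCABC'
assert tdm_demux(line, 3) == calls
```

Each channel's byte appears in a fixed position within every frame, which is why a receiver only needs frame alignment (the framing bit) to pick its call back out of the stream.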
Twenty-four 64 Kbps channels plus one 8 Kbps signaling channel (the framing channel).
Can be used for voice calls or data or both.
Uses multiplexors and inverse multiplexors to combine channels as needed.
Multiplexors are typically part of the router or within the PBX.
A DS1 frame consists of a framing bit followed by 24 bytes, one for each of the 24 channels. Thus, a frame consists of 193 bits. Eight thousand frames are sent per second (the sampling rate used by the telephone company), giving a total signal rate of 1,544,000 bits per second. This signal is called digital signal level 1 (DS1). T1 and DS1 are often used interchangeably; strictly speaking, however, T1 is a physical implementation while DS1 defines the format of the signal transmitted on a T1 line.
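The DS1 arithmetic can be checked directly:

```python
CHANNELS = 24
BITS_PER_SLOT = 8
FRAMING_BITS = 1
FRAMES_PER_SECOND = 8000   # the telco voice sampling rate

frame_bits = CHANNELS * BITS_PER_SLOT + FRAMING_BITS    # 193 bits
ds1_rate = frame_bits * FRAMES_PER_SECOND               # 1,544,000 bps
channel_rate = BITS_PER_SLOT * FRAMES_PER_SECOND        # 64,000 bps each
framing_rate = FRAMING_BITS * FRAMES_PER_SECOND         # 8,000 bps
print(frame_bits, ds1_rate)  # 193 1544000
```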
Packaging the DS1 signal – D4 and ESF
The original packaging that was defined for a DS1 signal was called a D4 superframe, and consisted of 12 consecutive frames. The 12 framing (F) bits in a D4 superframe contained the pattern:
1 0 0 0 1 1 0 1 1 1 0 0
Telecommunications equipment locked into this pattern to locate D4 superframes and maintain alignment.
An improved extended superframe (ESF) was adopted later. It is made up of 24 consecutive frames. Its framing bits are used for three purposes:
- Alignment – Framing bits from six of the frames repeat the pattern 0 0 1 0 1 1. (This consumes 2Kbps.)
- Error Checking – Framing bits from six of the frames contain a cyclic redundancy check (CRC) computed on the previous extended superframe. (This consumes 2Kbps)
- A messaging link – Framing bits from 12 of the frames are used to form a messaging channel called a facility data link. (This consumes 4Kbps.)
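The framing-bit budget can be checked: one F bit per frame at 8,000 frames per second gives an 8 Kbps channel, which ESF splits across each 24-frame superframe exactly as described above.

```python
# ESF reuses the 8 Kbps framing-bit channel (one F bit per frame,
# 8,000 frames per second) for three purposes over each 24-frame
# extended superframe.
FRAMING_BIT_RATE = 8000  # bits per second

alignment = FRAMING_BIT_RATE * 6 // 24    # 6 of 24 F bits  -> 2,000 bps
crc       = FRAMING_BIT_RATE * 6 // 24    # 6 of 24 F bits  -> 2,000 bps
fdl       = FRAMING_BIT_RATE * 12 // 24   # 12 of 24 F bits -> 4,000 bps
assert alignment + crc + fdl == FRAMING_BIT_RATE
print(alignment, crc, fdl)  # 2000 2000 4000
```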
- Troubleshooting Serial Links http://www.cisco.com/univercd/cc/td/doc/cisintwk/itg_v1/itg_seri.htm
Telephone company testing
*Note: When a B8ZS code is injected into a test pattern that contains a long string of zeros, the pattern is no longer testing to the full consecutive zero requirement. Circuit elements, such as line repeaters, that are intended to operate with or without B8ZS should be tested without B8ZS.
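B8ZS itself can be sketched symbolically. The toy encoder below (hypothetical helper, using '+', '-', '0' for the AMI line symbols rather than real line driving) substitutes each run of eight zeros with the standard pattern containing two deliberate bipolar violations, which is how the receiver recognizes the substitution and restores the zeros.

```python
def b8zs_encode(symbols: str) -> str:
    """Replace each run of eight '0's with the B8ZS substitution.

    The substitution depends on the polarity of the last real pulse:
      preceding '+': 000+-0-+      preceding '-': 000-+0+-
    Positions 4 and 7 of the pattern are deliberate bipolar
    violations (same polarity as the pulse before them).
    """
    out = []
    last_pulse = "-"   # assumed initial polarity for this sketch
    i = 0
    while i < len(symbols):
        if symbols[i:i + 8] == "00000000":
            pattern = "000+-0-+" if last_pulse == "+" else "000-+0+-"
            out.append(pattern)
            last_pulse = pattern[-1]   # pattern ends with same polarity
            i += 8
        else:
            s = symbols[i]
            out.append(s)
            if s in "+-":
                last_pulse = s
            i += 1
    return "".join(out)

# Eight consecutive zeros after a '+' pulse get the substitution;
# the trailing pair of zeros is short enough to pass through untouched.
print(b8zs_encode("+0000000000"))  # +000+-0-+00
```

The substitution guarantees enough pulse density for repeaters to stay clocked, which is exactly why the note above warns that a B8ZS-coded test pattern no longer exercises the raw consecutive-zeros requirement.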
Consists of 672 64 Kbps channels, or 28 DS1s.
Verio typically uses these as a channelized T3 and sells each of the 28 lines as a separate T1.
Multiplexing Lower Level Signals into a DS3 Signal
Lower level signals can be multiplexed into a payload of a DS3 M-frame in a number of ways. For example, one way is to take the input of 28 complete DS1 signals and send these into a multiplexor. Each consists of 1.544 Mbps, and includes the T1 framing bits. These signals are byte interleaved into a DS3 signal at a multiplexor. This direct multiplexing scheme is called the synchronous DS3 M13 multiplex format.
An alternative way to pack the DS1s into a DS3 is to bit-interleave groups of four T1s into DS2 signals, and then bit-interleave seven DS2 signals into a DS3 signal. This is called the M23 multiplex format.
See T1/PRI troubleshooting above.
Switched Multimegabit Data Service (SMDS) is a connectionless high speed digital network service based on cell relay for end-to-end application usage. This allows for a logical progression to ATM if the need arises. Switched means it can be used to reach multiple destinations with a single physical connection. Originally rolled out in December 1991, SMDS allows transport of mixed data, voice, and video on the same network.
SMDS provides higher speeds (56 Kbps to 34 Mbps) than Frame Relay or ISDN and is a cross between Frame Relay and ATM. It uses the same 53-byte cell transmission technology as ATM but differs from Frame Relay in that destinations are dynamic (not predefined). This allows data to travel over the least congested route. However, it does provide some of the same benefits as Frame Relay, including:
- Protocol transparency
- Inexpensive meshed redundancy
- High speeds
There are 6 implementations of SMDS that currently exist (that I know of):
1) 1.17 Mbps SIP (SMDS Interface Protocol) - a special (T1) SMDSU (not a CSU/DSU) must be used. Common SMDSUs are Kentrox and Digital Link.
2) 1.536 Mbps DXI (a regular T1 CSU/DSU is used)
3) 4, 10, 16, 25, 34 Mbps - a special (T3) SMDSU is used. Common SMDSUs are Kentrox and Digital Link.
4) T3 DXI (45 Mbps) - I don't know much about this because B.A. doesn't sell it (to my knowledge). I do know that it does not use SIP and a normal T3 CSU/DSU is used.
5) ATM to SMDS (I don't know anything about this. Bell Atlantic is not selling this to my knowledge, however it exists.)
6) 64 Kbps SMDS - not very common.
You can probably skip over the DQDB section; it's not really necessary information, as I've only had the issue come up once and that was several years ago. As you'll see from the document, SIP is Layers 1, 2, and a little bit of 3.
There used to be an organization called the SMDS Interest Group (who I think wrote the SMDS standards), but the URL I had for their site is no longer valid and I cannot find a new one.
Simple router config (from ucsc1, but made simpler):
description Bell Atlantic SMDS CID: 3QCDQ650002
ip address 18.104.22.168 255.255.255.128 no ip redirects
no ip directed-broadcast
no ip proxy-arp ! THIS IS VERY IMPORTANT.. W/O IT, THE ROUTER ACTS AS A
! PROXY FOR ARP RESPONSES FOR OTHER ROUTERS
! AND GIVES ITS OWN HARDWARE ADDR INSTEAD OF THEIRS
smds address c121.5215.1279 ! Unique "single-cast" address assigned by telco
smds multicast ARP e101.2150.2129 22.214.171.124 255.255.255.128
smds multicast IP e101.2150.2129 126.96.36.199 255.255.255.128
! The multicast address e101... is assigned by telco
! and is used to "group" the circuits together.
! This is what makes ARP work. BOTH IP and ARP lines
! must be present in the config
smds enable-arp ! Tells the router to use ARP
crc 32 ! CRC is set by Telco. on the switch; either 16 or 32.
Customer router has the same config (T1's would be on a serial interface
though). For some reason in the past, I've had to use "no smds dxi-mode"
and that was with a DXI T1 (strange enough as it looks), otherwise, the
interface went up/down. I don't know if that was a bug or not...you might
want to check with the guys currently config'ing customer routers to see if
they have run into that problem more recently.
For more information about SMDS, check out Cisco's web site:
http://www.cisco.com/univercd/cc/td/doc/product/software/ios11/rbook/rsmds.htm
http://www.cisco.com/univercd/cc/td/doc/cisintwk/ito_doc/smds.htm
Layer 2 – Data Link Layer
Bridging and Switching
Bridges and switches are data communications devices that operate principally at Layer 2 of the OSI reference model. As such, they are widely referred to as data link layer devices. Bridging and switching occur at the link layer, which controls data flow, handles transmission errors, provides physical (as opposed to logical) addressing, and manages access to the physical medium. Bridges provide these functions by using various link-layer protocols that dictate specific flow control, error handling, addressing, and media-access algorithms. Examples of popular link-layer protocols include Ethernet, Token Ring, and FDDI.
Bridges and switches are not complicated devices. They analyze incoming frames, make forwarding decisions based on information contained in the frames, and forward the frames toward the destination. In some cases, such as source-route bridging, the entire path to the destination is contained in each frame. In other cases, such as transparent bridging, frames are forwarded one hop at a time toward the destination.
Upper-layer protocol transparency is a primary advantage of both bridging and switching. Because both device types operate at the link layer, they are not required to examine upper-layer information. This means that they can rapidly forward traffic representing any network-layer protocol.
Bridges are capable of filtering frames based on any Layer 2 fields. A bridge, for example, can be programmed to reject (not forward) all frames sourced from a particular network. Because link-layer information often includes a reference to an upper-layer protocol, bridges usually can filter on this parameter. Furthermore, filters can be helpful in dealing with unnecessary broadcast and multicast packets.
By dividing large networks into self-contained units, bridges and switches provide several advantages. Because only a certain percentage of traffic is forwarded, a bridge or switch diminishes the traffic experienced by devices on all connected segments. The bridge or switch will act as a firewall for some potentially damaging network errors, and both accommodate communication between a larger number of devices than would be supported on any single LAN connected to the bridge. Bridges and switches also extend the effective length of a LAN, permitting the attachment of distant stations that previously could not be connected.
Although bridges and switches share most relevant attributes, several distinctions differentiate these technologies. Switches are significantly faster because they switch in hardware, while bridges switch in software. Both can interconnect LANs of unlike bandwidth: a 10-Mbps Ethernet LAN and a 100-Mbps Ethernet LAN, for example, can be connected using a switch. Switches also can support higher port densities than bridges. Some switches support cut-through switching, which reduces latency and delays in the network, while bridges support only store-and-forward traffic switching. Finally, switches reduce collisions on network segments because they provide dedicated bandwidth to each network segment.
Frame relay is a packet-switching protocol based on X.25 and ISDN standards. Unlike X.25 however, which assumed low speed, error-prone lines and had to perform error correction, frame relay assumes error-free lines. By leaving the error correction and flow control functions to the end points (customer premise equipment), frame relay has lower overhead and can move variable-sized data packets at much higher rates.
Each location gains access to the frame relay network through a Frame Relay Access Device (FRAD). A router with frame relay capability is one example. The FRAD is connected to the nearest carrier point-of-presence (POP) through an access link, usually a leased line. A port on the edge switch provides entry into the frame relay network.
FRADs assemble the data to be sent between locations into variable-sized frame relay frames, like putting a letter in an envelope. Each frame contains the address of the target site, which is used to direct the frame through the network to its proper destination. Once the frame enters the shared network cloud or backbone, any number of networking technologies can be employed to carry it.
The path defined between the source and the destination sites is known as a virtual circuit. While a virtual circuit defines a path between two sites, no backbone bandwidth is actually allocated to that path until the devices need it. Frame relay supports both permanent and switched virtual circuits.
A Permanent Virtual Circuit (PVC) is a logical point-to-point circuit between sites through the public frame relay cloud. PVCs are permanent in that they are not set up and torn down with each session. They may exist for weeks, months or years, and have assigned end points which do not change. The PVC is available for transmitting and receiving all the time and, in that regard, is analogous to a leased line.
In contrast, a Switched Virtual Circuit (SVC) is analogous to a dial-up connection. It is a duplex circuit, established on demand, between two points. Existing only for the duration of the session, it is set up and torn down like a telephone call. FRADs which support SVCs perform the call establishment procedures. Currently, all public frame relay service providers offer PVCs, while only a very small number offer SVCs.
By supporting several PVCs simultaneously, frame relay can directly connect multiple sites, through a single physical connection. (In contrast, a leased line network would require multiple physical connections, one for each site.)
A Data Link Connection Identifier (DLCI), assigned by the service provider, identifies each PVC. A header in each frame contains the DLCI, indicating which virtual circuit the frame should use.
The real benefit of frame relay comes from its ability to dynamically allocate bandwidth and handle bursts of peak traffic. When a particular PVC is not using backbone bandwidth it is "up for grabs" by another.
When purchasing PVCs, the bandwidth or Committed Information Rate (CIR) must be specified. The CIR is the average throughput the carrier guarantees to be always available for a particular PVC.
A device can burst up to the Committed Burst Information Rate (CBIR) and still expect the data to get through. The duration of a burst transmission should be short, less than three or four seconds. If long bursts persist, then a higher CIR should be purchased.
Devices using the extra free bandwidth available do run a risk: any data beyond the CIR is eligible for discard, depending on network congestion. The greater the network congestion, the greater the risk that frames transmitted above the CIR will be lost. While the risk is typically very low up to the CBIR, if a frame is discarded it will have to be re-sent. Data can even be transmitted at rates higher than the CBIR, but doing this has the greatest risk of lost packets.
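The CIR/CBIR tiers can be summed up as a rough classification (hypothetical helper and numbers; real switches police per-interval byte counts, not instantaneous rates):

```python
def classify(rate_kbps: float, cir: float, cbir: float) -> str:
    """Rough fate of frames at a given sending rate (sketch only)."""
    if rate_kbps <= cir:
        return "guaranteed"            # within the committed rate
    if rate_kbps <= cbir:
        return "discard-eligible"      # usually delivered, DE bit set
    return "high risk of discard"      # beyond the committed burst rate

# Hypothetical PVC: 256 Kbps CIR with a 512 Kbps committed burst rate.
for r in (128, 384, 768):
    print(r, classify(r, cir=256, cbir=512))
```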
The frame relay network does try to police itself and keep congestion and thus packet loss down. It can do this in two ways. It can try to control the flow of packets with Forward Explicit Congestion Notification (FECN), which is a bit set in a packet to notify a receiving interface device that it should initiate congestion avoidance procedures. Backward Explicit Congestion Notification (BECN) is a bit set to notify a sending device to stop sending frames because congestion avoidance procedures are being initiated.
A second way to inform the end devices that there is congestion is through the Local Management Interface (LMI). This specification describes special management frames sent to access devices.
A Discard Eligibility Bit (DE bit) is set by the public frame relay network in frames the device is attempting to transmit above the CIR, or above the CBIR for any length of time. It will also be set if there is high network congestion. This means that if data must be discarded, frames with the DE bit set should be dropped before other frames.
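The FECN, BECN, and DE bits live in the standard two-byte frame relay address field alongside the DLCI; a minimal decoder sketch (hypothetical helper, FCS and extended address formats ignored):

```python
def parse_fr_header(b1: int, b2: int) -> dict:
    """Decode the standard two-byte frame relay (Q.922) address field."""
    return {
        "dlci": ((b1 >> 2) << 4) | (b2 >> 4),  # 10-bit circuit identifier
        "fecn": bool(b2 & 0x08),  # congestion seen on the forward path
        "becn": bool(b2 & 0x04),  # congestion on the reverse path
        "de":   bool(b2 & 0x02),  # discard eligible (sent above CIR)
    }

# DLCI 100 with the DE bit set: 100 = 0b0001100100,
# high 6 bits -> 000110 in byte 1, low 4 bits -> 0100 in byte 2.
print(parse_fr_header(0b00011000, 0b01000011))
```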
Notice that the network itself has no way to enforce congestion flow control. It is up to the end device to support and obey these codes. When all is said and done, the frame travels to its destination where it is disassembled by the receiving FRAD, and data is passed to the user.
There are major differences between frame relay and X.25 data networks. See the table below for a brief summary:
X.25 | Frame Relay
X.25 has both a link protocol and a packet protocol. | Frame relay is a simple data link protocol.
The link level of X.25 consists of a data link protocol called LAPB. | Frame relay provides basic data transfer that doesn't guarantee reliable delivery of data.
X.25 LAPB information frames are numbered and acknowledged. | Frame relay frames are not numbered or acknowledged.
Circuits are defined at the packet layer, which runs on top of the LAPB data link layer. Packets are numbered and acknowledged. | Circuits are identified by an address field in the frame header.
There are complex rules that govern the flow of data across an X.25 interface. These rules often interrupt and impede the flow of data. | Data is packaged into simple frame relay frames and transmitted toward its destination. Data can be sent across a frame relay network whenever there is bandwidth available to carry it.
ATM is a cell-switching and multiplexing technology that combines the benefits of circuit switching (guaranteed capacity and constant transmission delay) with those of packet switching (flexibility and efficiency for intermittent traffic). It provides scalable bandwidth from a few megabits per second (Mbps) to many gigabits per second (Gbps).
ATM is a layered architecture allowing multiple services like voice, data and video, to be mixed over the network. Three lower level layers have been defined to implement the features of ATM.
The Adaptation layer assures the appropriate service characteristics and divides all types of data into the 48 byte payload that will make up the ATM cell.
The ATM layer takes the data to be sent and adds the 5 byte header information that assures the cell is sent on the right connection.
The Physical layer defines the electrical characteristics and network interfaces. This layer "puts the bits on the wire." ATM is not tied to a specific type of physical transport.
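The fixed cell size can be sketched from the adaptation layer's point of view (an illustrative fragment only; real AALs add their own trailers and padding rules):

```python
HEADER_BYTES = 5
PAYLOAD_BYTES = 48
CELL_BYTES = HEADER_BYTES + PAYLOAD_BYTES   # always 53 bytes

def segment(data: bytes) -> list[bytes]:
    """Adaptation-layer view: chop data into fixed 48-byte payloads."""
    return [data[i:i + PAYLOAD_BYTES].ljust(PAYLOAD_BYTES, b"\x00")
            for i in range(0, len(data), PAYLOAD_BYTES)]

cells = segment(b"x" * 100)   # 100 bytes of data -> 3 padded payloads
print(len(cells), all(len(c) == PAYLOAD_BYTES for c in cells))  # 3 True

overhead = HEADER_BYTES / CELL_BYTES   # ~9.4% header "cell tax"
```

The fixed size is the design choice that makes hardware switching and predictable delay possible: every queue in every switch handles identical 53-byte units.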
Three types of ATM services exist: permanent virtual circuits (PVC), switched virtual circuits (SVC), and connectionless service (which is similar to SMDS).
A PVC allows direct connectivity between sites. In this way, a PVC is similar to a leased line. Among its advantages, a PVC guarantees availability of a connection and does not require call setup procedures between switches. Disadvantages of PVCs include static connectivity and manual setup.
An SVC is created and released dynamically and remains in use only as long as data is being transferred. In this sense, it is similar to a telephone call. Dynamic call control requires a signaling protocol between the ATM endpoint and the ATM switch. The advantages of SVCs include connection flexibility and call setup that can be handled automatically by a networking device. Disadvantages include the extra time and overhead required to set up the connection.
ATM networks are fundamentally connection oriented, which means that a virtual channel (VC) must be set up across the ATM network prior to any data transfer. (A virtual channel is roughly equivalent to a virtual circuit.)
Two types of ATM connections exist: virtual paths, which are identified by virtual path identifiers, and virtual channels, which are identified by the combination of a VPI and a virtual channel identifier (VCI).
A virtual path is a bundle of virtual channels, all of which are switched transparently across the ATM network on the basis of the common VPI. All VCIs and VPIs, however, have only local significance across a particular link and are remapped, as appropriate, at each switch.
Thus, ATM uses a "cloud" system almost exactly like that of frame relay. Unlike frame relay, however, ATM uses a 53-byte fixed cell length, DLCIs are replaced by VPIs and VCIs, and ATM offers a guaranteed service level.
One of the oldest data communications protocols in use today is IBM’s Synchronous Data Link Control (SDLC). SDLC defined rules for transmitting data across a digital line and was used for long distance communications between terminals and computers. IBM submitted SDLC to the standards organizations and they revised it and generalized it into the High Level Data Link Control (HDLC) protocol. HDLC is the basis of a family of related protocols.
- Link Access Procedure on the D-channel (LAPD) – Used with ISDN
- Link Access Protocol Balanced (LAPB) – Used with X.25
- Link Access Procedures to Frame-Mode Bearer (LAPF) – used with Frame Relay
- Point-to-Point (PPP) – Used for general communications access across wide area lines.
LAPD Protocol - Belongs to the High-level Data Link Control (HDLC) family of protocols
Three types of HDLC/LAPD frames:
- Information Frames – Used to carry the ISDN signaling messages.
  - I – Carries information.
- Supervisory Frames – Used for acknowledgements and flow control, and to report an out-of-sequence frame.
  - RR – Receive ready. Indicates a ready state and acknowledges data.
  - RNR – Receive not ready. Indicates a busy state and acknowledges data.
  - REJ – Reject. Indicates that one or more frames need to be retransmitted.
- Unnumbered Frames – Used to initiate and terminate a LAPD link, to negotiate parameters, and to report errors.
  - SABME – Set asynchronous balanced mode extended. Initiates a link.
  - UA – Unnumbered acknowledgment of link setup or termination.
  - DISC – Disconnect. Terminates a link.
  - DM – Disconnect mode. Refuses a SABME, or simply announces a disconnected state.
  - FRMR – Frame reject. Announces a non-recoverable error. The link must be reset.
  - XID – Exchange identification. Used to negotiate parameters when a link is established.
PPP – Point-to-Point Protocol
The Point-to-Point Protocol (PPP) originally emerged as an encapsulation protocol for transporting IP traffic over point-to-point links. PPP also established a standard for the assignment and management of IP addresses, asynchronous (start/stop) and bit-oriented synchronous encapsulation, network protocol multiplexing, link configuration, link quality testing, error detection, and option negotiation for such capabilities as network-layer address negotiation and data-compression negotiation. PPP supports these functions by providing an extensible Link Control Protocol (LCP) and a family of Network Control Protocols (NCPs) to negotiate optional configuration parameters and facilities. In addition to IP, PPP supports other protocols, including Novell's Internetwork Packet Exchange (IPX) and DECnet.
PPP is capable of operating across any DTE/DCE interface. The only absolute requirement imposed by PPP is the provision of a duplex circuit, either dedicated or switched, that can operate in either an asynchronous or synchronous bit-serial mode, transparent to PPP link-layer frames. PPP does not impose any restrictions regarding transmission rate other than those imposed by the particular DTE/DCE interface in use.
PPP provides a method for transmitting datagrams over serial point-to-point links. PPP contains three main components:
- A method for encapsulating datagrams over serial links---PPP uses the High-Level Data Link Control (HDLC) protocol as a basis for encapsulating datagrams over point-to-point links.
- An extensible LCP to establish, configure, and test the data-link connection.
- A family of NCPs (Network Control Protocols) for establishing and configuring different network-layer protocols---PPP is designed to allow the simultaneous use of multiple network-layer protocols.
To establish communications over a point-to-point link, the originating PPP first sends LCP frames to configure and (optionally) test the data-link. After the link has been established and optional facilities have been negotiated as needed by the LCP, the originating PPP sends NCP frames to choose and configure one or more network-layer protocols. When each of the chosen network-layer protocols has been configured, packets from each network-layer protocol can be sent over the link. The link will remain configured for communications until explicit LCP or NCP frames close the link, or until some external event occurs (for example, an inactivity timer expires or a user intervenes).
The following descriptions summarize the PPP frame fields illustrated in Figure 13-1:
- Flag ---A single byte that indicates the beginning or end of a frame. The flag field consists of the binary sequence 01111110.
- Address ---A single byte that contains the binary sequence 11111111, the standard broadcast address. PPP does not assign individual station addresses.
- Control ---A single byte that contains the binary sequence 00000011, which calls for transmission of user data in an unsequenced frame. A connectionless link service similar to that of Logical Link Control (LLC) Type 1 is provided.
- Protocol ---Two bytes that identify the protocol encapsulated in the information field of the frame. The most up-to-date values of the protocol field are specified in the most recent Assigned Numbers Request for Comments (RFC).
- Data ---Zero or more bytes that contain the datagram for the protocol specified in the protocol field. The end of the information field is found by locating the closing flag sequence and allowing 2 bytes for the FCS field. The default maximum length of the information field is 1,500 bytes. By prior agreement, consenting PPP implementations can use other values for the maximum information field length.
- Frame Check Sequence (FCS)---Normally 16 bits (2 bytes). By prior agreement, consenting PPP implementations can use a 32-bit (4-byte) FCS for improved error detection.
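Putting these fields together, a PPP frame can be assembled and checked in a few lines. The sketch below assumes the default 16-bit FCS as computed in RFC 1662 and omits byte stuffing for brevity; the function names are illustrative.

```python
# Illustrative sketch of PPP framing with the default 16-bit FCS (RFC 1662).
# Byte stuffing / transparency is omitted for brevity.

PPP_FLAG = 0x7E   # 01111110 frame delimiter
PPP_ADDR = 0xFF   # 11111111 all-stations address
PPP_CTRL = 0x03   # 00000011 unnumbered information

def _crc16(data: bytes) -> int:
    """Running HDLC CRC (x^16 + x^12 + x^5 + 1, bit-reflected)."""
    fcs = 0xFFFF
    for b in data:
        fcs ^= b
        for _ in range(8):
            fcs = (fcs >> 1) ^ 0x8408 if fcs & 1 else fcs >> 1
    return fcs

def build_frame(protocol: int, payload: bytes) -> bytes:
    """Wrap a datagram in the address, control, protocol, and FCS fields."""
    body = bytes([PPP_ADDR, PPP_CTRL]) + protocol.to_bytes(2, "big") + payload
    fcs = _crc16(body) ^ 0xFFFF          # ones complement of the running CRC
    # The FCS is transmitted least-significant byte first
    return bytes([PPP_FLAG]) + body + fcs.to_bytes(2, "little") + bytes([PPP_FLAG])

def frame_ok(frame: bytes) -> bool:
    """The CRC of a good frame's body plus FCS equals the constant 0xF0B8."""
    return _crc16(frame[1:-1]) == 0xF0B8
```

Here `build_frame(0x0021, payload)` would carry an IP datagram (0x0021 is the assigned protocol-field value for IPv4), and flipping any single bit in the result makes `frame_ok` fail.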
PPP Link-Control Protocol
The PPP LCP provides a method of establishing, configuring, maintaining, and terminating the point-to-point connection. LCP goes through four distinct phases:
- First, link establishment and configuration negotiation occurs. Before any network-layer datagrams (for example, IP) can be exchanged, LCP first must open the connection and negotiate configuration parameters. This phase is complete when a configuration-acknowledgment frame has been both sent and received.
- This is followed by an optional link-quality determination phase. In this phase, the link is tested to determine whether its quality is sufficient to bring up network-layer protocols, and LCP can delay transmission of network-layer protocol information until the phase is complete.
- At this point, network-layer protocol configuration negotiation occurs. After LCP has finished the link-quality determination phase, network-layer protocols can be configured separately by the appropriate NCP and can be brought up and taken down at any time. If LCP closes the link, it informs the network-layer protocols so that they can take appropriate action.
- Finally, link termination occurs. LCP can terminate the link at any time. This usually will be done at the request of a user but can happen because of a physical event, such as the loss of carrier or the expiration of an idle-period timer.
Three classes of LCP frames exist. Link-establishment frames are used to establish and configure a link. Link-termination frames are used to terminate a link, while link-maintenance frames are used to manage and debug a link. These frames are used to accomplish the work of each of the LCP phases.
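The four phases can be pictured as a small state machine. The sketch below is a deliberate simplification (the real LCP automaton in RFC 1661 defines many more states and events), and the class and method names are invented for illustration:

```python
# Hypothetical sketch of the four LCP phases as a tiny state machine.
# Real LCP (RFC 1661) defines a richer automaton; names here are illustrative.
from enum import Enum, auto

class LcpPhase(Enum):
    DEAD = auto()        # no link
    ESTABLISH = auto()   # configuration negotiation via configure frames
    QUALITY = auto()     # optional link-quality determination
    NETWORK = auto()     # NCPs bring network-layer protocols up and down
    TERMINATE = auto()   # user request, carrier loss, or idle timeout

class LcpLink:
    def __init__(self, test_quality: bool = False):
        self.test_quality = test_quality
        self.phase = LcpPhase.DEAD

    def up(self) -> None:
        # Phase 1: exchange configuration frames until acknowledged
        self.phase = LcpPhase.ESTABLISH
        if self.test_quality:
            # Phase 2 (optional): hold off the NCPs until quality is judged adequate
            self.phase = LcpPhase.QUALITY
        # Phase 3: hand the link to the NCPs for network-layer configuration
        self.phase = LcpPhase.NETWORK

    def close(self) -> None:
        # Phase 4: termination, then back to the dead state
        self.phase = LcpPhase.TERMINATE
        self.phase = LcpPhase.DEAD
```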
- PPP Multilink Protocol (MP)
- Allows two or more channels to be bonded at call establishment into a single logical pipe for data.
- Bandwidth Allocation Protocol (BAP)
- Allows links to be added to or dropped from a multilink bundle, so that bandwidth changes dynamically with demand.
- Bandwidth Allocation Control Protocol (BACP)
- The companion control protocol, used between peers (including equipment from different vendors), that negotiates how those dynamic bandwidth changes are managed.