September 21, 2011 | WireIE Holdings International | Content Marketing | Author
Ethernet has been in a state of perpetual evolution since its inception – with significant accommodation for backwards compatibility thanks to frame structure standardization. While exponential increases in throughput are perhaps most noteworthy, Ethernet has also seen improvements in the flexibility of its Media Access Control (MAC) mechanisms at Layer 2. A number of physical (PHY) layer developments have followed as well, not the least of which is the increased breadth of transmission media choices for an Ethernet network.
StarLAN was the first implementation of Ethernet over twisted pair copper wire. Known as 1BASE5 and developed by the IEEE as 802.3e in the mid-1980s, StarLAN ran at speeds of up to 1 Mbit/s. In light of the circuit-switched, voice orientation of networks at that point, developers of 1BASE5 wanted to reuse previously installed telephony cabling (PBX and/or key systems), thus minimizing the need to rewire office buildings and other enterprises. As the name implies, StarLAN was built around a hub-and-spoke topology – a direct emulation of the circuit-switched voice systems dominant at the time.
10BASE-T and Beyond
Introduced in the early 1990s, 10BASE-T supported up to 10 Mbit/s over two pairs of 4-pair (8-conductor) twisted copper cable terminated on the now universally recognized RJ-45 modular connector. Both half and full duplex are supported, as is the case with 100BASE-T (100 Mbit/s) and 1000BASE-T at 1 Gbit/s (GigE). More than evolutionary, 10BASE-T arguably ushered in the broad adoption of LANs in the business environment.
Before 10BASE-T, Ethernet was delivered over a shared coaxial cable (10BASE5 and later 10BASE2) in a bus topology, emulating a data radio network environment not unlike AlohaNet (described in the previous post) – thus the "Ether" in Ethernet. CSMA/CD played an essential role in managing channel contention resulting from packet collisions. Topologically, it was impractical to segment the network, and as such any number of single points of failure could bring down the entire network.
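How CSMA/CD manages contention can be sketched in a few lines. The following is an illustrative Python sketch (not from the article) of the truncated binary exponential backoff used on shared-medium Ethernet: after the n-th collision, a station waits a random number of slot times drawn from [0, 2^min(n, 10) − 1], and gives up after 16 attempts. The function name and structure are this sketch's own; the slot time and limits are the standard 10 Mbit/s parameters.

```python
import random

SLOT_TIME_US = 51.2   # slot time for 10 Mbit/s Ethernet, in microseconds
MAX_ATTEMPTS = 16     # after 16 collisions the frame is abandoned

def backoff_delay_us(collision_count):
    """Return the random wait (in microseconds) after the n-th collision."""
    if collision_count >= MAX_ATTEMPTS:
        raise RuntimeError("excessive collisions: frame dropped")
    exponent = min(collision_count, 10)           # backoff range is capped at 2^10
    slots = random.randint(0, 2 ** exponent - 1)  # pick a random slot count
    return slots * SLOT_TIME_US
```

The randomness is what breaks the tie: two stations that collided are unlikely to pick the same delay again, and the widening range adapts the network to heavier contention.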
There were inefficiencies inherent in early Ethernet. Since a single coaxial cable carried all network communication (much as in AlohaNet's shared radio channel), information sent by one device would be received by all devices on the network. It was the job of the station's interface hardware, attached via the Attachment Unit Interface (AUI) – essentially a forerunner of the integrated Network Interface Card (NIC) – to reject all traffic other than that intended for the device it was connected to. Also, by confining all network traffic to a single shared cable, bandwidth could be quickly exhausted. Exacerbating the finite bandwidth was the broadcast nature of the medium, wherein all stations were sent all data regardless of whether it was intended for them. Finally, while elegant, CSMA/CD by its very nature has an impact on channel efficiency.
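The filtering described above reduces to a simple rule applied to every frame a station sees. Here is a minimal Python sketch (the addresses and dictionary-based frame layout are this sketch's own, purely for illustration): keep a frame only if its destination MAC matches the station's own address or the broadcast address.

```python
BROADCAST = "ff:ff:ff:ff:ff:ff"

def accept_frame(frame, my_mac):
    """Accept a frame only if it is unicast to this station or broadcast."""
    dst = frame["dst"]
    return dst == my_mac or dst == BROADCAST

# Every station on the shared cable receives every frame...
station = "00:1a:2b:3c:4d:5e"
frames = [
    {"dst": station,               "payload": "for us"},
    {"dst": "00:99:88:77:66:55",   "payload": "for someone else"},
    {"dst": BROADCAST,             "payload": "for everyone"},
]

# ...but keeps only the traffic addressed to it.
kept = [f["payload"] for f in frames if accept_frame(f, station)]
# kept == ["for us", "for everyone"]
```

Note that every station still spends receive bandwidth on every frame; the filtering only discards traffic after it has already consumed the shared channel, which is exactly the inefficiency the paragraph describes.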
As 10BASE-T hubs and bridges matured, the concept of Switched Ethernet developed. Switched Ethernet is significant in that it borrowed the source of Token Ring's once superior performance: one session (i.e., two network devices) gets all the LAN bandwidth for a given instant, as opposed to sharing bandwidth as was the case with the broadcast model. From the switch's point of view, the only device on each segment is the end station's network interface (NIC). The switch's intelligence is dedicated to managing frame delivery over the appropriate segment – often hundreds or thousands of segments concurrently.
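The core of that switch intelligence is a learning/forwarding table. The following Python sketch (class and method names are this sketch's own, not any vendor's API) shows the basic behaviour: the switch notes which port each source MAC was seen on, then forwards a frame only to the port that owns its destination, flooding to all other ports when the destination is still unknown.

```python
class LearningSwitch:
    """Minimal sketch of a transparent learning switch's forwarding logic."""

    def __init__(self, num_ports):
        self.num_ports = num_ports
        self.table = {}  # source MAC address -> port it was last seen on

    def handle_frame(self, src, dst, in_port):
        """Learn the source, then return the list of output ports."""
        self.table[src] = in_port                    # learn/refresh the source
        if dst in self.table:
            out = self.table[dst]
            return [] if out == in_port else [out]   # deliver on one segment only
        # Unknown destination: flood to every port except the ingress one.
        return [p for p in range(self.num_ports) if p != in_port]
```

Once both stations have spoken, each frame crosses only its own segment, so every port pair can carry traffic at full rate simultaneously – the bandwidth gain the paragraph attributes to Switched Ethernet. (A real switch also ages out table entries; that is omitted here.)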
The Journey Continues
Ethernet has earned its universal adoption in the enterprise because of its speed, reliability, flexibility, uniformity and operational simplicity. The journey to ubiquitous Ethernet is advancing rapidly with Carrier Ethernet solutions such as WireIE’s Transparent Ethernet Solutions™ leading the way.