Optical Burst Switching
Optical burst switching (OBS) is a switching concept that lies between optical circuit switching and optical packet switching. First, a dynamic optical network is provided by the interconnection of optical cross-connects (OXCs). These OXCs usually consist of switches based on 2D or 3D micro-electro-mechanical systems (MEMS) mirrors, which reflect light arriving at an incoming port towards a particular outgoing port. The granularity of this type of switching is at the fibre, waveband (a band of wavelengths) or wavelength level; the finest granularity offered by an OXC is a single wavelength. This type of switching is therefore appropriate for provisioning lightpaths from one node to another for different clients and services, e.g. SDH (Synchronous Digital Hierarchy) circuits.
Optical switching enables routing of optical data signals without conversion to electrical signals and is therefore independent of data rate and data protocol. Optical burst switching is an attempt at a new synthesis of optical and electronic technologies: it seeks to exploit the tremendous bandwidth of optical transmission while using electronics for management and control.
In an OBS network the incoming IP traffic is first assembled into bigger entities called bursts. Bursts, being substantially bigger than IP packets, are easier to switch with relatively small overhead. When a burst is ready, a reservation request is sent into the core network. Transmission and switching resources for each burst are reserved according to a one-pass reservation scheme, i.e. the data is sent shortly after the reservation request without waiting for an acknowledgement of successful reservation.
The reservation request (control packet) is sent on a dedicated wavelength some offset time prior to the transmission of the data burst. This basic offset has to be large enough to electronically process the control packet and set up the switching matrix for the data burst in every node along the path. By the time a data burst arrives at a node, the switching matrix has already been set up, i.e. the burst stays in the optical domain. The reservation request is analysed in each core node, the routing decision is made, and the request is forwarded to the next node. When the burst reaches its destination node it is disassembled and the resulting IP packets are sent on to their respective destinations.
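To make the one-pass idea concrete, here is a minimal sketch of how such a basic offset could be computed from the per-hop header processing time and the switch setup time. The function name and the numbers are illustrative assumptions, not part of any OBS standard.

```python
# Illustrative sketch of the basic offset used in one-pass OBS reservation.
# All names and numbers here are assumptions for illustration only.

def basic_offset(hop_count, per_hop_processing_s, switch_setup_s):
    """Offset the control packet needs ahead of the data burst:
    enough time for every core node to process the header electronically,
    plus time for the last switch to configure its matrix."""
    return hop_count * per_hop_processing_s + switch_setup_s

# Example: 5 core hops, 10 us of header processing per node,
# and a (fast) 20 us switch setup time.
offset = basic_offset(hop_count=5, per_hop_processing_s=10e-6, switch_setup_s=20e-6)
print(f"send the burst {offset * 1e6:.0f} us after the control packet")  # 70 us
```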
The benefit of OBS over circuit switching is that there is no need to dedicate a wavelength for each end-to-end connection. OBS is more viable than optical packet switching because the burst data does not need to be buffered or processed at the cross-connect.
Advantages
* Greater transport channel capacity
* No O-E-O conversion
* Cost effective
Disadvantages
* Bursts may be dropped in case of contention
* Lack of effective technology
Optical burst switching operates at the sub-wavelength level and is designed to improve the utilisation of wavelengths by rapid setup and teardown of the wavelength/lightpath for incoming bursts. In OBS, incoming traffic from clients at the edge of the network is aggregated at the ingress according to a particular parameter, commonly destination, type of service (the TOS byte), class of service or quality of service (e.g. profiled DiffServ code points). At the OBS edge router, different queues therefore represent the various destinations or classes of service. Based on the assembly/aggregation algorithm, packets are assembled into bursts using either a timer-based or a threshold-based scheme; in some implementations, aggregation is based on a hybrid of timer and threshold, as in the sketch below. The burst created from this aggregation of packets is the granularity handled in OBS.
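A minimal sketch of such a hybrid timer/threshold assembler, assuming one queue per destination, might look like the following; the thresholds, timer value and data structures are illustrative, not a specific published algorithm.

```python
import time

# Minimal sketch of hybrid timer/threshold burst assembly at an OBS edge router.
# The thresholds, timer value and per-destination queue model are assumptions.

MAX_BURST_BYTES = 64_000     # size threshold: release the burst once it is this big
MAX_ASSEMBLY_TIME_S = 0.005  # timer threshold: or once the oldest packet is 5 ms old

class BurstAssembler:
    def __init__(self):
        self.queues = {}         # one packet queue per destination (or class of service)
        self.first_arrival = {}  # arrival time of the oldest packet in each queue

    def packet_arrival(self, destination, packet_bytes, now=None):
        """Queue an incoming packet and return a burst descriptor if one is released."""
        now = time.monotonic() if now is None else now
        queue = self.queues.setdefault(destination, [])
        if not queue:
            self.first_arrival[destination] = now
        queue.append(packet_bytes)
        return self._maybe_release(destination, now)

    def _maybe_release(self, destination, now):
        queue = self.queues[destination]
        size = sum(queue)
        age = now - self.first_arrival[destination]
        # A real assembler would also fire on a timer even with no new arrivals;
        # here the check happens only when a packet arrives, to keep the sketch short.
        if size >= MAX_BURST_BYTES or age >= MAX_ASSEMBLY_TIME_S:
            self.queues[destination] = []
            return {"destination": destination, "length_bytes": size,
                    "packets": len(queue)}
        return None
```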
Also important in OBS is the fact that the required electrical processing is decoupled from the optical data path. The burst header generated at the edge of the network is sent on a separate control channel, which can be a designated out-of-band control wavelength. At each switch the control channel is converted to the electrical domain for electrical processing of the header information. The header precedes the burst by a set amount known as the offset time, giving the switch enough time to make resources available prior to the arrival of the burst. Different reservation protocols have been proposed and their efficacy studied in numerous research publications; the signalling and reservation protocols depend on the network architecture, node capability, network topology and level of network connectivity. The reservation process has implications for the performance of OBS because of the buffering requirements at the edge, and the one-way signalling paradigm introduces a higher level of blocking in the network since connections are not usually guaranteed prior to burst release. Again, numerous proposals have sought to address these issues.
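As one example of what that per-node header processing could look like, the sketch below keeps a simple "horizon" (the time each wavelength is booked until), which is one of the simpler reservation rules discussed in the OBS literature; the class and parameter names are my own.

```python
# Sketch of a core node handling a burst header received on the control channel.
# The "horizon" rule (remember only when each wavelength next becomes free) is one
# simple reservation scheme from the OBS literature; all names here are illustrative.

class CoreNodeScheduler:
    def __init__(self, wavelengths):
        self.free_at = {w: 0.0 for w in wavelengths}   # time each wavelength is free again

    def process_header(self, header_arrival_time, offset, burst_duration):
        burst_start = header_arrival_time + offset     # the data burst trails the header
        burst_end = burst_start + burst_duration
        for wavelength, free_time in self.free_at.items():
            if free_time <= burst_start:               # idle when the burst will arrive
                self.free_at[wavelength] = burst_end   # reserve it, pre-set the switch matrix
                return wavelength
        return None                                    # nothing free: the burst will be lost
```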
Optical burst switching has many flavours determined by the currently available technologies, such as the switching speed of available core optical switches. Most optical cross-connects have switching times of the order of milliseconds but require tens of milliseconds to set up the switch and perform switching. New switch architectures and faster switches, with switching times of the order of micro- or nanoseconds, can help to reduce the path-setup overhead. Similarly, control-plane signalling and reservation protocols implemented in hardware can help to speed up processing times by several clock cycles.
The initial phase of introducing optical burst switching would be based on an acknowledged reservation protocol, i.e. two-way signalling. After the burstification process, bursts for a particular destination are mapped to a wavelength based on a forwarding table. As a burst requests a path across the network, the request is sent on the control channel; at each switch, if the wavelength can be switched, the path is set up and an acknowledgement is sent back to the ingress. The burst is then transmitted. Under this concept, the burst is held electronically at the edge and the bandwidth and path are guaranteed prior to transmission, which reduces the number of bursts dropped. The effects of dropping bursts can be detrimental to a network, as each burst is an amalgamation of IP packets which could be carrying keepalive messages between IP routers; if these are lost, the IP routers would be forced to retransmit and reconverge.
Under the GMPLS control plane, forwarding tables are used to map the bursts, and the 'PATH' and 'RESV' messages of RSVP-TE (from the MPLS, Multiprotocol Label Switching, signalling family) are used for requesting a path and confirming setup respectively. This is a two-way signalling process which can be inefficient in terms of network utilisation. However, for increasingly bursty traffic, conventional OBS is the preferred choice.
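The contrast between the two signalling styles can be summarised in a toy sketch; the helper names below (send_control, wait_for_ack and so on) are hypothetical, and the messages only mirror the PATH/RESV idea rather than the real GMPLS/RSVP-TE machinery.

```python
# Toy contrast between two-way (acknowledged) setup and one-way OBS signalling.
# edge.send_control(), edge.wait_for_ack() etc. are hypothetical helpers, not a real API.

def two_way_setup(edge, burst, route):
    edge.hold(burst)                            # burst waits in electronic buffers at the edge
    request = edge.send_control("PATH-like request", route)
    if edge.wait_for_ack(request):              # every node confirmed ("RESV-like" reply)
        edge.transmit(burst)                    # resources are guaranteed, so no loss in the core
        return True
    return False                                # setup refused: retry or reroute

def one_way_setup(edge, burst, route, offset):
    edge.send_control("burst header", route)    # header sets up the switches as it goes
    edge.wait(offset)                           # wait only the offset time, no acknowledgement
    edge.transmit(burst)                        # may be dropped inside the core on contention
```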
Under this conventional OBS, the one-way signalling concept mentioned previously is used. The idea is to hold the burst at the edge for an offset period while the control header traverses the network setting up the switches; the burst then follows immediately, without confirmation of path setup. There is an increased likelihood of bursts being dropped, but contention-resolution mechanisms can be used to make alternative resources available to a burst when a switch is blocked (i.e. the incoming or outgoing switch port is being used by another burst). One example is deflection routing, where a blocked burst is routed over an alternative port; if the burst must instead wait until the required port becomes available, optical buffering is needed, which is implemented mainly with fibre delay lines.
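Deflection routing and the fibre-delay-line fallback can be expressed as a simple port-selection rule; the port lists and the order of fallbacks below are illustrative assumptions.

```python
# Rough sketch of contention resolution at a core switch: prefer the intended output
# port, deflect to an alternative, fall back to a fibre delay line (FDL), else drop.
# Port naming and the order of fallbacks are illustrative assumptions.

def resolve_contention(preferred_port, alternate_ports, busy_ports, fdl_available):
    if preferred_port not in busy_ports:
        return ("switch", preferred_port)   # no contention: use the intended port
    for port in alternate_ports:
        if port not in busy_ports:
            return ("deflect", port)        # deflection routing onto a free alternative
    if fdl_available:
        return ("delay", preferred_port)    # buffer the burst briefly in an FDL
    return ("drop", None)                   # otherwise the burst is lost
```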
One-way signalling makes more efficient use of the network, and the burst blocking probability can be reduced by increasing the offset time, thereby increasing the likelihood of switch resources being available for the burst.
A potential disadvantage of lambda switching is that, once a wavelength has been assigned, it is used exclusively by its “owner.” If 100 percent of its capacity is not in use for 100 percent of the time, then clearly there is an inefficiency in the network.
One solution to this problem is to allocate the wavelength only for the duration of the data burst being sent. Historically, this has always been recognized as a challenge because the time spent setting up and tearing down connections is typically very large compared to the time the wavelength is "occupied" by the data burst. This is because traditional signaling techniques (e.g. ATM, RSVP, X.25, ISDN) have tended to use a multi-way handshaking process to ensure that the channel really is established before data is sent. These techniques could not be applied to optical burst switching because they take far too long.
For this reason, a simplex “on the fly” signaling mechanism is the current favorite for optical burst switching, and there is no explicit confirmation that the connection is in place before the data burst is sent. Given that, at the time of writing, most optical burst research has been confined to computer simulation, it’s still not totally clear what the impact of this unreliable signaling will be on real network performance.
Here's a more detailed comparison of lambda switching and optical burst switching (OBS):
In a lambda switch, which we can also describe as an LSC interface with a GMPLS control plane, the goal is to reduce the time taken to establish optical paths from months to minutes. Once established, the wavelengths will remain in place for a relatively long time, perhaps months or even years. On this timescale it's quite acceptable to use traditional, reliable signaling techniques, notably RSVP (Resource Reservation Protocol) and CR-LDP (Constraint-based Routing Label Distribution Protocol), which are being extended for use in GMPLS. Signaling can be out of band, using a low-speed overlay such as Fast Ethernet.
In OBS, the goal is to set up lambdas so that a single burst of data can be transmitted. As noted previously, a 1-Mbyte file transmitted at 10 Gbit/s only requires a lambda for roughly 1 ms (the quick calculation below spells this out). The burst has to be buffered by the OEO edge device while the lambda is being set up, so the signaling has to be very fast indeed, and it looks as though we won't have time for traditional handshakes.
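The arithmetic behind that "roughly 1 ms" figure is worth spelling out; the numbers below are just the ones from the example.

```python
# Quick check of the figure above: how long does a 1 Mbyte burst occupy a 10 Gbit/s lambda?

file_bytes = 1_000_000          # 1 Mbyte
line_rate_bps = 10e9            # 10 Gbit/s

burst_duration_s = file_bytes * 8 / line_rate_bps
print(f"{burst_duration_s * 1e3:.1f} ms")   # 0.8 ms, i.e. on the order of 1 ms
```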
The signaling itself can be out of band, but it must follow the topology required by the lambda path. If this seems confusing, think of a primary rate ISDN line. In that technology a single D-channel (the signaling channel) controls up to 30 B-channels (the channels carrying payload). The B and D channels share the same physical cable, and therefore the same topology. In the optical context we could use a dedicated signaling wavelength on a given fiber, and run this wavelength at speeds where economic network processors are available (e.g. gigabit Ethernet).
Pixie dust
Pixie dust is the informal name that IBM is using for its antiferromagnetically-coupled (AFC) media technology, which can increase the data capacity of hard drives to up to four times the density possible with current drives. AFC overcomes limits of current hard drives caused by a phenomenon called the superparamagnet effect.
AFC allows more data to be packed onto a disk. The "pixie dust" used is a three-atom-thick magnetic coating of the element ruthenium sandwiched between two magnetic layers. The technology is expected to yield 400 GB hard drives for desktop computers and 200 GB hard drives for laptops.
In information technology, the term "pixie dust" is often used to refer to a technology that seemingly does the impossible. IBM's use of AFC for hard drives overcomes what was considered an insuperable problem for storage: the physical limit for data stored on hard drives. Hard drive capacities have more or less doubled in each of the last five years, and it was assumed in the storage industry that the upper limit would soon be reached. The superparamagnetic effect has long been predicted to appear when densities reached 20 to 40 gigabits per square inch, close to the data density of current products. AFC increases the possible data density, so that capacity is increased without using either more disks or more heads to read the data. Current hard drives can store 20 gigabits of data per square inch. IBM began shipping Travelstar hard drives in May 2001 that are capable of storing 25.7 gigabits per square inch, and drives shipped later in the year are expected to be capable of 33% greater density. Because smaller drives will be able to store more data and use less power, the new technology may also lead to smaller and quieter devices.
IBM discovered a means of adding AFC to their standard production methods so that the increased capacity costs little or nothing. The company, which plans to implement the process across their entire line of products, chose not to publicize the technology in advance. Many companies have focused research on the use of AFC in hard drives; a number of vendors, such as Seagate Technology and Fujitsu, are expected to follow IBM's lead.