Iontophoresis

>> Friday, September 19, 2008

Iontophoresis
The method of iontophoresis was described by Pivati in 1747. Galvani and Volta, two well-known scientists working in the 18th century, combined the knowledge that electricity can move different metal ions with the observation that the movement of ions produces electricity. The administration of pharmacological drugs by iontophoresis became popular at the beginning of the 20th century through the work of Leduc (1900), who introduced the word iontotherapy and formulated the laws governing the process.

Read more...

Neuroprosthetics

Neuroprosthetics
Neuroprosthetics is an area of neuroscience concerned with neural prostheses, that is, artificial devices used to replace or improve the function of an impaired nervous system. The neuroprosthetic seeing the most widespread use is the cochlear implant, which was in use in approximately 85,000 people worldwide as of 2005. An early difficulty in the development of neuroprosthetics was reliably locating the electrodes in the brain, originally done by inserting the electrodes with needles and breaking off the needles at the desired depth. Recent systems utilize more advanced probes, such as those used in deep brain stimulation to alleviate the symptoms of Parkinson's disease. The problem with either approach is that the brain floats free in the skull while the probe does not, and relatively minor impacts, such as a low-speed car accident, are potentially damaging. Some researchers, such as Kensall Wise at the University of Michigan, have proposed tethering 'electrodes to be mounted on the exterior surface of the brain' to the inner surface of the skull. However, even if successful, tethering would not resolve the problem in devices meant to be inserted deep into the brain, such as in the case of deep brain stimulation (DBS).

Read more...

JINI

JINI
Sun engineers have been working quietly on a new Java technology called Jini since 1995. Part of the original vision for Java, it was put on the back burner while Sun waited for Java to gain widespread acceptance. As the Jini project revved up and more than 30 technology partners signed on, it became impossible to keep it under wraps. So Sun cofounder Bill Joy, who helped dream up Jini, leaked the news to the media earlier this month. It was promptly smothered in accolades and hyperbolic prose.
HOW DOES IT WORK?
When you plug a new Jini-enabled device into a network, it broadcasts a message to any lookup service on the network saying, in effect, 'Here I am. Is anyone else out there?' The lookup service registers the new machine, keeps a record of its attributes and sends a message back to the Jini device, letting it know where to reach the lookup service if it needs help. So when it comes time to print, for example, the device calls the lookup service, finds what it needs and sends the job to the appropriate machine. Jini itself consists of a very small piece of Java code that runs on your computer or device.
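The register-then-lookup conversation above can be sketched in a few lines. The snippet below is not the real Jini API (which is Java, in the net.jini packages); it is a hypothetical Python model with made-up class and attribute names, intended only to show the shape of the protocol: a device registers its attributes with the lookup service, and a client later asks the lookup service for a service matching the attributes it needs.

```python
# Hypothetical model of the Jini discovery/lookup conversation described above.
# This is NOT the real Jini API; names and structures are illustrative only.

class LookupService:
    def __init__(self):
        self.registry = {}          # service name -> attributes

    def register(self, name, attributes):
        """A new device broadcasts 'Here I am'; the lookup service records it."""
        self.registry[name] = attributes
        return "lookup-service-address"   # where to reach the lookup service later

    def lookup(self, **wanted):
        """Find a registered service whose attributes match the request."""
        for name, attrs in self.registry.items():
            if all(attrs.get(k) == v for k, v in wanted.items()):
                return name
        return None


lookup = LookupService()

# A printer joins the network and registers itself with its attributes.
lookup.register("office-printer", {"type": "printer", "duplex": True})

# Later, a laptop needs to print: it asks the lookup service for a matching service.
printer = lookup.lookup(type="printer")
print("send job to:", printer)          # -> send job to: office-printer
```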
WHY WILL JINI BE THE FUTURE OF DISTRIBUTED COMPUTING?
Jini lets you dynamically move code, and not just data, from one machine to another. That means you can send a Java program to any other Jini machine and run it there, harnessing the power of any machine on your network to complete a task or run a program.
WHY WON'T JINI BE THE FUTURE OF DISTRIBUTED COMPUTING?
So far, Jini seems to offer little more than basic network services. Don't expect it to turn your household devices into supercomputers; it will take some ingenious engineering before your stereo will start dating your laptop. Jini can run on small handheld devices with little or no processing power, but these devices need to be network-enabled and need to be controlled, by proxy, through another piece of Jini-enabled hardware or software.

Read more...

Graphics tablet

Graphics tablet
A graphics tablet is a computer peripheral device that allows one to hand-draw images directly into a computer, generally through an imaging program. Graphics tablets consist of a flat surface upon which the user may 'draw' an image using an attached stylus, a pen-like drawing apparatus. The image generally does not appear on the tablet itself but, rather, is displayed on the computer monitor.
It is interesting to note that the stylus, as a technology, was originally designed as a part of the electronics, but it later took on the simpler role of providing a smooth yet accurate 'point' that would not damage the tablet surface while 'drawing'.

Read more...

Serial Attached SCSI

Serial Attached SCSI
In computer hardware, Serial Attached SCSI (SAS) is a computer bus technology primarily designed for the transfer of data to and from devices such as hard disks and CD-ROM drives. SAS is a serial communication protocol for direct attached storage (DAS) devices. It is designed for the corporate and enterprise market as a replacement for parallel SCSI, allowing for much higher speed data transfers than previously available, and is backwards-compatible with SATA. Though SAS uses serial communication instead of the parallel method found in traditional SCSI devices, it still uses SCSI commands for interacting with SAS end devices. The SAS protocol is developed and maintained by the T10 committee; the current draft revision (SAS-2) can be downloaded from T10.

Read more...

MAC address

MAC address
In computer networking, a Media Access Control address (MAC address) is a unique identifier attached to most forms of networking equipment. Most layer 2 network protocols use one of three numbering spaces managed by the IEEE: MAC-48, EUI-48, and EUI-64, which are designed to be globally unique. Not all communications protocols use MAC addresses, and not all protocols require globally unique identifiers. The IEEE claims trademarks on the names 'EUI-48' and 'EUI-64'. (The 'EUI' stands for Extended Unique Identifier.)
ARP/RARP is commonly used to map the layer 2 MAC address to an address in a layer 3 protocol such as the Internet Protocol (IP). On broadcast networks such as Ethernet, the MAC address allows each host to be uniquely identified and allows frames to be marked for specific hosts. It thus forms the basis of most of the layer 2 networking upon which higher OSI layer protocols are built to produce complex, functioning networks.
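As a small illustration of the MAC-48/EUI-48 format mentioned above, the sketch below parses a colon-separated address, splits out the 24-bit Organizationally Unique Identifier assigned by the IEEE, and reads the two flag bits carried in the first octet (the least-significant bit marks a group/multicast address, the next bit a locally administered one). The example addresses are made up.

```python
def parse_mac(mac: str):
    """Parse a MAC-48/EUI-48 address given in the common colon-separated form."""
    octets = [int(part, 16) for part in mac.split(":")]
    assert len(octets) == 6, "MAC-48 is six octets"

    oui = octets[:3]                      # Organizationally Unique Identifier (vendor part)
    nic_specific = octets[3:]             # assigned by the vendor per interface
    multicast = bool(octets[0] & 0x01)    # I/G bit: 1 = group (multicast/broadcast)
    local = bool(octets[0] & 0x02)        # U/L bit: 1 = locally administered

    return {
        "oui": ":".join(f"{o:02X}" for o in oui),
        "nic": ":".join(f"{o:02X}" for o in nic_specific),
        "multicast": multicast,
        "locally_administered": local,
    }

# Example with a made-up address (the OUI here is illustrative, not a real vendor lookup).
print(parse_mac("00:1A:2B:3C:4D:5E"))
print(parse_mac("FF:FF:FF:FF:FF:FF"))     # broadcast: the multicast bit is set
```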

Read more...

PolyBot - Modular, self-reconfigurable robots

PolyBot - Modular, self-reconfigurable robots

Modular, self-reconfigurable robots show the promise of great versatility, robustness and low cost. PolyBot is a modular, self-reconfigurable system that is being used to explore the hardware reality of a robot with a large number of interchangeable modules. Three generations of PolyBot have been built over the last three years, with ever-increasing levels of functionality and integration. PolyBot has shown its versatility by demonstrating locomotion over a variety of terrain and manipulation of a variety of objects.

PolyBot is the first robot to demonstrate, sequentially, two topologically distinct locomotion modes by self-reconfiguration. PolyBot has raised issues regarding software scalability and hardware dependency, and as the design evolves, the issues of low cost and robustness are being addressed while the potential of modular, self-reconfigurable robots continues to be explored.

Read more...

BiCMOS

BiCMOS

The history of semiconductor devices starts in the 1930s, when Lilienfeld and Heil first proposed the MOSFET. However, it took 30 years before this idea was applied to functioning devices used in practical applications. Bipolar technology dominated until the late 1980s, when MOS technology caught up and there was a crossover between bipolar and MOS market share. CMOS found ever more widespread use due to its low power dissipation, high packing density and simple design, such that by 1990 CMOS accounted for more than 90% of the total MOS market.

In 1983 a bipolar-compatible process based on CMOS technology was developed, and BiCMOS technology, with both MOS and bipolar devices fabricated on the same chip, was developed and studied. The objective of BiCMOS is to combine bipolar and CMOS so as to exploit the advantages of both at the circuit and system levels. Since 1985, state-of-the-art bipolar and CMOS structures have been converging. Today BiCMOS has become one of the dominant technologies for high-speed, low-power and highly functional VLSI circuits, especially where the BiCMOS process can be integrated into the CMOS process without additional steps. Because the process steps required for CMOS and bipolar are similar, these steps can be shared between them.

Read more...

Polymer memory

Polymer memory
Imagine a time when your mobile phone will be your virtual assistant and will need far more than the 8K and 16K of memory that it has today, or a world where laptops require gigabytes of memory because of the impact of convergence on the very nature of computing. How much space would your laptop need to carry all that memory capacity? Not much, if Intel's project with Thin Film Electronics ASA (TFE) of Sweden works according to plan. TFE's idea is to use polymer memory modules rather than silicon-based memory modules, and, what's more, to use an architecture that is quite different from that of silicon-based modules.
While microchip makers continue to wring more and more from silicon, the most dramatic improvements in the electronics industry could come from an entirely different material: plastic. Labs around the world are working on integrated circuits, displays for handheld devices and even solar cells that rely on electrically conducting polymers, not silicon, for cheap and flexible electronic components. Now two of the world's leading chip makers are racing to develop new stock for this plastic microelectronic arsenal: polymer memory. Advanced Micro Devices of Sunnyvale, CA, is working with Coatue, a startup in Woburn, MA, to develop chips that store data in polymers rather than silicon. The technology, according to Coatue CEO Andrew Perlman, could lead to a cheaper and denser alternative to flash memory chips, the type of memory used in digital cameras and MP3 players. Meanwhile, Intel is collaborating with Thin Film Electronics in Linköping, Sweden, on a similar high-capacity polymer memory.
Penetration usually involves a change of some kind, such as a new port being opened or a new service started. The most common change you can see is that a file has changed. If you can identify the key subset of these files and monitor them on a daily basis, you will be able to detect whether any intrusion has taken place. Tripwire is an open-source program created to monitor changes in a key subset of files identified by the user and to report on any changes in those files. When changes are detected, the system administrator is informed. Tripwire's principle is very simple: the system administrator identifies key files and has Tripwire record checksums for those files. He also puts in place a cron job whose task is to scan those files at regular intervals (daily or more frequently), comparing them against the original checksums.
Any changes, additions or deletions are reported to the administrator, who can then determine whether the changes were authorized. If they were, the database is updated so that the same change is not reported again in the future; if they were not, proper recovery action is taken immediately.
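The checksum idea is easy to sketch. The fragment below is not Tripwire itself, just a minimal Python illustration of the same principle under assumed file names: record a baseline of checksums for a list of key files, then re-scan later (for example from a cron job) and report anything added, removed or modified.

```python
import hashlib, json, os

KEY_FILES = ["/etc/passwd", "/etc/ssh/sshd_config"]   # hypothetical watch list
BASELINE = "baseline.json"

def checksum(path):
    """SHA-256 of a file's contents."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def record_baseline():
    """Record the trusted checksums once, while the system is known to be clean."""
    baseline = {p: checksum(p) for p in KEY_FILES if os.path.exists(p)}
    with open(BASELINE, "w") as f:
        json.dump(baseline, f)

def scan():
    """Compare current checksums against the stored baseline and report differences."""
    with open(BASELINE) as f:
        baseline = json.load(f)
    for path, old in baseline.items():
        if not os.path.exists(path):
            print("REMOVED:", path)
        elif checksum(path) != old:
            print("MODIFIED:", path)
    for path in KEY_FILES:
        if os.path.exists(path) and path not in baseline:
            print("ADDED:", path)

# Run record_baseline() once, then run scan() periodically (e.g. daily via cron).
```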

Read more...

Tagged Command Queuing

Tagged Command Queuing
TCQ stands for Tagged Command Queuing, a technology built into certain PATA and SCSI hard drives. It allows the operating system to send multiple read and write requests to a hard drive at once. TCQ is almost identical in function to the Native Command Queuing (NCQ) used by SATA drives.
Before TCQ, an operating system was only able to send one request at a time. In order to boost performance, it had to decide the order of the requests based on its own, possibly incorrect, idea of what the hard drive was doing. With TCQ, the drive can make its own decisions about how to order the requests (and in turn relieve the operating system from having to do so). The result is that TCQ can improve the overall performance of a hard drive.
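A toy model makes the benefit concrete. The sketch below is not how real drive firmware works; it simply compares the head travel for a queue of requests serviced in arrival order against the same queue reordered by a simple nearest-first policy, using hypothetical track numbers.

```python
def head_travel(start, order):
    """Total seek distance (in abstract 'track' units) for servicing requests in a given order."""
    pos, travel = start, 0
    for track in order:
        travel += abs(track - pos)
        pos = track
    return travel

def nearest_first(start, pending):
    """A simple drive-side reordering policy: always service the closest pending request."""
    pending, pos, order = list(pending), start, []
    while pending:
        nxt = min(pending, key=lambda t: abs(t - pos))
        pending.remove(nxt)
        order.append(nxt)
        pos = nxt
    return order

requests = [980, 12, 990, 30, 1005]      # hypothetical track numbers, in arrival order
start = 500
print("FIFO travel:     ", head_travel(start, requests))
print("Reordered travel:", head_travel(start, nearest_first(start, requests)))
```

With these made-up numbers the reordered schedule covers roughly a third of the head travel of the arrival-order schedule, which is the kind of saving command queuing is after.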

Read more...

DLP Projector

DLP Projector
Powered by digital electronics, this optical solution provides an all-digital connection between a graphic or video source and the screen in movie video projectors, televisions, home theatre systems and business video projectors. The Digital Micromirror Device, or DLP® chip, a rectangular array of up to 2 million hinge-mounted microscopic mirrors (each measuring less than one-fifth the width of a human hair), controls the light in the optical semiconductor at the heart of the DLP projection system. A digital image is projected onto the screen or surface simply by synchronising the digital video or graphic signal, a light source, and a projection lens. The advantages of DLP projectors over conventional projection systems are:
1) Digital grayscale and color reproduction, a consequence of the system's digital nature, which makes DLP the final link in the digital video infrastructure.
2) Greater efficiency than competing transmissive LCD technologies, because DLP has its roots in the reflective DMD.
3) The capacity to create seamless, film-like images.
So with DLP, experience the digital revolution.

Read more...

Differential signaling

Differential signaling
Differential signaling is a method of transmitting information over pairs of wires (as opposed to single-ended signalling, which transmits information over single wires).
Differential signaling reduces the noise on a connection by rejecting common-mode interference. Two wires (referred to here as A and B) are routed in parallel, and sometimes twisted together, so that they will receive the same interference. One wire carries the signal, and the other wire carries the inverse of the signal, so that the sum of the voltages on the two wires is always constant.
At the end of the connection, instead of reading a single signal, the receiving device reads the difference between the two signals. Since the receiver ignores the wires' voltages with respect to ground, small changes in ground potential between transmitter and receiver do not affect the receiver's ability to detect the signal. Also, the system is immune to most types of electrical interference, since any disturbance that lowers the voltage level on A will also lower it on B.
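A small numerical example (with made-up values) shows the common-mode rejection at work: wire A carries the signal, wire B its inverse, both pick up identical interference, and the receiver's difference recovers the signal while the interference cancels.

```python
import random

signal = [1, -1, 1, 1, -1, -1, 1]        # hypothetical symbol stream (+1 / -1 volts)

received_a, received_b = [], []
for s in signal:
    noise = random.uniform(-0.4, 0.4)    # interference picked up equally by both wires
    received_a.append(+s + noise)        # wire A carries the signal
    received_b.append(-s + noise)        # wire B carries the inverted signal

# The receiver looks only at the difference between the two wires.
recovered = [(a - b) / 2 for a, b in zip(received_a, received_b)]
print([round(v, 3) for v in recovered])  # the common-mode noise term cancels exactly
```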

Read more...

CCD vs. CMOS – Image

CCD vs. CMOS – Image
Posing a great challenge to traditional charge-coupled devices (CCDs) in various applications, CMOS image sensors have improved over time, finding solutions to the problems related to noise and sensitivity. The use of active pixel sensors, built on sub-micron technologies, has helped attain low power, low voltage and monolithic integration, allowing the manufacture of miniaturised single-chip digital cameras.
The incorporation of advanced techniques at the chip or pixel level has opened new dimensions for the technology. Now, after a decade, the initial tussle over whether complementary metal-oxide-semiconductor (CMOS) technology would displace the charge-coupled device (CCD) has slowly subsided, revealing the strengths and weaknesses of both technologies.

Read more...

Abstract Cell

Abstract Cell
This project for a multi-core CPU, a supercomputer on a chip that has opened a new dimension in the era of computing devices, is a combined undertaking by the corporate giants Sony, Toshiba and IBM (STI). This innovative technology will find applications in Sony's PlayStation 3 video game console, in replacing existing processors, in broadband network technology, and in boosting the performance of existing electronic devices. As per the details available, this single chip can perform 1 trillion floating-point operations per second (1 TFLOPS), which is several hundred times faster than a high-end personal computer.
The core concept of this project is to devote more hardware resources to parallel computation rather than to single-threaded performance. This means that only minimal resources are allocated to single-threaded operation, while most of the chip is given over to highly parallelizable, multimedia-type computation using multiple DSP-like processing elements.

Read more...

New Sensor Technology

New Sensor Technology
The invention of new fluorescence-based chemical sensors has enabled a myriad of potential applications, such as monitoring oxygen, inorganic gases, volatile organic compounds and biochemical compounds, as the technology is versatile, compact and inexpensive. Depending upon the vital criteria of accuracy, precision, cost and the ability to meet the environmental range of the intended application, a proper sensor can be chosen for a military control-based subsystem. Sensor web and video sensor technology are two widely applied sensor techniques. A Sensor Web is a type of sensor network or geographic information system (GIS) well suited to environmental monitoring and control, whereas video sensor technology is used for digital image analysis. In sensor web technology, we have a wirelessly connected, amorphous network of unevenly distributed sensor platforms or pods, which is very different from a TCP/IP-like network in its synchronous, router-free nature.
Because of this unique architecture, every pod in the network knows what is happening with every other pod throughout the Sensor Web at each measurement cycle. The main requirements for video sensor technology are application software and a computer that acts as the carrier platform, usually equipped with a Linux or Microsoft operating system on which the application software runs. By programming digital algorithms, the interpretation of digital images and frame rates can be carried out. The video sensor is very helpful in evaluating scenes and sequences within the image section of a (CCD) camera.
"We can use this technology to detect chemical and biological agents and also to determine if a country is using its nuclear reactors to produce material for nuclear weapons, or to track the direction of a chemical or radioactive plume to evacuate an area," explained Paul Raptis, section manager. Raptis is developing these sensors with Argonne engineers Sami Gopalsami, Sasan Bakhtiari and Hual-Te Chien.
Argonne engineers have successfully performed the first-ever remote detection of chemicals and identification of unique explosives spectra using a spectroscopic technique that uses the properties of the millimeter/terahertz frequencies between microwave and infrared on the electromagnetic spectrum. The researchers used this technique to detect spectral "fingerprints" that uniquely identify explosives and chemicals.
The Argonne-developed technology was demonstrated in tests that accomplished three important goals:
* Detected and measured poison gas precursors 60 meters away at the Nevada Test Site to an accuracy of 10 parts per million using active sensing.
* Identified chemicals related to defense applications, including nuclear weapons, from 600 meters away using passive sensing at the Nevada Test Site.
* Built a system to identify the spectral fingerprints of trace levels of explosives, including DNT, TNT, PETN, RDX and the plastic explosives Semtex and C-4.
Current research involves collecting a database of explosive "fingerprints" and, working with partners Sarnoff Corp., Dartmouth College and Sandia National Laboratory, testing a mail- or cargo-screening system for trace explosives.
Argonne engineers have been exploring this emerging field for more than a decade to create remote technology to detect facilities that may be violating nonproliferation agreements by creating materials for nuclear weapons or making nerve agents.
How it works
The millimeter/terahertz technology detects the energy levels of a molecule as it rotates. The frequency distribution of this energy provides a unique and reproducible spectral pattern, its "fingerprint", that identifies the material. The technology can also be used in an imaging modality, with applications ranging from concealed-weapons detection to medical uses such as tumor detection.
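As a rough illustration of fingerprint matching, and not Argonne's actual processing chain, the sketch below compares a measured spectrum against a small library of reference fingerprints using normalized correlation and reports the best match; all spectra here are invented numbers.

```python
import numpy as np

def normalized_correlation(a, b):
    """Correlation of two spectra after removing mean and scale."""
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    return float(np.dot(a, b) / len(a))

# Hypothetical reference fingerprints: absorption versus frequency bin.
library = {
    "TNT":  np.array([0.1, 0.8, 0.2, 0.1, 0.6, 0.1]),
    "RDX":  np.array([0.7, 0.1, 0.1, 0.9, 0.2, 0.1]),
    "PETN": np.array([0.2, 0.2, 0.8, 0.1, 0.1, 0.7]),
}

measured = np.array([0.12, 0.75, 0.25, 0.15, 0.55, 0.14])   # noisy measurement (made up)

scores = {name: normalized_correlation(measured, ref) for name, ref in library.items()}
best = max(scores, key=scores.get)
print(scores)
print("best match:", best if scores[best] > 0.9 else "no confident match")
```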
The technique is an improvement over laser or optical sensing, which can be perturbed by atmospheric conditions, or X-rays, which can cause damage by ionization. Operating at frequencies between 0.1 and 10 terahertz, the sensitivity is four to five orders of magnitude higher and the imaging resolution 100 to 300 times better than is possible at microwave frequencies.
Other homeland security sensors
To remotely detect radiation from nuclear accidents or reactor operations, Argonne researchers are testing millimeter-wave radars and developing models to detect and interpret radiation-induced effects in air that cause radar reflection and scattering. Preliminary results of tests, in collaboration with AOZT Finn-Trade of St. Petersburg, Russia, with instruments located 9 km from a nuclear power plant, showed clear differences between when the plant was operating and when it was idling. This technology can also be applied to mapping plumes from nuclear radiation releases.
Argonne engineers have also applied this radar technology for remote and rapid imaging of gas leaks from natural gas pipelines. The technique detects the fluctuations in the index-of-refraction caused by leaking gas into surrounding air.
Early warnings of biological hazards can be made using another Argonne-developed sensing system that measures dielectric signatures. The systems sense repeatable dielectric response patterns from a number of biomolecules. The method holds potential for a fast first screening of chemical or biological agents in gases, powders or aerosols.
Other tests can detect these agents, but may take four hours or longer. "While this method may not be as precise as other methods, such as bioassays and biochips, it can be an early warning to start other tests sooner," said Raptis.
These Argonne sensor specialists will continue to probe the basics of sensor technology and continue to develop devices that protect the nation's security interests.
Other potential applications for these technologies, in addition to security, include nondestructive evaluation of parts, environmental monitoring and health, including testing human tissue and replacing dental X-rays.
In addition to DOE, the U.S. Department of Defense and the National Aeronautics and Space Administration have provided support for this research.
The nation's first national laboratory, Argonne National Laboratory conducts basic and applied scientific research across a wide spectrum of disciplines, ranging from high-energy physics to climatology and biotechnology. Since 1990, Argonne has worked with more than 600 companies and numerous federal agencies and other organizations to help advance America's scientific leadership and prepare the nation for the future. Argonne is managed by the University of Chicago for the U.S. Department of Energy's Office of Science.

Read more...

Femtotechnology

Femtotechnology

Femtotechnology is a term used by some futurists to refer to structuring of matter on a femtometre scale, by analogy with nanotechnology and picotechnology. This involves the manipulation of excited energy states within atomic nuclei to produce metastable (or otherwise stabilized) states with unusual properties. In the extreme case, excited states of nucleons are considered, ostensibly to tailor the behavioral properties of these particles (though this is in practice unlikely to work as intended).
Practical applications of femtotechnology are currently considered to be unlikely. The spacings between nuclear energy levels require equipment capable of efficiently generating and processing gamma rays, without equipment degradation. The nature of the strong interaction is such that excited nuclear states tend to be very unstable (unlike the excited electron states in Rydberg atoms), and there are a finite number of excited states below the nuclear binding energy, unlike the (in principle) infinite number of bound states available to an atom's electrons. Similarly, what is known about the excited states of individual nucleons seems to indicate that these do not produce behavior that in any way makes nucleons easier to use or manipulate, and indicates instead that these excited states are even less stable and fewer in number than the excited states of atomic nuclei.
The most advanced form of molecular nanotechnology is often imagined to involve self-replicating molecular machines, and there have been some very speculative suggestions that something similar might in principle be possible with "molecules" composed of nucleons rather than atoms. For example, the astrophysicist Frank Drake once speculated about the possibility of self-replicating organisms composed of such nuclear molecules living on the surface of a neutron star, a suggestion taken up in the science fiction novel Dragon's Egg by the physicist Robert Forward. It is thought by physicists that nuclear molecules may be possible, but they would be very short-lived, and whether they could actually be made to perform complex tasks such as self-replication, or what type of technology could be used to manipulate them, is unknown.
The hypothetical hafnium bomb can be considered a crude application of femtotechnology.

Read more...

Microvia Technology

Microvia Technology

Microvias are small holes in the range of 50 to 100 µm. In most cases they are blind vias from the outer layers to the first inner layer. The development of very complex Integrated Circuits (ICs) with extremely high input/output counts, coupled with steadily increasing clock rates, has forced electronics manufacturers to develop new packaging and assembly techniques. Components with pitches of less than 0.30 mm, chip scale packages, and flip chip technology underline this trend and highlight the importance of new printed wiring board technologies able to cope with the requirements of modern electronics. In addition, more and more electronic devices have to be portable, and consequently systems integration, volume and weight considerations are gaining importance. These portables are usually battery powered, resulting in a trend towards lower-voltage power supplies, with implications for PCB (Printed Circuit Board) complexity. As a result of the above considerations, the future PCB will be characterized by very high interconnection density with finer lines and spaces, smaller holes and decreasing thickness. To gain more landing pads for small-footprint components, the use of microvias becomes a must.
They are essential in setting the pace of electronic developments and the circuit board is under pressure to keep up.
From the automobile industry through to industrial electronics, the importance of microelectronics has risen enormously in recent years. At the same time, it has developed into an essential feature of intelligent devices and systems. The demand for reduced volume and weight, enhanced system performance with shorter signal transit times, increased reliability and minimised system costs have become progressively more important. And as a consequence, this means that heightened demands are placed on developers and layout engineers.
While microvia technology has long since become a telecommunications manufacturing standard, it is now penetrating other market segments. Here microvia technology offers the potential to completely fulfil the demands for technically perfect solutions and rational production. In other words - this technology unites modern technology and economics. Looking at the circuit board industry in the cold light of day, it is apparent that it has a cost-efficient, safe and proven technology at its disposal. With the aid of microvias, the integration of modern components on the boards requires only minor modifications to the multi-layer architecture. Many of the requirements placed on electronic products can be realised without problems as a result. HDI (High Density Interconnect) involves using microvias for high density interconnection of numerous components and functions within a confined space. Microvia circuit boards manage without conventional mechanically drilled through contacts and use the appropriate laser drilling machines as drilling tools.
The drivers for HDI microvia technology are the various component formats, such as COB (Chip on Board), Flip Chip, CSP (Chip Size Packaging) and BGA (Ball Grid Arrays), which are described in terms of "footprint" or pitch. The footprint characterises the overall solder surface, connection surface or landing sites for SMD components. Pitch denotes the separation between the midpoints of the individual solder surfaces. Many new components arrive on the market with a large number of connections and a low pitch which demand a further increase in wiring density on the circuit board. This demonstrates why the challenges facing the technical knowledge of the circuit board designer and the implementation options are so infinitely important for new components. Because even at this early stage, the profitability, as well as the rational technical feasibility and process compatibility of the boards, are decided. This highlights how strongly circuit board development is influenced by the development of components and their geometric design.
In the past, microvias were still staggered relative to one another as a means of achieving contact over several layers. New techniques, with which microvias generate connections across two layers, have become established as particularly cost-effective and efficient in their manufacturing technology. These holes can be produced in one program starting from the outer layer.
Cu-filled microvias represent the latest development ready for series production. The special feature of this technology is that the vias can be set directly on top of one another. With this method it is possible to lay out components even in very confined geometries.
When are microvias worthwhile?
No textbook specifies where the transition between mechanical drill holes and laser holes is to be found. After all, the application of microvias is not only determined by the technology or the geometry of the components and consequently the circuit board geometry. However, questions concerning profitability can be clearly answered through the application of microvias. In the light of Würth Elektronik's experience from today's perspective, a clear technical boundary can be drawn at a BGA pitch of 0.8 mm. Here conventional technology, with mechanically drilled vias, meets its limitations and the use of microvias (laser drilled blind vias) is necessary.
Naturally however, economic considerations for or against play a significant role. A comparison of variable drilling costs reveals the superiority of microvia technology over mechanical drilling (Ø 0.3 mm) even with a relatively small number of holes. The 100x faster drilling speed and the tool costs approaching ZERO make laser drilling extremely fast and cheap. This effect becomes more pronounced as the number of drill holes increases. The comparison clearly illustrates the cost saving potential via technology has to offer. Experience at Würth Elektronik shows that the proper application of this technology results in savings of between eight and ten percent of the overall costs of "conventional circuits". The advantage of via technology grows beyond measure if smaller drills have to be used for geometrical reasons. The drill unit costs rise dramatically. And the service life of the drills plummets. The cost differential opens up enormously for Ø 0.1 mm mechanically drilled vias compared with Ø 0.1 mm laser drilled microvias. Here the variable costs are in a ratio of around 500:1.
As the need for high-density, handheld products increases, the electronic packaging industry has been developing new technologies, such as chip-scale packaging and flip-chip assembly, to pack more information-processing functions per unit volume. Many system designers, however, believe that the circuit board technology to accommodate packages with high I/O densities has not kept pace. Even though printed wiring board fabricators have been developing new, higher-density circuit fabrication methods, the system designers perceive today's advanced technology as unproven, low reliability, and high cost. IBIS Associates applies Technical Cost Modeling in this paper to examine the cost issues of implementing microvia technology.
CSPs and microvias go hand-in-hand: What is the value of high-I/O-density, chip-scale packaging without a high- density substrate to connect these chips? Alternatively, why have a circuit board with ultrafine features if coarse-pitch devices will be used?
Yet, many system designers believe that either CSPs or microvia technology (or both) mean higher system costs. Certainly, it is wise to be cautious about employing new technologies. But if there are proven technologies that can offer system cost reduction as well as system performance improvements and size reduction, what are you waiting for?
IBIS Associates has studied the cost impact of microvia technologies on circuit board fabrication1, and of CSP technologies on IC packaging2. This paper shows some of these cost analyses, revealing the cost savings possible through the use of these advanced technologies.
Methodology: Technical Cost Modeling
Technical Cost Modeling (TCM), a methodology pioneered by IBIS Associates, provides the method for analyzing cost1. The goal of TCM is to understand the costs of a product and how these costs are likely to change with alterations to the product and process.
Specifically, TCM includes the breakdown of cost into its constituent elements (listed below) and the ranking of cost items on the basis of their contribution:
* Materials and energy
* Direct and overhead labor
* Equipment, tooling and building
* Other costs
Once these costs are established, sensitivity analysis can be performed to understand the impact of changes to key parameters such as annual production volume, process yield and material pricing.
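A minimal sketch of the idea, with entirely hypothetical figures: break the unit cost into the elements listed above, then sweep the annual production volume to see how strongly the fixed elements (equipment, tooling and building) dominate at low volumes.

```python
# Hypothetical technical cost model: all figures are illustrative, not real board costs.
def unit_cost(volume_per_year,
              material=0.80,          # $ per board: materials and energy
              labor=0.50,             # $ per board: direct and overhead labor
              fixed=250_000.0,        # $ per year: equipment, tooling and building
              other=0.10,             # $ per board: other costs
              yield_fraction=0.90):
    good_boards = volume_per_year * yield_fraction
    variable = (material + labor + other) * volume_per_year
    return (variable + fixed) / good_boards   # cost per good (sellable) board

# Sensitivity to annual volume: fixed costs are amortized away as volume grows.
for volume in (10_000, 100_000, 1_000_000):
    print(f"{volume:>9} boards/yr -> ${unit_cost(volume):.2f} per good board")
```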
In short, TCM provides an understanding not only of current costs but also of how these costs might differ in the face of future technological or economic developments.
High-Density Packaging Technologies
Much has been published on microvia technology3,4 and chip-scale packaging5,6. Microvia technology, also called build-up technology, allows high-density circuitry on the outer layers of a circuit board, with lower, conventional-density circuitry on the inside layers.
These high-density circuit boards contain a conventional core, for rigidity and cost reasons, among others. Since the materials used in creating microvias tend not to have glass reinforcement, the core layers, which are glass reinforced, provide the rigidity needed for handling and end-use structural requirements.
Creating vias smaller than 6 to 8 mils (150 to 200 microns) in diameter allows higher-density circuit layers to be created than is generally possible with conventional technology. These vias are created through a myriad of technologies, including the following:
* Advanced mechanical drilling
* Lasers
* Photoimageable dielectric layers
* Plasma etching
Yields
Microvia technologies have been adopted by most large board fabricators and are being used by some OEMs, mainly in Japan. Reported yields achievable with microvia technology range from 50% to 95%, depending on the technology, how long the fabricator has been learning fabrication techniques and many other factors. Further details of each technology are presented elsewhere1.
Most flash memory devices are being offered in CSPs for use in portable electronic products. Uses are on the horizon for many other ICs, but CSPs are just beginning to be employed outside of memory.
In summary, CSPs and microvias have "burst onto the scene" due to the demand for complex handheld products and other compact electronics. Since their adoption is driven mainly by the need for smaller form factors and not by cost, both microvias and CSPs have suffered from perceptions of high cost among potential users. But is this necessarily true?
When a new technology is introduced, it tends to cost more, with the promise that, eventually, costs will be lower than they are today.
This situation occurs because volume production is necessary for costs to come down, and new technologies are usually introduced at low-volume levels. At their beginning, as customers "test the water" these low volumes often do not allow the new technology to cost less than the incumbent technology. This is happening today with CSPs and microvias.
Cost models can show if new technologies will, in fact, cost less at higher production volumes. This analysis shows some of the cost results from recent work at IBIS.

Read more...

Genomic Signal Processing

Genomic Signal Processing


Genomic Signal Processing (GSP) is the engineering discipline that studies the processing of genomic signals. The theory of signal processing is utilized in both structural and functional understanding. The aim of GSP is to integrate the theory and methods of signal processing with the global understanding of functional genomics, with special emphasis on genomic regulation.
Gene prediction typically refers to the area of computational biology concerned with algorithmically identifying stretches of genomic DNA sequence that are biologically functional. This especially includes protein-coding genes, but may also include other functional elements such as RNA genes and regulatory regions. Gene finding is one of the first and most important steps in understanding the genome of a species once it has been sequenced.
Genomic signal processing (GSP) is the engineering discipline that studies the processing of genomic signals. Owing to the major role played in genomics by transcriptional signaling and the related pathway modeling, it is only natural that the theory of signal processing should be utilized in both structural and functional understanding. The aim of GSP is to integrate the theory and methods of signal processing with the global understanding of functional genomics, with special emphasis on genomic regulation. Hence, GSP encompasses various methodologies concerning expression profiles: detection, prediction, classification, control, and statistical and dynamical modeling of gene networks. GSP is a fundamental discipline that brings to genomics the structural model-based analysis and synthesis that form the basis of mathematically rigorous engineering.
Application is generally directed towards tissue classification and the discovery of signaling pathways, both based on the expressed macromolecule phenotype of the cell. Accomplishment of these aims requires a host of signal processing approaches. These include signal representation relevant to transcription, such as wavelet decomposition and more general decompositions of stochastic time series, and system modeling using nonlinear dynamical systems. The kind of correlation-based analysis commonly used for understanding pairwise relations between genes or cellular effects cannot capture the complex network of nonlinear information processing based upon multivariate inputs from inside and outside the genome. Regulatory models require the kind of nonlinear dynamics studied in signal processing and control, and in particular the use of stochastic dataflow networks common to distributed computer systems with stochastic inputs. This is not to say that existing model systems suffice. Genomics requires its own model systems, not simply straightforward adaptations of currently formulated models. New systems must capture the specific biological mechanisms of operation and distributed regulation at work within the genome. It is necessary to develop appropriate mathematical theory, including optimization, for the kinds of external controls required for therapeutic intervention, as well as approximation theory to arrive at nonlinear dynamical models that are sufficiently complex to adequately represent genomic regulation for diagnosis and therapy while not being overly complex for the amounts of data experimentally feasible or for the computational limits of existing computer hardware.
A cell relies on its protein components for a wide variety of its functions, including energy production, biosynthesis of component macromolecules, maintenance of cellular architecture, and the ability to act upon intra- and extra-cellular stimuli. Each cell in an organism contains the information necessary to produce the entire repertoire of proteins the organism can specify. Since a cell's specific functionality is largely determined by the genes it is expressing, it is logical that transcription, the first step in the process of converting the genetic information stored in an organism's genome into protein, would be highly regulated by the control network that coordinates and directs cellular activity. A primary means for regulating cellular activity is the control of protein production via the amounts of mRNA expressed by individual genes. The tools to build an understanding of genomic regulation of expression will involve the characterization of these expression levels. Microarray technology, both cDNA and oligonucleotide, provides a powerful analytic tool for genetic research. Since our concern in this paper is to articulate the salient issues for GSP, and not to delve deeply into microarray technology, we confine our brief discussion to cDNA microarrays.
Complementary DNA microarray technology combines robotic spotting of small amounts of individual, pure nucleic acid species on a glass surface, hybridization to this array with multiple fluorescently labeled nucleic acids, and detection and quantitation of the resulting fluor-tagged hybrids by a scanning confocal microscope. A basic application is quantitative analysis of fluorescence signals representing the relative abundance of mRNA from distinct tissue samples. Complementary DNA microarrays are prepared by printing thousands of cDNAs in an array format on glass microscope slides, which provide gene-specific hybridization targets. Distinct mRNA samples can be labeled with different fluors and then co-hybridized onto each arrayed gene. Ratios (or sometimes the direct intensity measurements) of gene expression levels between the samples can be used to detect meaningfully different expression levels between the samples for a given gene. Given an experimental design with multiple tissue samples, microarray data can be used to cluster genes based on expression profiles, to characterize and classify disease based on the expression levels of gene sets, and for other
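As a small illustration of the ratio analysis described above, the sketch below (using made-up intensities, not real microarray data) computes the log2 ratio of the two fluorescence channels for each gene and flags genes whose expression differs by more than two-fold between the samples.

```python
import math

# Hypothetical two-channel fluorescence intensities for a handful of spotted genes.
spots = {
    "geneA": (1500.0, 700.0),    # (sample 1 fluor, sample 2 fluor)
    "geneB": (820.0, 860.0),
    "geneC": (240.0, 1300.0),
}

for gene, (ch1, ch2) in spots.items():
    log_ratio = math.log2(ch1 / ch2)
    flag = "differential" if abs(log_ratio) >= 1.0 else "unchanged"   # two-fold threshold
    print(f"{gene}: log2 ratio = {log_ratio:+.2f} ({flag})")
```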

Read more...

Pixie dust

Pixie dust

Pixie dust is the informal name that IBM is using for its antiferromagnetically-coupled (AFC) media technology, which can increase the data capacity of hard drives to up to four times the density possible with current drives. AFC overcomes limits of current hard drives caused by a phenomenon called the superparamagnetic effect.
AFC allows more data to be packed onto a disk. The pixie dust used is a three-atom-thick magnetic coating composed of the element ruthenium sandwiched between two magnetic layers. The technology is expected to yield 400 GB hard drives for desktop computers, and 200 GB hard drives for laptops.
In information technology, the term "pixie dust" is often used to refer to a technology that seemingly does the impossible. IBM's use of AFC for hard drives overcomes what was considered an insuperable problem for storage: the physical limit for data stored on hard drives. Hard drive capacities have more or less doubled in each of the last five years, and it was assumed in the storage industry that the upper limit would soon be reached. The superparamagnetic effect has long been predicted to appear when densities reached 20 to 40 gigabits per square inch - close to the data density of current products. AFC increases possible data density, so that capacity is increased without using either more disks or more heads to read the data. Current hard drives can store 20 gigabits of data per square inch. IBM began shipping Travelstar hard drives in May 2001 that are capable of storing 25.7 gigabits per square inch. Drives shipped later in the year are expected to be capable of 33% greater density. Because smaller drives will be able to store more data and use less power, the new technology may also lead to smaller and quieter devices.
IBM discovered a means of adding AFC to their standard production methods so that the increased capacity costs little or nothing. The company, which plans to implement the process across their entire line of products, chose not to publicize the technology in advance. Many companies have focused research on the use of AFC in hard drives; a number of vendors, such as Seagate Technology and Fujitsu, are expected to follow IBM's lead.

Read more...

Optical Burst Switching

Optical Burst Switching
Optical burst switching (OBS) is a switching concept which lies between optical circuit switching and optical packet switching. First, a dynamic optical network is provided by the interconnection of optical cross-connects. These optical cross-connects (OXCs) usually consist of switches based on 2D or 3D micro-electro-mechanical systems (MEMS) mirrors, which reflect light coming into the switch at an incoming port to a particular outgoing port. The granularity of this type of switching is at the fibre, waveband (a band of wavelengths) or wavelength level. The finest granularity offered by an OXC is the wavelength level. Therefore this type of switching is appropriate for provisioning lightpaths from one node to another for different clients or services, e.g. SDH (Synchronous Digital Hierarchy) circuits.

Optical switching enables routing of optical data signals without the need for conversion to electrical signals and, therefore, is independent of data rate and data protocol. Optical Burst Switching (OBS) is an attempt at a new synthesis of optical and electronic technologies that seeks to exploit the tremendous bandwidth of optical technology, while using electronics for management and control.

In an OBS network the incoming IP traffic is first assembled into bigger entities called bursts. Bursts, being substantially bigger than IP packets, are easier to switch with relatively small overhead. When a burst is ready, a reservation request is sent into the core network. Transmission and switching resources for each burst are reserved according to a one-pass reservation scheme, i.e. the data is sent shortly after the reservation request without waiting for an acknowledgement of successful reservation.

The reservation request (control packet) is sent on a dedicated wavelength some offset time prior to the transmission of the data burst. This basic offset has to be large enough to electronically process the control packet and set up the switching matrix for the data burst in all nodes. When a data burst arrives at a node, the switching matrix has already been set up, i.e. the burst is kept in the optical domain. The reservation request is analysed in each core node, the routing decision is made, and the request is forwarded to the next node. When the burst reaches its destination node it is disassembled, and the resulting IP packets are sent to their respective destinations.
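A common simplification in the OBS literature sets the basic offset to cover the per-hop control processing at every node along the path plus the time needed to configure the switching matrix. The sketch below uses that simplified model with assumed timing figures, not measurements from any particular system.

```python
# Simplified basic-offset calculation for OBS (illustrative figures only).
hops = 5                       # core nodes that must process the control packet
per_hop_processing = 20e-6     # seconds of electronic processing per node (assumed)
switch_setup = 100e-6          # seconds to configure the switching matrix (assumed)

basic_offset = hops * per_hop_processing + switch_setup
print(f"basic offset = {basic_offset * 1e6:.0f} microseconds")   # 200 us for these numbers
```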

The benefit of OBS over circuit switching is that there is no need to dedicate a wavelength for each end-to-end connection. OBS is more viable than optical packet switching because the burst data does not need to be buffered or processed at the cross-connect.
Advantages

* Greater transport channel capacity

* No O-E-O conversion

* Cost effective

Disadvantages

* Burst dropped in case of contention

* Lack of effective technology

Optical burst switching operates at the sub-wavelength level and is designed to improve the utilisation of wavelengths through rapid setup and teardown of the wavelength/lightpath for incoming bursts. In OBS, incoming traffic from clients at the edge of the network is aggregated at the ingress of the network according to a particular parameter, commonly destination, type of service (TOS bytes), class of service or quality of service (e.g. profiled DiffServ code points). At the OBS edge router, different queues therefore represent the various destinations or classes of service. Based on the assembly/aggregation algorithm, packets are assembled into bursts using either a timer-based or a threshold-based aggregation algorithm; in some implementations, aggregation is based on a hybrid of timer and threshold, as in the sketch below. From the aggregation of packets a burst is created, and this is the granularity that is handled in OBS.
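The sketch below is a minimal model of such a hybrid assembler, with assumed size and timer values: packets are queued per destination, and a burst is released either when the queue reaches a byte threshold or when the oldest queued packet has waited longer than the timer allows. (A real assembler would also fire the timer without waiting for a new packet to arrive.)

```python
import time

class BurstAssembler:
    """Hybrid timer/threshold burst assembly for one edge router (toy model)."""
    def __init__(self, size_threshold=12000, max_delay=0.002):   # bytes, seconds (assumed)
        self.size_threshold = size_threshold
        self.max_delay = max_delay
        self.queues = {}            # destination -> (first-arrival time, [packet sizes])

    def add_packet(self, destination, size, now=None):
        now = time.monotonic() if now is None else now
        first, packets = self.queues.setdefault(destination, (now, []))
        packets.append(size)
        # Release the burst on either condition: enough bytes, or the oldest packet waited too long.
        if sum(packets) >= self.size_threshold or now - first >= self.max_delay:
            del self.queues[destination]
            return packets          # burst released: list of aggregated packet sizes
        return None                 # still assembling


assembler = BurstAssembler()
for size in (1500, 1500, 9000, 1500):            # hypothetical IP packet sizes in bytes
    burst = assembler.add_packet("nodeB", size)
    if burst:
        print(f"burst for nodeB released: {sum(burst)} bytes from {len(burst)} packets")
```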

Also important about OBS is the fact that the required electrical processing is decoupled from the optical data path. The burst header generated at the edge of the network is therefore sent on a separate control channel, which could be a designated out-of-band control wavelength. At each switch the control channel is converted to the electrical domain for electrical processing of the header information. The header information precedes the burst by a set amount known as the offset time, giving the switch enough time to make resources available prior to the arrival of the burst. Different reservation protocols have been proposed, and their efficacy has been studied and published in numerous research publications. Obviously, the signalling and reservation protocols depend on the network architecture, node capability, network topology and level of network connectivity. The reservation process has implications for the performance of OBS due to the buffering requirements at the edge. The one-way signalling paradigm obviously introduces a higher level of blocking in the network, as connections are not usually guaranteed prior to burst release. Again, numerous proposals have sought to address these issues.

Optical burst switching has many flavours, determined by currently available technologies such as the switching speed of available core optical switches. Most optical cross-connects have switching times of the order of milliseconds but require tens of milliseconds to set up the switch and perform switching. New switch architectures and faster switches, with switching times of the order of micro- and nanoseconds, can help to reduce the path setup overhead. Similarly, control plane signalling and reservation protocols implemented in hardware can help to speed up processing times by several clock cycles.

The initial phase of introducing optical burst switching would be based on an acknowledged reservation protocol, i.e. two-way signalling: after the burstification process, bursts for a particular destination are mapped to a wavelength based on a forwarding table. As a burst requests a path across the network, the request is sent on the control channel; at each switch, if the wavelength can be switched, the path is set up and an acknowledgement signal is sent back to the ingress. The burst is then transmitted. Under this concept, the burst is held electronically at the edge and the bandwidth and path are guaranteed prior to transmission. This reduces the number of bursts dropped. The effects of dropping bursts can be detrimental to a network, as each burst is an amalgamation of IP packets which could be carrying keepalive messages between IP routers. If these are lost, the IP routers would be forced to retransmit and reconverge.

Under the GMPLS control plane, forwarding tables are used to map the bursts, and the MPLS (Multiprotocol Label Switching)-based 'PATH' and 'RESV' signals are used for requesting a path and confirming setup respectively. This is a two-way signalling process, which can be inefficient in terms of network utilisation. However, for increasingly bursty traffic, the conventional OBS is the preferred choice.

Under this conventional OBS, a one-way signalling concept, as mentioned previously, is used. The idea is to hold the burst at the edge for an offset period while the control header traverses the network setting up the switches; the burst follows immediately without confirmation of path setup. There is an increased likelihood of bursts being dropped, but contention resolution mechanisms can be used to ensure alternative resources are made available to the burst if the switch is blocked (i.e. being used by another burst on the incoming or outgoing switch port). An example contention resolution solution is deflection routing, where blocked bursts are routed to an alternative port until the required port becomes available. This requires optical buffering, which is implemented mainly by fibre delay lines.

One-way signalling makes more efficient use of the network, and the burst blocking probability can be reduced by increasing the offset time, thereby increasing the likelihood of switch resources being available for the burst.

A potential disadvantage of lambda switching is that, once a wavelength has been assigned, it is used exclusively by its “owner.” If 100 percent of its capacity is not in use for 100 percent of the time, then clearly there is an inefficiency in the network.

One solution to this problem is to allocate the wavelength for the duration of the data burst being sent. Historically, this has always been recognized as a challenge because the amount of time spent setting up and tearing down connections is typically very large compared to the amount of time the wavelength is "occupied" by the data burst. This is because traditional signaling techniques (e.g. ATM, RSVP, X.25, ISDN) have tended to use a multi-way handshaking process to ensure that the channel really is established before data is sent. These techniques could not be applied to optical burst switching because they take far too long.

For this reason, a simplex “on the fly” signaling mechanism is the current favorite for optical burst switching, and there is no explicit confirmation that the connection is in place before the data burst is sent. Given that, at the time of writing, most optical burst research has been confined to computer simulation, it’s still not totally clear what the impact of this unreliable signaling will be on real network performance.

Here's a more detailed comparison of lambda switching and optical burst switching (OBS):

In a lambda switch, which we can also describe as an LSC interface with a GMPLS control plane, the goal is to reduce the time taken to establish optical paths from months to minutes. Once established, the wavelengths will remain in place for a relatively long time – perhaps months or even years. In this timescale, it’s quite acceptable to use traditional, reliable signaling techniques – notably RSVP (resource reservation protocol) and CR-LDP (constraint-based routing-label distribution protocol), which are being extended for use in GMPLS. Signaling can be out of band, using a low-speed overlay such as fast Ethernet.

In OBS, the goal is to set up lambdas so that a single burst of data can be transmitted. As noted previously, a 1-Mbyte file transmitted at 10 Gbit/s only requires a lambda for 1ms. The burst has to be buffered by the OEO edge device while the lambda is being set up, so the signaling has to be very fast indeed, and it looks as though we won’t have time for traditional handshakes.
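The arithmetic behind that figure is worth spelling out; the small calculation below shows how long a wavelength is actually occupied by a burst of the size quoted above.

```python
# How long does a burst occupy the lambda? (figures from the example above)
burst_bytes = 1_000_000           # a 1-Mbyte file
line_rate = 10e9                  # 10 Gbit/s

occupancy = burst_bytes * 8 / line_rate
print(f"lambda occupied for {occupancy * 1e3:.2f} ms")   # 0.80 ms, i.e. roughly 1 ms
```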

The signaling itself can be out of band, but it must follow the topology required by the lambda path. If this seems confusing, think of a primary rate ISDN line. In this technology we use a single D-channel (a signaling channel) to control up to 30 B-channels (the channels carrying payload). The B and D channels share the same physical cable, and therefore the same topology. In the optical context we could use a dedicated signaling wavelength on a given fiber, and run this wavelength at speeds where economical network processors are available (e.g. Gigabit Ethernet).
Pixie dust

Pixie dust is the informal name that IBM is using for its antiferromagnetically-coupled (AFC) media technology, which can increase the data capacity of hard drives to up to four times the density possible with current drives. AFC overcomes limits of current hard drives caused by a phenomenon called the superparamagnetic effect.

AFC allows more data to be packed onto a disk. The pixie dust used is a three-atom-thick coating of the element ruthenium sandwiched between two magnetic layers. The technology is expected to yield 400 GB hard drives for desktop computers and 200 GB hard drives for laptops.

In information technology, the term "pixie dust" is often used to refer to a technology that seemingly does the impossible. IBM's use of AFC for hard drives overcomes what was considered an insuperable problem for storage: the physical limit for data stored on hard drives. Hard drive capacities have more or less doubled in each of the last five years, and it was assumed in the storage industry that the upper limit would soon be reached. The superparamagnetic effect has long been predicted to appear when densities reached 20 to 40 gigabits per square inch - close to the data density of current products. AFC increases possible data density, so that capacity is increased without using either more disks or more heads to read the data. Current hard drives can store 20 gigabits of data per square inch. IBM began shipping Travelstar hard drives in May 2001 that are capable of storing 25.7 gigabits per square inch. Drives shipped later in the year are expected to be capable of 33% greater density. Because smaller drives will be able to store more data and use less power, the new technology may also lead to smaller and quieter devices.
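As a quick sanity check on the density figures quoted above (the 33% increase is the article's own projection for drives shipped later in the year):

# Areal-density arithmetic for the figures quoted above.
travelstar_density_gb_in2 = 25.7              # Gbit per square inch, May 2001 drives
projected_density_gb_in2 = travelstar_density_gb_in2 * 1.33   # "33% greater density"
print(f"projected density ~ {projected_density_gb_in2:.1f} Gbit per square inch")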

IBM discovered a means of adding AFC to its standard production methods so that the increased capacity costs little or nothing. The company, which plans to implement the process across its entire line of products, chose not to publicize the technology in advance. Many companies have focused research on the use of AFC in hard drives, and a number of vendors, such as Seagate Technology and Fujitsu, are expected to follow IBM's lead.

Read more...

Surface conduction Electron emitter Display (SED)

Surface conduction Electron emitter Display (SED)
SED technology has been in development since 1987. A Surface-conduction Electron-emitter Display (SED) is a flat-panel display technology that employs a surface-conduction electron emitter for every individual display pixel. Though the implementations differ, both SED technology and the traditional cathode ray tube (CRT) television rely on the same basic principle: emitted electrons exciting a phosphor coating on the display panel.

When a moderate voltage (tens of volts) is applied, electrons tunnel across a thin slit in the surface-conduction electron emitter apparatus. Some of these electrons are scattered at the receiving electrode and are then accelerated towards the display surface by a large voltage gradient (tens of kV) maintained between the display panel and the emitter apparatus. These electrons excite the phosphor coating on the display panel, and the image follows.

The main advantage of SEDs compared with LCDs and CRTs is that they offer the best mix of both technologies. An SED combines the slim form factor of an LCD with the superior contrast ratio, exceptional response time and better overall picture quality of a CRT. SEDs also provide higher brightness, better color performance and wider viewing angles, and consume much less power. Moreover, SEDs do not require a deflection system for the electron beam, which has allowed manufacturers to create a display design that is only a few inches thick yet still light enough to be hung on a wall. These properties also let a manufacturer enlarge the display panel simply by increasing the number of electron emitters in proportion to the number of pixels required. Canon and Toshiba are the two major companies working on SEDs. The technology is still developing, and further breakthroughs can be expected as the research continues.

A surface-conduction electron-emitter display (SED) is a flat-panel display technology that uses surface-conduction electron emitters for every individual display pixel. The surface-conduction emitter emits electrons that excite a phosphor coating on the display panel, the same basic concept found in traditional cathode ray tube (CRT) televisions. This means that SEDs use tiny cathode ray tubes behind every single pixel (instead of one tube for the whole display) and can combine the slim form factor of LCDs and plasma displays with the superior viewing angles, contrast, black levels, color definition and pixel response time of CRTs. Canon also claims that SEDs consume less power than LCD displays.

The surface conduction electron emitter apparatus consists of a thin slit across which electrons tunnel when excited by moderate voltages (tens of volts). When the electrons cross the electric poles on either side of the thin slit, some are scattered at the receiving pole and are accelerated toward the display surface by a large voltage gradient (tens of thousands of volts) between the display panel and the emitter apparatus. Canon Inc., working with Toshiba, uses inkjet printing technology to spray the phosphors onto the glass. The technology has been in development since 1986.

How it Works
SED technology works much like a traditional CRT, except that instead of one large electron gun firing at all the screen phosphors to create the image you see, an SED has millions of tiny electron guns, known as "emitters", one for each phosphor sub-pixel. Remember, a sub-pixel is just one of the three colors (red, green, blue) that make up a pixel, so it takes three emitters to create one pixel on the screen and over 6 million emitters to produce a true high-definition (HDTV) image. It's sort of like an electron Gatling gun with a barrel for every target positioned at point-blank range. An army of electron guns, if you will.
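The "over 6 million" figure follows directly from the pixel count; a quick check:

# Emitter count for a full-HD SED panel: one emitter per sub-pixel,
# three sub-pixels (red, green, blue) per pixel.
width, height, subpixels_per_pixel = 1920, 1080, 3
emitters = width * height * subpixels_per_pixel
print(f"{emitters:,} emitters")   # 6,220,800 -- "over 6 million"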



This may bode well for video purists who feel that CRTs offer the best picture quality, bar none. One prototype has even attained a contrast ratio of 100,000:1. Its brightness of 400 cd/m2 is a tad on the low side for an LCD TV and nowhere close to a plasma. This is expected to increase in the future, but it still works out to about 116 ftL (foot-lamberts), or more than twice the brightness of a regular TV. To put this in perspective, a movie theater shows a film at about 15 ftL.
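The conversion behind that figure is straightforward (1 foot-lambert is about 3.426 cd/m2):

# Converting the quoted brightness from candelas per square metre to foot-lamberts.
brightness_cd_m2 = 400
CD_M2_PER_FTL = 3.426            # 1 foot-lambert ~ 3.426 cd/m2

brightness_ftl = brightness_cd_m2 / CD_M2_PER_FTL
print(f"{brightness_ftl:.0f} ftL")   # roughly 117 ftL, matching the ~116 ftL quoted above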

Life Expectancy
It does look as though SED TVs will last a good while: it has been reported that the electron emitters drop only 10% in output after 60,000 hours, as simulated by an "accelerated" test. This means the unit is likely to keep working for as long as the phosphors continue to emit light, and that can be a while. Maybe yours will even show up on the Antiques Roadshow in working condition in the far distant future. Time will tell, but "accelerated" testing results should always be taken with a grain of salt, as they only imitate wear and tear over time.
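To put 60,000 hours into everyday terms, here is a quick conversion; the daily viewing time is an assumption chosen only for the example.

# Converting the 60,000-hour emitter figure into years of viewing.
rated_hours = 60_000
viewing_hours_per_day = 8        # assumed viewing habit, not a quoted figure

years = rated_hours / (viewing_hours_per_day * 365)
print(f"roughly {years:.0f} years at {viewing_hours_per_day} hours per day")   # ~21 years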

SED TV Compared to CRT
SED is flat. A traditional CRT has one electron gun that scans side to side and from top to bottom by being deflected by an electromagnet or "yoke". This has meant that the gun has had to be set back far enough to target the complete screen area and, well, it starts to get ridiculously large and heavy around 36". CRTs are typically as wide as they are deep. They need to be built like this or else the screen would need to be curved too severely for viewing. Not so with SED, where you supposedly get all the advantages of a CRT display but need only a few inches of thickness to do it in. Screen size can be made as large as the manufacturer dares. Also, CRTs can have image challenges around the far edges of the picture tube, which is a non-issue for SED.

SED TV Compared to Plasma TV
Compared to plasma the future looks black indeed. As in someone wearing a black suit and you actually being able to tell it's a black suit with all those tricky, close to black, gray levels actually showing up. This has been a major source of distraction for this writer for most display technologies other than CRT. Watching the all-pervasive low-key (dark) lighting in movies, it can be hard to tell what you're actually looking at without the shadow detail being viewable. Think Blade Runner or Alien. SED's black detail should be better, as plasma cells must be left partially on in order to reduce latency. This means they are actually dark gray – not black. Plasma has been getting better in this regard but still has a way to go to match a CRT. Hopefully, SED will solve this and it's likely to. Also, SED is expected to use only half the power that a plasma does at a given screen size although this will vary depending on screen content.

SED TV Compared to LCD
LCDs have had a couple of challenges in creating great pictures, but they are getting better. Firstly, latency has been a problem for television pictures, with an actual 16 ms response needed in order to keep up with a 60 Hz screen update, as the quick arithmetic below shows. That response needs to hold all the way through the grayscale, not just where the manufacturers decide to test. Also, because LCD's light output is highly directional, it has a limited angle of view and tends to become too dim to watch off axis, which can limit seating arrangements. This will not be an issue for SED's self-illuminated phosphors. However, LCD does have the advantage of not being susceptible to burn-in, which affects any device using phosphors, including SED. SED is likely to use about two-thirds the power of a similarly sized LCD. Finally, LCD generally suffers from the same black-level issues and solarization, otherwise known as false contouring, that plasma does. SED does not.
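The 16 ms requirement mentioned above is simply the frame period of a 60 Hz display:

# Where the 16 ms figure comes from: the frame period of a 60 Hz screen update.
refresh_hz = 60
frame_period_ms = 1000 / refresh_hz
print(f"frame period ~ {frame_period_ms:.1f} ms")   # the pixel response must fit inside this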

SED TV Compared to RPTV
SED is flat and rear-projection TVs (RPTVs) aren't. RPTV also has limitations as to where it can be viewed from, being particularly vertically challenged with regard to viewing angles. A particular RPTV's image quality is driven by its imaging technology, such as DLP, LCoS, 3LCD or, more rarely recently, CRT. With the exception of CRT, these units need to have their lamps changed periodically, usually at around 6,000 hours, at an average cost of $250.

Pricing
The cost of flat panels is largely dependent on production yields of saleable product. Nobody really knows for sure what this will be until real production starts, but new technology is always expensive in early production. If it works, the use of inkjet technology to make SED displays rather than the more expensive photolithography process used in LCD panels should help cost management. The first product release will be a 55" version at full HD resolution (1920x1080) priced comparably to today's plasma display panel (PDP) of similar size. That could be a big dollar difference by early 2007, as the price of plasma displays is expected to continue to drop.

Taking a look at the current crop of display technologies, one reality is hard to escape: we haven’t drastically improved on the nearly antique cathode ray tube (CRT) televisions of years past. Sure, we now have flat panels that can display resolutions of up to 1920×1080 pixels, or higher in rare instances, but the often-shunned CRT technology is capable of resolutions of 2560×1920 and higher, comfortably beyond the supposedly future-proof 1080p spec.

OK, so flat panels don’t beat CRTs on resolution, and to be honest they don’t really look better at comparable resolutions. In addition, both plasma and LCD displays often fall short of CRT black levels, so why all the fuss? The flat screen, of course, specifically screens less than 3 inches in depth.

What if a new display technology could combine the best attributes of both CRTs and flat-panel displays? Well, I haven’t written this far just to say wouldn’t that be nice: enter SED (Surface-conduction Electron-emitter Display). Spearheaded by Canon and Toshiba back in the mid eighties, SED appears to offer an excellent balance between cost, resolution and screen depth.

The inner workings of SED borrow from both LCD and plasma technologies: a glass plate is embedded with electron emitters, one for each pixel on the display. The emitters on this plate face a fluorescent coating on a second plate. Between the two plates is a vacuum and an ultra-fine particle film that forms a slit several nanometers wide. Applying a voltage to this slit produces a tunneling effect that generates electron emission, and the panel emits light as the voltage accelerates some of the electrons toward the fluorescent coating.

SED displays offer brightness, color performance and viewing angles on par with CRTs, yet they do not require a deflection system for the electron beam. As a result, engineers can create a display that is just a few inches thick while still light enough for wall-hanging designs. The manufacturer can enlarge the panel merely by increasing the number of electron emitters in proportion to the number of pixels required. Canon and Toshiba believe their SEDs will be cost-competitive with other flat-panel displays.
Technology Overview & Description

SEDs, or Surface-conduction Electron-emitter Displays, are a new, emerging technology co-developed by Canon and Toshiba Corporation. The hope for this technology is a display that reproduces vivid color, deep blacks, fast response times and almost limitless contrast. In fact, if you took all of the claims made by the backers of SED at face value, you would think there was no reason to buy any other type of display. A long life filled with bitter disappointments and lengthy product-to-market times has increased my skepticism and lowered my tendency to act as a cheerleader until products start to hit the market. As far as the specs go, though, this is one hot technology.

An SED display is very similar to a CRT (and now we come full circle) in that it utilizes an electron emitter which activates phosphors on a screen. The electron emission element is made from an ultra-thin electron emission film that is just a few nanometers thick. Unlike a CRT, which has a single electron emitter that is steered, an SED uses a separate emitter for each color phosphor (three per pixel, or one per sub-pixel) and therefore does not require an electron beam deflector (which also makes screen sizes of over 42" possible). Just for clarity, that means a 1920 x 1080 panel has about 6.2 million electron "guns". Each emitter fires at roughly 10 V, and the emitted electrons are accelerated through about 10 kV before they hit the phosphor-lined glass panel. Sound like a lot of power? It's all relative, as a typical SED display is expected to use about two-thirds the power of a typical plasma panel (and less than CRTs and LCD displays).
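For a sense of scale, the energy each emitted electron picks up from the quoted 10 kV accelerating voltage can be estimated with the simple relation E = qV (losses ignored):

# Kinetic energy gained by an electron crossing the quoted 10 kV accelerating gap.
ELECTRON_CHARGE_C = 1.602e-19    # coulombs
accelerating_voltage_v = 10_000  # volts, as quoted above

energy_joules = ELECTRON_CHARGE_C * accelerating_voltage_v
print(f"about {energy_joules:.2e} J per electron (i.e. 10 keV)")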

OK, here's the really interesting news: SED electron emitters are supposed to be printable using inkjet printing technology from Canon, while the matrix wiring can be created with a special screen-printing method. The obvious result is the potential for extremely low production costs at high volumes once the technology is perfected.
What's Next?

Canon debuted an SED display prototype at La Défense in Paris in October 2005. The specs referenced a <>
SED Display Advantages
CRT-matching black levels
Excellent color and contrast potential
Relatively inexpensive production cost
Wide viewing angle
SED Display Disadvantages
Unknown (though optimistic) life expectancy
Potential for screen burn-in
Currently prototype only

Read more...