Thursday, 30 August 2012

Mirasol Display


Qualcomm engineers have developed the MEMS (Micro Electro Mechanical Systems) based Mirasol display. It mimics the way butterfly wings and peacock feathers produce brilliant, iridescent, shimmering colours. Light reflected from a surface such as paper is more appealing to the human eye than light from a backlit display [1]. Because it works by reflecting ambient light, a mirasol display remains readable even in direct sunlight. A report from Pike Research [2] states that mirasol is more energy efficient than other display technologies. Mirasol displays are also capable of showing video. The only hiccup is cost; let us hope it comes down in future. I was introduced to the mirasol display by an article published in MIT's Technology Review magazine [7].

Microprocessors and memory chips diligently follow Moore's law, so we get phenomenal capacity improvements in a short span of time. Display systems, in contrast, improve at a snail's pace [4]. Mature technologies such as Liquid Crystal Displays (LCD) and Light Emitting Diode (LED) panels are lit from behind. The display market is dominated by backlit (60%) and transflective (40%) LCDs; a combination of backlit and reflective technology is called transflective. The remaining displays use OLED, which makes up about 5 per cent of the market [5]. All of the technologies discussed above consume more energy than reflective displays. E Ink's Triton is another reflective display technology, with good colour capability, that rivals mirasol [3]; earlier E Ink based e-readers such as the Kindle and Nook were limited to black and white.


The Interferometric Modulator (IMOD) is the building block of the mirasol display. An IMOD is made up of a thin film on top and a height-adjustable (deformable) reflective membrane, supported by a transparent substrate. Incident light is reflected both from the thin film and from the reflective membrane. Depending on the gap height (the distance between the thin film and the membrane), constructive and destructive interference occurs: a few colours are amplified while the others are cancelled. For example, if red undergoes constructive interference, that spot appears red. The arrangement can be thought of as an optical resonator. An IMOD can take only two states or positions: the gap is driven to either its minimum or its maximum height by applying a voltage between the reflective layers. When all RGB subpixels are in the minimum position, only ultraviolet light is reflected and the visible colours are lost to interference; since humans cannot perceive ultraviolet, the spot appears black. The deformation required is only a few hundred nanometres and switching takes on the order of microseconds, which is why displaying video is perfectly possible.
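
To get a feel for how the gap height picks out a colour, here is a small Python sketch that treats the IMOD cavity as an idealised air-gap resonator in which wavelengths satisfying 2 x gap = m x wavelength are reinforced. The gap values, the normal-incidence assumption and the neglect of reflection phase shifts are my own simplifications, not Qualcomm's figures.

```python
# Idealised IMOD cavity: wavelengths satisfying 2*gap = m*wavelength are reinforced.
# Assumptions (not from the Qualcomm spec): normal incidence, air gap,
# reflection phase shifts ignored; gap heights below are illustrative only.

VISIBLE_NM = (380, 700)  # approximate visible range in nanometres

def reinforced_wavelengths(gap_nm, max_order=5):
    """Return visible wavelengths that interfere constructively for a given gap."""
    peaks = []
    for m in range(1, max_order + 1):
        wavelength = 2.0 * gap_nm / m
        if VISIBLE_NM[0] <= wavelength <= VISIBLE_NM[1]:
            peaks.append((m, wavelength))
    return peaks

if __name__ == "__main__":
    # A few hundred nanometres of travel, as mentioned above (values illustrative).
    for gap in (325, 275, 225, 50):
        peaks = reinforced_wavelengths(gap)
        if peaks:
            desc = ", ".join(f"order {m}: {w:.0f} nm" for m, w in peaks)
        else:
            desc = "no visible peak (appears dark)"
        print(f"gap = {gap} nm -> {desc}")
```

With these illustrative gaps, 325 nm reinforces red (650 nm), 275 nm green (550 nm) and 225 nm blue (450 nm), while the collapsed 50 nm gap reinforces nothing visible and so looks black, just as described above.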

A typical mirasol display measures 5.3" across the diagonal, with an 800x480 resolution at 223 pixels per inch. Screens of the same size with XGA resolution are also available in the market, but the cost is not pocket friendly. The following products use mirasol displays: the Kyobo e-reader, Hanvon C18, Bambook Sunflower, Koobe Jin Yong reader, and Bichrome displays. One blog mentioned that the Kyobo e-reader has stopped using mirasol; one has to check its veracity.

The environmental friendliness of a product is not measured by its power consumption during use alone. The energy used over the entire lifecycle, from mining the ore, manufacturing, assembly, packaging and shipping through to disposal, has to be monitored and accounted for. In both usage and lifecycle analysis, IMOD based mirasol displays outperform conventional LCDs and LEDs [5]. It was estimated that there were four billion mobile devices in 2008, with LCD and OLED displays. If all of them switched to IMOD displays, about 2.4 terawatt-hours of energy could be conserved per year. The report also notes that in an LCD only about 10% of the light generated reaches the human eye; the rest is absorbed by components within the display itself.
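
As a back-of-the-envelope check of the figures quoted above (2.4 terawatt-hours per year across four billion devices), the per-device saving works out as follows; the derived numbers are just arithmetic, not data from the Pike Research report.

```python
# Back-of-the-envelope check of the savings figure quoted above.
devices = 4e9                 # mobile devices (2008 estimate cited above)
savings_twh_per_year = 2.4    # projected saving if all switched to IMOD

savings_kwh_per_device = savings_twh_per_year * 1e9 / devices   # 1 TWh = 1e9 kWh
avg_power_w = savings_kwh_per_device * 1000 / (365 * 24)        # spread over a year

print(f"~{savings_kwh_per_device:.2f} kWh saved per device per year")
print(f"equivalent to ~{avg_power_w * 1000:.0f} mW of continuous power")
```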

Source:


  1. Mirasol display, http://www.mirasoldisplays.com
  2. Pike Research, http://www.pikeresearch.com
  3. "Qualcomm's Mirasol Display Could Mean New Color Nooks and Kindles", by Sascha Segan, http://www.pcmag.com/article2/0,2817,2400889,00.asp
  4. "Mirasol Display Technology Could Be the Screens of the Future", http://www.tomshardware.com/news/mirasol-mems-e-ink-display-screens,14867.html
  5. "Energy Efficient Displays for Mobile Devices", Pike Research, published 4Q 2009, http://www.mirasoldisplays.com/sites/default/files/resources/doc/Pike%20Research%20-%20Energy%20Efficient%20Displays_Final.pdf
  6. (Picture courtesy) "Qualcomm Mirasol display for color e-readers inspired by butterflies", http://www.robaid.com/bionics/qualcomm-mirasol-display-for-color-e-readers-inspired-by-butterflies.htm
  7. MIT Technology Review magazine, http://www.technologyreview.com/magazine/


Saturday, 25 August 2012

GigE Vision

       GigE Vision is a camera standard for real-time machine vision. The standard was developed by the Automated Imaging Association (AIA) and released in May 2006. Within a span of four years the number of units shipped was comparable to the rival FireWire and Camera Link standards; Camera Link is also from the AIA, while FireWire is Apple's version of the IEEE 1394 standard. After its inception, GigE Vision revisions 1.1 and 1.2 were released, and in 2011 GigE Vision 2.0 followed. Version 2.0 supports 10 GigE, the IEEE 1588 Precision Time Protocol, and the JPEG, JPEG 2000 and H.264 image compression standards.

GigE Vision Merits

  • Supports the common camera control interface GenICam, developed by the European Machine Vision Association (EMVA).
  • Offers plug & play operation, a high data transfer rate and low-cost cabling, all of which help system integrators a lot.
  • A wide range of cameras is available for various applications.
  • The supported cable length is around 100 m, a feat not possible with other standards such as FireWire, USB 3.0, Camera Link and CoaXPress.


Camera capture system
          Real-time applications do not necessarily need ultra-fast acquisition, but images must be acquired and processed within the stipulated time. The reliability of a real-time system depends on parameters such as jitter and latency. Latency is normally understood as a time delay; here it means the time taken to complete a task from start to finish. Jitter is the variation in that time when the same task is repeated multiple times.
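
To make the two terms concrete, the following sketch times the same task many times and reports its latency and jitter; the task here is only a placeholder for one acquisition cycle.

```python
# Illustrating latency and jitter: time the same task many times and look at
# the spread. The "task" here is a stand-in for one trigger-to-image cycle.
import statistics
import time

def task():
    # Placeholder workload; in a real system this would be trigger-to-image-ready.
    time.sleep(0.005)

samples_ms = []
for _ in range(50):
    start = time.perf_counter()
    task()
    samples_ms.append((time.perf_counter() - start) * 1000.0)

latency_ms = statistics.mean(samples_ms)        # typical time from start to finish
jitter_ms = max(samples_ms) - min(samples_ms)   # variation across repetitions

print(f"mean latency: {latency_ms:.2f} ms")
print(f"peak-to-peak jitter: {jitter_ms:.2f} ms "
      f"(stdev {statistics.stdev(samples_ms):.2f} ms)")
```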

         A camera capture system consists of a PC with a Network Interface Card (NIC), a camera, and an Ethernet link connecting the two. A hardware or software trigger initiates the image capture; as expected, the hardware trigger has lower latency. The camera head processes the trigger and starts the sensor, which accumulates the incoming light and converts it into electrical charge. The accumulated charge is digitised and placed in the camera's buffer memory; this process is called 'readout'. The entire buffer content is then transferred to the PC by breaking it into small chunks and adding an Ethernet header to each chunk. The NIC receives each packet and raises an interrupt to the CPU; if the CPU is not busy, it processes the packet and copies the chunk into the computer's memory. The latency is measured from the start of the trigger to the reception of the last packet of the image.
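
The stages described above can be added up into a rough latency budget. The sketch below does exactly that; the per-packet overhead, the 1 Gbit/s line rate and the example image size, exposure and readout figures are assumptions for illustration, not measurements from a real camera.

```python
# Rough latency budget for the trigger-to-memory path described above.
# All figures below are illustrative placeholders, not measurements.

GIGE_BIT_RATE = 1e9  # bits per second on a Gigabit Ethernet link (assumption)

def capture_latency_s(exposure_s, readout_s, image_bytes,
                      payload_bytes=8000, per_packet_overhead_bytes=74):
    """Sum the main contributors: exposure, sensor readout and network transfer."""
    packets = -(-image_bytes // payload_bytes)      # ceiling division
    # ~74 B per packet covers headers, CRC, preamble and inter-frame gap (approx.)
    wire_bytes = image_bytes + packets * per_packet_overhead_bytes
    transfer_s = wire_bytes * 8 / GIGE_BIT_RATE
    return exposure_s + readout_s + transfer_s

if __name__ == "__main__":
    # Example: 1400 x 1024 monochrome image (1 byte per pixel), 100 us exposure,
    # readout of a 60 fps sensor taken as roughly 1/60 s.
    latency = capture_latency_s(exposure_s=100e-6,
                                readout_s=1 / 60,
                                image_bytes=1400 * 1024)
    print(f"estimated trigger-to-memory latency: {latency * 1000:.1f} ms")
```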

GigE Standard
          A single GigE camera can be connected to the PC via a direct Ethernet link, or multiple GigE cameras can be connected to the PC through an Ethernet switch. Avoid using a hub for multiple cameras.

          A dedicated wire or electronic signal connected directly to an input pin of the camera can act as a hardware trigger. To avoid false starts, trigger debouncing is incorporated; the price we pay for this safety is about one microsecond of latency. Application software can also send a trigger via the camera configuration channel, but this is less responsive than the camera pin. If the software trigger comes from an application running on a non-real-time operating system (e.g. Microsoft Windows), the jitter may vary from a few hundred microseconds to a few milliseconds, so it is better to avoid software trigger mode. There are three types of exposure, viz. free-running mode, horizontal synchronous mode and reset mode, and the jitter varies from one frame time to one pixel time depending on the exposure mode. The camera's latency depends on the exposure time and the sensor readout time, with readout the biggest contributor: a 60 frames-per-second camera takes about 16 ms to complete a readout.

           The normal size of an Ethernet frame (a packet at the physical layer is called a frame) is 1500 bytes; jumbo packets with sizes of 9000 to 16000 bytes are also available. The chunk carried inside a frame is called the payload. A GVSP (GigE Vision Stream Protocol) header, a UDP (User Datagram Protocol) header, an IP header and finally an Ethernet header are added to the payload. An appended four-byte Cyclic Redundancy Code (CRC) helps detect any errors that crept in while the packet was in transit. An 8000-byte packet takes about 16.3 microseconds to be transferred over the network.
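
To see how much of each frame is actually image data, here is a small overhead calculation. The Ethernet, IP, UDP and CRC sizes are the standard ones; the 8-byte GVSP header is my assumption for illustration, and packet sizes are treated as total on-wire frame sizes for simplicity.

```python
# How much of each frame is image payload? Ethernet (14 B), IP (20 B), UDP (8 B)
# and CRC (4 B) are standard sizes; the 8 B GVSP header is an assumption.
HEADERS = {"Ethernet": 14, "IP": 20, "UDP": 8, "GVSP": 8, "CRC": 4}

def efficiency(frame_bytes):
    """Split a frame of the given total size into payload and overhead."""
    overhead = sum(HEADERS.values())
    payload = frame_bytes - overhead
    return payload, overhead, payload / frame_bytes

for size in (1500, 9000, 16000):
    payload, overhead, eff = efficiency(size)
    print(f"{size:>5} B frame: {payload} B payload, {overhead} B headers, "
          f"{eff:.1%} efficient")
```

The jump from a 1500-byte frame to a 9000-byte jumbo frame is where most of the efficiency gain comes from; going further to 16000 bytes buys very little.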

            Data can be transferred from the NIC to memory without involving the CPU by using a frame grabber, which contains a powerful Direct Memory Access (DMA) engine that helps keep latency and jitter to a minimum. Fortunately or unfortunately, the GigE Vision standard does not use a frame grabber; the GigE software driver takes over that role, so the choice of GigE software driver plays a vital role in performance.

 Performance Improvement Tips
  • A few network adapters allow 'interrupt moderation': instead of raising an interrupt for every packet that arrives, the adapter waits for a certain number of packets and then raises a single interrupt. This helps reduce CPU load (a rough calculation follows this list).
  • 9000-byte jumbo packets are best even though some networks support 16000-byte jumbo packets, because the effectiveness of the CRC at detecting errors degrades for frames much larger than 9000 bytes.
  • Increase the receive buffer size as much as possible; this in turn will reduce CPU usage.
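
As a rough illustration of why interrupt moderation helps, the sketch below estimates interrupt rates for the example camera described in the next paragraph; the coalescing factor of 16 is an arbitrary choice for illustration, not a recommended setting.

```python
# Rough interrupt-rate estimate for a 1400 x 1024 mono camera at 75 fps,
# split into jumbo packets carrying ~8946 B of payload each (9000 B frame
# minus ~54 B of headers, see above). The coalescing factor of 16 is arbitrary.
image_bytes = 1400 * 1024
fps = 75
payload_per_packet = 8946

packets_per_image = -(-image_bytes // payload_per_packet)   # ceiling division
packets_per_second = packets_per_image * fps

print(f"{packets_per_second} packets/s")
print(f"no moderation : {packets_per_second} interrupts/s")
print(f"coalesce by 16: {packets_per_second // 16} interrupts/s")
```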


A typical GigE camera has physical dimensions of about 5 cm x 3 cm x 7 cm, offers a 1400 x 1024 image resolution and is capable of taking 75 frames per second (fps), with an image exposure duration of around 100 microseconds. The data can be transported over 100 m using CAT-5e or CAT-6 cables. Such a camera has a mass of around 120 grams. Monochrome, colour and high-speed cameras are available.
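
It is worth checking that such a camera actually fits within a gigabit link. A quick calculation, assuming one byte per pixel for the monochrome version and ignoring protocol overhead, is shown below.

```python
# Does a 1400 x 1024, 75 fps monochrome camera fit on Gigabit Ethernet?
# Assumes 8 bits per pixel and ignores protocol overhead (a few per cent, see above).
width, height, fps = 1400, 1024, 75
bits_per_pixel = 8

data_rate_bps = width * height * fps * bits_per_pixel
print(f"raw image data rate: {data_rate_bps / 1e6:.0f} Mbit/s "
      f"({data_rate_bps / 1e9 * 100:.0f}% of a 1 Gbit/s link)")
```

At roughly 860 Mbit/s the raw data already uses most of the link, which is why packet sizing and protocol overhead, discussed above, matter.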

Source:

Monday, 13 August 2012

Digital Visual Interface


         Digital video generated by a computer is converted into analog signals (red, green and blue video signals) by the video graphics card and fed to a CRT monitor. Since present-day plasma and LCD flat panels are digital in nature, the generated analog signals are converted back into digital form before being fed to the display device. This method is inefficient for two reasons. First, the digital-to-analog and analog-to-digital conversions cause a loss of image quality. Second, a digital interface can make the entire conversion process, as well as the associated hardware, redundant. A low-cost, universally accepted and versatile digital interface evolved, called the Digital Visual Interface (DVI). It was later extended for high-end devices as the High-Definition Multimedia Interface (HDMI).


Before getting into the details of DVI technology, we have to understand the need for it.

Resolution Name                       Pixel Resolution
Video Graphics Array (VGA)            640 x 480
Wide VGA (WVGA)                       854 x 480
Super VGA (SVGA)                      800 x 600
Extended Graphics Array (XGA)         1024 x 768
Wide XGA (WXGA)                       1280 x 768
Super XGA (SXGA)                      1280 x 1024
Wide SXGA (WSXGA)                     1600 x 1024
Ultra XGA (UXGA)                      1600 x 1200
High Definition TV (HDTV)             1920 x 1080
Quad XGA (QXGA)                       2048 x 1536

Table 1. Resolution Name and Pixel Resolution (Ref. [1], [3])

The resolution names and other details are specified by the Video Electronics Standards Association (VESA). The monitor refresh rates available are 60 Hz, 75 Hz and 85 Hz; a higher refresh rate is always better. Now we will calculate the amount of data the digital interface has to carry from the computer to the display device.

Data carried = number of horizontal pixels x number of vertical pixels x refresh rate x blanking factor

A monitor with SXGA resolution and an 85 Hz refresh rate generates about 55 megapixels (Mp) of data per second for one colour, or roughly 155 Mp per second for three colours. At 10 bits per pixel (yes, 10 bits) this amounts to a whopping 1.6 Gbps. Beyond about two Gbps it is not possible to send data through twisted pairs; this limit is called the "copper barrier". A QXGA monitor at an 85 Hz refresh rate generates about 350 Mp per second, and the required bit rate exceeds the copper barrier, so two links are used instead of one. Coaxial cable and waveguide are other transmission media that can handle a two Gbps data rate with ease, but they are expensive compared to twisted pair. The DVI 1.0 specification does not mention the term "twisted pair" explicitly; the term is used in the reference material [1].
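
The arithmetic above can be reproduced in a few lines. The blanking factor of 1.4 and the 10 bits per sample are approximations consistent with the discussion above; the exact blanking overhead depends on the VESA timing in use.

```python
# Reproducing the bandwidth arithmetic above. The blanking factor (~1.4) and the
# 10 bits per sample are approximations; exact values depend on the VESA timing.
def dvi_data_rate(h_pixels, v_pixels, refresh_hz, blanking=1.4, bits_per_sample=10):
    pixel_rate = h_pixels * v_pixels * refresh_hz * blanking  # pixels/s incl. blanking
    gbps = pixel_rate * bits_per_sample / 1e9                 # bit rate per data channel
    return pixel_rate, gbps

for name, (h, v) in {"SXGA": (1280, 1024), "QXGA": (2048, 1536)}.items():
    pixel_rate, gbps = dvi_data_rate(h, v, 85)
    links = 1 if gbps <= 2.0 else 2          # ~2 Gbps "copper barrier" per link
    print(f"{name} @ 85 Hz: {pixel_rate / 1e6:.0f} Mp/s, {gbps:.2f} Gbps "
          f"-> {links} link(s)")
```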




                   
In April 1999 the DVI 1.0 specification was released by the Digital Display Working Group (DDWG), whose promoters are Intel, Compaq, Fujitsu, HP, IBM, NEC and Silicon Image. The Transition Minimized Differential Signaling (TMDS) technology used in DVI was developed by Silicon Image Inc., and the connectors were developed by Molex Inc. The first digital standard, "Plug and Display", was developed by the Video Electronics Standards Association (VESA). A few years later, the "Digital Flat Panel" interface was developed by a consortium of Compaq Corporation and its associates. For various reasons neither standard was very successful. DVI is backward compatible with analog VGA, Plug and Display and Digital Flat Panel.


DVI has two types of connectors, namely DVI-Integrated (DVI-I) and DVI-Digital (DVI-D). The 29-pin DVI-I connector allots five pins to analog video and 24 pins to two digital video links. The analog video pins are red, green, blue, horizontal sync and analog ground. The digital video pins can be grouped into data channels and control signals: there are six pairs of data channels to carry the two links' R', G', B' colour signals (the difference between RGB and R'G'B' will be discussed in an upcoming blog post), and the remaining 12 pins carry the clock signals and other functions. The 24-pin DVI-D connector is designed to carry digital video only.

TMDS is the electrical technology used to transmit data from the computer to the display device. Twisted pairs are susceptible to noise and electromagnetic interference (EMI). In differential signaling, ones and zeros are encoded not in absolute terms but in relative terms, which makes them immune to noise. A sharp spike on one twisted pair can create EMI in an adjacent twisted pair, so it becomes necessary to reduce steep transitions in the signals. This is done at the cost of a 25 per cent increase in bit representation (10 bits instead of 8). Before TMDS, Low Voltage Differential Signaling (LVDS) was used in digital interface standards; LVDS was developed by National Semiconductor to transfer data from a notebook computer's CPU to its LCD panel, and is optimised for short cable lengths rather than long ones.
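
As an illustration of how TMDS trims steep transitions, here is a simplified Python sketch of the first (transition-minimising) stage of the 8-bit encoding; the second stage, which maintains DC balance and supplies the tenth bit, is omitted for brevity.

```python
# First (transition-minimising) stage of the TMDS 8-bit encoding. The second,
# DC-balancing stage that produces the tenth bit is omitted here.

def transitions(bits):
    """Count 0->1 / 1->0 transitions in a bit sequence."""
    return sum(a != b for a, b in zip(bits, bits[1:]))

def tmds_stage1(byte):
    """Return the 9-bit intermediate word as a list of bits, LSB first."""
    d = [(byte >> i) & 1 for i in range(8)]            # D[0]..D[7]
    use_xnor = sum(d) > 4 or (sum(d) == 4 and d[0] == 0)
    q = [d[0]]
    for i in range(1, 8):
        bit = q[i - 1] ^ d[i]
        q.append(1 - bit if use_xnor else bit)         # XNOR or XOR chaining
    q.append(0 if use_xnor else 1)                     # flag telling the decoder which rule was used
    return q

for value in (0b10101010, 0b01010101, 0b11111111):
    raw = [(value >> i) & 1 for i in range(8)]
    encoded = tmds_stage1(value)
    print(f"{value:08b}: {transitions(raw)} transitions raw -> "
          f"{transitions(encoded[:8])} after encoding")
```

For transition-heavy inputs such as the alternating pattern 10101010, the number of transitions drops from seven to three, which is exactly the property that keeps EMI on the cable low.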

References
  1. "White Paper on DVI", InFocus Corporation, available online at http://electro.gringo.cz/DVI-WhitePaper.pdf
  2. DVI 1.0 specification, DDWG, available online at http://www.ddwg.org/lib/dvi_10.pdf
  3. Keith Jack, "Video Demystified: A Handbook for the Digital Engineer", 5th edition, Newnes, 2007. ISBN 978-0-7506-8395-1 (Indian reprint 978-81-909-3566-1).
  4. Pin diagrams of DVI, available online at http://www.te.com/catalog/Presentations/dvipresentation.pdf