Sunday, 30 September 2012

Multicore Processors and Image Processing - I


         Quad-core desktops and laptops have become the order of the day. After long years of academic captivity, multi-core processors have come into the limelight. The study of multi-core and multi-processor systems comes under the field of High Performance Computing.
          When two or more processor cores are fabricated in a single package, the result is a multi-core processor. Multi-processor systems (several separate processors) and multi-processing (a single processor running multiple applications concurrently) are different from multi-core processors. Duo (two-core) and Quad (four-core) parts are the multi-core processors most commonly used by the general public. With prevailing fabrication technology, the race to push raw clock speed beyond 3 GHz is nearing its end. Further increases in computing power are possible only by deploying parallel computing concepts. This is why all the manufacturers are introducing multi-core processors.

           Image processing algorithms exhibit a high degree of parallelism. Most algorithms contain loops that iterate through a pixel, a row or an image region. Take a loop that has to iterate 200 times: a single processor performs all 200 iterations, whereas on a quad-core each core performs only 50. Obviously the quad-core will finish the task much faster than a single processor. To achieve this speed-up, programs have to be slightly modified and multi-core aware compilers are required.
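The loop-splitting idea above can be sketched in Python. This is only an illustrative sketch: `invert_row` is a stand-in workload invented for the example, and real multi-core image code would rely on compiler support such as OpenMP, discussed in [1].

```python
from multiprocessing import Pool

def split_iterations(total, cores):
    # Divide `total` loop iterations as evenly as possible among `cores`.
    base, extra = divmod(total, cores)
    chunks, start = [], 0
    for c in range(cores):
        size = base + (1 if c < extra else 0)
        chunks.append(range(start, start + size))
        start += size
    return chunks

def invert_row(row):
    # A stand-in for real per-row work: invert 8-bit pixel values.
    return [255 - p for p in row]

if __name__ == "__main__":
    # 200 loop iterations shared by a quad-core: 50 per core.
    print([len(c) for c in split_iterations(200, 4)])  # [50, 50, 50, 50]
    image = [[i % 256] * 8 for i in range(200)]  # 200 rows of dummy pixels
    with Pool(4) as pool:                        # four worker processes
        result = pool.map(invert_row, image)
```

The same split happens implicitly when an OpenMP compiler parallelises a `for` loop; here it is written out explicitly to make the 200-versus-50 arithmetic visible.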


Image size      Time to execute (ms)
                Single-core    Multi-core
256x256              18             8
512x512              65            21
1024x1024           260            75

       The above table gives the time needed to twist the Lenna image. The 1024x1024 Lenna image takes 260 ms on a single core and 75 ms on a multi-core processor to perform the twist operation. In both cases execution time grows roughly in proportion to the number of pixels, but the multi-core processor finishes consistently about 3.5 times faster [1].

        Algorithms exhibit fine-grain, medium-grain or coarse-grain parallelism. Smoothing, sharpening, filtering and convolution operate on the entire image and are classified as fine-grain. Medium-grain parallelism is exhibited by the Hough transform and motion analysis, which operate on part of an image. Position estimation and object recognition come under the coarse-grain class, and the parallelism exhibited is very low [2]. Algorithms can also be split into memory-bound and CPU-bound. In CPU-bound algorithms, adding cores helps achieve near-linear speed-up, subject to Amdahl's law [3].
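Amdahl's law caps the achievable speed-up: if a fraction p of the work can run in parallel on n cores, the best possible speed-up is 1 / ((1 - p) + p/n). A small worked sketch (the 95% figure is illustrative, not taken from [3]):

```python
def amdahl_speedup(p, n):
    # Maximum speed-up when fraction p of the work runs in parallel on n cores.
    return 1.0 / ((1.0 - p) + p / n)

# A fully parallel loop on 4 cores approaches the ideal 4x ...
print(round(amdahl_speedup(1.0, 4), 2))   # 4.0
# ... but even a 5% serial portion drags 4 cores down to about 3.5x.
print(round(amdahl_speedup(0.95, 4), 2))  # 3.48
```

Note that ~3.5x is also about the single-core to multi-core ratio seen in the twist-operation table above.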

       Multi-core processors developed by Intel and AMD are very popular. The following table content is taken from [3].
Multicore Processor       No. of cores    Clock speed
Intel Xeon E5649               12           2.53 GHz
AMD Opteron 6220               16           3.00 GHz
AMD Phenom X4 (9550)            4           2.20 GHz

        The Intel Xeon was launched in 2002. Intel introduced simultaneous multi-threading capability and named it Hyper-Threading. It has also launched the Core 2 Duo T7250, a low-voltage mobile processor running at 2 GHz. Sony's PlayStation 3 has a very powerful multi-core processor called the Cell Broadband Engine, developed jointly by Sony, Toshiba and IBM. It has a 64-bit PowerPC processor connected by a circular bus to eight RISC (Reduced Instruction Set Computing) co-processors based on a 128-bit SIMD (Single Instruction Multiple Data) architecture. SIMD is well suited to exploiting fine-grain parallelism.


In part two of this series, GPUs and the OpenMP Application Program Interface (API) will be discussed.


Source

[1] Greg Slabaugh, Richard Boyes and Xiaoyun Yang, "Multicore Image Processing with OpenMP", IEEE Signal Processing Magazine, March 2010, pp. 134-138. Also downloadable from http://www.soi.city.ac.uk/~sbbh653/publications/OpenMP_SPM.pdf (PDF, 1160 KB)
[2] Trupti Patil, "Evaluation of Multi-core Architecture for Image Processing", MS Thesis, Graduate School of Clemson University, 2009. www.ces.clemson.edu/~stb/students/trupti_thesis.pdf (PDF, 1156 KB)

[3] OpenMP in GraphicsMagick,  http://www.graphicsmagick.org/OpenMP.html


Note
  • Please read [1]; it is insightful and its language is simpler than that of standard technical articles.
  • The warped Lenna image and the above table were adapted from [1], not copied. GIMP software was used to create the warped effect on the Lenna image.

Saturday, 22 September 2012

RoboRetina


        The RoboRetina™ image sensor is capable of producing satisfactory images even under non-uniform illumination. Existing cameras require uniform illumination to produce satisfactory results. Photographers vary the camera's shutter speed to capture brightly illuminated or poorly lit scenes: the amount of light falling on the image sensor (in olden days, film) is proportional to the time the shutter stays open, so sun-lit scenes need only a short shutter opening. In natural light, bright and shadow (poorly lit) regions occur simultaneously, yet a camera can be made to capture either the bright region or the dark region, not both. Our eyes adjust to natural, non-uniform light with ease, a feat we barely notice until it is pointed out.



A surveillance camera system that monitors an airport will fail to detect persons lurking in the shadows because of non-uniform illumination. Intrigue Technologies Incorporated, headquartered in Pittsburgh, Pennsylvania, USA, has come out with the RoboRetina™ image sensor, which tries to mimic the human eye. They have developed a prototype with a resolution of 320x240, built with a standard CMOS fabrication process, that is capable of seeing things under shadow. The 320x240 resolution is sufficient for a robot mounted with RoboRetina to navigate in cloudy weather. The brightness-adaptation operation is carried out without traditional number crunching, which may surprise us all, conditioned as we are to think only of digital processing. An image sensor is an array of photoreceptors; in the prototype, each photoreceptor is paired with an analog circuit that is stimulated by the incoming light and controls the functioning of the photoreceptor.

Silicon-based integrated chips that try to mimic the working of the eye are called neuromorphic vision sensors. The term 'neuromorphic engineering' was coined by Carver Mead in the mid-1980s, when he was working at the California Institute of Technology, Pasadena, USA. Analog-circuit-based vision chips were developed at the University of Pennsylvania in Philadelphia, USA and at Johns Hopkins University in Baltimore, USA. The analog circuit in these chips varies the sensitivity of each detector depending on the light falling on it. RoboRetina extends this concept: the light falling on the surrounding detectors also plays a vital role in adjusting a photodetector's sensitivity. The success of RoboRetina depends on accurate estimation of the illumination field.
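In the digital domain, the illumination-field idea can be imitated by estimating each pixel's local average brightness and dividing it out. The following is only a crude retinex-style sketch of that principle, not Intrigue's analog circuit; the window size and epsilon are arbitrary choices:

```python
import numpy as np

def estimate_illumination(img, k=15):
    # Local mean over a k x k window: a crude illumination-field estimate.
    pad = k // 2
    padded = np.pad(img.astype(float), pad, mode="edge")
    out = np.empty(img.shape, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = padded[i:i + k, j:j + k].mean()
    return out

def compensate(img, eps=1e-6):
    # Divide out the estimated illumination; shadowed regions get boosted.
    return img / (estimate_illumination(img) + eps)

flat = np.full((8, 8), 100.0)
print(compensate(flat).round(3).max())  # uniform light -> ~1.0 everywhere
```

A sensor that divides out a poor illumination estimate produces the "halo" artefacts mentioned below, which is why accurate estimation matters.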

Around 2005, Intrigue's eye was already available as an Adobe Photoshop plug-in, aptly named 'Shadow Illuminator'. Medical X-ray images were taken as input, and the software was able to reveal unclear portions of the image. Photographers use this software to correct their photos, a process technically called 'enhancement'. Software that does not use the RoboRetina technique produces a "halo" effect at sharp discontinuities.

        The CEO of Intrigue, Mr. Vladimir Brajovic, is an alumnus of The Robotics Institute at Carnegie Mellon University, Pittsburgh, USA. RoboRetina received the Frost & Sullivan Technology Innovation Award for 2006. Frost & Sullivan [4] is a growth consulting company with more than 1000 clients all over the world, so this award is a feather in Intrigue Technologies' cap. RoboRetina is the first breakthrough since the emergence of the neuromorphic sensor concept. Let us hope it leads to an Autonomous Vision System revolution that greatly enhances the performance of automotive systems, surveillance systems and unmanned systems.


Source: 

[1] Intrigue Technologies, The Vision Sensor Company, Press Release, http://www.intriguetek.com/PR_020407.htm
[2] Robotic Vision gets Sharper by Prachi Patel Predd - IEEE Spectrum March 2005  http://spectrum.ieee.org/biomedical/imaging/robotic-vision-gets-sharper
[3] Photo Courtesy: https://www.intrigueplugins.com/moreInfo.php?pID=CBB
[4] Frost & Sullivan, http://www.frost.com

Thanks:
       I want to personally thank Mr. B. Shrinath for emailing me the 'RoboRetina' article that was published in Spectrum Online.

Wednesday, 12 September 2012

Biscuit Inspection Systems

         We all know that if a company wants to stay in business, it has to manufacture quality products. Visual inspection of products is a well-known method of checking quality. In earlier days trained human beings were used for inspection; nowadays machine vision systems are employed, and the reasons are many. A machine vision system can work day and night without a sign of fatigue, can surpass human inspection speed, and can use a wide-dynamic-range camera to differentiate even a small change in colour. With human inspection, checking every manufactured product is not cost effective: a few samples are taken from a batch, quality testing is carried out on the samples, and statistical methods are used to estimate the number of failed products from the failed samples. With online machine inspection, every product can be checked individually [1].

Consumers expect high-quality biscuits to have consistent size, shape, colour and flavour. Size and shape improve the aesthetics of a biscuit, while colour and flavour have a role in its taste. An electronic nose can be used to detect flavour; its use in the tea-processing industry has been reported in scientific papers, and articles on the employability of electronic noses in biscuit manufacturing can be found on the Internet. It is common knowledge that an over-baked biscuit will be dark brown and an under-baked one light brown; the relationship is technically called the 'baking curve'. Image processing techniques are used to find the shades of biscuit colour, and classification is carried out by artificial neural networks. This method was developed way back in 1995 by Leonard G. C. Hamey [8]. In a typical cream-sandwich biscuit, the top and bottom layers are biscuits and the middle layer is a filling such as cream or chocolate. The filling costs more than the biscuit, and over-filling means less profit for the company, so much care is taken to maintain the correct biscuit size and filling height.
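The baking-curve idea can be sketched as a simple colour classifier. The thresholds below are purely illustrative guesses, not Hamey's published values (his method [8] uses a self-organising map followed by segmentation, not fixed cut-offs):

```python
def classify_bake(mean_level):
    # mean_level: average brightness of the biscuit region,
    # on a hypothetical 0 (dark) .. 255 (light) scale.
    if mean_level < 80:
        return "over-baked"    # too dark brown
    if mean_level > 180:
        return "under-baked"   # too light brown
    return "acceptable"

print(classify_bake(130))  # acceptable
```

In a real system the decision boundaries would be learned from labelled bake images rather than hard-coded.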


In a typical production line, 30 rows of baked biscuits (120 biscuits per row) pass on a conveyor every minute, and all of them have to be inspected. This amounts to checking 3600 biscuits per minute. Length, width and thickness are measured with an accuracy of ±0.17 mm. In addition, checks for cracks and splits are carried out, and any biscuit that fails to qualify is discarded.

In a typical biscuit inspection system, three cameras grab images of the moving biscuits, which are illuminated by special fluorescent lights, and the grabbed images are processed to obtain size and shape. A fourth camera, mounted at a 45-degree angle, captures the laser light falling on the biscuits; when multiple laser-line images are combined, they give the 3D shape of the biscuit [2, 4]. For sample inspection pictures, go to [7] and download the PDF file. The cameras are required to operate at an ambient temperature of 45 degrees centigrade. The captured images are transferred via GigE to the inspection room, 100 m away from the baking system, where special software displays them on a computer screen with the necessary controls. The images are stored for four years.
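The 45-degree laser arrangement amounts to simple triangulation: a biscuit of height h shifts the laser line sideways in the camera image, and with the camera at 45 degrees the lateral shift equals the height. A sketch, where the millimetres-per-pixel scale is an assumed calibration value rather than a figure from the cited systems:

```python
import math

def height_mm(pixel_shift, mm_per_pixel=0.1, camera_angle_deg=45.0):
    # Laser-line displacement -> height. At 45 degrees tan() = 1, so the
    # height equals the lateral shift converted to millimetres.
    return pixel_shift * mm_per_pixel * math.tan(math.radians(camera_angle_deg))

print(round(height_mm(50), 2))  # 5.0  (a 50-pixel shift ~ a 5 mm thick biscuit)
```

Repeating this along every point of the laser line, row after row as the conveyor moves, builds up the full 3D profile mentioned above.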

List of vision system manufacturers
o Machine Vision Technology in the United Kingdom [2]
o Hamey Vision Private Limited in Australia [4]
o Q-Bake from EyePro Systems [6]

In India, way back in 2002, CEERI (Central Electronics Engineering Research Institute), present in the CSIR Madras complex, developed a biscuit inspection system with a budget of Rs. 20.7 lakh (1 lakh = 100,000, and Rs. 50 ≈ 1 US$). It was funded by the Department of Science and Technology, Govt. of India, and partnered with Britannia Industries, Chennai, for the requirements [5].

Source
3. Biscuit Bake Colour Inspection System - Food Colour Inspection, http://www.hameyvision.com.au/biscuit-colour-inspection.html
4. Simac Masic,  http://www.simac.com 
5. CMC News, July – December, 2002, http://www.csirmadrascomplex.gov.in/jd02.pdf 
6. Q-Bake, Inspection Machine for Baked Goods, http://www.eyeprosystem.com/q-bake/index.html 
8. Pre-processing Colour Images with a Self-Organising Map:Baking Curve Identification and Bake Image Segmentation,  http://www.hameyvision.com.au/hamey-icpr98-som-baking.pdf

Courtesy
I want to thank Dr. A. Gopal, Senior Principal Scientist, CEERI, CSIR Madras Complex, Chennai, who gave a lecture on biscuit inspection systems at the national-level workshop on "Embedded Processors and Applications" held at SRM University, Chennai, on 31-Aug-2012. He inspired me to write this article.