Saturday, 31 December 2016

VLC media player

VLC media player is freely downloadable software [1]. The download weighs in at about 28 MB, and the player has crossed two billion downloads. It runs on 14 platforms, including Windows, Linux, Android, Apple TV and ChromeOS. Unsurprisingly, VLC is regarded as the world's most preferred media player [2]. It is difficult to believe that the VLC developer team consists of just 12 active members, including five core developers [3]. The team thrives on the donations it collects. Building such successful software with a small band of developers on a shoestring budget is fascinating. Adequate documentation about the VLC project and the people behind it is hard to find on the Internet; this post tries to stitch together the scattered pieces of information about the VLC player.

VLC stands for VideoLAN Client. In the initial stages of development there was also VLS, the VideoLAN Server, which was later merged into VLC; streaming video over a network is therefore still technically possible with the VLC player alone. VLC is developed by a non-profit organization named VideoLAN. As the name suggests, the project was conceived to send video over a Local Area Network (LAN): students of École Centrale Paris (a famous engineering school in France) developed the software to deliver satellite video across the computer network on their campus [4]. The project, started in 1996, was code-named "Network 2000". The birth date of VLC, however, is not 1996 but 1 February 2001, the day the École Centrale authorities released the project as open source under the GNU GPL.

The general public uses "free software" and "open software" interchangeably. In popular usage, free software means the price of the software is zero, while open software permits the user to access the source code and fine-tune the software as they wish. Since open software invariably comes with a zero price tag, the two terms have come to be treated as synonyms.
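Curious readers can try the streaming ability mentioned above from a script. The sketch below is a rough illustration, not an official recipe: it asks a locally installed VLC to stream a file over the LAN via VLC's stream-output (--sout) chain. The file name, destination address and port are placeholder assumptions.

    import subprocess

    # Hedged sketch: stream sample.mp4 over the LAN as an RTP/TS stream.
    # Assumes VLC is installed and on the PATH; address and port are placeholders.
    subprocess.run([
        "vlc", "sample.mp4",
        "-I", "dummy",                                     # headless, no GUI
        "--sout", "#rtp{dst=192.168.1.255,port=5004,mux=ts}",
        "vlc://quit",                                      # exit after playback
    ])

On another machine in the same LAN, opening rtp://@:5004 in VLC should play the stream back.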
 
Photo of Mr. Jean-Baptiste Kempf. Image courtesy: ocsmag.com

Wednesday, 30 November 2016

Rise of Anti-science


I recently stumbled upon a promotional website for a book titled “Unscientific America” [1]. I read the material presented there and on websites that reviewed the book [3-4]. In this post, the views of the authors and the reviewers are summarized.

During World War II (1939-1945), scientists were regarded as superstars. Their contribution was vital to the war machinery and in turn paved the way to victory. Over the following decades, disenchantment with science proliferated among the public. Scientists, unaware of this shift, went on with business as usual [3].

The National Academy of Sciences (NAS) is a non-profit society whose members are distinguished scientists. It was created by an act of Congress (the American parliament), with the charter signed by Abraham Lincoln in 1863, and it functions as an advisory body to America on matters of science and technology. In 2005, the NAS produced a report titled “Rising Above the Gathering Storm”, which acknowledged the erosion of America’s supremacy in science and technology and offered a dramatic increase in the number of science students as a solution to stem that erosion.

The theme of ‘Unscientific America’ revolves around the decline of the scientific temper and its consequences. The authors lament that the urgent problems of the current century need science-based solutions, yet science illiteracy, and even dislike of science, keeps increasing as the decades pass. For example, out of five hours of news reporting, only one minute is devoted to science [1]. Over the decades, the space newspapers devote to weekly science sections has shrunk by two-thirds. Around 46 percent of Americans deny evolution. These examples are just the tip of the anti-science iceberg.

Monday, 31 October 2016

Benefits of Engaging with Public

Science helps us understand the world around us. It can also be applied to specific problems, like extending the shelf life of vegetables and meat or making a smokeless stove. But science cannot solve problems that have a significant human dimension; malnutrition, corruption and poverty are classic examples.

The application of science has produced many things useful to human beings. One such thing is vaccination. With vaccination, dreaded diseases like diphtheria, smallpox and polio can be prevented, thereby improving people's living conditions. But there is resistance to vaccination due to misinformation (fraudulent research publications [1], public apathy towards government [2]). A research paper that appeared in the Lancet journal linked vaccination with autism; subsequent research proved that vaccination has no bearing on autism [1]. Even so, the misinformation reached the public far more effectively than the later findings did.

Nowadays, affluent people prefer 'organic' farm produce to produce grown with chemical fertilizers. The problem is not chemical fertilizers themselves but their overuse by ill-trained farmers. Yet the public blames science for spoiling agricultural land and food. So it becomes imperative for scientists to communicate with common people and debunk the misinformation they hold.

Friday, 30 September 2016

Science of Blur

A professionally shot picture will convey a message or concept. Professionals shoot in such a way that selected portions of the photo are unsharp (blurred). Viewers naturally skip over the blurred portions and keep their attention on the sharp portions. Thus, photographers decide what viewers should see and how the photo is interpreted.

In this post, we will discuss the science behind the formation of a blurred image. We studied in our school days that a convex lens bends light rays. Parallel (collimated) rays entering a convex lens are bent so that they converge at a point, called the 'focal point'. The distance between the centre of the lens and the focal point is called the 'focal length'.
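These definitions combine into the thin-lens equation, 1/f = 1/u + 1/v, where u is the object distance and v is the image distance. The short Python sketch below works through the idea with an assumed 50 mm lens at f/2 and made-up distances; a point that is not at the focused distance spreads into a blur circle on the sensor.

    def image_distance(f, d_object):
        """Thin-lens equation: 1/f = 1/d_object + 1/d_image (all in mm)."""
        return 1.0 / (1.0 / f - 1.0 / d_object)

    def blur_diameter(f, n_stop, d_focus, d_point):
        """Blur-circle diameter on the sensor for a point at d_point
        when the lens is focused at d_focus."""
        aperture = f / n_stop                          # entrance-pupil diameter
        v_focus = image_distance(f, d_focus)           # sensor sits here
        v_point = image_distance(f, d_point)           # where the point focuses
        return aperture * abs(v_point - v_focus) / v_point

    f, n = 50.0, 2.0                                   # assumed 50 mm lens at f/2
    print(round(image_distance(f, 2000.0), 1))         # ~51.3 mm for a subject 2 m away
    print(round(blur_diameter(f, n, 2000.0, 10000.0), 2))  # ~0.51 mm blur for a 10 m background

A background point 10 m away smears to about half a millimetre on the sensor, which is why backgrounds melt away at wide apertures.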

Figure 1. The science of blur, explained through a scenario.



Wednesday, 31 August 2016

Role of Lens in Light Capture

A camera is a device that captures light. The shutter, lens and film are its essential components; in a digital camera, the film is replaced by a CCD or CMOS sensor array. The shutter decides the quantum of light falling on the film. The top layer of film is made up of light-sensitive granules; when the film is exposed, the granules react in proportion to the intensity of the light. The exposed film is chemically developed into a negative, and the negatives are used to create photos. The sensitivity of film is described by an ISO number; the higher the number, the more sensitive the film is to light.

The lens plays a dominant role in determining the cost and quality of a camera; the lion's share of a camera's cost is attributed to the lens. Professional photographers carry lenses ranging from 35 mm, 50 mm and 85 mm up to 200 mm. Yet one can take photos without a lens. The statement may evoke surprise, but it is a fact. A camera without a lens is called a 'pinhole camera', and its construction is simple. Take a rectangular metal or wooden box. Place a film on one side of the box and close it. Make a very small hole on the other side; this hole is technically called the aperture. Through the aperture, light from the outside scene (say, a tree) falls on the film and an inverted image is formed. Once the film has had sufficient exposure, the aperture has to be closed with a shutter; otherwise the film will be washed out by overexposure.
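How small should that hole be? Too large, and overlapping ray cones blur the image geometrically; too small, and diffraction takes over. The sketch below applies a widely quoted rule of thumb attributed to Lord Rayleigh, d ≈ 1.9·sqrt(λf); the box depth and wavelength used are assumed values, not figures from this post.

    import math

    def pinhole_diameter(focal_length_mm, wavelength_nm=550):
        """Near-optimal pinhole diameter (mm) via Rayleigh's rule of thumb."""
        wavelength_mm = wavelength_nm * 1e-6           # nm -> mm
        return 1.9 * math.sqrt(wavelength_mm * focal_length_mm)

    # Assumed 100 mm deep box, green light (550 nm):
    print(round(pinhole_diameter(100.0), 3))           # ~0.446 mm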

Sunday, 31 July 2016

Photography: Confluence of Physics and Art

The word 'photography' is made of two Greek roots, 'photo' and 'graph'. Photo means light, graph means drawing, and together they mean "drawing with light". Capturing light with a camera is a technology, but conveying a message via captured light is an art. Photographers (often from non-science backgrounds) use the camera to take stunning visuals. The science behind photographic technology is called optics. Optical scientists and photographers are two sides of the same coin, and each explains their own side beautifully. A unified view, if provided, would help the general public grasp both sides easily.

A camera captures the light reflected from the subject (a human being, animal or object of interest). The light may have originated from the Sun or from an artificial source. The reflected light is channelled through an opaque diaphragm with an aperture (a small hole). In a present-day digital camera, a lens (actually a multi-lens arrangement) makes the light fall onto a two-dimensional array of light sensors, built with either CMOS or CCD technology. The light sensors convert the light into electrical signals, which are fed to analog-to-digital converters to produce digital output. This raw digital data is compressed and stored as a JPEG image file. The JPEG file can be viewed on the screen present on the camera or transferred to a computer for further processing.
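That whole chain, from sensor counts to a JPEG file, can be mimicked in a few lines. The toy sketch below is an illustration only: real cameras also demosaic the colour filter array, white-balance and tone-map, all of which are skipped here, and the 12-bit sensor values are random stand-ins.

    import numpy as np
    from PIL import Image

    # Stand-in for 12-bit ADC output from the sensor array.
    rng = np.random.default_rng(0)
    raw = rng.integers(0, 4096, size=(480, 640, 3), dtype=np.uint16)

    # Scale to [0, 1], apply a simple display gamma, quantize to 8 bits.
    linear = raw / 4095.0
    pixels = (np.clip(linear ** (1 / 2.2), 0, 1) * 255).astype(np.uint8)

    # The lossy compression step: store the frame as a JPEG file.
    Image.fromarray(pixels).save("capture.jpg", quality=90)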

Monday, 27 June 2016

Photography - Visual SMS

A professionally shot picture will convey a message or concept. Let that maxim be put to the test, with the picture in Figure 1 as the test image. From it one can infer the following: "a man", "a sitting posture" and "a park, woods or forest". The shirt, trousers and body shape suggest the subject is a man. The subject's head is turned to the right and the upper body is blurred (i.e. not clearly visible). Sitting on the ground with stretched legs, bare feet and the upper body supported by the hands evokes a "relaxed" emotion in the viewer's mind. Looking at the greenery behind the man, one could take the location to be a forest or wood, but common sense tells us a commoner will not sit so casually in a forest. So the location must be a park.

Figure 1. Relaxed man in the park. Image courtesy: Wikipedia

      

Monday, 23 May 2016

Article Review - Kiss your TV Goodbye

In the past seven decades, the functionality of the television has not changed much. It simply pushes video content from the studio to the viewers, who are expected to be passive recipients of that content and be content with 'surfing the channels'.

In the 1950s, a television set had a 12" to 14" Cathode Ray Tube (CRT) screen, a Radio Frequency (RF) tuner, an RF-to-baseband converter, video processing electronics, display electronics and a loudspeaker. Television signals are composite in nature, made up of video, audio and synchronization signals. The video and synchronization signals are AM modulated and the audio signal is FM modulated; the two are multiplexed and transmitted in the Very High Frequency (VHF) and Ultra High Frequency (UHF) bands. The TV receives the signal via an antenna and feeds it to the RF tuner, which selects one television channel out of 12 (until the 1980s only 12 channels were available). The selected signal is amplified and demodulated, yielding the baseband (composite) signal, from which the audio and video signals are extracted, or demultiplexed. The video signal is in YCrCb format and has to be converted into red, green and blue (primary colour) signals; this is done by the video processing electronics. The CRT screen operates at very high voltage, and the video signal has to be painted onto the screen in a proper order (the technical term is scanning). The ordering information is carried in the synchronization signals, and projecting the signal onto the screen is the task of the display electronics.
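The YCrCb-to-RGB conversion mentioned above is a fixed matrix operation. The sketch below uses the full-range BT.601 coefficients (the JPEG/JFIF convention); analog TV standards use related but not identical matrices, so treat the numbers as illustrative.

    import numpy as np

    def ycbcr_to_rgb(y, cb, cr):
        """Convert 8-bit Y, Cb, Cr values (scalars or arrays) to R, G, B."""
        y, cb, cr = (np.asarray(c, dtype=float) for c in (y, cb, cr))
        r = y + 1.402 * (cr - 128.0)
        g = y - 0.344136 * (cb - 128.0) - 0.714136 * (cr - 128.0)
        b = y + 1.772 * (cb - 128.0)
        return tuple(np.clip(c, 0, 255).astype(np.uint8) for c in (r, g, b))

    r, g, b = ycbcr_to_rgb(128, 128, 128)
    print(int(r), int(g), int(b))                      # 128 128 128: mid grey stays grey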

The article under review [1], "Kiss your TV goodbye", deals with the transformation the television has undergone over a span of 70 years. Screen sizes have grown from 12" to 80". The thickness of the television has come down drastically, and nowadays a TV looks like a wall-mounted painting. The pressure to reduce thickness has forced manufacturers to take out many components that used to sit inside the TV: first the RF tuner and loudspeaker, and finally the video processing board. Thus, in today's TV, the display electronics and the screen (made of either LCD or LED panels) are the only components that remain inside.

Saturday, 30 April 2016

High Efficiency Video Coding

High Efficiency Video Coding (HEVC) is the latest buzzword in video compression. HEVC aims to attain the same visual quality as H.264/MPEG-4 Advanced Video Coding (AVC) with just 50 % of the bit rate required by that current standard. In exchange, a two- to ten-fold increase in computational complexity is permitted. HEVC relies on VLSI and parallel computing to achieve this bit-rate-reduction feat. There is no paradigm shift in the way video compression is carried out; instead, HEVC employs computationally intensive but compression-efficient algorithms. Reference [1] suits enthusiasts and [2] suits researchers.
In an MPEG video encoder, video is divided into frames, slices, macroblocks and blocks; a block is the smallest unit and contains 8x8 pixels. MPEG exploits temporal redundancy (literally time-based, frame-to-frame) as well as spatial redundancy (as JPEG does), which is why a very high compression ratio is achieved. To improve compression, MPEG uses the concept of a motion vector, and it employs the Discrete Cosine Transform (DCT). The redundancy-removed video is then encoded with Huffman coding (a popular variable-length coding technique) to form the bit stream. Importantly, MPEG specifies only the format of the compressed video; how the encoder is implemented, in software or hardware, is left to the manufacturers.
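As a quick illustration of the spatial-redundancy step (the pixel values here are made-up stand-ins, not drawn from the referenced papers), the sketch below runs an 8x8 DCT on a smooth block. Most of the energy lands in the top-left, low-frequency coefficients, which is what makes coarse quantization of the rest so effective.

    import numpy as np
    from scipy.fft import dctn

    block = np.arange(64, dtype=float).reshape(8, 8)   # smooth 8x8 ramp of pixels
    coeffs = dctn(block - 128.0, norm="ortho")         # JPEG-style level shift + 2-D DCT

    print(np.round(coeffs[:2, :2], 1))                 # large low-frequency terms
    print(np.round(coeffs[6:, 6:], 1))                 # high-frequency terms ~ zero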

Thursday, 31 March 2016

Market and technology trends in video broadcasting

Long ago, television was used by commoners while networked computers were used by programming nerds. The proliferation of digital technology enabled the Internet (computer networks) to offer digital video content and become a competitor to TV.

Traditionally, the cost of television content production and broadcasting has been borne by advertisers, viewers and public taxation. The advent of the Internet created a new path for delivering video content to viewers and raised their aspiration to an 'anytime, anywhere' TV experience. The TV set is no longer the de facto display medium; other devices like the tablet, personal computer and mobile phone have become main contenders. Advertisers aspire to take their product or service to a targeted audience rather than to the general public at large, and online video delivery platforms are best suited for such targeted advertisement.

Digital technology helped increase the number of TV channels, picture resolution and market penetration. It also helped the Internet grow by leaps and bounds and become a contender against TV. So digital technology can be seen as a double agent.

Several questions arise in our minds: can smartphones with 4G technology make a dent in traditional TV revenues? What is the future of pay TV in the Netflix era? Answers to these questions can be found in the referenced article [1].

Reference

1. Current Market and Technology Trends in the Broadcasting Sector, published by IHS Technology, May 2015 (PDF, 81 pages, 724 KB).
    Weblink: http://www.wipo.int/edocs/mdocs/copyright/en/sccr_30/sccr_30_5.pdf

Monday, 29 February 2016

Back to square one


The first post of this blog appeared in December 2011. The first few posts contained links to interesting websites pertaining to Digital Image Processing (DIP); the blog was created to mimic an online bookmark facility, and that bookmark collection was then thrown open to the public. This move was in consonance with the tradition of science. The growth of science can be attributed to two things: objective reporting backed by sufficient data, and timely dissemination of knowledge (the inferences of experiments) to the scientific community. Because of this, hypotheses were tested thoroughly; those which passed the tests became theories and the others were summarily rejected. This practice improved the credibility of science and made it a 'brand' per se.

In the early days of the blog, finding an interesting website and posting it was highly exciting. Over time, the excitement after each upload started to dwindle. To regain it, articles on the latest topics were written and uploaded; the article titled 'Super Hi-Vision', written in July 2012, was the first article on this blog. It attracted more visitors and gave me a sense of accomplishment. Writing an article required undivided attention and a few hours of time, so weekends were sacrificed for writing and uploading. As usual, with the passage of time, the excitement level started dwindling again. From 2014 onwards, I pushed myself to write thousand-word abstract articles.

Then I stumbled upon the interrelationship between technology and society. The wide gap that present pedagogy has created between them made me genuinely sick, and I felt it was my duty to propagate a 'holistic approach' rather than the 'parochial approach' followed by the present educational system. A few Socio-Technical (ST) articles were written and uploaded, and I regained that sense of 'high' after writing each one. But the amount of effort and time required was very high, so the number of posts per year was reduced from the prevailing 12 to six. Writing ST articles brought a few problems. First, weekends were lost. Second, one cannot set a hard deadline for getting an idea; it has to emerge on its own, and forcing oneself is counterproductive and depressing, a phenomenon common among artists and writers. Third, the number of visitors to ST articles is relatively low, and writing ST articles under a blog titled 'A to Z Digital Image Processing' does not augur well. Problems aside, if I continued to write them I would end up a masochist (a person who likes to inflict pain on himself or herself).

Thus it is decided to go back to the roots, i.e. to find interesting websites and post them on the blog. You may call it a U-turn; I will call it a course correction. Yes, views differ.

U-turn or course correction? Image courtesy: www.the-future-of-commerce.com