Why YUV instead of RGB
The most important component in YUV capture is the luminance, or Y, component. For this reason, Y should have the highest sampling rate of the components, or at least the same rate as the others.
Sampling every component of a YUV image at the same rate is generally a waste of bandwidth, because the chrominance can be adequately sampled at half the rate of the luminance without the eye being able to notice the difference. Capturing to YUV (YCbCr), besides being more efficient, is also best when the images are to be rendered directly by a standard video display. With the correct format, you can render directly to the video card without transforming the image with MMX instructions or other specially optimized code.
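The bandwidth saving described above can be sketched numerically. This is an illustrative calculation, not from the original text; the resolution is a hypothetical example and 8 bits per sample is assumed:

```python
# Illustrative sketch: bandwidth of equal-rate sampling vs. half-rate chroma.
# Assumes 8 bits per sample; the resolution is a hypothetical example.
width, height = 1920, 1080
pixels = width * height

# Equal sampling: Y, U, V each sampled once per pixel (24 bits per pixel).
full_rate_bytes = pixels * 3

# Chroma at half the luma rate: one U and one V per pair of pixels.
subsampled_bytes = pixels + 2 * (pixels // 2)  # Y samples + shared U/V

print(full_rate_bytes / subsampled_bytes)  # prints 1.5
```

Halving only the chroma rate thus cuts the data per frame by a third while leaving the luminance, which the eye is most sensitive to, untouched.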
The byte ordering for YUY2 is Y0 U0 Y1 V0: each pair of horizontally adjacent pixels shares one U and one V sample. NV12 is a quasi-planar format and has all the Y components first in memory, followed by an array of packed UV components; the Y plane is followed by an interleaved UV plane holding one U and one V sample for each 2x2 block of pixels. The U and V components code the color of the signal, while the Y component codes its intensity.
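As a sketch of the two layouts, the helper functions below compute where a pixel's samples live in a YUY2 and an NV12 buffer. The function names and the use of Python are my own illustration, not part of the original text:

```python
# Sketch of sample addressing in YUY2 (packed) and NV12 (quasi-planar)
# buffers. Offsets are in bytes from the start of the image buffer.

def yuy2_offsets(x, y, width):
    """Byte offsets of the Y, U, V samples for pixel (x, y) in YUY2.
    Byte order per pixel pair is Y0 U0 Y1 V0; two pixels share U and V."""
    pair = (y * width + x) // 2 * 4        # start of the 4-byte pixel pair
    y_off = pair + (0 if x % 2 == 0 else 2)
    return y_off, pair + 1, pair + 3       # Y, U, V

def nv12_offsets(x, y, width, height):
    """Byte offsets of the Y, U, V samples for pixel (x, y) in NV12.
    All Y samples come first; interleaved U/V pairs follow, one pair
    per 2x2 block of pixels."""
    y_off = y * width + x
    uv_base = width * height               # UV plane follows the Y plane
    uv_off = uv_base + (y // 2) * width + (x // 2) * 2
    return y_off, uv_off, uv_off + 1       # Y, U, V
```

For example, in a 640-pixel-wide YUY2 image, pixels (0, 0) and (1, 0) read their U and V from the same two bytes of the first quadlet, which is exactly the sharing the text describes.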
To reduce the average amount of data transmitted per pixel from 24 bits to 16 bits, it is common to include the color information for only every other pixel. This type of sampling is known as YUV 4:2:2 sampling. Since the human eye is much more sensitive to intensity than it is to color, this reduction is almost invisible even though it represents a real loss of information. YUV digital output from a Basler color camera has a depth that alternates between 24 bits per pixel and 8 bits per pixel, for an average bit depth of 16 bits per pixel.
As shown in the table below, when a Basler camera is set for YUV output, each quadlet (four bytes) of image data transmitted by the camera contains data for two pixels. K represents the number of a pixel within a frame, and each row in the table represents one quadlet of data transmitted by the camera.
For every other pixel, both the intensity information and the color information are transmitted and this results in a 24 bit depth for those pixels. For the remaining pixels, only the intensity information is preserved and this results in an 8 bit depth for them. As you can see, the average depth per pixel is 16 bits.
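The per-quadlet structure can be sketched as code. The U0 Y0 V0 Y1 byte ordering assumed below is one common 4:2:2 arrangement (UYVY); check your camera's documentation for the actual ordering. The conversion uses standard BT.601 full-range coefficients:

```python
# Hedged sketch: unpacking one 4:2:2 quadlet into two RGB pixels.
# Assumes UYVY byte order (U0 Y0 V0 Y1) and BT.601 full-range conversion;
# verify the byte order against your camera's documentation.

def clamp(v):
    return max(0, min(255, int(round(v))))

def uyvy_quadlet_to_rgb(quadlet):
    u, y0, v, y1 = quadlet
    cb, cr = u - 128, v - 128       # chroma samples are offset by 128
    pixels = []
    for y in (y0, y1):              # both pixels reuse the same U and V
        r = clamp(y + 1.402 * cr)
        g = clamp(y - 0.344136 * cb - 0.714136 * cr)
        b = clamp(y + 1.772 * cb)
        pixels.append((r, g, b))
    return pixels

# With U = V = 128 (zero chroma), both pixels come out as pure grays,
# differing only in the intensity carried by their individual Y samples.
print(uyvy_quadlet_to_rgb((128, 100, 128, 200)))
```

Each quadlet carries two full Y samples but only one U and one V, which is why two pixels average out to 32 / 2 = 16 bits each.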