Method for processing video pictures for false contours and dithering noise compensation
The present invention relates to a method and an apparatus for processing video pictures especially for dynamic false contour effect and dithering noise compensation.
The plasma display technology now makes it possible to achieve flat colour panels of large size, with limited depth and without any viewing angle constraints. The size of the displays may be much larger than classical CRT picture tubes would ever have allowed.
A Plasma Display Panel (or PDP) utilizes a matrix array of discharge cells, which can only be "on" or "off". Therefore, unlike a Cathode Ray Tube display or a Liquid Crystal Display, in which gray levels are expressed by analog control of the light emission, a PDP controls gray levels by Pulse Width Modulation of each cell. This time modulation is integrated by the eye over a period corresponding to the eye's time response. The more often a cell is switched on in a given time frame, the higher its luminance or brightness. Let us assume that we want 8-bit luminance resolution, i.e. 256 levels per color. In that case, each level can be represented by a combination of 8 bits with the following weights:
1 - 2 - 4 - 8 - 16 - 32 - 64 - 128
To realize such a coding, the frame period can be divided into 8 lighting sub-periods, called subfields, each corresponding to a bit and a brightness level. The number of light pulses for the bit "2" is twice that for the bit "1"; the number of light pulses for the bit "4" is twice that for the bit "2", and so on. With these 8 sub-periods, it is possible, through combinations, to build the 256 gray levels. The eye of the observer integrates these sub-periods over a frame period to catch the impression of the right gray level. Figure 1 shows such a frame with eight subfields.
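The binary-weighted subfield coding described above can be sketched as follows. This is a minimal illustration, not code from the patent; the function names are hypothetical.

```python
# Sketch of the subfield decomposition described above: an 8-bit video
# level is expressed as a combination of binary-weighted subfields,
# each subfield being either "on" (1) or "off" (0) for the whole frame.

WEIGHTS = [1, 2, 4, 8, 16, 32, 64, 128]  # one weight per subfield

def encode_subfields(level):
    """Return the on/off state of each subfield for a 0..255 video level."""
    assert 0 <= level <= 255
    return [(level >> i) & 1 for i in range(len(WEIGHTS))]

def decode_subfields(bits):
    """Recover the video level the eye integrates over the frame period."""
    return sum(w for w, b in zip(WEIGHTS, bits) if b)

# Example: level 37 lights the subfields of weight 1, 4 and 32.
bits = encode_subfields(37)
print(bits)                    # [1, 0, 1, 0, 0, 1, 0, 0]
print(decode_subfields(bits))  # 37
```

Every level from 0 to 255 has exactly one such decomposition, which is why 8 subfields suffice for 256 gray levels.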
The light emission pattern introduces new categories of image-quality degradation corresponding to disturbances of gray levels and colors. These are defined as the "dynamic false contour effect", since they correspond to disturbances of gray levels and colors in the form of an appearance of colored edges in the picture when an observation point on the PDP screen moves. Such failures lead to the impression of strong contours appearing on homogeneous areas. The degradation is enhanced when the picture has a smooth gradation, for example like skin, and when the light-emission period exceeds several milliseconds.
When an observation point on the PDP screen moves, the eye will follow this movement. Consequently, it no longer integrates the same cell over a frame (static integration) but integrates information coming from different cells located on the movement trajectory, mixing all these light pulses together, which leads to faulty signal information.
Basically, the false contour effect occurs when there is a transition from one level to another with a totally different code. The European patent application EP 1 256 924 proposes a code with n subfields which makes it possible to achieve p gray levels, typically p = 256, and to select m gray levels, with m < p, among the 2^n possible subfield arrangements when working at the encoding level, or among the p gray levels when working at the video level, so that close levels have close subfield arrangements. The problem is to define what "close codes" means; different definitions can be taken, but most of them lead to the same results. In any case, it is important to keep a maximum of levels in order to keep a good video quality. The minimum number of chosen levels should be equal to twice the number of subfields.
As seen previously, the human eye integrates the light emitted by Pulse Width Modulation. So, if we consider all video levels encoded with a basic code, the temporal center of gravity of the light generation for a subfield code does not grow with the video level. This is illustrated by figure 2. The temporal center of gravity CG2 of the subfield code corresponding to video level 2 is greater than the temporal center of gravity CG3 of the subfield code corresponding to video level 3, even though 3 is more luminous than 2. This discontinuity in the light emission pattern (growing levels do not have growing gravity centers) introduces false contour. The center of gravity of a level x is defined as the center of gravity of the subfields that are 'on', weighted by their sustain weights: CG(x) = (Σi δi(x) · sfWi · SfCGi) / (Σi δi(x) · sfWi), where sfWi is the sustain weight of subfield i, SfCGi is its temporal center of gravity, and δi(x) equals 1 if subfield i is 'on' in the code of level x and 0 otherwise.
The centers of gravity SfCGi of the first seven subfields of the frame of figure 1 are shown in figure 3.
So, with this definition, the temporal centers of gravity of the 256 video levels for an 11-subfield code with the weights 1 2 3 5 8 12 18 27 41 58 80 can be represented as shown in figure 4. As can be seen, this curve is not monotonic and presents a lot of jumps. These jumps correspond to false contours. The idea of the patent application EP 1 256 924 is to suppress these jumps by selecting only some levels, for which the gravity center grows smoothly. This can be done by tracing a monotone curve without jumps on the previous graph and selecting the nearest points. Such a monotone curve is shown in figure 5. It is not possible to select levels with growing gravity centers among the low levels because the number of possible levels there is small; if only levels with growing gravity centers were selected, there would not be enough levels to keep a good video quality in the black levels, to which the human eye is very sensitive. In addition, the false contour in dark areas is negligible. At the high levels, there is a decrease of the gravity centers, so there will also be a decrease in the chosen levels; but this is not important since the human eye is not sensitive at the high levels. In these areas, the eye is not capable of distinguishing different levels, and the false contour level is negligible with regard to the video level (the eye is only sensitive to relative amplitude if we consider the Weber-Fechner law). For these reasons, the monotonicity of the curve is necessary only for the video levels between 10% and 80% of the maximal video level.
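The non-monotonic gravity-center curve can be reproduced with a short sketch. The subfield timing here is an assumption (each subfield's sustain period is taken proportional to its weight, placed back to back, ignoring addressing time), and the greedy decomposition is only one of the possible subfield arrangements for each level.

```python
# Compute the temporal centres of gravity of the 11-subfield code
# 1 2 3 5 8 12 18 27 41 58 80, using the CG definition given above:
# CG = sum(W_i * SfCG_i) over 'on' subfields / sum(W_i over 'on' subfields).

WEIGHTS = [1, 2, 3, 5, 8, 12, 18, 27, 41, 58, 80]

# Assumed temporal centre SfCG_i of each subfield: sustain periods
# proportional to the weights, laid out back to back.
sf_cg, t = [], 0.0
for w in WEIGHTS:
    sf_cg.append(t + w / 2.0)
    t += w

def encode(level):
    """Greedy subfield arrangement for a 0..255 level (one of many)."""
    bits = [0] * len(WEIGHTS)
    for i in reversed(range(len(WEIGHTS))):
        if WEIGHTS[i] <= level:
            bits[i] = 1
            level -= WEIGHTS[i]
    return bits

def centre_of_gravity(bits):
    on = [(w, c) for w, c, b in zip(WEIGHTS, sf_cg, bits) if b]
    return sum(w * c for w, c in on) / sum(w for w, _ in on)

cg = [centre_of_gravity(encode(l)) for l in range(1, 256)]
# The curve jumps: level 3 (subfield '3' alone) has a later centre of
# gravity than the brighter level 4 ('3' + '1') - a false contour source.
print(centre_of_gravity(encode(3)), centre_of_gravity(encode(4)))  # 4.5 3.5
```

Plotting `cg` against the level reproduces the jumpy curve of figure 4; the GCC selection keeps only levels on which this curve grows smoothly.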
In this case, for this example, 40 levels (m = 40) are selected among the 256 possible ones. These 40 levels make it possible to keep a good video quality (grayscale portrayal). This is the selection that can be made when working at the video level, since only a few levels, typically 256, are available. But when this selection is made at the encoding level, there are 2^n different subfield arrangements, and so more levels can be selected, as seen in figure 6, where each point corresponds to a subfield arrangement (different subfield arrangements can give the same video level).
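Working at the video level with a reduced selection amounts to mapping each incoming level to the nearest selected one, the residual error being hidden by dithering. A minimal sketch, with an illustrative placeholder list instead of the real 40-level selection:

```python
# Quantize an 8-bit video level to the nearest of the m selected GCC
# levels. The SELECTED list below is illustrative only; the real
# selection follows the monotone gravity-centre curve of figure 5.

SELECTED = [0, 1, 2, 4, 7, 11, 17, 25, 37, 53, 75, 105, 147, 205, 255]

def quantize_to_gcc(level):
    """Return the nearest selected level (dithering hides the error)."""
    return min(SELECTED, key=lambda s: abs(s - level))

print(quantize_to_gcc(50))   # -> 53
print(quantize_to_gcc(100))  # -> 105
```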
The main idea of this Gravity Center Coding, called GCC, is to select a certain number of code words in order to form a good compromise between the suppression of the false contour effect (very few code words) and the suppression of dithering noise (more code words meaning less dithering noise).
The problem is that a picture behaves differently depending on its content. Indeed, in areas having smooth gradation, like skin, it is important to have as many code words as possible to reduce the dithering noise. Furthermore, those areas are mainly based on a continuous gradation of neighboring levels, which fits the general concept of GCC very well, as shown in figure 7. In this figure, the video levels of a skin area are presented. It is easy to see that all levels are close together and can easily be found on the GCC curve presented. Figure 8 shows the video level range for Red, Blue and Green required to reproduce the smooth skin gradation on the woman's forehead. In this example, the GCC is based on 40 code words. As can be seen, all levels from one color component are very close together, which suits the GCC concept very well. In that case, we will have almost no false contour effect in those areas, with a very good dithering noise behavior, provided there are enough code words, for example 40.
However, let us now analyze the situation on the border between the forehead and the hair, as presented in figure 9. In that case, we have two smooth areas (skin and hair) with a strong transition in between. The case of the two smooth areas is similar to the situation presented before: with GCC, we have almost no false contour effect combined with a good dithering noise behavior, since 40 code words are used. The behavior at the transition is quite different. Indeed, the levels required to generate the transition are strongly dispersed between the skin level and the hair level. In other words, the levels are no longer evolving smoothly but are jumping quite heavily, as shown in figure 10 for the case of the red component.
In figure 10, we can see a jump in the red component from 86 to 53. The levels in between are not used. In that case, the main idea of the GCC, which is to limit the change in the gravity center of the light, cannot be applied directly. Indeed, the levels are too far from each other, and the gravity center concept is no longer helpful; in the area of the transition, the false contour becomes perceptible again. Moreover, it should be added that the dithering noise is also less perceptible in strong gradient areas, which makes it possible to use in those regions fewer GCC code words, better adapted to false contour reduction.
It is an object of the present invention to disclose a method and a device for processing video pictures that reduce the false contour effect and the dithering noise whatever the content of the pictures.
This is achieved by the solution claimed in independent claims 1 and 10.
The main idea of this invention is to divide the picture to be displayed into areas of at least two types, for example low video gradient areas and high video gradient areas, to allocate a different set of GCC code words to each type of area, the set allocated to a type of area being dedicated to reducing false contours and dithering noise in areas of this type, and to encode the video levels of each area of the picture with the allocated set of GCC code words.
In this manner, the reduction of false contour effects and dithering noise in the picture is optimized area by area.
Exemplary embodiments of the invention are illustrated in the drawings and described in more detail in the following description.
In the figures:
According to the invention, we use a plurality of sets of GCC code words for coding the picture. A specific set of GCC code words is allocated to each type of area of the picture. For example, a first set is allocated to smooth areas with low video gradient of the picture and a second set is allocated to high video gradient areas of the picture. The values and the number of subfield code words in the sets are chosen to reduce false contours and dithering noise in the corresponding areas.
The first set of GCC code words comprises q different code words corresponding to q different video levels, and the second set comprises fewer code words, for example r code words with r < q. This second set is preferably a direct subset of the first set, in order to make any change from one coding to the other invisible.
The first set is chosen to be a good compromise between dithering noise reduction and false contours reduction. The second set, which is a subset of the first set, is chosen to be more robust against false contours.
Two sets are presented below for the example based on a frame with 11 sub-fields: 1 2 3 5 8 12 18 27 41 58 80
The first set, used for low video level gradient areas, comprises for example the following 38 code words. Their center-of-gravity values are indicated on the right-hand side of the following table.
The temporal centers of gravity of these code words are shown on the figure 11.
The second set, used for high video level gradient areas, comprises the 11 following code words.
The temporal centers of gravity of these code words are shown on the figure 12.
These 11 code words belong to the first set: we have kept 11 of the 38 code words of the first set, which corresponds to a standard GCC approach. Moreover, these 11 code words are based on the same skeleton in terms of bit structure, in order to introduce absolutely no false contour.
Let us comment on this selection:
Levels 1 and 4 will introduce no false contour between them since the code of 1 (1 0 0 0 0 0 0 0 0 0 0) is included in the code of 4 (1 0 1 0 0 0 0 0 0 0 0). The same is true for levels 1 and 9 and for levels 1 and 17, since the codes of both 9 and 17 start with 1 0. It is also true for levels 4 and 9 and for levels 4 and 17, since the codes of both 9 and 17 start with 1 0 1, which represents the level 4. In fact, if we compare all these levels 1, 4, 9 and 17, we can observe that they introduce absolutely no false contour between them. Indeed, if a level M is bigger than a level N, the bits of the code of level N, up to its last bit set to 1, are included as they are in the code of level M.
This rule also holds for levels 37 to 163. The first time this rule is contravened is between the group of levels 1 to 17 and the group of levels 37 to 163. Indeed, in the first group the second bit is 0, whereas it is 1 in the second group. Then, in the case of a transition from 17 to 37, a false contour effect with a value of 2 (corresponding to the second bit) will appear. This is negligible compared to the amplitude of 37.
The same holds for the transition between the second group (37 to 163) and 242, where the first bit is different, and between 242 and 255, where the first and sixth bits are different.
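The skeleton rule described above can be checked mechanically. The sketch below hard-codes the code words for levels 1, 4, 9 and 17 under the 11-subfield weights 1 2 3 5 8 12 18 27 41 58 80 given in the text (the greedy decomposition used here is an assumption), and verifies the prefix-inclusion property:

```python
# Check of the "skeleton" rule: for code words of levels N < M, all bits
# of N's code, up to N's last bit set to 1, must reappear unchanged in M.

CODES = {
    1:  [1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],  # weight 1
    4:  [1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0],  # 1 + 3
    9:  [1, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0],  # 1 + 3 + 5
    17: [1, 0, 1, 1, 1, 0, 0, 0, 0, 0, 0],  # 1 + 3 + 5 + 8
}

def no_false_contour(code_n, code_m):
    """True if code_n's prefix up to its last '1' is contained in code_m."""
    last_one = max(i for i, b in enumerate(code_n) if b)
    return code_n[:last_one + 1] == code_m[:last_one + 1]

levels = sorted(CODES)
for n in levels:
    for m in levels:
        if n < m:
            assert no_false_contour(CODES[n], CODES[m])
print("levels 1, 4, 9 and 17 share the same skeleton")
```

A pair such as 17 and 37 fails this check on the second bit, which is exactly the residual false contour of amplitude 2 discussed above.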
The two sets presented above are two extreme cases: one for the ideal case of a smooth area and one for a very strong transition with a high video gradient. But it is possible to define more than two subsets of GCC coding depending on the gradient level of the picture to be displayed, as shown in figure 13. In this example, 6 different subsets of GCC code words are defined, ranging from the standard approach (level 1) for low gradients up to a strongly reduced code word set for very high contrast (level 6). Each time the gradient level is increased, the number of GCC code words is decreased; in this example, it goes from 40 (level 1) down to 11 (level 6).
Besides the definition of the set and subsets of GCC code words, the main idea of the concept is to analyze the video gradient around the current pixel in order to be able to select the appropriate encoding approach.
Below, you can find standard filter approaches in order to extract the current video gradient values:
The three filters presented above are only examples of gradient extraction. The result of such a gradient extraction is shown in figure 14. Black areas represent regions with a low gradient. In those regions, a standard GCC approach can be used, e.g. the set of 38 code words in our example. On the other hand, luminous areas correspond to regions where reduced GCC code word sets should be used. A subset of code words is associated with each video gradient range. In our example, we have defined 6 non-overlapping video gradient ranges.
Many other types of filters can be used. The main idea in our concept is only to extract the value of the local gradient in order to decide which set of code words should be used for encoding the video level of the pixel.
Horizontal gradients are more critical since there is much more horizontal movement than vertical movement in video sequences. Therefore, it is useful to use gradient extraction filters that have been extended in the horizontal direction. Such filters are still quite cheap in terms of on-chip requirements, since only the vertical coefficients are expensive (they require line memories). An example of such an extended filter is presented below:
In that case, we will define gradient limits for each coding set so that, if the gradient of the current pixel is inside a certain range, the appropriate encoding set will be used.
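The gradient-driven selection can be sketched as follows. The simple difference filter, the six gradient ranges and the intermediate set sizes are illustrative assumptions, not values from the patent; only the endpoints (40 and 11 code words) come from the example above.

```python
# Map a local video gradient to one of six GCC code-word sets, from the
# standard set (level 1, 40 words) down to the strongly reduced set
# (level 6, 11 words). Ranges and intermediate sizes are illustrative.

SET_SIZES = [40, 34, 28, 22, 16, 11]     # code words per gradient level
RANGES = [8, 16, 32, 64, 128, 255]       # upper bound of each range

def local_gradient(img, x, y):
    """Max of horizontal and vertical absolute differences around (x, y)."""
    h = abs(int(img[y][x + 1]) - int(img[y][x - 1]))
    v = abs(int(img[y + 1][x]) - int(img[y - 1][x]))
    return max(h, v)

def select_set(gradient):
    """Return the index of the GCC code-word set for this gradient value."""
    for i, upper in enumerate(RANGES):
        if gradient <= upper:
            return i
    return len(RANGES) - 1

# A strong skin-to-hair style transition selects the reduced set.
img = [[10, 10, 10, 200],
       [10, 10, 10, 200],
       [10, 10, 10, 200]]
g = local_gradient(img, 2, 1)
print(g, SET_SIZES[select_set(g)])  # 190 11
```

In a real implementation the gradient would of course be computed per color component with the extended horizontal filters discussed above, but the selection logic stays the same.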
A device implementing the invention is presented in figure 15. The input R, G, B picture is forwarded to a gamma block 1 performing a quadratic function of the form