Motion Compensated De-interlacing and Noise Reduction

A video processing system for de-interlacing a video signal comprises a motion estimation block, a refinement motion estimation block, and a de-interlacer. The motion estimation block generates integer motion vectors for the video signal. The refinement motion estimation block generates fractional motion vectors as a function of the generated integer motion vectors and select frames of the video signal. The de-interlacer generates an output as a function of the generated fractional motion vectors and the selected frames of the video signal.
We claim:
This application claims priority from a provisional patent application entitled “Motion Compensated De-Interlacing and Noise Reduction” filed on Oct. 3, 2013 and having an application Ser. No. 61/886,595. Said application is incorporated herein by reference.
This disclosure generally relates to video processing, and, in particular, to methods, systems, and apparatuses for motion compensated de-interlacing and noise reduction of interlaced video frames.
De-interlacing is the process of converting interlaced video frames, such as common analog television signals, into non-interlaced video frames for display. An interlaced video frame consists of two sub-fields taken in sequence, where the fields are sequentially scanned at the odd and even lines of the image sensor. The advantage of interlaced frames is that they require less transmission bandwidth than transmitting the entire image frame, which is a critical factor when transmitting video data.
Many current display systems, e.g., liquid crystal displays (“LCDs”), plasma screens, and other high definition (“HD”) displays, generate displays using a progressive scan format. In the progressive scan format, the lines of each frame are sequentially displayed. Thus, sub-fields of interlaced video data need to be combined into single frames by a de-interlacing process so that the video data can be displayed in the progressive scan format. Several de-interlacing techniques exist for converting interlaced video into progressive scan video. However, each of the techniques has significant drawbacks.
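The relationship between fields and progressive frames described above can be illustrated by a short sketch. This example is not part of the disclosed embodiments; the function names are chosen for illustration only, and frames are modeled as lists of pixel rows.

```python
def split_fields(frame):
    """Split a progressive frame into its even and odd fields."""
    even = frame[0::2]  # lines 0, 2, 4, ...
    odd = frame[1::2]   # lines 1, 3, 5, ...
    return even, odd

def weave(even_field, odd_field):
    """Interleave an even and an odd field back into one progressive frame."""
    frame = []
    for even_row, odd_row in zip(even_field, odd_field):
        frame.append(even_row)
        frame.append(odd_row)
    return frame
```

Weaving recovers the original frame exactly only when the two fields were captured at the same instant; for moving content, the fields are taken at different times, which is why the motion compensated de-interlacing described below is needed.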
For instance, consider a conventional motion compensated de-interlacing and noise reduction system.
An interlaced frame of the video signal F(n) and a noise reduced frame of the video signal F′(n−2) can be inputted to a motion estimation (“ME”) block 10, a motion estimation block 12, and a three dimensional (“3D”) motion compensation noise reduction (“MCNR”) block 14. The video signal F(n) alternates between even frames and odd frames for consecutive values of n. For example, when n=0, 2, 4, 6, 8, 10, etc., the video frames F(n) are all even frames, meaning each even frame contains pixel information for the even numbered lines of the video. When n=1, 3, 5, 7, 9, etc., the video frames F(n) are all odd frames, meaning each odd frame contains pixel information for the odd numbered lines.
The motion estimation block 10 receives the frame data F(n) and F′(n−2), consecutive odd numbered frames or even numbered frames, and generates motion vectors MV(n−1) for the frame n−1. Next, the motion vectors MV(n−1), interlaced frame F(n), and interlaced frame F′(n−2) are inputted to the noise reduction block 14. The noise reduction block 14 generates a noise reduced frame F′(n), which is stored in memory 24 for later retrieval. The memory 24 can be a buffer, cache or other memory location for the noise reduced frames F′(n) to be stored. In fact, the previous noise reduced frame F′(n−2) is retrieved from memory to be inputted to the ME block 10, the ME block 12, and the noise reduction block 14.
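The patent does not specify a particular ME algorithm, but a common choice is full-search block matching, which minimizes the sum of absolute differences (“SAD”) between a block in the current frame and candidate blocks in the reference frame. The following is a minimal sketch under that assumption; function names and the tiny search window are illustrative only.

```python
def sad(block_a, block_b):
    """Sum of absolute differences between two equally sized pixel blocks."""
    return sum(abs(a - b)
               for row_a, row_b in zip(block_a, block_b)
               for a, b in zip(row_a, row_b))

def block(frame, y, x, size):
    """Extract a size-by-size block with its top-left corner at (y, x)."""
    return [row[x:x + size] for row in frame[y:y + size]]

def best_motion_vector(cur, ref, y, x, size=2, search=1):
    """Full-search integer ME: return the (dy, dx) inside the search
    window that minimizes the SAD against the reference frame."""
    target = block(cur, y, x, size)
    best, best_cost = (0, 0), float("inf")
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            ry, rx = y + dy, x + dx
            # Skip candidates that fall outside the reference frame.
            if ry < 0 or rx < 0 or ry + size > len(ref) or rx + size > len(ref[0]):
                continue
            cost = sad(target, block(ref, ry, rx, size))
            if cost < best_cost:
                best_cost, best = cost, (dy, dx)
    return best
```

A production ME engine would evaluate many block positions per frame over a much larger search window, which is exactly the line-buffer and computation cost the passage below identifies as the drawback of duplicating ME blocks.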
The previous noise reduced frame F′(n−2) and the current noise reduced frame F′(n) are inputted to the ME block 12 to generate motion vectors MV(n−1). The generated motion vectors MV(n−1) can be further post processed using a MV post processing block 16, and then stored in a memory 26 for motion vectors MV( ). The memory 26 can be a buffer, cache, or other memory for storing motion vectors.
A de-interlacer 18 can retrieve motion vectors from the memory 26, noise reduced frames from the memory 24, and an edge interpolated frame from an edge interpolator 22 to blend the various inputs for the de-interlacer output F′(n−k, x, y), where k is greater than or equal to one and is the frame number delay relative to n, and x and y are coordinate positions of pixels to be displayed. The variable k can typically be 1, 2, or 3 based on the system requirements, where the smaller the k, the smaller the latency will be.
The problem with such a system is that the ME blocks 10 and 12 require an enormous amount of resources for the ME calculations to be performed. In this example, two dedicated ME blocks are needed to generate an acceptable de-interlaced frame, where each ME block requires a large number of line buffers. Indeed, motion estimation is the most computationally demanding process in image processing applications. Accordingly, the motion estimation engine is crucial to the performance of video compression and video processing. Therefore, there exists a desire for new methods and systems for video processing of interlaced video data that reduce the complexity and resources used to generate a de-interlaced frame of the video data.
Briefly, the disclosure relates to a video processing system for de-interlacing a video signal, comprising: a motion estimation block, wherein the motion estimation block generates integer motion vectors for the video signal; a refinement motion estimation block, wherein the refinement motion estimation block generates fractional motion vectors as a function of the generated integer motion vectors and select frames of the video signal; and a de-interlacer, wherein the de-interlacer generates an output as a function of the generated fractional motion vectors and the selected frames of the video signal.
The foregoing and other aspects of the disclosure can be better understood from the following detailed description of the embodiments when taken in conjunction with the accompanying drawings.
In the following detailed description of the embodiments, reference is made to the accompanying drawings, which form a part hereof, and in which is shown by way of illustration of specific embodiments.
An interlaced frame F(n) of the video signal and a noise reduced frame F′(n−2) of the video signal can be inputted to a motion estimation block 40 and a noise reduction block 44. The motion estimation block 40 receives F(n) and F′(n−2), two consecutive odd frames or even frames, and uses them to generate motion vectors MV(n−1) for the frame n−1 by using ME algorithms. The generated motion vectors MV(n−1) can be further post processed using a MV post processing block 42 and then stored in a memory 30 for later retrieval. The memory 30 can be a buffer, cache, or other memory for storing motion vectors.
The motion vectors MV(n−1), interlaced frame F(n), and interlaced frame F′(n−2) are inputted to the noise reduction block 44. The noise reduction block 44 can generate a noise reduced frame F′(n), which is stored in a memory 20 for later retrieval. The memory 20 can be a buffer, cache or other memory location for the noise reduced frames F′(n) to be stored. In fact, the previous noise reduced frame F′(n−2) is retrieved from memory to be inputted to the ME block 40 and the noise reduction block 44.
A motion compensated de-interlacer 46 can retrieve motion vectors MV(n−k) from the memory 30, noise reduced frames F′(n−k−1) and F′(n−k+1) from the memory 20, and an edge interpolated frame Fint(n−k) from an edge interpolator 48 to blend the various inputs for the de-interlacer output F′(n−k, x, y), where k is greater than or equal to one and is the frame number delay relative to n, and x and y are coordinate positions of pixels to be displayed. Edge interpolation can be accomplished by an edge based interpolation method, including a simple vertical interpolation.
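The simple vertical interpolation mentioned above can be sketched as follows. This example is illustrative only: it fills each missing line of a field by averaging the present lines above and below it, repeating the nearest present line at the frame borders. The `parity` argument (0 for an even field, 1 for an odd field) is a naming assumption, not terminology from the disclosure.

```python
def vertical_interpolate(field, parity):
    """Expand one field into a full frame, filling the missing lines
    by averaging the neighboring present lines."""
    height = 2 * len(field)
    frame = [None] * height
    # Place the field's rows on their original (even or odd) lines.
    for i, row in enumerate(field):
        frame[2 * i + parity] = row
    # Fill each missing line from its vertical neighbors.
    for y in range(height):
        if frame[y] is None:
            above = frame[y - 1] if y > 0 else frame[y + 1]
            below = frame[y + 1] if y + 1 < height else frame[y - 1]
            frame[y] = [(a + b) / 2 for a, b in zip(above, below)]
    return frame
```

An edge based interpolator would additionally search along diagonal directions for the best-matching pixels instead of interpolating strictly vertically, which reduces staircase artifacts on slanted edges.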
The ME engine 100 comprises an ME block 80, a refinement ME block 82, and a multiplexer 96. The ME block 80 can provide integer motion vectors. The refinement ME block 82 can provide refined motion vectors, i.e., fractional motion vectors. Assuming the delay between the input of the block diagram and the output of the de-interlacer 102 is k=2 and n=0, then a current frame of video data F(0) and a noise reduced frame F′(−2), i.e., a frame two frames away from F(0) containing the same even or odd lines of pixels at a different time, are inputted to the ME block 80. The ME block 80 calculates the motion vectors MV(0) for the current frame and outputs the values to the refinement ME block 82 and the multiplexer 96. The refinement ME block 82 also receives the frame video data for F(0), F′(−1), and F′(−2) to generate motion vectors MV(−1) for the frame previous to the current frame. The motion vectors MV(−1) for the previous frame are inputted to the multiplexer 96 and to the MV post processing block 104. The MV post processing block 104 further refines the motion vectors MV(−1), which are then stored in the memory 84 for later use.
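The disclosure does not detail how the refinement ME block derives fractional motion vectors; a common approach, shown here as an assumed sketch on a one-dimensional line of pixels for brevity, is to test half-pel offsets around the integer motion vector, sampling the reference at fractional positions by linear interpolation. All names are hypothetical.

```python
def sample(line, pos):
    """Sample a line at a possibly fractional position via linear
    interpolation between the two nearest pixels."""
    lo = int(pos)
    frac = pos - lo
    if frac == 0:
        return line[lo]
    return (1 - frac) * line[lo] + frac * line[lo + 1]

def refine_half_pel(cur, ref, x, int_mv, length=2):
    """Test the integer motion vector and its +/- 0.5 neighbors;
    return the offset with the lowest sum of absolute differences."""
    best_mv, best_cost = int_mv, float("inf")
    for mv in (int_mv - 0.5, int_mv, int_mv + 0.5):
        cost = sum(abs(cur[x + i] - sample(ref, x + mv + i))
                   for i in range(length))
        if cost < best_cost:
            best_cost, best_mv = cost, mv
    return best_mv
```

In two dimensions the same idea applies with bilinear interpolation of the reference block, and quarter-pel refinement repeats the search at half the step size around the best half-pel result.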
The multiplexer 96 can be controlled to select either the motion vectors MV(0) or MV(−1) for output to the noise reduction block 92. Due to user defined requirements and/or system requirements, integer motion vectors may be more desirable to use for the noise reduction block 92, and would thus be selected for output by the multiplexer 96.
The current frame F(0) and the noise reduced frame F′(−2) are inputted to the noise reduction block 92 for generating a noise reduced current frame F′(0). The noise reduced current frame F′(0) can be stored in the memory 94 for video data for later use as a previous frame for future calculations and video processing (e.g., de-interlacing and noise reduction). The generated motion vectors MV(−1) can be further post processed by the MV post processing block 104 and then stored to the memory 84 for later retrieval by the de-interlacer 102.
The de-interlacer 102 comprises a motion compensation (“MC”) block 86, a motion adaptive (“MA”) block 90, and a blender 88. The frames F′(−1) and F′(−3) and the motion vector MV(−2) are inputted to the MC block 86 to generate motion compensated frame data MC(−2). Consecutive frame data F′(−1), F′(−2), and F′(−3) are inputted to the MA block 90 to generate motion adaptive frame data MA(−2). The blender 88 receives the frame data MC(−2) and MA(−2) to generate a de-interlacer output di_out(−2). The blender 88 can determine whether the confidence level of the motion compensated frame MC(−2) is below a predefined threshold. If it is, then the motion adaptive frame MA(−2) is used as the de-interlacing output di_out(−2). Otherwise, the motion compensated frame MC(−2) is used as the output di_out(−2). Additionally, other methods can be used to blend the motion compensated frame and the motion adaptive frame to generate the de-interlacer output.
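The blender's decision rule can be sketched as follows. The hard switch mirrors the rule described above; the soft variant is one example of the "other methods" the disclosure alludes to, not a method the patent specifies. Function names and the confidence scale are assumptions.

```python
def blend(mc_frame, ma_frame, mc_confidence, threshold):
    """Hard decision: fall back to the motion adaptive frame when the
    confidence in the motion compensated frame is below the threshold."""
    if mc_confidence < threshold:
        return ma_frame
    return mc_frame

def soft_blend(mc_pixel, ma_pixel, weight):
    """One possible soft alternative: mix the two results per pixel
    by a confidence-derived weight in [0, 1]."""
    return weight * mc_pixel + (1 - weight) * ma_pixel
```

A soft per-pixel mix avoids visible switching artifacts at the boundary between regions where motion compensation succeeds and regions where it fails.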
While the disclosure has been described with reference to certain embodiments or methods, it is to be understood that the disclosure is not limited to such specific embodiments or methods. Rather, it is the inventor's contention that the disclosure be understood and construed in its broadest meaning as reflected by the following claims. Thus, these claims are to be understood as incorporating not only the apparatuses, methods, and systems described herein, but all those other and further alterations and modifications as would be apparent to those of ordinary skill in the art.