A digital Wiener filtering method is proposed which applies motion compensation by successive approximations of increasing resolution to determine displacement vectors for two successive image frames, a vector defining a real displacement only being applied to a block of one of the frames when the mean absolute error associated with that vector is not greater than a given fraction of the mean absolute error between that block and the corresponding block of the other frame. The current frame is then filtered block-by-block, with the blocks overlapping. In this filtering, each block is a 3D volume of pixels from the current frame and blocks selected from the preceding and succeeding frames so as to correspond according to the displacement vectors. In one embodiment, the filtering comprises applying a 3D FFT to convert to a power spectrum in the frequency domain, followed by attenuation according to a Wiener filter and conversion from the frequency domain by the inverse FFT.
This invention relates to video signal processing and is concerned with removing noise from a sequence of images such as a sequence of frames comprising a motion picture.
Many papers discussing noise reduction have been presented during the last two decades or so, and many motion compensated noise reduction methods have been proposed. The types of noise that we are particularly concerned with are white Gaussian noise, the noise one would see on a television if the channel were not tuned properly, and the graininess one sees when watching an old movie (strictly speaking this graininess is due in part to film grain noise, which is not necessarily Gaussian distributed).
Suppose one has a set of images (called herein frames) of an image sequence corrupted by additive Gaussian noise. If the value of a pixel at coordinate (i,j) in frame n of the clean image sequence is I(i,j,n), then the value in the corrupted image g(i,j,n) is

g(i,j,n) = I(i,j,n) + η(i,j,n)   (1)
where η(i,j,n) is additive white Gaussian noise. By Gaussian distributed noise, one means that if one had a frame containing only noise, and then made a histogram of the pixel values, the histogram would have a Gaussian shape. By white noise one means that the noise value at a particular pixel is uncorrelated with the values at any other pixel in any other frame or in any other part of the same frame. This latter constraint is perhaps the more important one for the following methods to function optimally.
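As an illustrative sketch of this degradation model (the frame size and noise level below are arbitrary, not from the patent):

```python
import numpy as np

rng = np.random.default_rng(0)

def degrade(frame, sigma):
    """Corrupt a clean frame with additive white Gaussian noise, as in
    equation (1): g(i,j,n) = I(i,j,n) + eta(i,j,n), eta ~ N(0, sigma^2)."""
    return frame + rng.normal(0.0, sigma, size=frame.shape)

clean = np.full((64, 64), 128.0)   # flat grey test frame
noisy = degrade(clean, sigma=10.0)
```

Because the noise is zero mean, the average brightness of the corrupted frame stays close to that of the clean frame while its per-pixel spread grows to roughly sigma.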
One simple way to reduce the noise in the frames is just to average the frames as they arrive. Therefore a recursive averager can be used as set out below in equation (2), where În(i,j) represents the value of the output pixel, which is supposed to be an estimate of the clean image, În-1(i,j) is the previous output image pixel and gn(i,j) the current noisy image pixel. The output frame at time k is therefore the average of all the previous k frames. If the frame contained a stationary scene this would be fine, since the noise would be averaged out. However this is not normally the case, and moving objects are blurred by this operation.
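Equation (2) is not reproduced here, but a recursive averager of this kind can be sketched with the equivalent running-mean update Î_n = Î_{n-1} + (g_n - Î_{n-1})/n; this fragment is illustrative only:

```python
import numpy as np

def recursive_average(frames):
    """Running average of the frames seen so far: the output at time k is
    the mean of the first k frames, updated recursively per frame."""
    est = None
    for n, g in enumerate(frames, start=1):
        est = g.astype(float) if est is None else est + (g - est) / n
        yield est

# Three flat test frames; the k-th output is the mean of the first k.
frames = [np.full((4, 4), v, dtype=float) for v in (10.0, 20.0, 30.0)]
outputs = list(recursive_average(frames))
```

On a stationary scene this averages the noise out, but, as noted above, any moving object would be smeared across the running mean.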
A motion compensated averager, such as is disclosed in the article by J. Boyce, 'Noise Reduction of Image Sequences Using Adaptive Motion Compensated Frame Averaging' in IEEE ICASSP, Volume 3, pages 461-464, 1992, works much better, as expected. The averaging operation is then directed along motion trajectories and so does not blur motion. However this operation is a purely temporal one and so is sensitive to errors in motion estimation. Furthermore, greater noise attenuation can be obtained by using the spatial information in each frame. The article entitled 'Motion-Adaptive Weighted Averaging for Temporal Filtering of Noisy Image Sequences', by M. Ozkan et al, SPIE Image Processing Algorithms and Techniques III, pages 201-212, February 1992, and that entitled 'Motion Compensated Enhancement of Noisy Image Sequences', IEEE ICASSP, Vol. 1, pages 2121-2124, 1990, are two papers which introduce spatio-temporal noise reduction techniques that achieve better results than the motion compensated frame averager.
An alternative noise reduction method is disclosed by Ozkan et al in their article entitled 'Efficient Multiframe Wiener Restoration of Blurred and Noisy Image Sequences', IEEE Transactions on Image Processing, Vol. 1, No. 4, pages 453-476, Oct. 1992. This method obtains the Fourier transform of each frame of the sequence to be restored to obtain a sequence of 2D Fourier frequency frames which are processed to reduce noise.
According to a first aspect of the present invention there is provided a method of removing noise from a current frame f of an image sequence having a plurality of frames, the method comprising applying a 3-dimensional Wiener filtering operation involving 3-dimensional correlations to a data set corresponding to or being produced from the values of a plurality of pixels in the current frame f and the values of a corresponding plurality of pixels in a succeeding frame f+1 and/or a preceding frame f-1 of the image sequence, whereby the values resulting from the filtering operation are each a function of said data set.
By 'frames' we mean to include individual images of a sequence, which may be, for example, frames of a cinematographic film or frames or fields of a television picture.
Said data set may correspond on a one to one basis with the values of pixels in the current frame and the values of pixels in a succeeding and/or preceding frame in which case the filtering operation may be of the finite impulse response (FIR) type involving 3D correlations of the pixel values in its processing.
Preferably however, the filtering operation is carried out in the frequency domain and may be, or may approximate, an infinite impulse response (IIR) filtering operation, said data set being obtained by Fourier transforming said values of pixels in the current frame and said values of pixels in the succeeding and/or preceding frame. Taking the power spectrum of 3D FFT operations implies 3D autocorrelations.
Thus according to a second aspect of the invention there is provided a method of filtering noise from a video image sequence, wherein the images are subject to a Wiener filtering operation in a 3D Fourier transform domain.
In one embodiment implemented as a digital filter, the space/time image sequence is transformed into the 3D frequency domain and the various frequency components are then attenuated by the filter to produce a modified 3D Fourier frequency expression of the signal which represents a noise reduced film sequence. A further step involves the inverse 3D Fourier transform which transforms the 3D Fourier transform expression into the space/time restored image sequence.
The filtering operation in this case may be applied so as to attenuate the power spectral density of frequency components within the data set, with the level of attenuation being a function of the power spectral density itself. Frequency components having a spectral power density below a threshold level βPηη, where β is a constant greater than 1 and Pηη is the power spectral density of a noise signal associated with each frequency component, may be attenuated by a non-negative factor which reduces as Pgg reduces, e.g. a factor which is a function of β.
Thus, according to a third aspect of the invention there is provided a method of removing noise from a current frame f of an image sequence having a plurality of frames, the method comprising applying a Wiener filtering operation to a data set, obtained by Fourier transforming the values of a plurality of pixels in the current frame f, to attenuate frequency components within the data set having a spectral power density below a threshold level βPηη, where β is a constant greater than 1 and Pηη is the power spectral density of a noise signal associated with each frequency component, by a factor which is a function of β and/or a non-negative factor which reduces as Pgg reduces.
Other aspects of the invention concern digital filters for implementing such methods. For example, a digital filter for removing noise from a current frame f of an image sequence having a plurality of frames may comprise:
Other aspects of the invention are exemplified by the attached claims.
For a better understanding of the invention and to show how the same may be carried into effect, reference will now be made, by way of example, to the accompanying drawings, in which:
Figure 1 is a block diagram of a system suitable for carrying out an embodiment of the invention in the form of a digital filter as represented generally by Figure 4. The system comprises an analogue to digital converter 2 for digitising an analogue input signal representing an image sequence comprising a plurality of frames, supplied by a VCR 1, a memory 3 for storing pixel location and intensity for three consecutive frames, f-1, f, and f+1, of the input image sequence in digital form, and a computer 4 for processing the currently stored frame f of the image sequence. After each cycle, the most recently processed frame f is fed back to the memory as frame f+1 for the next cycle, frame f-1 is designated frame f, and a new frame f-1 is read into the memory via the analogue to digital converter. The most recently processed frame f may be supplied in digital form to a digital memory device which retains all of the processed frames for use later or may be passed through a digital to analogue converter to an analogue storage device, e.g. a further VCR, or directly to a display.
The frames f-1, f, and f+1 are stored in the memory as 2-D arrays of intensity values, the address of the intensity values in memory corresponding to the location of the corresponding pixel in the image frame.
It will be appreciated that the input sequence could convey information additional to intensity values, for example colour information, but, in the interest of clarity, such considerations will be omitted in the following and the discussion limited to black and white signals.
The digital filter is represented in the form of its functional structure in Figure 4; the main steps implemented by this digital filter can be set out as follows:
Considering firstly step 1, motion estimation may be carried out using any one of the known methods, see for example the method disclosed in the article by J. Biemond et al, entitled 'A Pel-Recursive Wiener based displacement estimation algorithm' in Signal Processing, Vol. 13, pages 399-412, 1987. However, a preferred embodiment of the present invention makes use of the method disclosed in United Kingdom patent application No. 9316153.7. For completeness, a description of this method is repeated hereinafter.
The motion estimation algorithm assigns to each pixel or cluster of pixels (having coordinates x,y) in the current frame two displacement vectors df-1(x,y,f) and df+1(x,y,f), which define the displacement of that pixel or cluster between the current and preceding frames and between the current and succeeding frames respectively.
In order to reduce the computational complexity of this process, a multiresolution technique is used which involves generating a number of sublevels L1, L2, L3...Ln (and where L0 represents the level of the original frame) of gradually reduced resolution for each of the three frames. Figure 2 is a flow diagram showing the steps involved. A first estimate of displacement is obtained from a coarse, or low resolution, level Ln. A second improved estimate is obtained from a higher resolution level Ln-1 using the first estimate to reduce the area which must be searched to match areas of the current frame f with corresponding areas of the surrounding frames f-1 and f+1. The estimate is further improved by considering still higher resolution levels until finally a satisfactory estimate is obtained. In the embodiments now to be described n=3.
Level L1 is generated by operating on the original frame with a low pass filter, i.e. in effect passing a low pass filter mask of given window size over the original frame (which corresponds to level L0). The mask is placed over a portion of the frame to be processed and a value is then generated which is a function of the value of pixels in the area extracted by the mask window. The generated value corresponds to the pixel located in the centre of the mask window and is stored in a new array.
A suitable filtering function is a Gaussian function having the following form:
where
and p and q are the local coordinates of pixels within the mask window, the pixel at the centre of the window defining the origin (0,0) of the local coordinate system. For a filter window of 9 x 9 pixels, R = 4 and p and q are in the range -4 to +4. σ is a constant defining the width of the Gaussian function. The mask is moved pixel by pixel across the entire frame until the new array is completely filled. Level L0 is extended at its edges by a strip of pixels having intensity values equal to zero, i.e. black, so that the filter mask can be applied to the pixels around the edges. The resulting filtered frame is subsampled in both the horizontal and vertical directions by a factor of two, i.e. only every other pixel is used in each row and each column of the processed level L0. The result is a frame at level L1. Thus, if the original frame comprises an array of 256 x 256 pixels, level L1 comprises a smaller array of 128 x 128 pixels, providing an image which is slightly blurred in comparison to the image provided by level L0 and which has in effect a lower resolution. The process of filtering and subsampling is then carried out on level L1 to produce a level L2 having 64 x 64 pixels and finally on level L2 to produce a level L3 having 32 x 32 pixels.
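The level generation step can be sketched as below. Since the Gaussian equation itself is elided above, the exact form and normalisation used here are assumptions; the zero padding and 2:1 subsampling follow the description:

```python
import numpy as np

def gaussian_mask(R=4, sigma=2.0):
    """Gaussian low-pass mask over local coordinates p, q in [-R, R];
    the unit-sum normalisation is an assumption, not from the patent."""
    p, q = np.meshgrid(np.arange(-R, R + 1), np.arange(-R, R + 1), indexing="ij")
    h = np.exp(-(p**2 + q**2) / (2.0 * sigma**2))
    return h / h.sum()

def next_level(level, R=4, sigma=2.0):
    """Filter with the mask (frame extended at its edges with zeros, as
    described) then subsample by two in both directions."""
    h = gaussian_mask(R, sigma)
    padded = np.pad(level.astype(float), R, mode="constant", constant_values=0.0)
    out = np.empty_like(level, dtype=float)
    for i in range(level.shape[0]):
        for j in range(level.shape[1]):
            out[i, j] = np.sum(padded[i:i + 2*R + 1, j:j + 2*R + 1] * h)
    return out[::2, ::2]

L0 = np.ones((32, 32))
L1 = next_level(L0)   # half the resolution in each direction
```

Applying `next_level` repeatedly yields the L2 and L3 levels in the same way.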
In order to obtain the displacement vectors df-1(x,y,f) the levels L3 of frames f-1 and f are first subdivided into a number of corresponding A x A pixel blocks. For example, if A=2 each level L3 contains 256 blocks and for each block the following procedure is followed.
Before the final bilinear interpolation step is carried out for level L0, however, the displacement vector field propagated from level L1 is checked for so-called motion haloes. This is achieved by repeating the process of step (a) for all the blocks of level L0 without taking into account the propagated displacement vectors. If the MAE at level L0 with no motion compensation is less than the MAE at level L0 with motion compensation, then the pixels of the block are assigned a displacement vector of [0,0] and the propagated displacement vector is ignored. This prevents stationary background regions from being assigned erroneous displacement vectors.
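The halo check amounts to a simple mean-absolute-error comparison; in this sketch, `halo_check` and its argument names are hypothetical, introduced only for illustration:

```python
import numpy as np

def mean_abs_error(block_a, block_b):
    """MAE between two corresponding pixel blocks."""
    return np.mean(np.abs(block_a.astype(float) - block_b.astype(float)))

def halo_check(curr_block, prev_block_at_zero, prev_block_at_d, d):
    """Keep the propagated displacement d only if motion compensation
    actually helps: if the MAE with no compensation is smaller, fall back
    to [0,0] so stationary background is not dragged along by a
    neighbouring object's vector."""
    mae_zero = mean_abs_error(curr_block, prev_block_at_zero)
    mae_d = mean_abs_error(curr_block, prev_block_at_d)
    return d if mae_d < mae_zero else (0, 0)
```

The same comparison runs per block at level L0, once for the backward and once for the forward vector field.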
A similar process is carried out on the current and succeeding frames f and f+1 in order to obtain the forward displacement vectors df+1(x,y,f).
Before considering steps 2 to 5 of the above method, the Wiener filtering process will be generally discussed.
Wiener filtering is a well known technique for the reduction of noise in degraded signals. It has previously been used to good effect for reducing the noise in archived gramophone recordings. The filter can be applied either in the frequency domain, effectively as an IIR (infinite impulse response) filter, or in the spatio-temporal domain as an FIR (finite impulse response) filter.
Provided good motion estimates can be obtained, we have now realised that a 3D implementation of the filter may provide a substantial improvement over a 2D implementation. Therefore, the embodiment shortly to be described uses a digital 3D filtering operation on data from three motion compensated frames in a sequence. The data in a block of N x N pixels in each frame is extracted, compensated for motion as described above, to give a data volume of 3 blocks, one each from frames n-1, n, and n+1, and the volume is filtered to suppress noise in the central frame.
The observed (noisy) image sequence is defined by equation (1) above, i.e.:

g(i,j,n) = I(i,j,n) + η(i,j,n)
where g(i,j,n) is the observed (i.e. noisy) signal grey scale value at position (i,j) in the nth frame, I(i,j,n) is the actual (i.e. clean) signal and η(i,j,n) the added Gaussian noise of variance σηη. The filter is derived by first of all deciding on a filter structure, either FIR or IIR, and then finding the coefficients which minimize the expected value of the squared error between the filter output and the original, clean, signal.
The Wiener filter has been used as an effective method for noise and blur reduction in degraded signals. The form of the filter for image sequences proposed herein is an extension of the 1-D result; however, the limited amount of temporal information available implies different considerations in the implementation of the method. The following passage makes explicit the various forms of the filter that can be used. The theory presented here ignores motion between frames, as motion has been considered above. The present discussion is also limited to the problem of noise reduction in image sequences; blur is not considered.
The Wiener filter attempts to produce the best estimate of I(i,j,n), denoted Î(i,j,n). It does so by minimizing the expected squared error, which is given by equation (7) below. This estimate may be achieved using either an IIR or an FIR Wiener filter operating on the observed noisy signal. Equations (8) and (9) below show the IIR and FIR estimators respectively. In equation (8) there are no limits on the summations, hence IIR, and in equation (9) the filter mask (or support) is a symmetric volume around the filtered location of size (2n₁ + 1) x (2n₂ + 1) x (2n₃ + 1).
Proceeding with the IIR filter leads to a frequency domain expression for the Wiener filter. Using the principle of orthogonality to bypass some lines of differential algebra, E[(e(...))²] is minimized if the error is made orthogonal to the data samples used in the estimate of equation (8), and equation (10) below results.
Substituting for e(i,j,n) in equation (10) results in equation (11) below. The filter coefficients are then chosen to satisfy the condition of equation (11). Substituting for Î(i,j,n) in (11) gives equation (12) below.
The expectations can be recognized as terms from the autocorrelation and crosscorrelation sequences of the observed image sequence g(i,j,n) and the actual sequence I(i,j,n). The solution involves a separate equation for each coefficient, i.e. an infinite set of equations. However, using the 3D DFT gives a tractable result in terms of the power spectra concerned. From equation (11) equation (13) results assuming stationary statistics. Taking the Fourier transform of equation (13) yields equation (14) below. The only unknown quantity in equation (14) is the cross power spectrum Pig. However, using the original assumption of uncorrelated noise and image frames implies the relationships given in equations (15) and (16). From these two relationships, an expression for the 3D frequency domain Wiener filter, A(ω₁, ω₂, ω₃), is as given in equation (17).
The estimated signal, Î(i,j,n) is then given by the equation (18) below.
The IDFT is the inverse 3D DFT, and G(ω₁, ω₂, ω₃) is the 3D DFT of the observed signal, g(i,j,n). The filter is therefore defined by the signal power spectrum and the noise variance.
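A minimal sketch of this frequency-domain filtering on one data volume follows. It assumes a flat noise PSD equal to the noise variance and uses the magnitude-squared 3D FFT as a crude estimate of the signal PSD (the practical windowing and overlap described later are omitted), with the basic clip of the attenuation at zero:

```python
import numpy as np

def wiener3d(volume, noise_var):
    """Frequency-domain Wiener attenuation of an N x N x 3 motion-compensated
    data volume: per 3D FFT bin, A = max(Pgg - Pnn, 0) / Pgg, then inverse
    transform back to the pixel domain."""
    G = np.fft.fftn(volume)
    Pgg = np.abs(G) ** 2 / volume.size       # crude PSD estimate from the 3D FFT
    A = np.maximum(Pgg - noise_var, 0.0) / np.maximum(Pgg, 1e-12)
    return np.real(np.fft.ifftn(A * G))

# A flat (noise-free) volume passes through almost unchanged: only the DC
# bin carries power, and there Pgg >> Pnn so A is close to 1.
vol = np.full((4, 4, 3), 5.0)
restored = wiener3d(vol, noise_var=1.0)
```

Bins whose observed power falls below the noise level are attenuated to zero, which is exactly the behaviour that the noise-margin modification discussed later softens.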
Using the 3D DFT in this way makes the Wiener filter computationally attractive. However in the temporal direction there are often not many frames involved. Therefore, the temporal Fourier component is less meaningful, having been derived from only a few samples. In the method described herein, 3 frames are involved, hence the DFT in the temporal direction involves just 3 samples. The assumption of an infinite support volume is violated. This phenomenon is also applicable to the spatial components since the image is only stationary over small areas. Therefore, in a practical situation, the 3D IIR filter, implemented in the frequency domain in this way, is no longer IIR since it operates on a finite volume of input data only and not on any past outputs.
The FIR filter gives an estimate of the clean pixel by using only a restricted local set of pixels just around the pixel to be estimated (like the 3D AR model support of UK Patent Application No. 9316153.7). The IIR filter uses all pixels in the data volume to give an estimate of the clean signal. It is found that the FIR filter is generally less successful than the IIR filter and so the preferred embodiment using the IIR will now be discussed.
The IIR form of the filter yields a frequency domain expression which requires an estimate of the power spectrum of the original, clean, data to yield the attenuation factor required for each frequency bin. The 3D Wiener filter in the frequency domain is as given by equation (17) below, where A(ω₁, ω₂, ω₃) defines the frequency response of the filter, and Pgg and Pηη refer to the power spectral densities of the degraded and noise signals respectively. The arguments to the functions refer to discrete frequency bins as obtained via the 3D FFT (the n-dimensional FFT is a separable operation and is implemented as a recursive FFT operation in orthogonal directions).
Bearing in mind that the noise is assumed to be uncorrelated, and that equation (15) below applies, it follows that the effect of the Wiener filter is to attenuate the frequency components of g(i,j,k) according to the observed power of that particular component. When this power is high, Pgg >> Pηη, less filtering is done, and when this power is low, the component is heavily attenuated. This is a useful property for images in particular. In regions of high image activity, such as highly textured areas and edges, there is less attenuation of the signal; in areas of low activity, such as uniform areas, more attenuation is achieved. The signal detail is therefore less attenuated than the uniform regions. The human visual system is known to be less sensitive to noise in regions of high activity. This is a useful bonus, since the areas in which the filter attenuates the noise least correspond to the areas in which noise is less easily observed.
The situation is exactly the same along the temporal axis. The activity along the temporal axis depends on the accuracy of motion estimation. When the motion estimation is not accurate, the filter will automatically reduce the contribution of the temporal information to the noise reduction. This property makes the filter robust to even large errors in motion estimation. Of course the amount by which it reduces the attenuation when this problem occurs may not correspond to an optimal result visually. Nevertheless, the filter has an advantage in this respect in comparison with motion compensated frame averaging, which tends to blur the image when the motion is not correctly estimated.
The derivation of the 3D Frequency domain Wiener filter can be obtained as an extension of the standard 1D framework. However, in a practical implementation, there are the following outstanding considerations.
As indicated by equation (17), the Wiener filter can be defined in terms of the PSD of the observed degraded signal and the noise PSD. The embodiment presented here does not automatically estimate the noise PSD; rather, it is a user selectable parameter which is altered to suit the tastes of the viewer. This is possible because the noise PSD is assumed to be a constant level across all the frequency components. The PSD of the degraded signal is estimated by the magnitude of the 3D FFT. To prevent leakage effects, the signal is windowed prior to the FFT operation. Many window functions exist for this purpose. The half-cosine window is chosen here, following the method taught by R. Young and N. Kingsbury in their article entitled 'Video Compression using Lapped Transforms for Motion Estimation/Compensation and Coding', SPIE VCIP, pages 276-288, 1992. For a block size of N x N the window is defined by equation (19) below. This window is called the analysis window since it is used to assist in acquiring an estimate of the Fourier transform of the image data.
The image information in each frame is therefore windowed with this 2D half-cosine window prior to taking the FFT. There is an argument for implementing a similar analysis window along the temporal axis since again there would be problems in taking the FFT in this direction. In practice the visible effect of using a half-cosine window across the 3 frames (a 3 tap half-cosine window) is found to be small.
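The 2D analysis window can be sketched as the outer product of two 1D half-cosine windows. Since equation (19) is not reproduced here, the sine-shaped form below is an assumption; it does have the property, used later for overlapped synthesis, that squared windows at half-block offsets sum to one:

```python
import numpy as np

def half_cosine(N):
    """1D half-cosine (sine-shaped) window of length N; the exact form of
    the patent's equation (19) is assumed here."""
    i = np.arange(N)
    return np.sin(np.pi * (i + 0.5) / N)

def window2d(N):
    """2D analysis window for an N x N block as an outer product."""
    w = half_cosine(N)
    return np.outer(w, w)
```

Each N x N block is multiplied by `window2d(N)` before the FFT; a 3 tap temporal window could be applied the same way, though as noted its visible effect is small.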
The entire image cannot be treated with the same Wiener filter since the image information changes across the frame. Therefore, the image is broken into blocks of size N x N. Each block is compensated for motion across the three frames used, so that in restoring the information in a block in the current frame n a data volume of size N x N x 3 is used.
It is common in such a block based algorithm that the non-stationarity of the image causes blocking artifacts across the edges of the blocks. To suppress this effect the processed blocks can be overlapped. In this implementation the blocks are overlapped by half their horizontal and vertical dimensions, an overlap of 2:1. If the processed blocks were merely placed such that one half of the next block replaced one half of the current block, artifacts could still occur. Overlapped processing implies windowing the output data such that, when the overlapped blocks are summed, the net gain is 1. Therefore any output pixel at the edge of a block has a contribution from several blocks around it.
The output or synthesis window must be chosen with regard to the analysis window. The two windows complement each other and taken together must not change the net gain through the noise reduction system. In this case, using a half-cosine window (and 2:1 overlap) as synthesis and analysis windows yields a net gain of unity as required. The net windowing effect is that of a raised cosine function as given by equation (20) below.
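The overlap-add of analysis- and synthesis-windowed blocks can be sketched in one dimension as follows. With the assumed half-cosine pair and a hop of N/2, the net per-sample weight in the interior is exactly sin² + cos² = 1, the raised-cosine unity-gain property described above:

```python
import numpy as np

def overlap_add(blocks, N, out_len):
    """Sum half-cosine-windowed blocks placed at a hop of N//2 (2:1 overlap).
    Each block is windowed once for analysis and once for synthesis, so the
    summed weights of neighbouring blocks are sin^2 + cos^2 = 1."""
    w = np.sin(np.pi * (np.arange(N) + 0.5) / N)
    out = np.zeros(out_len)
    for k, b in enumerate(blocks):
        start = k * (N // 2)
        out[start:start + N] += w * (w * b)   # analysis then synthesis window
    return out

# A constant signal of ones is reconstructed exactly in the interior;
# only the first and last half-blocks lack an overlapping partner.
blocks = [np.ones(8) for _ in range(5)]
out = overlap_add(blocks, N=8, out_len=24)
```

In the 2D implementation the same happens along both axes, the net windowing effect being the raised cosine of equation (20).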
The frequency domain Wiener filter involves estimating an attenuation for each frequency component in the degraded signal. In the calculation of each factor it is possible that Pgg(ω₁, ω₂, ω₃) < Pηη(ω₁, ω₂, ω₃) in the numerator of equation (17). This would result in a negative attenuation, which is impractical. The most common method for dealing with this problem is to set Pgg - Pηη = 0 when Pgg < Pηη. (In what follows the frequency arguments are dropped and the notation simplified.)
However, this solution is somewhat drastic and can lead to ringing or patterned noise artifacts in the output frame. To avoid this difficulty, an alternative mapping may be used which reduces the attenuation of the filter for low values of Pgg. This implies that more noise is left in the image, but this is preferable to the artifacts that would otherwise occur. The new mapping is as given by equation (21) below, where the Wiener filter is defined as α/Pgg.
When β = 1 the criterion becomes the same as that used previously in the method disclosed in references 13 and 14. β is a user defined parameter that governs the amount of noise left in the restoration and is therefore called the Noise Margin. The mapping is illustrated in Figure 4. The modified Wiener filter attenuation is given by equation (22) below. Note that since (β - 1)/β is a constant, the effect is to define a minimum, non-zero, value of the filter coefficient at frequency bins where α would normally be set to zero.
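Since equations (21) and (22) are not reproduced above, the exact mapping sketched here is an assumption consistent with the description: above the threshold βPηη the usual Wiener gain applies, and below it the gain is held at the floor (β - 1)/β rather than being clipped to zero. The two branches meet continuously at Pgg = βPηη:

```python
import numpy as np

def wiener_gain(Pgg, Pnn, beta=1.5):
    """Wiener attenuation with the Noise Margin beta (assumed form of
    equations (21)-(22)): standard gain (Pgg - Pnn)/Pgg when Pgg is above
    beta*Pnn, otherwise the constant floor (beta - 1)/beta, trading some
    residual noise for fewer ringing/patterned-noise artifacts."""
    Pgg = np.asarray(Pgg, dtype=float)
    return np.where(Pgg >= beta * Pnn,
                    (Pgg - Pnn) / np.maximum(Pgg, 1e-12),
                    (beta - 1.0) / beta)
```

With beta = 1 the floor is zero and the mapping reduces to the plain clipped Wiener gain, as the text notes.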
To produce a noise reduced version of frame n, given frames of resolution M x M pixels, the following digital filter steps are appropriate, where, for example, the image is 256 x 256 pixels and a restoration of frame 5 is required.
The previous section, which described some of the theory involved, mentioned the use of windowing both for analysis and for synthesis. The reason one uses a window prior to frequency transform estimation is to improve the accuracy of the spectral estimate. If one did not, one would find that the value of a particular component 'leaks' into the components around it; that is to say, the various spectral components would not then reflect the true frequency content. This is a matter taken up in most basic texts on digital signal processing. Because the window is used to improve the frequency estimation accuracy, it is called an analysis window: it is used to help the analysis of the signal in terms of frequency content.
Windowing after processing has a different purpose. First of all, consider what happens if the blocks used do not overlap at all. The block at the upper left hand corner would then have nothing to do with the block next to it or below it or across from it. So say the image had a lot of rapidly varying texture here, then the blocks would not give a continuous restoration but rather a 'blocky' artifact would result. This is simply because the data in each block turned out to be different so the restorations of each block are separate and so at the block boundaries a discontinuity of some kind would result. If the blocks were overlapped, then some of the data inside each one would be also contained in another block or blocks and so the resulting restoration could be made more smooth.
But how does one combine the output of overlapping blocks? There are two ways. One way is to use some block overlap but, when putting back the restoration, put back just the centre of the block. Say there are two blocks of 3 x 3 pixels next to each other with a 2 pixel overlap. Then block 1 has corners at (0,0), (2,0), (2,2), (0,2) and block 2 has corners at (1,0), (3,0), (3,2), (1,2). Both blocks are extracted from the image and then processed to give two restored blocks R1 and R2. One could then just replace the pixel at (1,1) in the restored image with the value of the pixel at (1,1) in R1, and replace the pixel at (2,1) in the restored image with the value at (1,1) in R2. (Recall that once the blocks are extracted their spatial data are indexed by (0..2,0..2).) This would look alright provided the overlap were large, the reason being that, even though the replaced data do not overlap, the data from which they were derived did contain an overlapping region. But one often finds that unless excessively large overlaps are used blocking still persists, so the alternative is to window the blocks and then add them together with this partial overlap such that, when the windowed blocks are added, the overall gain (even for the edge data) is 1.
The final data, therefore, after this sum/overlap process at the edge of blocks is created by summing windowed data from one block with windowed data from another block. The window used is called the synthesis window because it is used to sum the blocks in the restoration, i.e. synthesize the restoration. Whilst there are more direct ways of solving problems with non-homogeneous signals the above described method is a practical solution which works well.
The reason a half cosine window is used is simply that it works well. Also, the analysis and synthesis windows are then the same, thus conserving memory.
Subtracting the mean is carried out because the Wiener filter used is defined for zero mean signals only. If one adds zero-mean random noise to an image patch, the mean value (or average brightness) is barely affected, because taking the mean of the picture elements has the effect of averaging out the added noise.
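As a sketch of this step (the function and argument names are hypothetical), the block mean is removed before the zero-mean filtering operation and restored afterwards:

```python
import numpy as np

def process_zero_mean(block, filt):
    """Apply a filter defined for zero-mean data: remove the block mean
    (essentially the noise-free average brightness), filter, then add the
    mean back to the restored block."""
    m = block.mean()
    return filt(block - m) + m
```

Because any linear attenuation of the zero-mean residual leaves its mean at zero, the average brightness of the block survives the filtering unchanged.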
The DFT (Discrete Fourier Transform) is standard DSP theory and is the discrete equivalent to the Fourier Transform. There exists an extremely fast way of implementing it called the Fast Fourier Transform (FFT) algorithm.