No. Patent Title Application No. Filing Date Publication No. Publication Date Inventors
21 AUGMENTED REALITY METHOD AND DEVICES USING AUTOMATIC, REAL-TIME, MARKER-FREE TRACKING OF TEXTURED PLANAR GEOMETRICAL OBJECTS IN A VIDEO STREAM PCT/FR2008/000068 2008-01-18 WO2008107553A2 2008-09-12 LEFEVRE, Valentin; LIVET, Nicolas

The invention relates to a method and devices for tracking, in real time, one or more substantially planar geometrical objects of a real scene in at least two images of a video stream, for an augmented reality application. After a first image of the video stream is received (300), the first image including the object to be tracked, the position and orientation of that object in the first image are determined from a plurality of previously determined image blocks (320), each image block of this plurality being associated with a pose of the object to be tracked. The first image, together with the position and orientation of the object to be tracked in it, forms a key image. On receipt of a second image of the video stream, the position and orientation of the object to be tracked in this second image are evaluated from the key image (330). The second image and the corresponding position and orientation of the object to be tracked can be stored as a key image. If the position and orientation of the object to be tracked cannot be recovered in the second image from the key image, the position and orientation of the object in the second image are determined from the plurality of image blocks and the associated poses (320).

22 AUGMENTED REALITY METHOD AND DEVICES USING A REAL TIME AUTOMATIC TRACKING OF MARKER-FREE TEXTURED PLANAR GEOMETRICAL OBJECTS IN A VIDEO STREAM PCT/FR2008000068 2008-01-18 WO2008107553A3 2009-05-22 LEFEVRE VALENTIN; LIVET NICOLAS
The invention relates to a method and to devices for the real-time tracking of one or more substantially planar geometrical objects of a real scene in at least two images of a video stream for an augmented-reality application. After receiving a first image of the video stream (300), the first image including the object to be tracked, the position and orientation of the object in the first image are determined from a plurality of previously determined image blocks (320), each image block of said plurality of image blocks being associated with a pose of the object to be tracked. The first image and the position and the orientation of the object to be tracked in the first image define a key image. After receiving a second image from the video stream, the position and orientation of the object to be tracked in the second image are evaluated from the key image (330). The second image and the corresponding position and orientation of the object to be tracked can be stored as a key image. If the position and the orientation of the object to be tracked cannot be found again in the second image from the key image, the position and the orientation of this object in the second image are determined from the plurality of image blocks and the associated poses (320).
23 Augmented reality method and apparatus for real-time automatic tracking of marker-free textured planar geometrical objects in a video stream KR1020097017554 2008-01-18 KR1020090110357A 2009-10-21 르페브르,발랑띤; 리베,니꼴라
The invention relates to a method and to devices for the real-time tracking of one or more substantially planar geometrical objects of a real scene in at least two images of a video stream for an augmented-reality application. After receiving a first image of the video stream (300), the first image including the object to be tracked, the position and orientation of the object in the first image are determined from a plurality of previously determined image blocks (320), each image block of said plurality of image blocks being associated with a pose of the object to be tracked. The first image and the position and the orientation of the object to be tracked in the first image define a key image. After receiving a second image from the video stream, the position and orientation of the object to be tracked in the second image are evaluated from the key image (330). The second image and the corresponding position and orientation of the object to be tracked can be stored as a key image. If the position and the orientation of the object to be tracked cannot be found again in the second image from the key image, the position and the orientation of this object in the second image are determined from the plurality of image blocks and the associated poses (320).
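None of the three publications above discloses source code; as a minimal sketch of the key-image idea they describe, the following uses OpenCV feature matching and RANSAC homography estimation (the ORB detector, the thresholds and the function names are illustrative assumptions, not the patented method):

    import cv2
    import numpy as np

    def track_from_key_image(key_image, key_corners, frame, min_matches=10):
        # key_image: grayscale image in which the planar object's pose is known
        # key_corners: 4x2 float32 array of the object's corners in key_image
        # frame: current grayscale video frame
        # Returns the object's corners in `frame`, or None if tracking is lost.
        orb = cv2.ORB_create(1000)                       # illustrative detector choice
        kp1, des1 = orb.detectAndCompute(key_image, None)
        kp2, des2 = orb.detectAndCompute(frame, None)
        if des1 is None or des2 is None:
            return None
        matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)
        if len(matches) < min_matches:
            return None                                  # too few correspondences: tracking lost
        src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
        dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
        H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
        if H is None or int(mask.sum()) < min_matches:
            return None
        return cv2.perspectiveTransform(key_corners.reshape(-1, 1, 2), H)

When the function returns None, the fallback described in the abstracts applies: the frame is matched against the pre-learned image blocks, each associated with a known pose, and a frame whose pose is successfully recovered can be stored as a new key image.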
24 Temporally and spatially scalable encoding for video object plane JP15718198 1998-06-05 JPH1118085A 1999-01-22 CHEN XUEMIN; LUTHRA AJAY; RAJAN GANESH; NARASIMHAN MANDAYAM
PROBLEM TO BE SOLVED: To provide a temporal and spatial scalability function for encoding a video signal including video object planes (VOPs) by differentially encoding an up-sampled VOP, derived from a VOP of the input video sequence, using that VOP of the input video sequence. SOLUTION: VOP data from frames 105 and 115 are supplied to separate encoding functions. In particular, VOPs 117-119 undergo shape, motion and texture encoding at encoders 137-139, respectively. The pixel data of a specified VOP of the input video sequence are down-sampled to give a base-layer VOP with reduced spatial resolution. At least part of the pixel data in the base-layer VOP are up-sampled to give an up-sampled VOP in an enhancement layer. The up-sampled VOP is differentially encoded using the specified VOP of the input video sequence.
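A rough sketch of the down-sample / up-sample / differential-coding chain summarised above (not the patented encoder; the 2:1 resampling filters and the residual arithmetic are assumptions):

    import numpy as np
    import cv2

    def split_into_layers(vop, factor=2):
        # vop: 8-bit grayscale VOP texture (2-D numpy array)
        h, w = vop.shape
        base = cv2.resize(vop, (w // factor, h // factor), interpolation=cv2.INTER_AREA)  # base layer
        up = cv2.resize(base, (w, h), interpolation=cv2.INTER_LINEAR)                     # up-sampled VOP
        residual = vop.astype(np.int16) - up.astype(np.int16)                             # differentially coded part
        return base, residual

    def reconstruct(base, residual):
        h, w = residual.shape
        up = cv2.resize(base, (w, h), interpolation=cv2.INTER_LINEAR)
        return np.clip(up.astype(np.int16) + residual, 0, 255).astype(np.uint8)

The base layer carries the reduced-resolution VOP, while the enhancement layer carries only the residual between the original VOP and its up-sampled base-layer version.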
25 Plane development image processing method for images of planar objects such as road surfaces, inverse development image conversion processing method, and plane development image processing apparatus and inverse development image conversion processing apparatus therefor PCT/JP2003/008817 2003-07-11 WO2004008744A1 2004-01-22 岩根 和郎
A method and a device for converting a perspective image obtained from various monitoring cameras so that it can be developed and displayed as a plan view, as on a map. From the perspective video obtained by an ordinary camera, the information required for plan-view development is read in, the image is developed into a plan view such as a map by means of a mathematical expression, and the results are combined and concatenated into a large road-surface development diagram. For example, a plurality of ordinary video cameras are mounted around a bus so as to cover the entire field of view, and they capture images of the road outside the bus at a certain angle of depression. The road-surface images are developed as map-like images, which are processed and combined to generate a road image of the bus's surroundings. This is obtained by evaluating a mathematical expression.
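The plan-view development amounts to an inverse perspective mapping of the road plane. A minimal sketch using four point correspondences between camera pixels and road-plane coordinates (all calibration values below are hypothetical):

    import numpy as np
    import cv2

    # Four pixel positions of a rectangle marked on the road surface (hypothetical values)
    image_pts = np.float32([[420, 710], [860, 715], [790, 520], [490, 518]])
    # The same four points in road-plane coordinates, e.g. centimetres (hypothetical values)
    plane_pts = np.float32([[0, 0], [350, 0], [350, 600], [0, 600]])

    H = cv2.getPerspectiveTransform(image_pts, plane_pts)   # pixel coordinates -> plan-view coordinates

    def develop_to_plan_view(frame, out_size=(400, 700)):
        # Warp one perspective camera frame onto the road plane (map-like view).
        return cv2.warpPerspective(frame, H, out_size)

Developed views from the several cameras around the vehicle can then be mosaicked into the larger road-surface development diagram mentioned in the abstract.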
26 Method for display time stamping and synchronization of multiple video object planes JP2002086031 2002-03-26 JP2002344967A 2002-11-29 TAN THIOW KENG; SHEN MEI SHEN; CHAKU JUU LEE
PROBLEM TO BE SOLVED: To provide a method of encoding a local time base embedded in compressed data. SOLUTION: The local time base is encoded in two parts. The first part has a modulo time base that indicates the specific interval in the reference time base, and the second part has a time base increment relative to the reference time. Two forms of time base increment are used to allow for the possibility of different encoding and display orders. A mechanism for the synchronization of multiple compressed streams with local time bases is also described. A time base offset mechanism is also used to allow finer-granularity synchronization of the multiple compressed streams.
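A minimal sketch of the two-part time stamp, assuming a one-second modulo interval and a millisecond increment (the patent family defines the actual bitstream syntax; the function and variable names here are illustrative):

    def encode_local_time(t_ms, last_sync_s):
        # t_ms: absolute local time of the VOP in milliseconds
        # last_sync_s: the last whole second already conveyed by the modulo time base
        modulo_time_base = t_ms // 1000 - last_sync_s   # number of whole seconds to signal
        time_base_increment = t_ms % 1000               # finer-resolution part within the second
        return modulo_time_base, time_base_increment

    def decode_local_time(modulo_time_base, time_base_increment, last_sync_s):
        seconds = last_sync_s + modulo_time_base
        return seconds * 1000 + time_base_increment, seconds

    # A display time of 4.150 s, with the last signalled second being 2, is sent as (2, 150)
    print(encode_local_time(4150, 2))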
27 Method for display time stamping and synchronization of multiple video object planes TW086109499 1997-07-05 TW359918B 1999-06-01 李作裕
A method of encoding a local time base embedded in compressed data is disclosed. The local time base is encoded in two parts. The first part is a modulo time base that indicates a specific interval in the reference time base, and the second part is a time base increment relative to the reference time. Two forms of time base increment are used to allow for the possibility that the encoding order and the display order differ. A mechanism for synchronizing multiple compressed bitstreams having local time bases is also described. A time base offset mechanism is also used to allow finer-granularity synchronization of the multiple compressed bitstreams.
28 Method for display time stamping and synchronization of multiple video object planes EP00122580.4 1997-07-03 EP1073278B1 2001-10-31 Tan, Thiow Keng; Shen, Sheng Mei; Lee, Chak Joo
29 Method for display time stamping and synchronization of multiple video object planes EP01104219.9 1997-07-03 EP1111933A1 2001-06-27 Tan, Thiow Keng; Shen, Sheng Mei; Lee, Chak Joo

A method of encoding a local time base embedded in the compressed data is disclosed. The local time base is encoded in two parts. The first part has a modulo time base that indicates the specific interval in the reference time base, and the second part has a time base increment relative to the reference time. Two forms of time base increment are used to allow for the possibility of different encoding and display orders. A mechanism for the synchronization of multiple compressed streams with local time bases is also described. A time base offset mechanism is also used to allow finer-granularity synchronization of the multiple compressed streams.

30 Prediction and coding of two-way prediction video object plane for interlaced digital video JP9804098 1998-03-09 JPH1175191A 1999-03-16 EIFRIG ROBERT O; CHEN XUEMIN; LUTHRA AJAY
PROBLEM TO BE SOLVED: To provide an efficient method and decoder for predicting the motion vectors (MV) of a macroblock (MB) in a bidirectionally predicted video object plane (B-VOP). SOLUTION: Direct-mode prediction is carried out for a B-VOP macroblock 420 co-located with a field-predicted macroblock of a future anchor picture 440 by calculating four field motion vectors (MV_f,top, MV_f,bot, MV_b,top, MV_b,bot). The four field motion vectors and their reference fields are determined from (1) the delta vector (MVD) coded for the current macroblock, (2) the two field motion vectors (MV_top, MV_bot) of the future anchor picture, (3) the reference fields 405, 410 used by the two field motion vectors of the co-located future anchor macroblock, and (4) the temporal intervals, in field periods, between the current B-VOP fields and the anchor fields (TR_B,top, TR_B,bot, TR_D,top, TR_D,bot).
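The four field vectors are obtained by temporal scaling of the anchor-field vectors. A hedged sketch of that relation, modelled on the general MPEG-4 direct-mode equations rather than on the patent's exact rounding and reference-field selection:

    def direct_mode_field_mv(mv_anchor, trb, trd, mvd):
        # mv_anchor: (x, y) field motion vector of the co-located future-anchor macroblock
        # trb: temporal distance (field periods) from the past anchor field to the current B-VOP field
        # trd: temporal distance (field periods) between the two anchor fields
        # mvd: (x, y) delta vector coded for the current macroblock
        # Integer division is a simplification of the standardised rounding.
        mv_fwd = tuple(trb * v // trd + d for v, d in zip(mv_anchor, mvd))       # forward field vector
        if mvd == (0, 0):
            mv_bwd = tuple((trb - trd) * v // trd for v in mv_anchor)            # backward field vector
        else:
            mv_bwd = tuple(f - v for f, v in zip(mv_fwd, mv_anchor))
        return mv_fwd, mv_bwd

    # One call per field (top and bottom) yields the four field motion vectors of the abstract
    print(direct_mode_field_mv((9, 6), trb=1, trd=3, mvd=(0, 0)))   # -> ((3, 2), (-6, -4))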
31 Motion estimation and compensation of video object planes for an interlaced digital video device JP9804198 1998-03-09 JPH114441A 1999-01-06 EIFRIG ROBERT O; CHEN XUEMIN; LUTHRA AJAY
PROBLEM TO BE SOLVED: To perform coding with motion estimation and motion compensation by selecting a horizontal motion vector component for a current block and selecting a vertical motion vector component from either the first or the second target block. SOLUTION: VOP data from frames 105, 115 are fed to the respective coding functions, and VOPs 117, 118, 119 are supplied to encoders 137, 138, 139, where shape, motion and texture coding is carried out. The coded VOP data are multiplexed by a MUX 140 and demultiplexed by a DEMUX 150. A compositor 160 is an editing device with which the user obtains a desired image; it may include, for example, a VOP 178 stored beforehand in the user's personal library, different from the received VOPs. The user can assemble a frame 185 in which the circular VOP data 178 are combined with the rearranged square VOP data, so that the user effectively acts as a video editor.
32 Method and apparatus for generating chrominance shape information of a video object plane in a video signal EP96306170.0 1996-08-23 EP0806871A3 2000-05-24 Kim, Jong-Il, Video Research Center

An apparatus generates chrominance shape information based on luminance shape information represented by binary values, thereby describing an object in a video object plane effectively. The apparatus divides the luminance shape information into a multiplicity of sample blocks, each sample block comprising 2 x 2 pixels, and determines, for each sample block, a chrominance value based on all the logic values in that sample block. The chrominance shape information is produced in the form of a matrix based on the chrominance values for all the sample blocks.
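A minimal sketch of the 2 x 2 sampling rule described above, assuming 4:2:0 subsampling and the common "object if any of the four luminance pixels is object" decision (the patent defines the exact rule applied to the four logic values):

    import numpy as np

    def chrominance_shape(luma_shape):
        # luma_shape: binary luminance shape matrix (1 = object pixel, 0 = background)
        h, w = luma_shape.shape
        blocks = luma_shape[:h - h % 2, :w - w % 2].reshape(h // 2, 2, w // 2, 2)
        # One chrominance value per 2 x 2 luminance sample block
        return (blocks.max(axis=(1, 3)) > 0).astype(np.uint8)

    luma = np.array([[0, 1, 1, 0],
                     [0, 1, 1, 1],
                     [0, 0, 0, 0],
                     [0, 0, 1, 0]], dtype=np.uint8)
    print(chrominance_shape(luma))   # -> [[1 1] [0 1]]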

33 Method and apparatus for generating chrominance shape information of a video object plane in a video signal EP96306170.0 1996-08-23 EP0806871A2 1997-11-12 Kim, Jong-Il, Video Research Center

An apparatus generates chrominance shape information based on luminance shape information represented by binary values, thereby describing an object in a video object plane effectively. The apparatus divides the luminance shape information into a multiplicity of sample blocks, each sample block comprising 2 x 2 pixels, and determines, for each sample block, a chrominance value based on all the logic values in that sample block. The chrominance shape information is produced in the form of a matrix based on the chrominance values for all the sample blocks.

34 Method for scaling and reconstructing an input video sequence, method for coding video object planes, and decoder apparatus for reconstructing an input video sequence KR1019980020782 1998-06-05 KR1019990006678A 1999-01-25 쑤에민첸; 아재이루스라; 가네쉬라잔; 만다얌나라심한
The present invention provides temporal and spatial scaling of video pictures including video object planes (VOPs) in an input digital video sequence. Coding efficiency is improved by adaptively compressing scaled field-mode video. Upsampled VOPs in the enhancement layer are reordered to provide greater correlation with the input video sequence based on a linear criterion. The resulting residue is then coded using a spatial transform such as the DCT. A motion compensation scheme is used to code enhancement-layer VOPs by scaling the motion vectors already determined for the base-layer VOPs. A reduced search area whose center is defined by the scaled motion vectors is provided. The motion compensation scheme is suitable for use with scaled frame-mode or field-mode video. Various processor configurations achieve particular scalable coding results. Applications of scalable coding include stereoscopic video, picture-in-picture, preview access channels, and ATM communications.
35 Method and apparatus for generating chrominance shape information of a video object plane in a video signal EP96306170.0 1996-08-23 EP0806871B1 2006-06-21 Kim, Jong-Il, Video Research Center
36 Method for display time stamping and synchronization of multiple video object planes US09736300 1997-07-03 USRE38875E1 2005-11-15 Thiow Keng Tan; Sheng Mei Shen; Chak Joo Lee
A method of encoding a local time base embedded in the compressed data is disclosed. The local time base is encoded in two parts. The first part has a modulo time base that indicates the specific interval in the reference time base, and the second part has a time base increment relative to the reference time. Two forms of time base increment are used to allow for the possibility of different encoding and display orders. A mechanism for the synchronization of multiple compressed streams with local time bases is also described. A time base offset mechanism is also used to allow finer-granularity synchronization of the multiple compressed streams.
37 Method for display time stamping and synchronization of multiple video object planes US09736441 1997-07-03 USRE38923E1 2005-12-20 Thiow Keng Tan; Sheng Mei Shen; Chak Joo Lee
A method of encoding a local time base embedded in the compressed data is disclosed. The local time base is encoded in two parts. The first part has a modulo time base that indicates the specific interval in the reference time base, and the second part has a time base increment relative to the reference time. Two forms of time base increment are used to allow for the possibility of different encoding and display orders. A mechanism for the synchronization of multiple compressed streams with local time bases is also described. A time base offset mechanism is also used to allow finer-granularity synchronization of the multiple compressed streams.
38 Method for display time stamping and synchronization of multiple video object planes US11019149 1997-07-03 USRE40664E1 2009-03-17 Thiow Keng Tan; Sheng Mei Shen; Chak Joo Lee
A method of encoding a local time base embedded in the compressed data is disclosed. The local time base is encoded in two parts. The first part has a modulo time base that indicates the specific interval in the reference time base, and the second part has a time base increment relative to the reference time. Two forms of time base increment are used to allow for the possibility of different encoding and display orders. A mechanism for the synchronization of multiple compressed streams with local time bases is also described. A time base offset mechanism is also used to allow finer-granularity synchronization of the multiple compressed streams.
39 Method for display time stamping and synchronization of multiple video object planes US11761 1998-02-26 US6075576A 2000-06-13 Thiow Keng Tan; Sheng Mei Shen; Chak Joo Lee
A method of encoding a local time base embedded in the compressed data is disclosed. The local time base is encoded in two parts. The first part has a modulo time base that indicates the specific interval in the reference time base, and the second part has a time base increment relative to the reference time. Two forms of time base increment are used to allow for the possibility of different encoding and display orders. A mechanism for the synchronization of multiple compressed streams with local time bases is also described. A time base offset mechanism is also used to allow finer-granularity synchronization of the multiple compressed streams.
40 Temporal and spatial scaleable coding for video object planes US869493 1997-06-05 US06057884A 2000-05-02 Xuemin Chen; Ajay Luthra; Ganesh Rajan; Mandayam Narasimhan
Temporal and spatial scaling of video images including video object planes (VOPs) in an input digital video sequence is provided. Coding efficiency is improved by adaptively compressing scaled field mode video. Upsampled VOPs in the enhancement layer are reordered to provide a greater correlation with the input video sequence based on a linear criterion. The resulting residue is coded using a spatial transformation such as the DCT. A motion compensation scheme is used for coding enhancement layer VOPs by scaling motion vectors which have already been determined for the base layer VOPs. A reduced search area whose center is defined by the scaled motion vectors is provided. The motion compensation scheme is suitable for use with scaled frame mode or field mode video. Various processor configurations achieve particular scaleable coding results. Applications of scaleable coding include stereoscopic video, picture-in-picture, preview access channels, and ATM communications.
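A minimal sketch of the motion-vector scaling and reduced search window mentioned above (the 2:1 scale factor and the +/-2-sample refinement window are illustrative assumptions):

    def enhancement_layer_search(base_mv, scale=2, refinement=2):
        # base_mv: (x, y) motion vector already determined for the base-layer VOP
        # Returns the candidate vectors inside the reduced search area centred
        # on the scaled base-layer vector.
        cx, cy = base_mv[0] * scale, base_mv[1] * scale
        return [(cx + dx, cy + dy)
                for dx in range(-refinement, refinement + 1)
                for dy in range(-refinement, refinement + 1)]

    # Example: a base-layer vector of (3, -1) gives a 5 x 5 candidate grid around (6, -2)
    candidates = enhancement_layer_search((3, -1))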