No. Patent Title Application No. Filing Date Publication No. Publication Date Inventors
1 Server and method for displaying video on a client terminal JP2013226630 2013-10-31 JP5504370B1 2014-05-28 一穂 奥
PROBLEM TO BE SOLVED: To reduce the load of displaying video, such as a game screen, on a client terminal using HTTP communication.
SOLUTION: A server according to one embodiment is configured to execute a program comprising: a game progression module that advances a game; a generation module that generates, in sequential order, frame information for displaying each of a plurality of consecutive frames constituting the game screen; a judgment module that determines whether to transmit the generated frame information to a terminal device; a compression module that compresses the generated frame information; and a transmission control module that transmits the pieces of compressed frame information to the terminal device in sequential order as an HTTP response to an HTTP request from the terminal device.
SELECTED DRAWING: Figure 2
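The pipeline this abstract describes (generate frames in order, judge whether each needs sending, compress, transmit in sequence within one HTTP response) can be sketched as a generator over frame payloads. The function name, the equality-based skip test, and the use of zlib are illustrative assumptions, not the patent's implementation:

```python
import zlib

def stream_frames(frames, last_sent=None):
    """Yield compressed frame payloads in sequential order.

    A frame identical to the previously sent one is skipped (the
    judgment module); the rest are zlib-compressed (the compression
    module) and yielded in order, as they would be written into a
    single chunked HTTP response body (the transmission control
    module)."""
    for frame in frames:
        if frame == last_sent:          # judgment: unchanged, do not send
            continue
        last_sent = frame
        yield zlib.compress(frame)      # compression before transmission

# Example: three frames, the first one repeated.
frames = [b"frame-1", b"frame-1", b"frame-2"]
sent = list(stream_frames(frames))
received = [zlib.decompress(p) for p in sent]   # [b"frame-1", b"frame-2"]
```

Skipping unchanged frames is what reduces server load here: only two of the three frames are compressed and transmitted.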
2 Server and method for displaying video on a client terminal JP2013226630 2013-10-31 JP2015088002A 2015-05-07 奥 一穂
PROBLEM TO BE SOLVED: To reduce the load of displaying video, such as a game screen, on a client terminal using HTTP communication.
SOLUTION: A server according to one embodiment is configured to execute a program comprising: a game progression module that advances a game; a generation module that generates, in sequential order, frame information for displaying each of a plurality of consecutive frames constituting the game screen; a judgment module that determines whether to transmit the generated frame information to a terminal device; a compression module that compresses the generated frame information; and a transmission control module that transmits the pieces of compressed frame information to the terminal device in sequential order as an HTTP response to an HTTP request from the terminal device.
SELECTED DRAWING: Figure 2
3 Techniques for processing image data generated from three-dimensional graphic models JP2012015994 2012-01-27 JP2013061927A 2013-04-04 MICHAEL KASCHALK; ERIC A DANIELS; BRIAN S WHITED; KYLE D ODERMATT; PATRICK T OSBORNE
PROBLEM TO BE SOLVED: To provide techniques for creating animated video frames which include both computer generated elements and hand drawn elements. SOLUTION: A software tool allows an artist to draw line work (or supply other 2D image data) to composite with an animation frame rendered from a three dimensional (3D) graphical model of an object. The software tool may be configured to determine how to animate such 2D image data provided for one frame in order to appear in subsequent (or prior) frames in a manner consistent with changes in rendering the underlying 3D geometry.
4 COMPUTER-BASED METHOD FOR 3D SIMULATION OF OIL AND GAS OPERATIONS EP13758075.9 2013-03-06 EP2823425A1 2015-01-14 LOTH, Matthew; ROBINEAU, Delphine; ZUBAIR, Khalid
The present disclosure relates to a computer-based method for 3D simulation of oil and gas operations. According to an aspect, the method comprises: - selecting from a database comprising data related to a plurality of equipments and a plurality of environments, one environment and at least one equipment; - loading, using a processor, core data and 3D models related to the selected environment and equipment(s), wherein the core data and 3D models are stored in the database; - determining, using the processor, the position of the selected equipment(s) in the selected environment, based on the core data of the equipment(s) and environment; - generating, using the 3D models and the determined position of the equipment(s), a 3D representation of a scene comprising the selected environment and equipment(s); - displaying views and/or animations related to the equipment(s) and/or environment upon request of a user, wherein the views and/or animations are derived from the 3D models.
5 Systems and methods for creation and sharing of selectively animated digital photos US14377732 2013-02-11 US09704281B2 2017-07-11 Philippe Louis LeBlanc; Mark Harris Pavlidis; Bretton James MacLean
A method of generating distributable and/or shareable selectively animated images comprising the steps of: (a) opening a client computer program, implemented as a client computer program loaded on a mobile device; (b) capturing or accessing video content; (c) using a user interface of the client computer program, a user drawing a path or region on an image frame from the video content to be animated (“animated region”), wherein the client computer program generates a mask based on the animated region, wherein the mask represents the static portion of a selectively animated image, and the mask is operable to mask underlying animated regions; and (d) the client computer program initiating, at the mobile device or via a server linked to the mobile device, the composition of a series of images including user-selected animated regions, by rendering an animated image based on the mask and the underlying masked animated regions. A computer program is provided for implementing the steps of the method, which may consist of a mobile application. The computer program may include a server application that cooperates with the mobile application for enabling the animated image composition processes and/or distribution and sharing of the animated images. A computer system is provided that includes a mobile device implementing the mobile application and optionally a server implementing the server application.
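The mask-based composition in steps (c) and (d) amounts to a per-pixel select: pixels inside the user-drawn animated region keep their per-frame values, while masked pixels are frozen to one still frame. A minimal sketch, with flat lists standing in for image buffers and all names being illustrative rather than the patent's API:

```python
def composite_selective(frames, mask, still_index=0):
    """Composite a selectively animated sequence.

    'frames' is a list of frames, each a flat list of pixel values;
    'mask' is a flat list of booleans of the same length, True where
    the user drew the animated region. Pixels where mask is False are
    replaced by the corresponding pixel of the chosen still frame."""
    still = frames[still_index]
    out = []
    for frame in frames:
        out.append([f if animated else s
                    for f, s, animated in zip(frame, still, mask)])
    return out

frames = [[1, 2, 3], [4, 5, 6]]
mask = [False, True, False]          # only the middle pixel animates
result = composite_selective(frames, mask)   # [[1, 2, 3], [1, 5, 3]]
```

Only the middle pixel changes across the output frames; the rest stay fixed at the still frame's values, which is the "cinemagraph"-style effect the abstract describes.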
6 Defining transitions based upon differences between states US12242878 2008-09-30 US09274764B2 2016-03-01 Narciso B. Jaramillo; Ethan A. Eismann; Robert Tyler Voliter; Robin James Adams
A method is illustrated that comprises receiving at least two states, each state including at least one object with an associated property. Further, the method includes comparing each object of each state to produce a set of differences between states. Additionally, the method includes defining a transition based upon the set of differences.
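The comparison step above (compare each object of each state to produce a set of differences) can be sketched as a property-wise diff over two state dictionaries; the data layout and tuple shape are assumptions for illustration, not the patent's representation:

```python
def state_diff(state_a, state_b):
    """Compare the objects of two states and return their property
    differences, from which a transition can be defined.

    Each state maps object names to dicts of property values; the
    result is a list of (object, property, old, new) tuples for
    every property that changed between the states."""
    diffs = []
    for obj, props in state_a.items():
        if obj not in state_b:
            continue                       # object removed; not diffed here
        for prop, old in props.items():
            new = state_b[obj].get(prop)
            if new != old:
                diffs.append((obj, prop, old, new))
    return diffs

a = {"button": {"x": 0, "opacity": 1.0}}
b = {"button": {"x": 100, "opacity": 1.0}}
diffs = state_diff(a, b)   # [("button", "x", 0, 100)]
```

A transition would then animate exactly the properties in `diffs` (here, tweening `x` from 0 to 100) while leaving unchanged properties alone.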
7 SYSTEMS AND METHODS FOR CREATION AND SHARING OF SELECTIVELY ANIMATED DIGITAL PHOTOS US14377732 2013-02-11 US20150324096A1 2015-11-12 PHILIPPE LOUIS LEBLANC; MARK HARRIS PAVLIDIS; BRETTON JAMES MACLEAN
A method of generating distributable and/or shareable selectively animated images comprising the steps of: (a) opening a client computer program, implemented as client computer program loaded on a mobile device; (b) capturing or accessing a video content; (c) using a user interface of the client computer program, a user drawing a path or region on an image frame from the video content to be animated (“animated region”), wherein the client computer program generated based on the animated region a mask, wherein the mask represents the static portion of a selectively animated image, and the mask is operable to mask underlying animated regions; and (d) the client computer program initiating, at the mobile device or via a server linked to the mobile device, the composition of a series of images including user selected animated regions, by rendering an animated image based on mask and the underlying masked animated regions. A computer program is provided for implementing the steps of the method, which may consist of a mobile application. The computer program may include a server application that cooperates with the mobile application for enabling the animated image composition processes and/or distribution and sharing of the animated images. A computer system is provided that includes a mobile device implementing the mobile application and optionally a server implementing the server application.
8 Techniques for processing image data generated from three-dimensional graphic models US13230613 2011-09-12 US09041717B2 2015-05-26 Michael Kaschalk; Eric A. Daniels; Brian S. Whited; Kyle D. Odermatt; Patrick T. Osborne
Techniques are disclosed for creating animated video frames which include both computer generated elements and hand drawn elements. For example, a software tool may allow an artist to draw line work (or supply other 2D image data) to composite with an animation frame rendered from a three dimensional (3D) graphical model of an object. The software tool may be configured to determine how to animate such 2D image data provided for one frame in order to appear in subsequent (or prior) frames in a manner consistent with changes in rendering the underlying 3D geometry.
9 Data Compression for Real-Time Streaming of Deformable 3D Models for 3D Animation US14298136 2014-06-06 US20140285496A1 2014-09-25 Edilson de Aguiar; Stefano Corazza; Emiliano Gambaretto
Systems and methods are described for performing spatial and temporal compression of deformable mesh based representations of 3D character motion allowing the visualization of high-resolution 3D character animations in real time. In a number of embodiments, the deformable mesh based representation of the 3D character motion is used to automatically generate an interconnected graph based representation of the same 3D character motion. The interconnected graph based representation can include an interconnected graph that is used to drive mesh clusters during the rendering of a 3D character animation. The interconnected graph based representation provides spatial compression of the deformable mesh based representation, and further compression can be achieved by applying temporal compression processes to the time-varying behavior of the mesh clusters. Even greater compression can be achieved by eliminating redundant data from the file format containing the interconnected graph based representation of the 3D character motion that would otherwise be repeatedly provided to a game engine during rendering, and by applying loss-less data compression to the data of the file itself.
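The temporal-compression idea described here (exploiting the time-varying behavior of mesh clusters) can be illustrated in its simplest form as keyframe extraction: keep a sample only when it differs from the last kept value, and hold the previous keyframe on playback. This is a simplified sketch under that assumption, not the patent's actual codec:

```python
def temporal_compress(samples, tol=0.0):
    """Keep (index, value) keyframes of a time-varying value (e.g. one
    parameter of a mesh cluster's transform), dropping samples that
    differ from the last kept sample by at most 'tol'."""
    keys, last = [], None
    for i, v in enumerate(samples):
        if last is None or abs(v - last) > tol:
            keys.append((i, v))
            last = v
    return keys

def temporal_decompress(keys, length):
    """Reconstruct the full sample track by holding each keyframe
    until the next one appears."""
    keyed, out, value = dict(keys), [], None
    for i in range(length):
        if i in keyed:
            value = keyed[i]
        out.append(value)
    return out

samples = [1.0, 1.0, 1.0, 2.0, 2.0]
keys = temporal_compress(samples)                    # [(0, 1.0), (3, 2.0)]
restored = temporal_decompress(keys, len(samples))   # round-trips to samples
```

Five samples compress to two keyframes; a nonzero `tol` would trade exactness for further compression, analogous to the lossy stage before the lossless file compression the abstract mentions.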
10 Data compression for real-time streaming of deformable 3D models for 3D animation US12579334 2009-10-14 US08749556B2 2014-06-10 Edilson de Aguiar; Stefano Corazza; Emiliano Gambaretto
Systems and methods are described for performing spatial and temporal compression of deformable mesh based representations of 3D character motion allowing the visualization of high-resolution 3D character animations in real time. In a number of embodiments, the deformable mesh based representation of the 3D character motion is used to automatically generate an interconnected graph based representation of the same 3D character motion. The interconnected graph based representation can include an interconnected graph that is used to drive mesh clusters during the rendering of a 3D character animation. The interconnected graph based representation provides spatial compression of the deformable mesh based representation, and further compression can be achieved by applying temporal compression processes to the time-varying behavior of the mesh clusters. Even greater compression can be achieved by eliminating redundant data from the file format containing the interconnected graph based representation of the 3D character motion that would otherwise be repeatedly provided to a game engine during rendering, and by applying loss-less data compression to the data of the file itself.
11 Mesh transfer US12200704 2008-08-28 US08379036B2 2013-02-19 Tony DeRose; Mark Meyer; Tom Sanocki; Brian Green
Mesh data and other proximity information from the mesh of one model can be transferred to the mesh of another model, even with different topology and geometry. A correspondence can be created for transferring or sharing information between points of a source mesh and points of a destination mesh. Information can be “pushed through” the correspondence to share or otherwise transfer data from one mesh to its designated location at another mesh. Correspondences can be authored on a source mesh by drawing or placing one or more geometric primitives (e.g., points, lines, curves, volumes, etc.) at the source mesh and corresponding geometric primitives at the destination mesh. A collection of “feature curves” may be placed to partition the source and destination meshes into a collection of “feature regions” resulting in partitions or “feature curve networks” for constructing correspondences between all points of one mesh and all points of another mesh.
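The "push through" step above reduces, once a correspondence has been authored, to moving per-point data along the correspondence from source to destination mesh. A minimal sketch with dictionaries standing in for mesh attributes; the names and data layout are illustrative, not the patent's API:

```python
def transfer_through_correspondence(src_values, correspondence):
    """Push per-point data from a source mesh to a destination mesh.

    'src_values' maps source point ids to attribute values (paint,
    rigging weights, etc.); 'correspondence' maps each destination
    point id to the source point id it corresponds to. The meshes may
    have different topology; only the correspondence links them."""
    return {dst: src_values[src] for dst, src in correspondence.items()}

src = {0: "red", 1: "blue"}       # per-point paint data on the source mesh
corr = {10: 0, 11: 0, 12: 1}      # destination points -> source points
dst = transfer_through_correspondence(src, corr)
# dst == {10: "red", 11: "red", 12: "blue"}
```

Note the correspondence need not be one-to-one: two destination points here pull from the same source point, which is how a coarser source mesh can drive a finer destination mesh within one feature region.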
12 COMPUTER-BASED METHOD FOR 3D SIMULATION OF OIL AND GAS OPERATIONS EP13758075 2013-03-06 EP2823425A4 2016-02-10 LOTH MATTHEW; ROBINEAU DELPHINE; ZUBAIR KHALID
13 Techniques for processing image data generated from three-dimensional graphical models EP11191790.2 2011-12-02 EP2568442A2 2013-03-13 Kaschalk, Michael; Daniels, Eric A.; Whited, Brian S.; Odermatt, Kyle D.; Osborne, Patrick T.
Techniques are disclosed for creating animated video frames which include both computer generated elements and hand drawn elements. For example, a software tool may allow an artist to draw line work (or supply other 2D image data) to composite with an animation frame rendered from a three dimensional (3D) graphical model of an object. The software tool may be configured to determine how to animate such 2D image data provided for one frame in order to appear in subsequent (or prior) frames in a manner consistent with changes in rendering the underlying 3D geometry.
14 Transfer of rigs with temporal coherence US14170835 2014-02-03 US09947123B1 2018-04-17 Brian Green
In various embodiments, a user can create or generate objects to be modeled, simulated, and/or rendered. The user can apply a mesh to the character's form to create the character's topology. Information, such as character rigging, shader and paint data, hairstyles, or the like can be attached to or otherwise associated with the character's topology. A standard or uniform topology can then be generated that allows information associated with the character to be transferred to other characters that have a similar topological correspondence.
15 CONTEXT-SPECIFIC USER INTERFACES US15798257 2017-10-30 US20180067633A1 2018-03-08 Christopher WILSON; Gary Ian BUTCHER; Kevin Will CHEN; Imran CHAUDHRI; Alan C. DYE; Aurelio GUZMAN; Jonathan P. IVE; Chanaka G. KARUNAMUNI; Kenneth KOCIENDA; Kevin LYNCH; Pedro MARI; Alessandro SABATELLI; Brian SCHMITT; Eric Lance WILSON; Lawrence Y. YANG
Context-specific user interfaces for use with a portable multifunction device are disclosed. The methods described herein for context-specific user interfaces provide indications of time and, optionally, a variety of additional information. Further disclosed are non-transitory computer-readable storage media, systems, and devices configured to perform the methods described herein.
16 Context-specific user interfaces US14815879 2015-07-31 US09804759B2 2017-10-31 Christopher Wilson; Gary Ian Butcher; Kevin Will Chen; Imran Chaudhri; Alan C. Dye; Aurelio Guzman; Jonathan P. Ive; Chanaka G. Karunamuni; Kenneth Kocienda; Kevin Lynch; Pedro Mari; Alessandro Sabatelli; Brian Schmitt; Eric Lance Wilson; Lawrence Y. Yang
Context-specific user interfaces for use with a portable multifunction device are disclosed. The methods described herein for context-specific user interfaces provide indications of time and, optionally, a variety of additional information. Further disclosed are non-transitory computer-readable storage media, systems, and devices configured to perform the methods described herein.
17 CONTEXT-SPECIFIC USER INTERFACES US14815909 2015-07-31 US20160034167A1 2016-02-04 Christopher WILSON; Gary Ian BUTCHER; Kevin Will CHEN; Imran CHAUDHRI; Alan C. DYE; Aurelio GUZMAN; Jonathan P. IVE; Chanaka G. KARUNAMUNI; Kenneth KOCIENDA; Kevin LYNCH; Pedro MARI; Alessandro SABATELLI; Brian SCHMITT; Eric Lance WILSON; Lawrence Y. YANG
Context-specific user interfaces for use with a portable multifunction device are disclosed. The methods described herein for context-specific user interfaces provide indications of time and, optionally, a variety of additional information. Further disclosed are non-transitory computer-readable storage media, systems, and devices configured to perform the methods described herein.
18 CONTEXT-SPECIFIC USER INTERFACES US14815879 2015-07-31 US20160034152A1 2016-02-04 Christopher WILSON; Gary Ian BUTCHER; Kevin Will CHEN; Imran CHAUDHRI; Alan C. DYE; Aurelio GUZMAN; Jonathan P. IVE; Chanaka G. KARUNAMUNI; Kenneth KOCIENDA; Kevin LYNCH; Pedro MARI; Alessandro SABATELLI; Brian SCHMITT; Eric Lance WILSON; Lawrence Y. YANG
Context-specific user interfaces for use with a portable multifunction device are disclosed. The methods described herein for context-specific user interfaces provide indications of time and, optionally, a variety of additional information. Further disclosed are non-transitory computer-readable storage media, systems, and devices configured to perform the methods described herein.
19 TECHNIQUES FOR PROCESSING IMAGE DATA GENERATED FROM THREE-DIMENSIONAL GRAPHIC MODELS US14692459 2015-04-21 US20150228103A1 2015-08-13 Michael KASCHALK; Eric A. DANIELS; Brian S. WHITED; Kyle D. ODERMATT; Patrick T. OSBORNE
Techniques are disclosed for creating animated video frames which include both computer generated elements and hand drawn elements. For example, a software tool may allow an artist to draw line work (or supply other 2D image data) to composite with an animation frame rendered from a three dimensional (3D) graphical model of an object. The software tool may be configured to determine how to animate such 2D image data provided for one frame in order to appear in subsequent (or prior) frames in a manner consistent with changes in rendering the underlying 3D geometry.
20 Transfer of rigs with temporal coherence US12325898 2008-12-01 US08643654B1 2014-02-04 Brian Green
In various embodiments, a user can create or generate objects to be modeled, simulated, and/or rendered. The user can apply a mesh to the character's form to create the character's topology. Information, such as character rigging, shader and paint data, hairstyles, or the like can be attached to or otherwise associated with the character's topology. A standard or uniform topology can then be generated that allows information associated with the character to be transferred to other characters that have a similar topological correspondence.