No.  Patent Title  Application No.  Filing Date  Publication No.  Publication Date  Inventors
61 METHOD FOR SHARING EMOTIONS THROUGH THE CREATION OF THREE-DIMENSIONAL AVATARS AND THEIR INTERACTION US15893077 2018-02-09 US20180232929A1 2018-08-16 Massimiliano Tarquini; Olivier C. De Keyser; Alessandro Ligi
A two-dimensional image of at least one portion of a human or animal body is transformed into a three-dimensional model. An image is acquired that includes the at least one portion of the human or animal body. An identification is made of the at least one portion within the image. Searches are made for features indicative of the at least one portion of the human or animal body within the at least one portion. One or more identifications are made of a set of landmarks corresponding to the features. An alignment is made of a deformable mask including the set of landmarks. The deformable mask includes a number of meshes corresponding to the at least one portion of the human or animal body. The 3D model is animated by dividing it into concentric rings and quasi-rings and applying a different degree of rotation to each ring.
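A minimal sketch of the ring-based animation step the abstract above describes: vertices of a 3D model are grouped into concentric rings by radial distance from a pivot, and each ring receives its own rotation. All names, the ring count, and the angle falloff are illustrative assumptions, not the patent's formulation.

```python
import numpy as np

def animate_by_rings(vertices, pivot, n_rings=5, max_angle_deg=30.0):
    """Rotate each concentric ring of vertices about the z-axis by its own angle."""
    offsets = vertices - pivot
    radii = np.linalg.norm(offsets[:, :2], axis=1)           # radial distance in the xy-plane
    edges = np.linspace(0.0, radii.max() + 1e-9, n_rings + 1)
    ring_ids = np.digitize(radii, edges) - 1                 # ring index per vertex

    out = vertices.copy()
    for ring in range(n_rings):
        # Inner rings turn more, outer rings less (an assumed falloff).
        angle = np.deg2rad(max_angle_deg * (1.0 - ring / n_rings))
        c, s = np.cos(angle), np.sin(angle)
        rot = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
        mask = ring_ids == ring
        out[mask] = offsets[mask] @ rot.T + pivot
    return out

verts = np.random.rand(100, 3)
moved = animate_by_rings(verts, pivot=np.zeros(3))
```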
62 SYSTEM FOR PARAMETRIC GENERATION OF CUSTOM SCALABLE ANIMATED CHARACTERS ON THE WEB US14811256 2015-07-28 US20170032492A1 2017-02-02 Asa Jonas Ivry BLOCK; Suzanne CHAMBERS; George Michael BROWER; Igor CLARK; Richard THE
A graphic character object temporary storage stores parameters of a character with associated default values in a hierarchical data structure, along with one or more animation object data, each represented in a hierarchical data structure and having an associated animation; the graphic character object temporary storage and the animation object data are part of a local memory of a computer system. A method includes receiving a vector graphic object having character part objects represented as geometric shapes, displaying a two-dimensional character, changing the scale of a part of the displayed two-dimensional character, storing the adjusted parameter in the graphic character object temporary storage as a percentage change from the default value, displaying the customized two-dimensional character, applying keyframe data in an associated animation object data to the character part objects, and displaying an animation according to the keyframe data.
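A minimal sketch of the storage scheme in the abstract above: a part's customized scale is persisted as a percentage change from its default inside a hierarchical character structure. The class and field names are assumptions for illustration.

```python
class CharacterPart:
    def __init__(self, name, default_scale=1.0, children=None):
        self.name = name
        self.default_scale = default_scale
        self.pct_change = 0.0            # stored adjustment, e.g. 25.0 means +25%
        self.children = children or []

    def set_scale(self, new_scale):
        # Persist the edit relative to the default, not as an absolute value.
        self.pct_change = (new_scale / self.default_scale - 1.0) * 100.0

    def effective_scale(self):
        return self.default_scale * (1.0 + self.pct_change / 100.0)

# Hierarchical character: a head with two ears, matching the hierarchical
# data structure the abstract describes.
head = CharacterPart("head", 1.0, [CharacterPart("ear_l"), CharacterPart("ear_r")])
head.children[0].set_scale(1.25)         # user enlarges the left ear by 25%
print(head.children[0].pct_change)       # 25.0
```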
63 CO-REGISTRATION - SIMULTANEOUS ALIGNMENT AND MODELING OF ARTICULATED 3D SHAPES US14433178 2012-12-14 US20150262405A1 2015-09-17 Michael Black; David L. Hirshberg; Matthew Loper; Eric Rachlin; Alex Weiss
The present application relates to a method, a model generation unit and a computer program (product) for generating trained models (M) of moving persons based on physically measured person scan data (S). The approach relies on a common template (T) for the respective person and on person scan data (S) measured in different shapes and different poses with a 3D laser scanner. A generic person model is used for co-registering a set of person scans (S): the template (T) is aligned to the set of person scans (S) while the generic person model is simultaneously trained to become a trained person model (M), by constraining the generic person model to be scan-specific, person-specific and pose-specific, and the trained model (M) is provided based on the co-registration of the measured scan data (S).
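A toy sketch of the alternating structure of co-registration as summarized above: registering the template to each scan and training the model are interleaved rather than run as separate stages. The blending update and the coupling weight are placeholders, not the patent's actual energy formulation, and the scans are assumed to be in vertex correspondence with the template.

```python
import numpy as np

def co_register(scans, template, n_iters=10, coupling=0.5):
    """scans: list of (V, 3) arrays in correspondence with the (V, 3) template."""
    model_mean = template.copy()
    for _ in range(n_iters):
        # Registration step: pull each aligned template toward its scan while
        # staying close to the current model (the model-specific constraint).
        registrations = [(1 - coupling) * s + coupling * model_mean for s in scans]
        # Training step: refit the model to the current registrations.
        model_mean = np.mean(registrations, axis=0)
    return model_mean, registrations

scans = [np.random.rand(10, 3) for _ in range(4)]
model, regs = co_register(scans, template=np.mean(scans, axis=0))
```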
64 Script control for camera positioning in a scene generated by a computer rendering engine US12259288 2008-10-27 US08717359B2 2014-05-06 Charles J. Kulas
A system for controlling a rendering engine by using specialized commands. The commands are used to generate a production, such as a television show, at an end-user's computer that executes the rendering engine. In one embodiment, the commands are sent over a network, such as the Internet, to achieve broadcasts of video programs at very high compression and efficiency. Commands for setting and moving camera viewpoints, animating characters, and defining or controlling scenes and sounds are described. At a fine level of control, math models and coordinate systems can be used to make specifications. At a coarse level of control, the command language approaches the text format traditionally used in television or movie scripts. Simple names for objects within a scene are used to identify items, directions and paths. Commands are further simplified by having the rendering engine use defaults when specifications are left out. For example, when a camera direction is not specified, the system assumes that the viewpoint is to be of the current action area. The system provides a hierarchy of detail levels. Movement commands can be defaulted or specified. Synchronized speech can be specified as digital audio or as text that is used to synthesize the speech.
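A minimal sketch of the defaulting behavior described in the abstract above (shared by the related patents in this family): a script command omits the camera direction, and the engine falls back to the current action area. The command syntax here is invented for illustration; the patents define their own command language.

```python
scene_state = {"action_area": "stage_left"}

def camera_command(position, direction=None):
    # When the script leaves the direction out, point at the action area.
    target = direction if direction is not None else scene_state["action_area"]
    return {"position": position, "look_at": target}

print(camera_command("overhead"))             # defaults to the current action area
print(camera_command("overhead", "doorway"))  # explicit fine-level control
```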
65 Script control for lip animation in a scene generated by a computer rendering engine US12259292 2008-10-27 US08674996B2 2014-03-18 Charles J. Kulas
A system for controlling a rendering engine by using specialized commands. The commands are used to generate a production, such as a television show, at an end-user's computer that executes the rendering engine. In one embodiment, the commands are sent over a network, such as the Internet, to achieve broadcasts of video programs at very high compression and efficiency. Commands for setting and moving camera viewpoints, animating characters, and defining or controlling scenes and sounds are described. At a fine level of control, math models and coordinate systems can be used to make specifications. At a coarse level of control, the command language approaches the text format traditionally used in television or movie scripts. Simple names for objects within a scene are used to identify items, directions and paths. Commands are further simplified by having the rendering engine use defaults when specifications are left out. For example, when a camera direction is not specified, the system assumes that the viewpoint is to be of the current action area. The system provides a hierarchy of detail levels. Movement commands can be defaulted or specified. Synchronized speech can be specified as digital audio or as text that is used to synthesize the speech.
66 ANIMATION ENGINE DECOUPLED FROM ANIMATION CATALOG US13249267 2011-09-30 US20130083034A1 2013-04-04 Aditi Mandal; Arye Gittelman; Lionel Robinson; Joy Seth
Embodiments provide animations with an animation engine decoupled from an animation catalog storing animation definitions. A computing device accesses at least one of the animation definitions corresponding to at least one markup language (ML) element to be animated. Final attribute values associated with the ML element are identified (e.g., provided by the caller or defined in the animation definition). The computing device animates the ML element using the accessed animation definition and the identified final attribute values. In some embodiments, the animation engine uses a single timer to animate a plurality of hypertext markup language (HTML) elements displayed by a browser.
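A minimal sketch of the single-timer idea in the abstract above: one tick loop drives several animated elements, each resolved from an animation definition toward its final attribute value. Plain Python objects stand in for browser HTML elements; all names are illustrative.

```python
import time

class Animation:
    def __init__(self, element, attribute, start, final, duration):
        self.element, self.attribute = element, attribute
        self.start, self.final, self.duration = start, final, duration
        self.elapsed = 0.0

    def tick(self, dt):
        self.elapsed = min(self.elapsed + dt, self.duration)
        t = self.elapsed / self.duration
        self.element[self.attribute] = self.start + (self.final - self.start) * t
        return self.elapsed < self.duration        # still running?

def run(animations, fps=60):
    dt = 1.0 / fps
    while animations:                              # one timer serves all elements
        animations = [a for a in animations if a.tick(dt)]
        time.sleep(dt)

el1, el2 = {"opacity": 0.0}, {"left": 0.0}
run([Animation(el1, "opacity", 0.0, 1.0, 0.5), Animation(el2, "left", 0.0, 200.0, 1.0)])
```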
67 COORDINATION OF ANIMATIONS ACROSS MULTIPLE APPLICATIONS OR PROCESSES US12966787 2010-12-13 US20120147012A1 2012-06-14 Bonny Lau; Song Zou; Wei Zhang; Brian Beck; Jonathan Gleasman; Pai-Hung Chen
An animation coordination system and methods are provided that manage animation context transitions between and/or among multiple applications. A global coordinator can obtain initial information, such as initial graphical representations and object types, initial positions, etc., from initiator applications, and final information, such as final graphical representations and object types, final positions, etc., from destination applications. The global coordinator creates an animation context transition between initiator applications and destination applications based upon the initial information and the final information.
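A minimal sketch of the coordinator pattern summarized above: the global coordinator queries the initiator application for its initial state and the destination application for its final state, then builds a single transition from the pair. The interfaces and field names are assumptions for illustration.

```python
class App:
    def __init__(self, info):
        self._info = info
    def get_animation_info(self):
        return self._info

def create_transition(initiator, destination, duration=0.3):
    # Gather initial state from the initiator and final state from the
    # destination, then describe one cross-application transition.
    return {"from": initiator.get_animation_info(),
            "to": destination.get_animation_info(),
            "duration": duration}

mail = App({"pos": (0, 0), "repr": "icon"})
viewer = App({"pos": (400, 300), "repr": "window"})
print(create_transition(mail, viewer))
```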
68 Multi-level control language for computer animation US11089090 2005-03-23 US08199150B2 2012-06-12 Charles J. Kulas
A system for controlling a rendering engine by using specialized commands. The commands are used to generate a production, such as a television show, at an end-user's computer that executes the rendering engine. In one embodiment, the commands are sent over a network, such as the Internet, to achieve broadcasts of video programs at very high compression and efficiency. Commands for setting and moving camera viewpoints, animating characters, and defining or controlling scenes and sounds are described. At a fine level of control, math models and coordinate systems can be used to make specifications. At a coarse level of control, the command language approaches the text format traditionally used in television or movie scripts. Simple names for objects within a scene are used to identify items, directions and paths. Commands are further simplified by having the rendering engine use defaults when specifications are left out. For example, when a camera direction is not specified, the system assumes that the viewpoint is to be of the current action area. The system provides a hierarchy of detail levels. Movement commands can be defaulted or specified. Synchronized speech can be specified as digital audio or as text that is used to synthesize the speech.
69 Method for defining animation parameters for an animation definition interface US11841356 2007-08-20 US07920143B1 2011-04-05 Erich Haratsch; Joern Ostermann
A system and a computer-readable medium are provided for controlling a computing device to define a set of computer animation parameters for an object to be animated electronically. An electronic reference model of the object to be animated is obtained. The reference model is altered to form a modified model corresponding to a first animation parameter. Physical differences between the electronic reference model and the modified model are determined, and a representation of the physical differences is stored as the first animation parameter. Altering of the reference model and determining of the physical differences are repeated. The stored parameters are provided to a rendering device for generation of the animation in accordance with the stored parameters. Determining physical differences between the electronic reference model and the modified model and storing a representation of the physical differences as the first animation parameter include comparing vertex positions of the reference model.
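A minimal sketch of the parameter-definition step described above: the animation parameter is stored as per-vertex differences between the reference model and the modified model, then applied with a weight at render time, in the style of a morph target. The function names and the toy models are illustrative.

```python
import numpy as np

def define_parameter(reference_vertices, modified_vertices):
    """Store the parameter as the vertex-position differences."""
    return modified_vertices - reference_vertices

def apply_parameters(reference_vertices, parameters, weights):
    """Blend the stored difference fields onto the reference model."""
    result = reference_vertices.copy()
    for delta, w in zip(parameters, weights):
        result += w * delta
    return result

ref = np.zeros((4, 3))                           # toy 4-vertex reference model
smile = define_parameter(ref, ref + [0.0, 0.1, 0.0])
posed = apply_parameters(ref, [smile], [0.5])    # half-strength parameter
```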
70 Dynamic scene descriptor method and apparatus US12475286 2009-05-29 US07839408B2 2010-11-23 Darwyn Peachey
A method for rendering a frame of animation includes retrieving scene descriptor data that specifies at least one object, wherein the object is associated with a first database query and the first database query is associated with a first rendering option. A selection of the first rendering option or a second rendering option is received. When the selection is of the first rendering option, the database is queried with the first database query, a first representation of the object is received from the database, the first representation of the object is loaded into computer memory, and the object is rendered for the frame of animation using the first representation. When the selection is of the second rendering option, the first representation of the object is not loaded into computer memory.
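A minimal sketch of the selective loading described above: the scene descriptor carries a database query per rendering option, and only the query for the selected option is executed, so the other representation never enters memory. The database class and query strings are stand-ins.

```python
class ToyDatabase:
    def fetch(self, query):
        return f"representation for: {query}"    # stand-in for real geometry

def load_object_for_frame(scene_entry, selected_option, db):
    # Only the query tied to the selected rendering option runs, so the
    # unselected representation is never loaded into memory.
    query = scene_entry["queries"][selected_option]
    return db.fetch(query)

entry = {"object": "character_A",
         "queries": {"full": "SELECT hi_res ...", "proxy": "SELECT lo_res ..."}}
print(load_object_for_frame(entry, "proxy", ToyDatabase()))
```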
71 Multiple-level graphics processing system and method US11555040 2006-10-31 US07705851B2 2010-04-27 Joseph S. Beda; Gregory D. Swedberg; Oreste Dorin Ungureanu; Kevin T. Gallo; Paul C. David; Matthew W. Calkins
A multiple-level graphics processing system and method (e.g., of an operating system) for providing improved graphics output including, for example, smooth animation. One such multiple-level graphics processing system comprises two components, including a tick-on-demand or slow-tick high-level component, and a fast-tick (e.g., at the graphics hardware frame refresh rate) low-level component. In general, the high-level, less frequent component performs computationally intensive aspects of updating animation parameters and traversing scene data structures, in order to pass simplified data structures to the low-level component. The low-level component operates at a higher frequency, such as the frame refresh rate of the graphics subsystem, to process the data structures into constant output data for the graphics subsystem. The low-level processing includes interpolating any parameter intervals as necessary to obtain instantaneous values to render the scene for each frame of animation.
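A minimal sketch of the two-level split described above: the slow-tick high-level pass reduces its expensive work to a simple parameter interval, and the fast-tick low-level pass interpolates an instantaneous value for every frame at the refresh rate. The interval format and names are assumptions.

```python
def high_level_update(scene_time):
    # Runs on demand; expensive scene traversal reduced to a simple interval.
    return {"t0": scene_time, "t1": scene_time + 1.0, "v0": 0.0, "v1": 100.0}

def low_level_frame(interval, frame_time):
    # Runs at the frame refresh rate; cheap interpolation only.
    span = interval["t1"] - interval["t0"]
    t = min(max((frame_time - interval["t0"]) / span, 0.0), 1.0)
    return interval["v0"] + t * (interval["v1"] - interval["v0"])

iv = high_level_update(0.0)
frames = [low_level_frame(iv, f / 60.0) for f in range(61)]   # one second at 60 Hz
```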
72 Dynamic scene descriptor method and apparatus US10810487 2004-03-26 US07548243B2 2009-06-16 Darwyn Peachey
A method for rendering a frame of animation includes retrieving scene descriptor data that specifies at least one object, wherein the object is associated with a first database query and the first database query is associated with a first rendering option. A selection of the first rendering option or a second rendering option is received. When the selection is of the first rendering option, the database is queried with the first database query, a first representation of the object is received from the database, the first representation of the object is loaded into computer memory, and the object is rendered for the frame of animation using the first representation. When the selection is of the second rendering option, the first representation of the object is not loaded into computer memory.
73 Method and system for mapping a natural language text into animation US11586676 2006-10-26 US20080215310A1 2008-09-04 Pascal Audant
A method for analyzing a natural language sentence describing an action, to create an action structure to be used in creating an animation of the action, the method comprising: processing the natural language sentence to create a grammatical tree comprising an action word and its associated values; providing constructs for the action word, each of the constructs having parameter types for defining the action expressed by the action word; identifying from the constructs at least one construct wherein at least one of the parameter types can take on at least one of the associated values thereby defining a matching value; and recording the at least one of the parameter types from the at least one construct as well as the matching value, thereby creating the action structure.
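A minimal sketch of the construct-matching step described above: each construct for an action word lists parameter types, a construct matches when the parsed values can fill its types, and the match is recorded as the action structure. The toy constructs and the preference for the most specific match are assumptions.

```python
CONSTRUCTS = {
    "throw": [
        {"params": ["agent", "object"]},             # "the boy throws the ball"
        {"params": ["agent", "object", "target"]},   # "... to the dog"
    ],
}

def build_action_structure(action_word, parsed_values):
    """parsed_values: dict of parameter type -> value from the grammatical tree."""
    # Try the most specific construct (most parameter types) first.
    for construct in sorted(CONSTRUCTS.get(action_word, []),
                            key=lambda c: -len(c["params"])):
        if all(p in parsed_values for p in construct["params"]):
            matched = {p: parsed_values[p] for p in construct["params"]}
            return {"action": action_word, "values": matched}
    return None

print(build_action_structure("throw", {"agent": "boy", "object": "ball"}))
```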
74 Offering Menu Items to a User US11885704 2005-03-04 US20080172635A1 2008-07-17 Andree Ross; Wolfgang Theimer
The invention relates to an electronic device 1 offering a plurality of menu items to a user. In order to enable a user-friendly selection of the menu items, the electronic device 1 comprises a screen 60, user input means 70, storing means 50 adapted to store parameters for a virtual model of a user, and processing means 31. The processing means 31 are adapted to generate a visual representation of a virtual user model 61 on the screen 60 based on the stored parameters for the virtual model of a user, to cause a movement of the visually represented virtual user model 61 depending on a user input, to detect a movement of the visually represented virtual user model 61 that is associated with a particular menu item, which menu item is offered for any of a plurality of applications, and to call a function that is assigned to the particular menu item.
75 Joint component framework for modeling complex joint behavior US10425122 2003-04-25 US07333111B2 2008-02-19 Victor Ng-Thow-Hing; Wei Shao
A general joint component framework capable of exhibiting the complex behaviors of joints in articulated figures is provided. A network of joint components is used to model the kinematics of a joint. A joint builder can specify parameters for each of the joint components and join the joint components to form a joint function that captures the biomechanical dependencies between the components. The joint function has fewer inputs than the total number of possible articulations, yielding both simple control and biomechanically accurate joint movement.
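A minimal sketch of a joint function with fewer inputs than articulations, as the abstract above describes: one curl input drives three coupled finger joints. The coupling ratios are illustrative conventions (DIP flexion roughly two-thirds of PIP flexion), not values from the patent.

```python
def finger_joint_function(curl):
    """Map a single control input in [0, 1] to three joint angles (degrees)."""
    mcp = 90.0 * curl          # metacarpophalangeal joint
    pip = 100.0 * curl         # proximal interphalangeal joint follows the curl
    dip = 0.66 * pip           # DIP angle biomechanically coupled to the PIP
    return {"mcp": mcp, "pip": pip, "dip": dip}

print(finger_joint_function(0.5))   # one input, three coordinated articulations
```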
76 MULTIPLE-LEVEL GRAPHICS PROCESSING SYSTEM AND METHOD US11555040 2006-10-31 US20070057943A1 2007-03-15 Joseph Beda; Gregory Swedberg; Oreste Ungureanu; Kevin Gallo; Paul David; Matthew Calkins
A multiple-level graphics processing system and method (e.g., of an operating system) for providing improved graphics output including, for example, smooth animation. One such multiple-level graphics processing system comprises two components, including a tick-on-demand or slow-tick high-level component, and a fast-tick (e.g., at the graphics hardware frame refresh rate) low-level component. In general, the high-level, less frequent component performs computationally intensive aspects of updating animation parameters and traversing scene data structures, in order to pass simplified data structures to the low-level component. The low-level component operates at a higher frequency, such as the frame refresh rate of the graphics subsystem, to process the data structures into constant output data for the graphics subsystem. The low-level processing includes interpolating any parameter intervals as necessary to obtain instantaneous values to render the scene for each frame of animation.
77 Joint component framework for modeling complex joint behavior US10425122 2003-04-25 US20060061574A1 2006-03-23 Victor Ng-Thow-Hing; Wei Shao
A general joint component framework capable of exhibiting the complex behaviors of joints in articulated figures is provided. A network of joint components is used to model the kinematics of a joint. A joint builder can specify parameters for each of the joint components and join the joint components to form a joint function that captures the biomechanical dependencies between the components. The joint function has fewer inputs than the total number of possible articulations, yielding both simple control and biomechanically accurate joint movement.
78 Method for defining animation parameters for an animation definition interface US11179715 2005-07-12 US20050243092A1 2005-11-03 Erich Haratsch; Joern Ostermann
A system and a computer-readable medium are provided for controlling a computing device to define a set of computer animation parameters for an object to be animated electronically. An electronic reference model of the object to be animated is obtained. The reference model is altered to form a modified model corresponding to a first animation parameter. Physical differences between the electronic reference model and the modified model are determined, and a representation of the physical differences is stored as the first animation parameter. Altering of the reference model and determining of the physical differences are repeated. The stored parameters are provided to a rendering device for generation of the animation in accordance with the stored parameters. Determining physical differences between the electronic reference model and the modified model and storing a representation of the physical differences as the first animation parameter include comparing vertex positions of the reference model.
79 Creation and playback of computer-generated productions using script-controlled rendering engines US09576704 2000-05-22 US06947044B1 2005-09-20 Charles J. Kulas
A system for controlling a rendering engine by using specialized commands. The commands are used to generate a production, such as a television show, at an end-user's computer that executes the rendering engine. In one embodiment, the commands are sent over a network, such as the Internet, to achieve broadcasts of video programs at very high compression and efficiency. Commands for setting and moving camera viewpoints, animating characters, and defining or controlling scenes and sounds are described. At a fine level of control, math models and coordinate systems can be used to make specifications. At a coarse level of control, the command language approaches the text format traditionally used in television or movie scripts. Simple names for objects within a scene are used to identify items, directions and paths. Commands are further simplified by having the rendering engine use defaults when specifications are left out. For example, when a camera direction is not specified, the system assumes that the viewpoint is to be of the current action area. The system provides a hierarchy of detail levels. Movement commands can be defaulted or specified. Synchronized speech can be specified as digital audio or as text that is used to synthesize the speech.
80 Information processing system, information processing method and apparatus, and information serving medium US09520910 2000-03-07 US06820112B1 2004-11-16 Koichi Matsuda; Koji Matsuoka; Masashi Takeda
An AO server 100, a shared server 105 and a client PC 110 are connected to one another via the Internet to build a virtual community space accessible from a plurality of client PCs 110, 120, 130, 140, ... A movement interpretation node 112 for a virtual living object 111 is provided at the client PC 110, for example, and an object management node 102 for the virtual living object 101 (111) in the virtual community space is provided at the AO server 100. Thus, the movement and structure of an object in a virtual space can be freely changed or modified.