101 |
System for parametric generation of custom scalable animated characters on the web |
US14811256 |
2015-07-28 |
US09786032B2 |
2017-10-10 |
Asa Jonas Ivry Block; Suzanne Chambers; George Michael Brower; Igor Clark; Richard The |
A graphic character object temporary storage stores parameters of a character and associated default values in a hierarchical data structure, together with one or more animation object data represented in a hierarchical data structure, each animation object data having an associated animation; the graphic character object temporary storage and the animation object data are part of a local memory of a computer system. A method includes receiving a vector graphic object having character part objects which are represented as geometric shapes, displaying a two-dimensional character, changing the scale of a part of the displayed two-dimensional character, storing an adjusted parameter in the graphic character object temporary storage as a percentage change from the default value, displaying a customized two-dimensional character, applying keyframe data in an associated animation object data to the character part objects, and displaying an animation according to the keyframe data. |
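The storage scheme this abstract describes — defaults kept in a hierarchy, with user adjustments recorded as percentage changes from those defaults — can be sketched as follows. This is an illustrative reading, not the patented implementation; all class and method names are my own.

```python
# Illustrative sketch (assumed names, not the patented code): a character
# parameter store where adjustments are kept as percent deltas from defaults.

class CharacterObjectStore:
    """In-memory store mapping character parts to default parameter values."""

    def __init__(self, defaults):
        # defaults: nested dict, e.g. {"head": {"scale": 1.0}, ...}
        self.defaults = defaults
        self.adjustments = {}  # part -> {param: percent change from default}

    def adjust(self, part, param, new_value):
        """Record a user edit as a percentage change from the default value."""
        default = self.defaults[part][param]
        self.adjustments.setdefault(part, {})[param] = (
            (new_value - default) / default * 100.0
        )

    def resolve(self, part, param):
        """Return the effective value: default plus any stored percent delta."""
        default = self.defaults[part][param]
        pct = self.adjustments.get(part, {}).get(param, 0.0)
        return default * (1.0 + pct / 100.0)

store = CharacterObjectStore({"head": {"scale": 1.0}, "arm": {"scale": 2.0}})
store.adjust("head", "scale", 1.5)          # user scales the head up
print(store.adjustments["head"]["scale"])   # 50.0 (percent change, not raw value)
print(store.resolve("head", "scale"))       # 1.5
print(store.resolve("arm", "scale"))        # 2.0 (untouched default)
```

Storing deltas rather than absolute values is what lets the same customization survive a change of the underlying default art, which appears to be the point of the percentage representation.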
102 |
COMPUTERIZED MOTION ARCHITECTURE |
US14948146 |
2015-11-20 |
US20170148202A1 |
2017-05-25 |
Jeffrey David Verkoeyen; Randall Li |
A computing system is presented including a processor and non-transient memory which includes instructions to execute a method including receiving a motion instruction message which includes graphical objects to be modified and instructions to be assigned to each of the graphical objects to be modified, where an instruction includes a property to be applied to a graphical object. The method also includes identifying actors to be assigned to each of the graphical objects based on the instructions assigned to each of the graphical objects, where an actor is a non-graphical object capable of executing one or more instructions. The method also includes generating the actors for each of the graphical objects, executing the instructions assigned to each of the graphical objects via the actors, and outputting the modified graphical objects for display. |
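The actor pattern described in this abstract — a non-graphical object generated per graphical object to execute that object's assigned instructions — can be sketched roughly as below. Names and structure are assumptions for illustration, not taken from the patent.

```python
# Hedged sketch of the described pattern (my own naming, not the patent's):
# each graphical object gets a non-graphical "actor" that executes the
# property instructions assigned to that object.

class GraphicalObject:
    def __init__(self, name):
        self.name = name

class Actor:
    """Non-graphical object capable of executing instructions on a target."""

    def __init__(self, target, instructions):
        self.target = target
        self.instructions = instructions  # list of (property, value) pairs

    def execute(self):
        for prop, value in self.instructions:
            setattr(self.target, prop, value)

def run_motion_message(message):
    """message: dict mapping GraphicalObject -> list of (property, value).

    Generates an actor per object, executes the instructions, and returns
    the modified objects for display.
    """
    actors = [Actor(obj, instrs) for obj, instrs in message.items()]
    for actor in actors:
        actor.execute()
    return list(message.keys())

box = GraphicalObject("box")
modified = run_motion_message({box: [("x", 10), ("opacity", 0.5)]})
print(modified[0].x, modified[0].opacity)  # 10 0.5
```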
103 |
OFFERING MENU ITEMS TO A USER |
US15406277 |
2017-01-13 |
US20170124751A1 |
2017-05-04 |
Andree ROSS; Wolfgang THEIMER |
The invention relates to an electronic device (1) offering a plurality of menu items to a user. In order to enable a user-friendly selection of the menu items, the electronic device (1) comprises a screen (60), user input means (70), storing means (50) adapted to store parameters for a virtual model of a user, and processing means (31). The processing means (31) are adapted to generate a visual representation of a virtual user model (61) on the screen (60) based on the stored parameters for the virtual model of a user, to cause a movement of the visually represented virtual user model (61) depending on a user input, to detect a movement of the visually represented virtual user model (61) that is associated with a particular menu item, which menu item is offered for any of a plurality of applications, and to call a function that is assigned to the particular menu item. |
104 |
Skeletal Joint Optimization For Linear Blend Skinning Deformations Utilizing Skeletal Pose Sampling |
US14809519 |
2015-07-27 |
US20170032579A1 |
2017-02-02 |
Elmar Eisemann; Jean-Marc Thiery |
A novel and useful mechanism is presented for the skinning of 3D meshes with reference to a skeleton utilizing statistical weight optimization techniques. The mechanism of the present invention comprises (1) an efficient, high-quality linear blend skinning (LBS) technique based on a set of skeleton deformations sampled from the manipulation space; (2) a joint placement algorithm to optimize the input skeleton; and (3) a set of tools for a user to interactively control the skinning process. Statistical skinning weight maps are computed using an as-rigid-as-possible (ARAP) optimization. The method operates with a coarsely placed initial skeleton and optimizes joint placements to improve the skeleton's alignment. Bones may also be parameterized to incorporate twists, bends, stretches and spines. Several easy-to-use tools add additional constraints to resolve ambiguous situations when needed, and interactive feedback is provided to aid users. Quality weight maps are generated for challenging deformations and various data types (e.g., triangle and tetrahedral meshes), including noisy, complex and topologically challenging examples (e.g., missing triangles, open boundaries, self-intersections, or wire edges). |
105 |
Display device and computer |
US14370593 |
2012-05-28 |
US09501858B2 |
2016-11-22 |
Takayoshi Takehara; Atsushi Hori; Shinya Taguchi |
A display device converts an animation file into first binary data in a data format which can be processed by a first graphics library of a first display, the binary data including a DL, and converts the converted first binary data into second binary data in a data format which can be processed by a second graphics library of a second display. |
106 |
SYSTEM, METHOD AND APPARATUS FOR GENERATING HAND GESTURE ANIMATION DETERMINED ON DIALOGUE LENGTH AND EMOTION |
US14901013 |
2014-06-27 |
US20160292131A1 |
2016-10-06 |
Linus LANGELS; Philip EDNER; Jonas LOFGREN |
System, method and apparatuses directed to a paradigm of manuscript generation and manipulation combined with contemporaneous or simultaneous visualization of the text or other media being entered by the creator, with the emotion and mood of the characters being conveyed graphically through rendering. Through real-time calculations, respective characters are graphically depicted speaking and interacting physically with other characters, pursuant to directives found in the manuscript text. |
107 |
SYSTEM, APPARATUS AND METHOD FOR MOVIE CAMERA PLACEMENT BASED ON A MANUSCRIPT |
US14900916 |
2014-06-27 |
US20160162154A1 |
2016-06-09 |
Jonathan Hise Kaldma; Philip Edner; Jonas Löfgren |
System, method and apparatuses directed to a paradigm of manuscript generation and manipulation combined with contemporaneous or simultaneous visualization of the text or other media being entered by the creator. Through real time calculations, respective vantage points or visual points of view are determined in the rendering and depiction of the visualization of the manuscript instructions and text. |
108 |
SYSTEM, APPARATUS AND METHOD FOR THE CREATION AND VISUALIZATION OF A MANUSCRIPT FROM TEXT AND/OR OTHER MEDIA |
US14900854 |
2014-06-27 |
US20160139786A1 |
2016-05-19 |
Jonathan Hise Kaldma; Ted Ulrik Lindström; Philip Edner; Jonas Löfgren |
System, method and apparatuses directed to a paradigm of manuscript generation, transformation and manipulation combined with contemporaneous or simultaneous visualization of the text or other media being entered by the creator. Through respective panels or interfaces the creator may manipulate a work or manuscript while visualizing the effects desired. |
109 |
COORDINATION OF ANIMATIONS ACROSS MULTIPLE APPLICATIONS OR PROCESSES |
US14617791 |
2015-02-09 |
US20150221120A1 |
2015-08-06 |
Bonny Lau; Song Zou; Wei Zhang; Brian Beck; Jonathan Gleasman; Pai-Hung Chen |
Animation coordination systems and methods are provided that manage animation context transitions between and/or among multiple applications. A global coordinator can obtain initial information, such as initial graphical representations and object types, initial positions, etc., from initiator applications, and final information, such as final graphical representations and object types, final positions, etc., from destination applications. The global coordinator creates an animation context transition between initiator applications and destination applications based upon the initial information and the final information. |
110 |
Interaction between 3D animation and corresponding script |
US13299518 |
2011-11-18 |
US09003287B2 |
2015-04-07 |
Joshua Goldenberg; Louise Rasmussen; Adam Schnitzer; Domenico Porcino; George Lucas; Brett A. Allen |
Interaction between a 3D animation and a corresponding script includes: displaying a user interface that includes at least a 3D animation area and a script area, the 3D animation area including (i) a 3D view area for creating and playing a 3D animation and (ii) a timeline area for visualizing actions by one or more 3D animation characters, the script area comprising one or more objects representing lines from a script having one or more script characters; receiving a first user input corresponding to a user selecting at least one of the objects from the script area for assignment to a location in the timeline area; generating a timeline object at the location in response to the first user input, the timeline object corresponding to the selected object; and associating audio data with the generated timeline object, the audio data corresponding to a line represented by the selected object. |
111 |
METHOD FOR SHARING EMOTIONS THROUGH THE CREATION OF THREE DIMENSIONAL AVATARS AND THEIR INTERACTION |
US14456558 |
2014-08-11 |
US20150070351A1 |
2015-03-12 |
Massimiliano Tarquini; Olivier Chandra De Keyser; Allessandro Ligi |
A method is provided for transforming a two-dimensional image of at least one portion of a human or animal body into a three-dimensional model. A search is made for features indicative of at least a portion of the human or animal body within the at least one portion. A set of landmarks is identified that corresponds to the features. A 3D deformable mask, which includes the set of landmarks, is aligned to create a 3D model of the face that respects its morphology; the deformable mask includes a number of mesh shapes that correspond to the at least one portion of the human or animal body. The 3D model is animated by dividing it into concentric rings or quasi-rings and applying different degrees of rotation to each ring or quasi-ring. |
112 |
Coordination of animations across multiple applications or processes |
US12966787 |
2010-12-13 |
US08957900B2 |
2015-02-17 |
Bonny Lau; Song Zou; Wei Zhang; Brian Beck; Jonathan Gleasman; Pai-Hung Chen |
Animation coordination systems and methods are provided that manage animation context transitions between and/or among multiple applications. A global coordinator can obtain initial information, such as initial graphical representations and object types, initial positions, etc., from initiator applications, and final information, such as final graphical representations and object types, final positions, etc., from destination applications. The global coordinator creates an animation context transition between initiator applications and destination applications based upon the initial information and the final information. |
113 |
SCRIPT CONTROL FOR CAMERA POSITIONING IN A SCENE GENERATED BY A COMPUTER RENDERING ENGINE |
US14220061 |
2014-03-19 |
US20140218373A1 |
2014-08-07 |
Charles J. Kulas |
A system for controlling a rendering engine by using specialized commands. The commands are used to generate a production, such as a television show, at an end-user's computer that executes the rendering engine. In one embodiment, the commands are sent over a network, such as the Internet, to achieve broadcasts of video programs at very high compression and efficiency. Commands for setting and moving camera viewpoints, animating characters, and defining or controlling scenes and sounds are described. At a fine level of control, math models and coordinate systems can be used to make specifications. At a coarse level of control, the command language approaches the text format traditionally used in television or movie scripts. Simple names for objects within a scene are used to identify items, directions and paths. Commands are further simplified by having the rendering engine use defaults when specifications are left out. For example, when a camera direction is not specified, the system assumes that the viewpoint is to be the current action area. The system provides a hierarchy of detail levels. Movement commands can be defaulted or specified. Synchronized speech can be specified as digital audio or as text which is used to synthesize the speech. |
114 |
Methods and Systems for Representing Complex Animation using Style Capabilities of Rendering Applications |
US13018830 |
2011-02-01 |
US20140049547A1 |
2014-02-20 |
Rick Cabanier; Dan J. Clark; David George |
A computerized device implements an animation coding engine to analyze timeline data defining an animation sequence and generate a code package representing the animation sequence as a set of visual assets and animation primitives supported by a rendering application, each visual asset associated with a corresponding animation primitive. The code package is generated to include suitable code that, when processed by the rendering application, causes the rendering application to invoke the corresponding animation primitive for each visual asset to provide the animation sequence. For example, the rendering application can comprise a browser that renders the visual assets. The code package can comprise a markup document including or referencing a cascading style sheet defining the corresponding animation primitives as styles to be applied to the visual assets when rendered by the browser. |
115 |
EXTENSIBLE SPRITE SHEET GENERATION MECHANISM FOR DECLARATIVE DATA FORMATS AND ANIMATION SEQUENCE FORMATS |
US13458839 |
2012-04-27 |
US20130286025A1 |
2013-10-31 |
Henry David Spells, III; Peter W. Moody |
A sprite sheet generation mechanism includes providing a sprite sheet generation engine host, which may be an authoring application. The host loads code that describes sprite sheet format information and a set of ordered images into the sprite sheet generation engine. The code comes from code resources, which may be plug-ins created by a user and managed by a plug-in type manager. The sprite sheet generation engine is operated using the sprite sheet format information and the set of ordered images to generate a sprite sheet. |
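The core job such an engine performs — placing an ordered set of frames onto a single sheet according to format information — can be sketched for the simple case of equally sized frames on a grid. This is a minimal illustration under assumed names, not the patented engine.

```python
# Illustrative sketch (assumed names): grid layout of an ordered frame set,
# the basic computation behind sprite sheet generation.

def layout_sprite_sheet(num_frames, frame_w, frame_h, columns):
    """Return (sheet_w, sheet_h, placements) for equally sized frames.

    placements is a list of (frame_index, x, y) tuples in frame order,
    filling the grid row by row.
    """
    rows = (num_frames + columns - 1) // columns  # ceiling division
    placements = [
        (i, (i % columns) * frame_w, (i // columns) * frame_h)
        for i in range(num_frames)
    ]
    return columns * frame_w, rows * frame_h, placements

w, h, cells = layout_sprite_sheet(num_frames=5, frame_w=64, frame_h=32, columns=3)
print(w, h)      # 192 64  (3 columns wide, 2 rows tall)
print(cells[4])  # (4, 64, 32) -> frame 4 sits in column 1 of row 1
```

In the plug-in scheme the abstract describes, a layout function like this would be one of the interchangeable pieces: a plug-in could supply a different packing strategy (e.g., tightly packed variable-size frames) while the host and engine interfaces stay the same.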
116 |
Interaction Between 3D Animation and Corresponding Script |
US13299518 |
2011-11-18 |
US20130132835A1 |
2013-05-23 |
Joshua Goldenberg; Louise Rasmussen; Adam Schnitzer; Domenico Porcino; George Lucas; Brett A. Allen |
Interaction between a 3D animation and a corresponding script includes: displaying a user interface that includes at least a 3D animation area and a script area, the 3D animation area including (i) a 3D view area for creating and playing a 3D animation and (ii) a timeline area for visualizing actions by one or more 3D animation characters, the script area comprising one or more objects representing lines from a script having one or more script characters; receiving a first user input corresponding to a user selecting at least one of the objects from the script area for assignment to a location in the timeline area; generating a timeline object at the location in response to the first user input, the timeline object corresponding to the selected object; and associating audio data with the generated timeline object, the audio data corresponding to a line represented by the selected object. |
117 |
Methods and Systems for Representing Complex Animation Using Scripting Capabilities of Rendering Applications |
US13081745 |
2011-04-07 |
US20120256928A1 |
2012-10-11 |
Alexandru Chiculita |
A computerized device implements an animation coding engine to analyze timeline data defining an animation sequence and generate a code package. The code package can represent the animation sequence using markup code that defines a rendered appearance of a plurality of frames and a structured data object also comprised in the code package and defining a parameter used by a scripting language in transitioning between frames. The markup code can also comprise a reference to a visual asset included within a frame. The code package further comprises a cascading style sheet defining an animation primitive as a style to be applied to the asset to reproduce one or more portions of the animation sequence without transitioning between frames. |
118 |
OFFERING MENU ITEMS TO A USER |
US13367110 |
2012-02-06 |
US20120200607A1 |
2012-08-09 |
Andree Ross; Wolfgang Theimer |
The invention relates to an electronic device offering a plurality of menu items to a user. In order to enable a user-friendly selection of the menu items, the electronic device comprises a screen, user input means, storing means adapted to store parameters for a virtual model of a user, and processing means. The processing means are adapted to generate a visual representation of a virtual user model on the screen based on the stored parameters for the virtual model of a user, to cause a movement of the visually represented virtual user model depending on a user input, to detect a movement of the visually represented virtual user model that is associated with a particular menu item, and to call a function that is assigned to the particular menu item. |
119 |
METHOD FOR DEFINING ANIMATION PARAMETERS FOR AN ANIMATION DEFINITION INTERFACE |
US13075923 |
2011-03-30 |
US20110175922A1 |
2011-07-21 |
Erich Haratsch; Joern Ostermann |
A system and a computer-readable medium are provided for controlling a computing device to define a set of computer animation parameters for an object to be animated electronically. An electronic reference model of the object to be animated is obtained. The reference model is altered to form a modified model corresponding to a first animation parameter. Physical differences between the electronic reference model and the modified model are determined, and a representation of the physical differences is stored as the first animation parameter. Altering of the reference model and determining of the physical differences are repeated. The stored parameters are provided to a rendering device for generation of the animation in accordance with the stored parameters. Determining physical differences between the electronic reference model and the modified model and storing a representation of the physical differences as the first animation parameter include comparing vertex positions of the reference model. |
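The difference-based parameter this abstract describes — compare vertex positions of a reference model and a modified model, store the difference, hand it to a renderer — can be sketched as per-vertex displacement vectors. The function names are illustrative assumptions, not the patent's terminology.

```python
# Minimal sketch (my own naming): an animation parameter stored as the
# per-vertex positional difference between a reference and a modified model,
# which a renderer can later re-apply at any strength.

def make_animation_parameter(reference, modified):
    """Store the physical difference as per-vertex displacement vectors."""
    return [
        tuple(m - r for r, m in zip(rv, mv))
        for rv, mv in zip(reference, modified)
    ]

def apply_parameter(reference, parameter, weight=1.0):
    """Render-side reconstruction: reference + weight * stored difference."""
    return [
        tuple(r + weight * d for r, d in zip(rv, dv))
        for rv, dv in zip(reference, parameter)
    ]

ref = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
mod = [(0.0, 0.5, 0.0), (1.0, 0.0, 0.2)]
param = make_animation_parameter(ref, mod)
print(param)                        # [(0.0, 0.5, 0.0), (0.0, 0.0, 0.2)]
print(apply_parameter(ref, param))  # reproduces the modified model exactly
```

Because only the displacement is stored, the renderer can blend it (`weight=0.5` gives a half-strength deformation), which is why difference-based parameters are convenient for animation interpolation.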
120 |
Script control for gait animation in a scene generated by a computer rendering engine |
US12259291 |
2008-10-27 |
US07830385B2 |
2010-11-09 |
Charles J. Kulas |
A system for controlling a rendering engine by using specialized commands. The commands are used to generate a production, such as a television show, at an end-user's computer that executes the rendering engine. In one embodiment, the commands are sent over a network, such as the Internet, to achieve broadcasts of video programs at very high compression and efficiency. Commands for setting and moving camera viewpoints, animating characters, and defining or controlling scenes and sounds are described. At a fine level of control, math models and coordinate systems can be used to make specifications. At a coarse level of control, the command language approaches the text format traditionally used in television or movie scripts. Simple names for objects within a scene are used to identify items, directions and paths. Commands are further simplified by having the rendering engine use defaults when specifications are left out. For example, when a camera direction is not specified, the system assumes that the viewpoint is to be the current action area. The system provides a hierarchy of detail levels. Movement commands can be defaulted or specified. Synchronized speech can be specified as digital audio or as text which is used to synthesize the speech. |