A method performed by a computer is disclosed. The method comprises receiving color data for input pixels of an input image and an input set of features used to render the input image of a three-dimensional animation environment, wherein the input pixels are of a first resolution. The computer may then load into memory a generator of a generative adversarial network including a neural network used to scale the input image, the neural network trained using training data comprising color data of training input images and training output images and a training set of the features used to render the training input images. After the generator is loaded into memory, the computer may generate an output image having a second resolution that is different than the first resolution by passing the color data and the input set of features through the generator.
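The data flow described above can be sketched as follows. This is a minimal illustration only: the real generator is a trained convolutional network, while the stand-in below merely performs a nearest-neighbour upscale, and the feature-channel count is a hypothetical choice.

```python
import numpy as np

def assemble_input(color, features):
    """Stack per-pixel color (H, W, 3) with the auxiliary render features
    (H, W, F), since the generator consumes both color data and the input
    set of features used to render the image."""
    return np.concatenate([color, features], axis=-1)

def toy_generator(x, scale=2):
    """Placeholder for the trained GAN generator: a nearest-neighbour
    upscale of the color channels to a second, different resolution."""
    color = x[..., :3]
    return color.repeat(scale, axis=0).repeat(scale, axis=1)

color = np.random.rand(4, 4, 3)   # first resolution: 4x4
feats = np.random.rand(4, 4, 5)   # hypothetical feature count
out = toy_generator(assemble_input(color, feats))
print(out.shape)  # (8, 8, 3)
```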
Systems and methods generate a modified three-dimensional mesh representation of an object using a trained neural network. A computer system receives a set of input values for posing an initial mesh defining a surface of a three-dimensional object. The computer system provides the input values to a neural network trained on posed meshes generated using a rigging model to generate mesh offset values based upon the set of input values and the initial mesh. The neural network includes an input layer, an output layer, and a plurality of intermediate layers. The computer system generates, by the output layer of the neural network, a set of offset values corresponding to a set of three-dimensional target points based on the set of input values. The offset values are applied to the initial mesh to generate a posed mesh. The computer system outputs the posed mesh for generating an animation frame.
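The offset-prediction step can be sketched with a stand-in network. All sizes below (6 rig inputs, 12 vertices, 16 hidden units) are hypothetical, and random weights substitute for the trained model; only the input layer / intermediate layer / output layer shape and the offset application follow the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)
n_inputs, n_verts, hidden = 6, 12, 16   # hypothetical sizes

# Stand-in for the trained network: one intermediate layer, random weights.
W1 = rng.normal(size=(n_inputs, hidden))
W2 = rng.normal(size=(hidden, n_verts * 3))

def predict_offsets(pose_inputs):
    """Map the set of input values to one 3D offset per target point."""
    h = np.tanh(pose_inputs @ W1)          # intermediate layer
    return (h @ W2).reshape(n_verts, 3)    # output layer

initial_mesh = rng.normal(size=(n_verts, 3))
pose = rng.normal(size=n_inputs)
posed_mesh = initial_mesh + predict_offsets(pose)  # offsets applied to the initial mesh
print(posed_mesh.shape)  # (12, 3)
```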
Techniques for three-dimensional cloth modeling are provided. A first mesh comprising a plurality of faces defined by a plurality of edges is accessed, and a render mesh is generated using quadrangulated tessellation of the first mesh, where the render mesh comprises quad faces. One or more attributes of the plurality of faces of the first mesh are transferred to one or more of the quad faces of the render mesh using a stochastic transfer operation. The render mesh is displayed via a graphical user interface (GUI).
Systems and methods automatically generate contours on an illustrated object for performing an animation. Contour lines are generated on the surface of the object according to criteria related to the shape of the surface of the object. Points of the contour lines that are occluded from a virtual camera are identified. The occluded points are removed to generate visible lines. The visible lines are extruded to define a three-dimensional volume defining contours of the object. The object itself, along with the three-dimensional volume, are illuminated and rendered. The parameters defining the opacity and color of the contour may differ from corresponding parameters of the rest of the object, so that the contours stand out and define portions of the object. The contours are useful in contexts such as defining areas of an object that is fuzzy or cloudy in appearance, as well as creating certain artistic effects.
A modular architecture is provided for denoising Monte Carlo renderings using neural networks. The temporal approach extracts and combines feature representations from neighboring frames rather than building a temporal context using recurrent connections. A multiscale architecture includes separate single-frame or temporal denoising modules for individual scales, and one or more scale compositor neural networks configured to adaptively blend individual scales. An error-predicting module is configured to produce adaptive sampling maps for a renderer to achieve more uniform residual noise distribution. An asymmetric loss function may be used for training the neural networks, which can provide control over the variance-bias trade-off during denoising.
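The asymmetric loss mentioned above can be illustrated with one plausible formulation (the patented loss may differ in its exact terms): deviations of the denoised value from the reference that move *away* from the noisy input are amplified, trading residual noise against blur.

```python
import numpy as np

def asymmetric_l1(denoised, reference, noisy, slope=2.0):
    """Asymmetric L1 loss: errors on the side of the reference opposite
    the noisy input are scaled by `slope`, giving control over the
    variance-bias trade-off. One plausible formulation, not the
    patent's exact definition."""
    err = np.abs(denoised - reference)
    opposite = (denoised - reference) * (noisy - reference) < 0
    return np.where(opposite, slope * err, err).mean()

ref = np.zeros(4)
noisy = np.ones(4)
same_side = np.full(4, 0.5)   # between reference and noisy input
far_side = np.full(4, -0.5)   # overshoots past the reference
print(asymmetric_l1(same_side, ref, noisy))  # 0.5
print(asymmetric_l1(far_side, ref, noisy))   # 1.0 (penalized twice as much)
```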
Embodiments provide for cut-aware UV transfer. Embodiments include receiving a surface correspondence map that maps points of a source mesh to points of a target mesh. Embodiments include generating a set of functions encoding locations of seam curves and wrap curves from a source UV map of the source mesh. Embodiments include using the set of functions and the surface correspondence map to determine a target UV map that maps a plurality of target seam curves and a plurality of target wrap curves to the target mesh. Embodiments include transferring a two-dimensional parametrization of the source UV map to the target UV map.
G09G 5/00 - Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and to indicators using other display means
G06T 19/20 - Transformation of 3D models or images for computer graphics; editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
41 - Education, entertainment, sporting and cultural activities
Goods and services
Entertainment services, namely, the development, creation, production, and distribution of digital multimedia and audio and visual content, namely, motion picture films, television programs, and multimedia entertainment; development, creation, production, and distribution of audio and visual recordings; production of entertainment shows for distribution via audio and video streaming and electronic means
41 - Education, entertainment, sporting and cultural activities
Goods and services
Games, toys and playthings; Video game apparatus; Gymnastic and sporting articles; Decorations for Christmas trees; Action skill games; action figures; board games; card games; children's multiple activity toys; badminton sets; balloons; basketballs; bath toys; baseball bats; baseballs; beach balls; bean bags [playthings] used in target games; bean bag dolls; bobblehead dolls; bowling balls; bubble making wand and solution sets; chess sets; toy imitation cosmetics; Christmas stockings; Christmas tree ornaments; collectable toy figures; crib mobiles; crib toys; disc toss toys; dolls; doll clothing; doll accessories; doll playsets; electric action toys; equipment sold as a unit for playing card games; fishing tackle; fishing rods; footballs; golf balls; golf gloves; golf ball markers; hand-held units for playing electronic games for use with or without an external display screen or monitor; hockey pucks; hockey sticks; infant toys; inflatable toys; inflatable pool toys; jigsaw puzzles; jump ropes; kites; magic tricks; marbles; manipulative games; mechanical toys; music box toys; musical toys; parlor games; party favors in the nature of small toys; paper party favors; paper party hats; party games; playing cards; plush toys; puppets; roller skates; rubber balls; skateboards; snow boards; snow globes; soccer balls; spinning tops; squeeze toys; stuffed toys; table tennis balls; table tennis paddles and rackets; table tennis tables; talking toys; target games; teddy bears; tennis balls; tennis rackets; toy action figures and accessories therefor; toy boats; toy bucket and shovel sets in the nature of sand toys; toy building blocks; toy mobiles; toy vehicles; toy scooters; toy cars; toy figures; toy banks; toy watches; toy weapons; toy building structures and toy vehicle tracks; video game machines for use with televisions; volley balls; wind-up toys; yo-yos; toy trains and parts and accessories therefor; toy aircraft; balls for games; battery operated action toys; 
bendable toys; construction toys; game tables; inflatable inner tubes for aquatic recreational use; inflatable swimming pools; piñatas; radio controlled toy vehicles; role playing games; snow sleds for recreational use; stacking toys; surf boards; swim fins; toy furniture; toy gliders; toy masks; toy model train sets; water slides; protective films adapted for screens for portable games. Education; Providing of training; Entertainment; Sporting and cultural activities; Development, creation, production, and distribution of digital multimedia and audio and visual content, namely, motion picture films, television programs, radio programs, and multimedia entertainment and educational content; development, creation, production, distribution, and rental of audio and visual recordings; production of entertainment shows and interactive programs for distribution via audio and visual media, and electronic means; production and provision of entertainment news and entertainment information via electronic communication networks; providing online computer games, websites and applications featuring a wide variety of general interest entertainment information relating to motion picture films, television programs, musical videos, film clips, photographs, and other multimedia materials; providing online non-downloadable comic books and graphic novels; providing online and subscription games; online games; amusement park and theme park services; educational and entertainment services rendered in or relating to theme parks; live stage shows; live amusement park shows; live performances by costumed characters; production and presentation of live theatrical performances; production and presentation of live shows; theater productions; entertainer services; live appearances by a professional entertainer.
41 - Education, entertainment, sporting and cultural activities
Goods and services
Providing websites featuring a wide variety of general interest entertainment information relating to motion picture films, television programs, film clips, photographs, and other multimedia materials
Embodiments provide for transferring mesh connectivity. Embodiments include receiving a definition of a correspondence between a first curve for a source mesh and a second curve for a target shape. Embodiments include initializing an output mesh by setting a third plurality of vertices in the output mesh equal to a first plurality of vertices in the source mesh. Embodiments include transforming the output mesh by modifying the third plurality of vertices based on the first curve, the second curve, and a second plurality of vertices of the target mesh. Vertices of the third plurality of vertices that relate to the first curve are conformed to a shape defined by the second curve, and vertex modifications that result in affine transformations of faces in the output mesh are favored. Embodiments include using the output mesh to transfer an attribute from the source mesh to the target shape.
G06T 19/20 - Transformation of 3D models or images for computer graphics; editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
Techniques are disclosed that allow animators to easily share and reuse character poses such as gestures, expressions, and mouth shapes. When starting a new shot, an animator often wants a character to begin in exactly the pose it held at the end of the previous shot. According to various embodiments, an animator can easily set up these hookup poses by copying a pose directly from a clip of prerecorded media. In one aspect, the pose at the current playhead of the playback tool is copied into a software buffer of an animation tool and then pasted onto a character, so the animator copies exactly the pose being viewed. In various aspects, animators can choose a pose from an entire inventory of available animated videos. This provides a more efficient selection method, since the user can pick a pose from a large inventory of animated videos and bring in the desired pose in a few mouse clicks.
Embodiments provide for sculpt transfer. Embodiments include identifying a source polygon of a source mesh that corresponds to a target polygon of a target mesh. Embodiments include determining a first matrix defining a first rotation that aligns a target rest state of the target polygon to a source rest state of the source polygon, determining a second matrix defining a linear transformation that aligns the source rest state to a source pose of the source polygon, wherein the linear transformation comprises rotating and stretching, determining a third matrix defining a second rotation that aligns the source pose to the target rest state, and determining a fourth matrix defining a third rotation that aligns the source rest state to the source pose. Embodiments include determining a target pose of the target polygon based on the target rest state, the first matrix, the second matrix, the third matrix, and the fourth matrix.
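The abstract's matrices are rotations and linear transformations that align one polygon's state to another's; their exact composition is not specified, but the core primitive, a rotation aligning two point sets, can be sketched with a Kabsch-style solve (an illustration, not the patented composition):

```python
import numpy as np

def aligning_rotation(P, Q):
    """Kabsch-style rotation R with R @ P ~ Q for point sets stored as
    columns of (3, n) arrays, i.e. the kind of matrix that aligns a
    rest state to a pose."""
    U, _, Vt = np.linalg.svd(Q @ P.T)
    d = np.sign(np.linalg.det(U @ Vt))
    D = np.diag([1.0, 1.0, d])  # guard against reflections
    return U @ D @ Vt

# A tetrahedron and a rotated copy of it.
P = np.array([[0.0, 1.0, 0.0, 0.0],
              [0.0, 0.0, 1.0, 0.0],
              [0.0, 0.0, 0.0, 1.0]])
theta = 0.7
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0, 0.0, 1.0]])
Q = Rz @ P
R = aligning_rotation(P, Q)
print(np.allclose(R @ P, Q))  # True
```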
Supervised machine learning using neural networks is applied to denoising images rendered by MC path tracing. Specialization of neural networks may be achieved by using a modular design that allows reusing trained components in different networks and facilitates easy debugging and incremental building of complex structures. Specialization may also be achieved by using progressive neural networks. In some embodiments, training of a neural-network based denoiser may use importance sampling, where more challenging patches or patches including areas of particular interests within a training dataset are selected with higher probabilities than others. In some other embodiments, generative adversarial networks (GANs) may be used for training a machine-learning based denoiser as an alternative to using pre-defined loss functions.
Embodiments herein describe a headset that simulates accelerations corresponding to a visual presentation being viewed by the user. The headset includes a force system that applies a force on the head of the user to simulate an acceleration being viewed by the user. The force system may include an actuator that moves a weight to different locations on or around the headset; by moving the weight to different locations, the weight can apply a force that simulates acceleration. For example, the headset can move the weight to apply a force that lifts the head of the user up, similar to the force that would be felt if the user were physically accelerated forward. By moving the weight, the force system can simulate accelerations in any number of directions, e.g., front, back, left, right, etc.
A63F 13/285 - Generating tactile feedback signals via the game input device, e.g. force feedback
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
A63F 13/98 - Accessories, i.e. detachable arrangements optional for the use of the video game device, e.g. grip supports of game controllers
A63F 13/245 - Input arrangements for video game devices; constructional details thereof, e.g. game controllers with detachable joystick handles; specially adapted to a particular type of game, e.g. steering wheels
A63F 13/24 - Input arrangements for video game devices; constructional details thereof, e.g. game controllers with detachable joystick handles
Action figures; board games; card games; children's multiple activity toys; bath toys; bobblehead dolls; bubble making wand and solution sets; Christmas tree ornaments; collectable toy figures; dolls; doll accessories; doll playsets; electric action toys; equipment sold as a unit for playing card games; jigsaw puzzles; mechanical toys; musical toys; parlor games; playing cards; plush toys; stuffed toys; talking toys; teddy bears; toy action figures and accessories therefor; toy building blocks; toy vehicles; toy cars; toy figures; toy weapons; toy aircraft
09 - Scientific and electric apparatus and instruments
Goods and services
Digital media, namely, pre-recorded audio and video recordings, CDs, DVDs, high definition digital discs, and mp3 files featuring music, stories, dramatic performances, non-dramatic performances, and children's programming; audio books in the field of entertainment and education featuring fiction; audio and visual recordings featuring live-action entertainment, animated entertainment, music, and stories for children; musical recordings; downloadable electronic publications in the nature of children's stories in illustrated form; mouse pads; cell phone cases; sunglasses; decorative magnets; protective covers and cases for tablet computers
21 - Utensils, containers, and materials for household use; glassware; porcelain; earthenware
Goods and services
Beverageware; beverage glassware; bowls; coasters, not of paper or textile; cookie jars; cups; dinnerware; dishware; dishes; household containers for food and beverages; lunch boxes; mugs; non-metallic trays for domestic purposes; serving trays; plates; servingware for serving food; sports bottles sold empty; vacuum bottles; thermal insulated containers for beverages
System and method for smoothing computer animation curves
Techniques for smoothing curves used in computer animation are disclosed. In one embodiment, a smoothing application determines a number of tangents to a curve in response to a modification to a knot or the addition of a new knot, by first determining phantom tangents at knots that are neighbors of each knot that is processed. The smoothing application then (1) determines a length of each side of the tangent at each knot being processed as 1/N times the x-axis distance to a neighboring knot on the same side, (2) determines initial angles of the tangent at each knot being processed by pointing a tip of each side of the tangent at a near tip of a previously determined phantom tangent on the same side, and (3) reconciles the initial angles determined for the tangent at each knot being processed by taking a weighted sum of those initial angles.
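Steps (1) and (3) above can be sketched directly. N is left as a parameter in the abstract; N = 3 and equal reconciliation weights are assumptions here.

```python
def tangent_side_lengths(knots_x, i, N=3.0):
    """Step (1): the length of each side of the tangent at knot i is 1/N
    times the x-axis distance to the neighbouring knot on that side
    (N = 3 is an assumed value)."""
    left = (knots_x[i] - knots_x[i - 1]) / N if i > 0 else 0.0
    right = (knots_x[i + 1] - knots_x[i]) / N if i < len(knots_x) - 1 else 0.0
    return left, right

def reconcile_angles(angle_left, angle_right, w=0.5):
    """Step (3): reconcile the two initial tangent angles with a
    weighted sum (equal weights assumed)."""
    return w * angle_left + (1.0 - w) * angle_right

xs = [0.0, 2.0, 5.0, 6.0]
print(tangent_side_lengths(xs, 1))   # left = 2/3, right = 1.0
print(reconcile_angles(0.2, 0.4))
```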
G06T 11/40 - Filling a planar surface by adding surface attributes, e.g. colour or texture
G06F 17/17 - Complex mathematical operations; function evaluation by approximation methods, e.g. interpolation or extrapolation, smoothing, or least squares method
One aspect of the present disclosure is directed to enabling a user to specify one or more forces to influence how a movable object carried by a 3D character may move during an animation sequence of the 3D character. In some embodiments, the user input can include an arrow. The user can be enabled to manipulate the arrow to specify values for at least one parameter of the force to be applied to the movable object during the animation sequence. Another aspect of the disclosure is directed to enabling the user to draw a silhouette stroke to direct an animation of the movable object during the animation sequence. The silhouette stroke drawn by the user can be used as a "boundary" towards which the movable object may be "pulled" during the animation sequence. This may involve generating forces according to the position where the silhouette stroke is drawn.
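The "pull" toward the silhouette stroke can be sketched as a spring-like force toward the nearest sample of the stroke; the stiffness parameter `k` and the closest-point scheme are illustrative assumptions, not the patented force model.

```python
import numpy as np

def pull_force(p, stroke_points, k=1.0):
    """Force pulling a point p of the movable object toward the nearest
    sample of the user's silhouette stroke (k is a hypothetical
    stiffness parameter)."""
    d = np.linalg.norm(stroke_points - p, axis=1)
    closest = stroke_points[np.argmin(d)]
    return k * (closest - p)

stroke = np.array([[0.0, 2.0], [1.0, 2.0], [2.0, 2.0]])  # a horizontal stroke
p = np.array([1.0, 0.0])
print(pull_force(p, stroke))  # [0. 2.], pulled straight toward the stroke
```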
G06T 13/40 - 3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings
G06T 17/20 - Wire-frame description, e.g. polygonalisation or tessellation
G06T 19/20 - Transformation of 3D models or images for computer graphics; editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
Embodiments can generate content (e.g., a feature film or virtual reality experience) in a virtual environment (e.g., a VR environment). Specifically, embodiments allow fast prototyping and development of a virtual reality experience by allowing virtual assets to be quickly imported into a virtual environment. The virtual assets can be used to help visualize or "storyboard" an item of content during early stages of development. In doing so, the content can be rapidly iterated upon without requiring more substantial assets, which can be time-consuming and resource-intensive.
G09G 5/00 - Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and to indicators using other display means
The present disclosure relates to a method, computer program product, and system for rendering one or more objects in a set. The illumination agent designates a region of interest in the set to be rendered. The illumination agent defines an amount of photons to be directed towards the region of interest in the set. The illumination agent generates a photon map of the set. The illumination agent generates a portion of the photon map based on the region of interest and the designated amount of photons to be applied to the region of interest. The illumination agent generates a remainder of the photon map based on an area exterior to the region of interest and a second amount of photons to be applied to the area. The processor transmits the photon map for processing.
A modular architecture is provided for denoising Monte Carlo renderings using neural networks. The temporal approach extracts and combines feature representations from neighboring frames rather than building a temporal context using recurrent connections. A multiscale architecture includes separate single-frame or temporal denoising modules for individual scales, and one or more scale compositor neural networks configured to adaptively blend individual scales. An error-predicting module is configured to produce adaptive sampling maps for a renderer to achieve more uniform residual noise distribution. An asymmetric loss function may be used for training the neural networks, which can provide control over the variance-bias trade-off during denoising.
Surface relaxation techniques are disclosed for smoothing the shapes of three-dimensional (3D) virtual geometry. In one embodiment, a surface relaxation application determines, for each of a number of vertices of a 3D virtual geometry, span-aware weights for each edge incident to the vertex based on the alignment of other edges incident to the vertex with an orthonormal frame of the edge constructed using a decal map. The surface relaxation application uses such span-aware weights to compute weighted averages that provide surface relaxation offsets. Further, the surface relaxation application may restore relaxation offsets from an original to a deformed geometry by determining relaxation offsets for both geometries and transferring the relaxation offsets from the original to the deformed 3D geometry using a blending of the determined relaxation offsets and a rotation. In another embodiment, volume is preserved by computing relaxation offsets in the plane and lifting relaxed vertices back to 3D.
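The weighted-average relaxation step can be sketched as follows. The patent derives span-aware per-edge weights from an orthonormal frame of each edge; uniform weights are used below as a simplification, so this shows only the offset-from-weighted-average structure.

```python
import numpy as np

def relaxation_offsets(verts, neighbors, weights=None):
    """Offset each listed vertex toward a weighted average of its edge
    neighbours (uniform weights stand in for the patent's span-aware
    weights)."""
    offsets = np.zeros_like(verts)
    for i, nbrs in neighbors.items():
        w = weights[i] if weights else np.ones(len(nbrs))
        avg = (w[:, None] * verts[nbrs]).sum(axis=0) / w.sum()
        offsets[i] = avg - verts[i]
    return offsets

verts = np.array([[0.0, 0.0, 1.0],                      # a spike above the plane
                  [1.0, 0.0, 0.0], [-1.0, 0.0, 0.0],
                  [0.0, 1.0, 0.0], [0.0, -1.0, 0.0]])
nbrs = {0: np.array([1, 2, 3, 4])}
relaxed = verts[0] + relaxation_offsets(verts, nbrs)[0]
print(relaxed)  # [0. 0. 0.], the spike is flattened into its neighbours
```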
The present disclosure relates to using a neural network to efficiently denoise images that were generated by a ray tracer. The neural network can be trained using noisy images generated with noisy samples and corresponding denoised or high-sampled images (e.g., many random samples). An input feature to the neural network can include color from pixels of an image. Other input features to the neural network, which would not be known in normal image processing, can include shading normal, depth, albedo, and other characteristics available from a computer-generated scene. After the neural network is trained, a noisy image that the neural network has not seen before can have noise removed without needing manual intervention.
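Assembling the denoiser's input can be sketched by stacking the per-pixel channels the abstract names; the image size and the choice of a single depth channel are assumptions.

```python
import numpy as np

# Hypothetical 2x2 image: the network input stacks per-pixel color with
# auxiliary features only the renderer knows (shading normal, depth, albedo).
h, w = 2, 2
color  = np.random.rand(h, w, 3)
normal = np.random.rand(h, w, 3)
depth  = np.random.rand(h, w, 1)
albedo = np.random.rand(h, w, 3)

net_input = np.concatenate([color, normal, depth, albedo], axis=-1)
print(net_input.shape)  # (2, 2, 10)
```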
Systems, methods and articles of manufacture for rendering images depicting materials are disclosed. A stable Neo-Hookean energy model is disclosed which does not include terms that can produce singularities, or require the use of arbitrarily selected clamping parameters. The stable Neo-Hookean energy may include a length-preserving term and volume-preserving term(s), and the volume-preserving terms themselves may include term(s) from a Taylor expansion of a logarithm of a measurement of volume. The stable Neo-Hookean energy may further include an origin barrier term that increases the difficulty of reaching the origin and expands a mesh in response to a perturbation when the mesh is at the origin. Closed-form expressions of eigenvalues and eigenvectors of a Hessian of the stable Neo-Hookean energy are disclosed, which may be used in a simulation of a material to, e.g., project the Hessian to semi-positive-definiteness in Newton iterations used to determine a substantially minimal energy configuration.
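The quantities the abstract is built from can be illustrated with a classical compressible Neo-Hookean energy: a length-preserving term in the first invariant and a volume-preserving term in the determinant. This is an illustration of those ingredients only; the patented *stable* variant instead Taylor-expands the logarithm of the volume measure and adds an origin barrier term, which are not reproduced here.

```python
import numpy as np

def neo_hookean_energy(F, mu=1.0, lam=10.0):
    """Illustrative energy density from a deformation gradient F:
    a length term in I_C = tr(F^T F) and a volume term in J = det(F).
    Not the patent's exact stable formulation."""
    I_C = np.trace(F.T @ F)
    J = np.linalg.det(F)
    length_term = 0.5 * mu * (I_C - 3.0)
    volume_term = 0.5 * lam * (J - 1.0) ** 2
    return length_term + volume_term

print(neo_hookean_energy(np.eye(3)))                   # 0.0 at the rest state
print(neo_hookean_energy(np.diag([2.0, 1.0, 1.0])))    # positive under stretch
```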
User interface display layouts are provided that draw a user's attention to a specific element or elements by de-emphasizing the surrounding content, but without removing the de-emphasized content from the interface. This ability to maintain the whole presentable layout with visibility layers and without layout changes provides a useful navigation experience for the user as it is clear where the user's attention should go and yet the surrounding content is still subtly there, constantly reminding the user of the other available content. De-emphasis of certain content items is achieved by modifying display characteristics of those content items relative to a base display level, for example by lowering saturation, lowering opacity, and/or de-focusing (as if the user is looking through a camera) and modification can be done variably. Driven by a relevancy score, each content item in a display layout can be de-emphasized more or less depending on which content is more meaningful to the user's filtering actions.
Users may dynamically specify a “posing root” node in an animation hierarchy that is different than the model root node used to define the animation hierarchy. When a posing root node is specified, users specify the pose, including translations and rotations, of other nodes relative to the posing root node, rather than the model root node. Poses of nodes may be specified using animation variable values relative to the posing root node. Animation variable values specified relative to the posing root node are dynamically converted to equivalent animation variable values relative to the model root node, which then may be used to pose an associated model. Animation data may be presented to users relative to the current posing root node. If a posing root node is changed to a different location, the animation data is converted so that it is expressed relative to the new posing root node.
Provided herein are methods, systems, and computer products for evaluating nodes concurrently using a modified data flow graph. The modified data flow graph can identify independent nodes that can run as separate tasks. However, rather than relying on declared dependencies, embodiments herein can determine dependencies between segments of data elements in a data flow graph, and modify the data flow graph to take advantage of the determined dependencies. In such embodiments, the data elements can be divided into segments. By separating data elements into segments, nodes that previously depended on each other can be evaluated concurrently when independent segments are identified.
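The gain from segmenting data elements can be sketched with the standard library's topological sorter. The dependency table below is hypothetical: node B's segment 0 depends only on A's segment 0 (and likewise for segment 1), so halves of A and B can be evaluated concurrently instead of B waiting for all of A.

```python
from graphlib import TopologicalSorter

# Hypothetical per-segment dependencies discovered from the data flow graph.
deps = {
    ("A", 0): set(), ("A", 1): set(),
    ("B", 0): {("A", 0)}, ("B", 1): {("A", 1)},
}

ts = TopologicalSorter(deps)
ts.prepare()
waves = []
while ts.is_active():
    ready = list(ts.get_ready())   # all of these can run as separate tasks
    waves.append(sorted(ready))
    ts.done(*ready)
print(waves)  # [[('A', 0), ('A', 1)], [('B', 0), ('B', 1)]]
```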
Provided are methods, systems, and computer-program products for recovering from intersections during a simulation of an animated scene when a collision detection operation is active. For example, the collision detection operation can be selectively activated and deactivated during the simulation of one or more objects for a time step based on an intersection analysis, which can identify intersections of the one or more objects for the time step. Once the collision detection operation is deactivated, a collision response can apply one or more forces to intersecting portions of the one or more objects to eliminate the intersections of the one or more objects. For example, a portion of a cloth that is in a state of intersection can be configured such that the collision detection operation is not performed on the portion, thereby allowing the cloth to be removed from inside of another object by a collision response algorithm.
G06F 30/23 - Design optimisation, verification or simulation using finite element methods [FEM] or finite difference methods [FDM]
G06F 30/20 - Design optimisation, verification or simulation
G06F 111/20 - Configuration CAD, e.g. designing by assembling or positioning modules selected from libraries of predesigned modules
Systems and methods can provide computer animation of animated scenes or interactive graphics sessions. A grid camera separate from the render camera can be created for segments where the configurations (actual or predicted) of the render camera satisfy certain properties, e.g., an amount of change is within a threshold. If a segment is eligible for the use of the separate grid camera, configurations of the grid camera during a segment can be determined, e.g., from the configurations of the render camera. The configurations of the grid camera can then be used to determine grids for rendering objects. If a segment is not eligible for the use of the grid camera, then the configurations of the render camera can be used to determine the grids for rendering.
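The eligibility test can be sketched as a threshold check on how much the render camera's configuration changes over a segment; a scalar position stands in for the full camera configuration, and the threshold value is a hypothetical parameter.

```python
def grid_camera_eligible(positions, threshold=0.5):
    """A segment qualifies for a separate grid camera when the render
    camera's configuration change over the segment stays within a
    threshold (scalar drift used as a stand-in for the configuration)."""
    return (max(positions) - min(positions)) <= threshold

print(grid_camera_eligible([1.0, 1.1, 1.2]))   # True: nearly static camera
print(grid_camera_eligible([1.0, 3.0, 5.0]))   # False: fast-moving camera
```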
A user interface provides multi-stroke marking menus and other input techniques for use on multitouch devices. In one variant of multi-stroke marking, users draw strokes either with both hands simultaneously or by alternating between the hands. Alternating strokes between hands doubles the number of accessible menu items for the same number of strokes. Other inputs can be used as well, such as timing, placement, and direction.
G06F 3/033 - Pointing devices displaced or positioned by the user; Accessories therefor
G06F 3/048 - Interaction techniques based on graphical user interfaces [GUI]
G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from the processing unit to the output unit, e.g. interface arrangements
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
41 - Education, entertainment, sporting and cultural activities
Goods and services
Entertainment services, namely, the provision of non-downloadable short motion picture films via a video-on-demand service; provision of a video-on-demand website featuring non-downloadable short motion picture films; provision via a video-on-demand service of non-downloadable audio and video files and digital media featuring entertainment in the nature of short motion picture films; development, creation, production, and distribution of digital multimedia and audio and visual content, namely, motion picture films and multimedia entertainment; development, creation, production, distribution of audio and visual recordings
44.
Sculpting brushes based on solutions of elasticity
Systems, methods, and articles of manufacture for physically-based sculpting of virtual elastic materials are provided. The physically-based sculpting in one embodiment simulates elastic responses to localized distributions of force produced by sculpting with a brush-like force (e.g., grab, twist, pinch, scale) using one or more regularized solutions to equations of linear elasticity applied to a virtual infinite elastic space, referred to herein as “regularized Kelvinlets.” In other cases, compound brushes, each based on a regularized Kelvinlet, may be used for arbitrarily fast decay; a linear combination of brushes based on regularized Kelvinlets may be used to impose pointwise constraints on displacements and gradients; locally affine forms of regularized Kelvinlets may be used for certain sculpting brushes; brush displacement constraints may be imposed by superimposing regularized Kelvinlets of different radial scales; and symmetrized deformations may be generated by copying and reflecting forces produced by regularized Kelvinlets.
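As an illustration of the regularized Kelvinlet machinery described above, the closed-form displacement of a grab brush can be sketched as follows. The formula follows the published regularized Kelvinlet solution of linear elasticity in an infinite medium; the default shear modulus `mu`, Poisson ratio `nu`, and regularization radius `eps` are illustrative values, not parameters taken from this text:

```python
import numpy as np

def kelvinlet_grab(r, f, eps=1.0, mu=1.0, nu=0.45):
    """Displacement at offset r from the brush center for a grab force f,
    using the regularized Kelvinlet solution of linear elasticity.
    eps is the regularization radius, mu the shear modulus, nu the
    Poisson ratio (illustrative defaults)."""
    r = np.asarray(r, dtype=float)
    f = np.asarray(f, dtype=float)
    a = 1.0 / (4.0 * np.pi * mu)
    b = a / (4.0 * (1.0 - nu))
    r_eps = np.sqrt(r @ r + eps * eps)  # regularized distance, finite at r = 0
    term_f = ((a - b) / r_eps + a * eps**2 / (2.0 * r_eps**3)) * f
    term_r = (b / r_eps**3) * (r @ f) * r
    return term_f + term_r
```

At the brush center the displacement is finite and parallel to the applied force, and it decays smoothly with distance, which is what makes such brushes usable for localized sculpting.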
G06T 19/20 - Transforming three-dimensional [3D] models or images for computer graphics; Editing of three-dimensional [3D] images, e.g. changing shapes or colours, aligning objects or positioning parts
Supervised machine learning using neural networks is applied to denoising images rendered by MC path tracing. Specialization of neural networks may be achieved by using a modular design that allows reusing trained components in different networks and facilitates easy debugging and incremental building of complex structures. Specialization may also be achieved by using progressive neural networks. In some embodiments, training of a neural-network based denoiser may use importance sampling, where more challenging patches or patches including areas of particular interests within a training dataset are selected with higher probabilities than others. In some other embodiments, generative adversarial networks (GANs) may be used for training a machine-learning based denoiser as an alternative to using pre-defined loss functions.
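The importance-sampling idea above — selecting more challenging training patches with higher probability — can be sketched in a few lines. The error-proportional scheme and the function name below are illustrative assumptions, not the specific scheme disclosed:

```python
import numpy as np

def importance_sample_patches(patch_errors, n_samples, rng):
    """Draw training-patch indices with probability proportional to a
    per-patch difficulty score (e.g., a reconstruction-error estimate),
    so harder patches are selected more often than easy ones."""
    p = np.asarray(patch_errors, dtype=float)
    p = p / p.sum()  # normalize scores into a probability distribution
    return rng.choice(len(p), size=n_samples, p=p)
```

A patch with zero difficulty score is never drawn, while a patch with three times the score of another is drawn roughly three times as often.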
The present disclosure relates to using a neural network to efficiently denoise images that were generated by a ray tracer. The neural network can be trained using noisy images generated with noisy samples and corresponding denoised or high-sampled images (e.g., many random samples). An input feature to the neural network can include color from pixels of an image. Other input features to the neural network, which would not be known in normal image processing, can include shading normal, depth, albedo, and other characteristics available from a computer-generated scene. After the neural network is trained, a noisy image that the neural network has not seen before can have noise removed without needing manual intervention.
A multi-scale method is provided for computer graphic simulation of incompressible gases in three-dimensions with resolution variation suitable for perspective cameras and regions of importance. The dynamics is derived from the vorticity equation. Lagrangian particles are created, modified and deleted in a manner that handles advection with buoyancy and viscosity. Boundaries and deformable object collisions are modeled with the source and doublet panel method. The acceleration structure is based on the fast multipole method (FMM), but with a varying size to account for non-uniform sampling.
In various embodiments, a user can create or generate objects to be modeled, simulated, and/or rendered. The user can apply a mesh to the character's form to create the character's topology. Information, such as character rigging, shader and paint data, hairstyles, or the like can be attached to or otherwise associated with the character's topology. A standard or uniform topology can then be generated that allows information associated with the character to be transferred to other characters that have a similar topological correspondence.
This disclosure provides an approach for automatically generating UV maps for modified three-dimensional (3D) virtual geometry. In one embodiment, a UV generating application may receive original 3D geometry and associated UV panels, as well as modified 3D geometry created by deforming the original 3D geometry. The UV generating application then extracts principal stretches of a mapping between the original 3D geometry and the associated UV panels and transfers the principal stretches, or a function thereof, to a new UV mapping for the modified 3D geometry. Transferring the principal stretches or the function thereof may include iteratively performing the following steps: determining new UV points assuming a fixed affine transformation, determining principal stretches of a transformation between the modified 3D geometry and the determined UV points, and determining a correction of a transformation matrix for each triangle to make the matrix a root of a scoring function.
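The principal stretches referred to above are, for each triangle, the singular values of the Jacobian of the mapping. A minimal sketch, assuming the per-triangle Jacobian has already been assembled as a matrix:

```python
import numpy as np

def principal_stretches(jacobian):
    """Principal stretches of a mapping are the singular values of its
    Jacobian matrix; for a 2x2 per-triangle Jacobian this yields the two
    stretch factors along the principal directions."""
    return np.linalg.svd(np.asarray(jacobian, dtype=float), compute_uv=False)
```

A pure rotation has unit stretches, so transferring principal stretches is insensitive to how each triangle happens to be oriented.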
G06T 19/20 - Transforming three-dimensional [3D] models or images for computer graphics; Editing of three-dimensional [3D] images, e.g. changing shapes or colours, aligning objects or positioning parts
G06T 17/20 - Wire-frame description, e.g. polygonalisation or tessellation
Systems, devices, and methods are provided for rendering images of hair using a statistical light scattering model for hair that approximates ground-truth physical models. The model is significantly faster than other implementations of the Marschner hair model. The statistical light scattering model includes all the features of the Marschner model, such as eccentricity for elliptical cross-sections, and extends them by adding azimuthal roughness control, consideration of natural fiber torsion, and full energy preservation. Adaptive Importance Sampling (AIS) is specialized to fit easily sampled distributions to the bidirectional curve scattering density functions (BCSDFs) of the model.
The disclosure provides an approach for a hybrid binding of meshes. Multiple meshes having levels of detail appropriate for different regions of a model are topologically connected by binding them together at simulation time. In one embodiment, a simulation application creates both geometric and force bindings between vertices in meshes. The simulation application identifies embedded vertices of a first mesh to be bound to a second mesh as being “best” bound vertices, such as vertices coincident with vertices in the second mesh, and geometrically binds those vertices to appropriate vertices of the second mesh. The simulation application then binds each of the remaining embedded vertices which cannot be geometrically bound to vertices of the second mesh via a force binding, in which a zero-length spring force based technique is used to transfer forces and velocities between the force bound vertex of the first mesh and vertices of the second mesh.
G06T 19/20 - Transforming three-dimensional [3D] models or images for computer graphics; Editing of three-dimensional [3D] images, e.g. changing shapes or colours, aligning objects or positioning parts
Techniques for animating a non-rigid object in a computer graphics environment. A three-dimensional (3D) curve rigging element representing the non-rigid object is defined, the 3D curve rigging element comprising a plurality of knot primitives. One or more defined values are received for an animation control attribute of a first knot primitive. One or more values are generated for a second animation control attribute of a second knot primitive, based on the animation control attributes of a neighboring knot primitive. An animation is then rendered using the 3D curve rigging element. More specifically, the one or more defined values for the first attribute of the first knot primitive and the generated value for the second attribute of the second knot primitive are used to generate the animation. The rendered animation is output for display.
G06T 17/00 - Three-dimensional [3D] modelling for computer graphics
G06T 19/20 - Transforming three-dimensional [3D] models or images for computer graphics; Editing of three-dimensional [3D] images, e.g. changing shapes or colours, aligning objects or positioning parts
The disclosure provides an approach for simulating scattering in a participating medium. In one embodiment, a rendering application receives an image and depth values for pixels in the image, and generates multiple copies of the image associated with respective numbers of scattering events. The rendering application further applies per-pixel weights to pixels of the copies of the image, with the per-pixel weight applied to each pixel representing a probability of a light ray associated with the pixel experiencing the number of scattering events associated with the copy of the image in which the pixel is located. In addition, the rendering application applies a respective blur to each of the weighted copies of the image based on the number of scattering events associated with the weighted copy, sums the blurred weighted image copies, and normalizes the sum to account for conservation of energy.
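The weight-blur-sum-normalize pipeline above can be sketched as follows. The Poisson event-count model and the box blur whose radius grows with the number of scattering events are illustrative stand-ins for the per-pixel weights and blurs described in the disclosure:

```python
import numpy as np
from math import factorial

def box_blur(img, radius):
    """Cheap stand-in for the per-copy blur; width grows with the
    number of scattering events."""
    if radius == 0:
        return img.astype(float).copy()
    k = 2 * radius + 1
    h, w = img.shape
    padded = np.pad(img.astype(float), radius, mode="edge")
    out = np.zeros((h, w))
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + h, dx:dx + w]
    return out / (k * k)

def scatter_composite(image, depth, mean_free_path=1.0, n_max=3):
    """Weight each image copy by the per-pixel probability of n scattering
    events (a Poisson model, an illustrative choice), blur it according to
    n, sum the copies, and normalize for energy conservation."""
    lam = depth / mean_free_path  # expected number of scattering events
    out = np.zeros_like(image, dtype=float)
    norm = np.zeros_like(image, dtype=float)
    for n in range(n_max + 1):
        w = lam**n * np.exp(-lam) / factorial(n)  # P(n events) per pixel
        out += box_blur(image * w, radius=n)
        norm += box_blur(w, radius=n)
    return out / np.maximum(norm, 1e-12)
```

Because the numerator and denominator receive the same blurs, a uniform input image comes out unchanged, which is the energy-conservation property the normalization step is meant to enforce.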
A method and system for rendering a three-dimensional (3D) scene by excluding non-contributing objects are disclosed. A preliminary object analysis using relatively few rays can be performed to determine which off-camera objects are to be excluded or included in the rendering process. The preliminary object analysis may involve performing an initial ray path tracing to identify intersections between a plurality of rays and one or more objects in the 3D scene. The object analysis can include identifying whether a first object in the 3D scene can be identified as an off-camera object. When the first object is identified as an off-camera object, a number of intersections between the plurality of rays and the first object can be counted. If the number of intersections is less than a corresponding threshold, the first object can be identified as being excluded from a future rendering process to render the first frame.
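The exclusion decision at the end of the preliminary analysis reduces to a count-against-threshold check per off-camera object. A minimal sketch, with hypothetical object names:

```python
def objects_to_exclude(hit_counts, off_camera_objects, threshold):
    """Return the off-camera objects whose probe-ray intersection count
    falls below the threshold; these contribute negligibly and can be
    dropped from the full render of the frame."""
    return {obj for obj in off_camera_objects
            if hit_counts.get(obj, 0) < threshold}
```

An off-camera object that the sparse probe rays never hit at all is excluded outright, since it cannot contribute indirect light to the frame.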
Systems, methods and articles of manufacture for rendering an image. Embodiments include selecting a plurality of positions within the image and constructing a respective linear prediction model for each selected position. A respective prediction window is determined for each constructed linear prediction model. Additionally, embodiments render the image using the constructed linear prediction models, where at least one of the constructed linear prediction models is used to predict values for two or more of a plurality of pixels of the image, and where a value for at least one of the plurality of pixels is determined based on two or more of the constructed linear prediction models.
Techniques are proposed for embedding transition points in media content. A transition point system retrieves a time marker associated with a point of interest in the media content. The transition point system identifies a first position within the media content corresponding to the point of interest. The transition point system embeds data associated with the time marker into the media content at a second position that is no later in time than the first position. The transition point system causes a client media player to transition from a first image quality level to a second image quality level based on the time marker.
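Choosing the second position — the latest embedding point that is no later than the point of interest — is essentially a predecessor search over candidate positions. A minimal sketch, assuming the candidate positions (e.g., segment starts) are sorted ascending:

```python
import bisect

def embed_position(candidate_positions, poi_time):
    """Latest candidate position no later in time than the point of
    interest, so the embedded marker is decoded before playback reaches
    the POI. candidate_positions must be sorted ascending."""
    i = bisect.bisect_right(candidate_positions, poi_time) - 1
    return candidate_positions[max(i, 0)]
```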
H04L 29/06 - Communication control; Communication processing characterised by a protocol
H04N 21/2343 - Processing of video elementary streams, e.g. splicing of video streams or manipulating MPEG-4 scene graphs, involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
H04N 21/24 - Monitoring of processes or resources, e.g. monitoring of server load, available bandwidth or upstream requests
H04N 21/845 - Structuring of content, e.g. decomposing content into time segments
59.
Tetrahedral volumes from segmented bounding boxes of a subdivision
In various embodiments, systems and methods are disclosed for rapidly generating tetrahedral volumes using centerlines in character animation. The volumes are generated to closely approximate bounding volumes that provide rapid collision detection while at the same time conforming to the original mesh surface. Therefore, more accurate and higher quality collisions are achieved using the original surface in real-time and without using a proxy/simulation.
Techniques are disclosed for solving geometry processing tasks on a subdivision surface of an input geometry using a subdivision exterior calculus (SEC) framework. A control polygonal mesh is received for generating a subdivision surface model. The polygonal mesh is associated with subdivision levels. To generate the subdivision surface model, one or more subdivision matrices of the polygonal mesh are determined at each subdivision level. One or more SEC matrices are computed from the subdivision matrices. The differential equation required by the geometry processing application is then solved numerically on the input control mesh using the SEC matrices.
Techniques relate to fitting a shape of an object when placed in a desired pose. For example, a plurality of training poses can be received, wherein each training pose is associated with a training shape. The training poses can be clustered in pose space, and a bid point can be determined for each cluster. A cluster-fitted shape can then be determined for a pose at the bid point using the training shapes in the cluster. A weight for each cluster-fitted shape can then be determined. The cluster-fitted shapes can then be combined using the determined weights to determine a shape of the object in the desired pose.
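The final step above can be sketched as a weighted combination of cluster-fitted shapes. The inverse-distance weighting below is one plausible choice of weights, not necessarily the scheme used by the technique, and the function names are hypothetical:

```python
import numpy as np

def blend_cluster_shapes(pose, bid_points, cluster_shapes, eps=1e-8):
    """Weight each cluster-fitted shape by inverse squared distance from
    the desired pose to that cluster's bid point in pose space, then
    combine the shapes with the normalized weights."""
    d = np.linalg.norm(np.asarray(bid_points, float) - np.asarray(pose, float),
                       axis=1)
    w = 1.0 / (d**2 + eps)
    w = w / w.sum()  # weights sum to one
    return np.tensordot(w, np.asarray(cluster_shapes, float), axes=1)
```

A pose sitting exactly at one bid point reproduces that cluster's fitted shape, and a pose midway between two bid points blends them evenly.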
Simulating cloth garments can be a significant challenge that requires both directability and stability. In various embodiments, cloth garments can be animated using a technique called "Clothwarp." Clothwarp assists garment animation through methods of cloth articulation and simulation targeting. In one aspect, Clothwarp grants another level of directable control over simulation, allowing the artist to modify the influence of the warp, both as a target input into the simulation and as a cleanup tool on simulation results.
A computer-implemented method for generating a shadow in a graphics scene. The method includes casting a ray having a finite length associated with a point on a surface of an object in the graphics scene towards a light source; determining whether the ray intersects any other objects in the graphics scene; and generating a shadow value associated with the point on the surface of the object based on a combination of geometric scene information obtained as a result of determining whether the ray intersects any other objects in the graphics scene and an image-based shadow map value.
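The final combination step can be as simple as blending the binary ray-occlusion result with the filtered shadow-map value. A toy sketch, where the blend weight is an assumed parameter rather than anything specified by this text:

```python
def combined_shadow(ray_occluded, shadow_map_visibility, ray_weight=0.5):
    """Blend a binary ray-occlusion result (geometric scene information)
    with an image-based shadow-map visibility value; 0 is fully
    shadowed, 1 is fully lit."""
    ray_visibility = 0.0 if ray_occluded else 1.0
    return (ray_weight * ray_visibility
            + (1.0 - ray_weight) * shadow_map_visibility)
```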
Tetrahedra can be used as primitives to represent volumetric shells because of their ease of performing geometric tests such as intersection with other geometric primitives. Each triangular face of a triangulated surface mesh can be extruded or otherwise formed into a prism, and that prism can be filled with tets (tetrahedra). An edge of a tet can be deemed to be rising if, going counterclockwise around the face, the corresponding tet-edge that splits the extruded face proceeds from the inset surface to the offset surface. To determine a valid tet orientation, each directed edge of the surface mesh is labeled as Rising or Falling (R, F). In various embodiments, one or more simple rules are used for determining whether an edge is rising or falling. In one aspect, a partial ordering of the connectivity of a surface is used in the tet generation process.
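One simple rule of the kind described — comparing global vertex indices along each directed edge — can be sketched as follows. Because the indices around a 3-cycle cannot be strictly increasing all the way around, every extruded prism receives both rising and falling edges, which is what a valid tetrahedral split of a prism requires. The rule shown is an illustrative instance, not necessarily the one used in any particular embodiment:

```python
def label_edges(tri):
    """Label each directed edge of a counterclockwise triangle (a, b, c)
    as Rising ('R') or Falling ('F'): the prism diagonal on edge (u, v)
    rises when u's global index is smaller than v's. Neighboring
    triangles agree on a shared edge because the rule depends only on
    the two global indices."""
    a, b, c = tri
    return {(u, v): "R" if u < v else "F"
            for (u, v) in ((a, b), (b, c), (c, a))}
```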
Test patterns and associated techniques for testing the fidelity of color processing are disclosed. One set of embodiments provides a test pattern that exhibits a large number of spatial interactions (e.g., edges) between colors corresponding to triples of constant-value RGB primaries that incorporate a specific primary. Another set of embodiments provides an animated sequence of test patterns that exhibit temporal interactions between the colors identified above. Yet another set of embodiments provides a test pattern comprising a plurality of zones, where distinct subsets of the zones are configured to exhibit independent visual changes in response to adjustments of specific color processing controls. Using these test patterns, users may more easily test the fidelity of color processing (such as color dematrixing), and may more easily calibrate color processing controls accordingly.
To prevent correlated data from being inadvertently altered by subsequent modifications or additions, changes to correlated data are automatically detected. Corrections may be automatically applied to data to preserve data correlation. Change detection data is determined from an initial correlation between source data and dependent data. The change detection data is stored in association with the dependent data. A subsequent evaluation of the data defines a current correlation between the source data and the dependent data. The current correlation is evaluated with the change detection data to determine if the current correlation differs from the initial correlation. If the current correlation between source data and dependent data does not match the initial correlation, the current correlation is reevaluated using topological, geometric, or other analysis techniques. The reevaluated correlation can be provided as part of the authored state of a computer graphics component.
Unordered list operations are used to create and modify ordered lists of components. Each list operation specifies an intention to change some aspect of an ordered list, such as the addition or removal of components or a change in the sequence of components. List operations are associated with intrinsic and extrinsic time-independent attributes. Multiple users can collaborate on an ordered list by specifying their own list operations. List operations are cumulative and do not destructively overwrite list operations from previous pipeline activities. An embodiment of the invention interprets list operations in a time-independent manner using intrinsic and extrinsic list operation attributes. Because list operations are processed in a time-independent manner, multiple users may collaborate in any order on the creation of an ordered list, including simultaneously editing the ordered list, and still obtain consistent results.
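The time-independent interpretation can be illustrated with a toy reducer that orders operations by an intrinsic attribute (an operation id here) before applying them, so any arrival order yields the same list. This is an illustrative simplification, not the disclosed mechanism:

```python
def apply_list_ops(ops):
    """Apply unordered list operations in a canonical order derived from
    an intrinsic attribute (the op id), so the resulting ordered list
    does not depend on the order in which operations arrived."""
    result = []
    for op in sorted(ops, key=lambda o: o["id"]):
        if op["kind"] == "add":
            result.insert(min(op["index"], len(result)), op["item"])
        elif op["kind"] == "remove" and op["item"] in result:
            result.remove(op["item"])
    return result
```

Two collaborators submitting the same set of operations in different orders obtain identical lists, which is the consistency property described above.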
G06T 19/20 - Transforming three-dimensional [3D] models or images for computer graphics; Editing of three-dimensional [3D] images, e.g. changing shapes or colours, aligning objects or positioning parts
G06F 17/30 - Information retrieval; Database structures therefor
This disclosure provides an approach for aggregating elements that are common across shots in the rendering of image frames. In one embodiment, common elements are aggregated via a scene editor which stores the common elements in a scene layer, an asset that serves as a container for elements such as characters, locations, and the like that are common across shots. The scene layer permits new shots to be created that inherit the common elements, rather than being built from scratch or by manually copying elements from other shots. The scene editor may further receive elements specific to particular shots and store such elements in shot layers that are created on top of the scene layer and store differences from the scene layer. A rendering application then renders image frames on a shot-by-shot basis using the common elements stored in the scene layer and the shot-specific elements stored in the shot layers.
The disclosure provides an approach for animating gases. A dynamic model is employed that accounts for stretching of gas vorticles in a stable manner, handles isolated particles and buoyancy, permits deformable boundaries of objects the gas flows past, and accounts for vortex shedding. The model captures stretching of vorticity by applying a vector at the center of a stretched vorticle. High-frequency eddies resulting from stretching may be filtered by unstretching the vorticle while preserving mean energy and enstrophy. To model boundary pressure, a boundary may be imposed by embedding the surface boundary into the gas and setting boundary conditions based on the velocity of the boundary and the Green's function of the Laplacian. For computational efficiency, a vorticle cutoff proportional to a vorticle's size may be imposed. Vorticles determined to be similar based on predefined criteria and a distance threshold may be fused.
A summary spline curve can be constructed from multiple animation spline curves. Control points for each of the animation spline curves can be included to form a combined set of control points for the summary spline curve. Each of the animation spline curves can then be divided into spline curve segments between each neighboring pair of control points in the combined set of control points. For each neighboring pair, the spline curve segments can be normalized and averaged to determine a summary spline curve segment. These summary spline curve segments are combined to determine a summary spline curve. The summary spline curve can then be displayed and/or modified. Modifications to the summary spline curve can result in modifications to the animation spline curves.
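The combine-evaluate-average steps above can be sketched as follows, with piecewise-linear evaluation standing in for true spline evaluation:

```python
import numpy as np

def summary_spline(curves):
    """curves: list of (times, values) pairs, one per animation curve.
    The combined control-time set is the union of every curve's control
    times; each curve is evaluated at those times (piecewise-linearly
    here, for simplicity) and the per-time average gives the summary
    curve."""
    all_t = np.unique(np.concatenate([np.asarray(t, float)
                                      for t, _ in curves]))
    avg = np.mean([np.interp(all_t, np.asarray(t, float),
                             np.asarray(v, float))
                   for t, v in curves], axis=0)
    return all_t, avg
```

Two curves tracing the same motion with different control-point placement yield a summary curve identical to both, since averaging coincident segments is the identity.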
Optimally-sized bounding boxes for scene data including procedurally generated geometry are determined by first determining whether an estimated bounding box including the procedural geometry is potentially visible in an image to be rendered. If so, the procedural geometry is generated and an optimal bounding box closely bounding the procedural geometry is determined. The generated procedural geometry may then be discarded or stored for later use. Rendering determines if the optimal bounding box is potentially visible. If so, then the associated procedural geometry is regenerated and rendered. Alternatively, after the estimated bounding box is potentially visible, the generated procedural geometry may be partitioned into subsets using a spatial partitioning scheme. A separate optimal bounding box is determined for each of the subsets. During rendering, if any of the optimal bounding boxes are potentially visible, the associated procedural geometry is regenerated and rendered.
G09G 5/00 - Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
Systems and methods for customizing animation variables and modifications to animation variables in an animation system are provided. An animated model may comprise a hierarchical structure of rigs and sub-rigs. An animator may customize the location of animation variables within the hierarchical structure through a relocation operation from an original position to a relocated position. The animation system identifies the relocation operation, generating an association between the original position and the relocated position. The animation system may receive modifications made to animation variables, and the animator can customize the scope of each modification and its application to the animated model or animated scene.
Systems, methods, and computer program products for compressing a deep image comprising a plurality of voxels by, for each of the plurality of voxels, converting a voxel value to a corresponding value in a Lie algebra based on a logarithmic mapping function, interpolating a first subset of the plurality of values in the Lie algebra using a linear interpolation function applied to a first endpoint and a second endpoint of a first voxel column of the deep image, and upon determining that a deviation of the interpolation of each value in the first subset of the plurality of values does not exceed a threshold, storing an indication of the first endpoint, the second endpoint, and the respective values in the Lie algebra corresponding to the first and second endpoints.
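The endpoint test can be sketched as follows, with a plain logarithm standing in for the Lie-algebra mapping:

```python
import numpy as np

def compress_column(values, threshold):
    """Map a voxel column's values into a log domain (a stand-in for the
    Lie-algebra mapping), linearly interpolate between the two
    endpoints, and store only the endpoints when no interior sample
    deviates from the interpolation by more than the threshold."""
    log_v = np.log(np.asarray(values, dtype=float))
    t = np.linspace(0.0, 1.0, len(log_v))
    interp = log_v[0] + t * (log_v[-1] - log_v[0])
    if np.max(np.abs(log_v - interp)) <= threshold:
        return ("endpoints", log_v[0], log_v[-1])  # compressed column
    return ("raw", log_v)                          # keep every sample
```

A geometric sequence of voxel values is exactly linear in the log domain, so such a column compresses down to its two endpoints.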
Techniques are disclosed for stably simulating stylized curly hair that address artistic needs and performance demands, both found in the production of feature films. To satisfy the artistic requirement of maintaining a curl's helical shape during motion, a hair model is developed based upon an extensible elastic rod. A method is provided for stably computing a frame along a hair curve for stable simulation of curly hair. The hair model introduces a new type of spring for controlling the bending and twisting of a curl and another for maintaining the helical shape during extension. The disclosed techniques address performance concerns often associated with handling hair-hair contact interactions by efficiently parallelizing the simulation. A novel algorithm is presented for pruning both hair-hair contact pairs and hair particles. The method can be used on a full length feature film and has proven to be robust and stable over a wide range of animated motion and on a variety of hair styles, from straight to wavy to curly.
One embodiment of the present application includes an approach by which an animation system manipulates an animatable object. The animation system detects that a pointer device has positioned a pointer location at a first location, the first location coinciding with a first portion of geometry of the animatable object. The animation system indicates that a first manipulator associated with the first portion of geometry is tentatively selected. Prior to receiving a selection event from the pointer device, the animation system displays a representation of the first manipulator.
G06T 19/20 - Transforming three-dimensional [3D] models or images for computer graphics; Editing of three-dimensional [3D] images, e.g. changing shapes or colours, aligning objects or positioning parts
G06F 3/0484 - Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
In various embodiments, one or more control structures having sufficient detail or resolution to generate complex deformations of a computer-generated model can be bound to the model. These control structures can be bound to the model in a fixed reference pose and used as intermediate control structures for controlling a variety of deformations. In one aspect, to facilitate articulation of all or a portion of the model, a set of one or more intermediate control structures may be reparameterized using one or more additional control structures. An additional control structure can be bound to an intermediate control structure dynamically at pose time. An additional control structure bound to an intermediate control structure may include only the detail or resolution required for specific subsets of the deformations that may be produced by the intermediate control structure.
G09G 5/00 - Dispositions ou circuits de commande de l'affichage communs à l'affichage utilisant des tubes à rayons cathodiques et à l'affichage utilisant d'autres moyens de visualisation
G06T 13/40 - Animation tridimensionnelle [3D] de personnages, p.ex. d’êtres humains, d’animaux ou d’êtres virtuels
G06T 17/20 - Wire-frame description, e.g. polygonalisation or tessellation
A system and method of animating an object using chained kinematic logic is provided. An animated object may be composed of several components, each having a corresponding solver. An animator may designate a final desired position of a primary component. The method further includes automatically determining a hierarchical chained relationship between the primary component and one or more secondary associated components. Using chained kinematic logic defined by constraints, the statuses of the solvers for the secondary components may change based on the status of the primary component's solver and the final desired position. Thus, a pose of the entire object, including the states of all its associated secondary components, may change based on an updated status of the solver of the primary component designated by the animator.
G06F 3/0481 - Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
G06T 13/40 - 3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings
Creation of cloth surfaces over subdivision meshes from curves
In various embodiments, a cloth weave structure is built from curves over the surface of a subdivision mesh at render time. A coherent woven or knitted surface is generated from interwoven curve geometry and a subdivision (or polygon) mesh. In one aspect, this is done at render time. Accordingly, in one embodiment, a geometry generation process takes an ST map as input to control the direction of flow of curves (yarns) over the surface. Since each face is calculated independently, general global coordinates in ST space are predefined (at the beginning of the render) to ensure that each face transitions smoothly to the next.
State handles mark application data states within a sequence of operations for preservation. Applications can maintain non-linear sets of operations that include multiple sequences of operations between state handles. Applications can determine a sequence of operations between any two state handles, allowing applications to change from the data state associated with one state handle to the data state associated with another state handle. The sequence of operations between any two state handles may include executing operations and/or reversing operations. An application automatically adds new branches in the set of operations to preserve the sequences of operations necessary to reconstruct data states of previously set handles and removes branches that are not needed. Applications may use state handles to implement non-linear undo and redo functions, to validate journal entries, to combine incremental operations into a cumulative operation, and to speculatively execute operations for error detection, user guidance, or performance optimization.
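The branching set of operations described above can be sketched as a small tree of operation nodes, where a state handle is simply a reference into the tree and moving between two handles means reversing operations up to a common ancestor and executing operations down from it. The names `StateLog`, `mark`, and `path_between` are illustrative, not taken from the abstract.

```python
class Node:
    """One node per applied operation; the tree records every branch."""
    def __init__(self, op=None, parent=None):
        self.op = op          # operation that produced this state
        self.parent = parent

class StateLog:
    """Branching operation log: state handles mark nodes in the tree."""
    def __init__(self):
        self.root = Node()
        self.head = self.root

    def apply(self, op):
        # Applying an operation after moving to an older state starts a branch.
        self.head = Node(op, self.head)

    def mark(self):
        # A state handle is just a reference to the current node.
        return self.head

    def path_between(self, a, b):
        """Return (ops_to_reverse, ops_to_execute) going from state a to b."""
        ancestors_a = []
        n = a
        while n is not None:
            ancestors_a.append(n)
            n = n.parent
        chain_b = []
        n = b
        while n not in ancestors_a:   # climb from b to an ancestor of a
            chain_b.append(n.op)
            n = n.parent
        common = n
        undo = [m.op for m in ancestors_a[:ancestors_a.index(common)]]
        return undo, list(reversed(chain_b))
```

Undo and redo fall out as special cases: undoing is `path_between(head, head.parent)`, and a handle on an abandoned branch stays reachable because its chain of nodes is preserved.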
Systems and methods for deformation of surface objects are disclosed. A method may include receiving an initial pose of a model comprising an underlying object and a plurality of surface objects, and a deformation of the model to a second pose. A measurement of the surface objects in the second pose can be used to determine inversely distorted surface objects, such that the lengths of the edges in the inversely distorted surface object are adjusted to counteract the distortion. Thus, when the inversely distorted surface objects are deformed to the second pose, they may appear less distorted than when the original surface objects are deformed to the second pose. Furthermore, a user may direct the level of inverse distortion, so that the surface objects, when inversely distorted and deformed to the second pose, may appear entirely rigid, entirely flexible, or some combination thereof.
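A minimal sketch of the inverse-distortion idea, operating on edge lengths only: each rest length is scaled against the measured deformed length so that, once deformed, the edge appears less stretched. The function name `inverse_distort_lengths` and the `amount` parameter (0.0 leaves the surface fully flexible, 1.0 makes it appear entirely rigid) are hypothetical stand-ins for the user-directed level of inverse distortion.

```python
def inverse_distort_lengths(rest_lengths, deformed_lengths, amount):
    """Scale rest edge lengths to counteract measured deformation.

    amount = 1.0 fully cancels the stretch (edge appears rigid after
    deformation); amount = 0.0 keeps the original lengths (fully flexible).
    """
    out = []
    for l0, l1 in zip(rest_lengths, deformed_lengths):
        ratio = l0 / l1 if l1 else 1.0          # inverse of the stretch factor
        out.append(l0 * ((1.0 - amount) + amount * ratio))
    return out
```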
In an animation authoring system wherein knots along curves are provided in only selected frames, a method of breaking down knots in adjacent poses is automated without causing discontinuities in curves between poses. A first pose is set as a guarded frame for an object so that at least some of the values for animation variables (avars) in the guarded frame are protected, and an animation variable (avar) having no knot at the guarded frame is merely implicit. A new knot is then introduced for that avar at a non-guarded frame, and an implicit knot is introduced by setting its avar for the guarded frame to its previous implicit value. The new position can be effected by either adding a knot or removing a knot at a non-guarded frame. The invention provides a predictable workflow in which a guarded pose is not changed retroactively when adjacent animation variables on a curve are changed.
Systems and methods can be used to store data in a temporal voxel buffer. A first voxel array is stored in association with a first voxel in a voxel grid. The first voxel array includes a plurality of time values. A parameter value is stored in association with each time value of the first voxel array. A second voxel array is stored in association with a second voxel in the voxel grid. The second voxel array stores at least one time value. At least one parameter value is stored in association with the at least one time value of the second voxel array. At least one of the time values stored in the first voxel array is different from each of the at least one time value included in the second voxel array.
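The buffer layout described above, in which each voxel carries its own array of (time, parameter) samples and array lengths may differ between voxels, can be sketched as follows. `TemporalVoxelBuffer` and its nearest-time `sample` lookup are illustrative assumptions, not the disclosed data structure.

```python
class TemporalVoxelBuffer:
    """Per-voxel arrays of (time, parameter) samples; lengths may differ."""
    def __init__(self):
        self.arrays = {}   # voxel index (i, j, k) -> list of (time, value)

    def store(self, voxel, time, value):
        self.arrays.setdefault(voxel, []).append((time, value))

    def sample(self, voxel, t):
        """Nearest-time lookup; a production buffer would interpolate."""
        samples = self.arrays.get(voxel)
        if not samples:
            return 0.0
        return min(samples, key=lambda s: abs(s[0] - t))[1]
```

A fast-moving voxel might store many time samples while a static neighbour stores one, which is the sparsity the abstract's "at least one time value" wording allows.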
A method of animation of surface deformation and wrinkling, such as on clothing, uses low-dimensional linear subspaces with temporally adapted bases to reduce computation. Full space simulation training data is used to construct a pool of low-dimensional bases across a pose space. For simulation, sets of basis vectors are selected based on the current pose of the character and the state of its clothing, using an adaptive scheme. Modifying the surface configuration comprises solving reduced system matrices with respect to the subspace of the adapted basis.
Systems and methods can be used to generate data to be stored in a temporal voxel buffer. A renderer can receive at least one input primitive and a voxel grid. A sampling lattice can be generated based on the at least one input primitive and the sampling lattice can be shaded. Each voxel of the voxel grid can be sampled at a plurality of sample times and a plurality of sample positions within the voxel. A voxel buffer is generated for the voxel grid. The voxel buffer stores a voxel array in association with each voxel of the voxel grid based on the sampling.
Systems and methods can be used to render an animated scene using a temporal voxel buffer. A voxel buffer including a plurality of voxel arrays is received. A voxel array includes at least one time value associated with a voxel and at least one parameter value associated with each time value. For each pixel of an image to be rendered, a plurality of rays are cast through the voxel grid. A time value is associated with each ray. A parameter value is sampled at each voxel along a ray at the time associated with the ray. A pixel value is determined based on the sampled parameter values for the plurality of rays.
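A minimal sketch of per-ray time sampling, assuming a flattened table of (voxel, time) parameter values and precomputed voxel paths per ray; `render_pixel` is a hypothetical name and the plain average is a stand-in for a real compositing step.

```python
def render_pixel(voxel_values, rays):
    """Average the parameter values sampled along a pixel's rays.

    Each ray carries its own time value, used at every voxel it crosses,
    which is what produces motion blur from the temporal buffer.
    voxel_values: {(voxel, time): value}; rays: list of (voxel_path, time).
    """
    total, count = 0.0, 0
    for path, t in rays:
        for v in path:
            total += voxel_values.get((v, t), 0.0)
            count += 1
    return total / count if count else 0.0
```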
A method for a computer system includes receiving a mapping schema between a plurality of asset-types within an asset-type hierarchy and a plurality of paths within an on-disk storage structure, receiving an asset-type definition list from a user, wherein the asset-type definition list comprises an asset-type from the plurality of asset-types, and determining at least one path from the plurality of paths for providing access to assets of the asset-type in response to the mapping schema and the asset-type definition list.
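One way to read the mapping step is as a walk up the asset-type hierarchy until the mapping schema yields a path; `resolve_path` and the dictionary shapes below are assumptions for illustration, not the disclosed data model.

```python
def resolve_path(schema, hierarchy, asset_type):
    """Walk up the asset-type hierarchy until the schema provides a path.

    schema:    {asset-type name: on-disk path}
    hierarchy: {child type: parent type}
    """
    t = asset_type
    while t is not None:
        if t in schema:
            return schema[t]
        t = hierarchy.get(t)       # fall back to the parent asset-type
    raise KeyError(asset_type)
```

With this reading, a specialised type inherits its storage location from an ancestor unless the schema maps it explicitly.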
Computer-generated images are generated by evaluating point positions of points on animated objects in animation data. The point positions of the points are used by an animation system to determine how to blend animated sequences or frames of animated sequences in order to create realistic moving animated characters and animated objects. The methods of blending are based on determining distances or deviations between corresponding points and using blending functions with varying blending windows and blending functions that can vary from point to point on the animated objects.
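Blending with per-point windows can be sketched as below; `blend_points` and the linear ramp inside each window are illustrative choices, since the abstract leaves the blending functions open.

```python
def blend_points(points_a, points_b, windows, t):
    """Per-point blend between corresponding points of two animated sequences.

    Each point has its own blending window (start, end): before the window
    the point follows sequence A, after it sequence B, and inside it the two
    are linearly interpolated, so the blend can vary from point to point.
    """
    out = []
    for pa, pb, (w0, w1) in zip(points_a, points_b, windows):
        if t <= w0:
            w = 0.0
        elif t >= w1:
            w = 1.0
        else:
            w = (t - w0) / (w1 - w0)
        out.append(tuple(a + w * (b - a) for a, b in zip(pa, pb)))
    return out
```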
Particular embodiments comprise providing a surface mesh for an object, generating a voxel grid comprising volumetric masks for the mesh, and generating a lit mesh, wherein the lit mesh comprises a shaded version of the mesh as positioned in a scene. The voxel grid may be positioned over the lit mesh in the scene, and a first ray may be traced to a position of the voxel grid. If the traced ray passed through the voxel grid and hit a location on the lit mesh, then one or more second rays may be traced to the hit location on the lit mesh. If the traced ray hit a location in the voxel grid but did not hit a location on the lit mesh, then one or more second rays may be traced from the hit location in the voxel grid to the closest locations on the lit mesh. Finally, color sampled at one or more locations proximate to the position of the voxel grid may be blurred outward through the voxel grid to create a volumetric projection.
In various embodiments, a ray-marched tangent-space shader is provided that uses adaptive, curved ray marching of an implicit weave/thread procedural texture to create the appearance of individual cloth yarns, complete with sub-fibers which separate rather than stretch over the surface. The volumetric surface shader shades cloth by performing adaptive curved ray marching of an implicit tangent-space distance field.
G09G 5/00 - Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
Embodiments of the invention are directed to rendering scenes comprising one or more volumes viewed along a ray from a virtual camera. In order to render the one or more volumes, embodiments may first determine individual transmissivity functions for the one or more volumes. The individual transmissivity functions may be combined into a combined transmissivity function for the scene. The combined transmissivity function may be used to generate a cumulative density function (CDF) for the scene. The CDF may be sampled in order to determine a plurality of points along the ray. A contribution to a pixel may be determined for each sampled point. The contributions associated with the sampled points may be combined to determine a combined contribution to the pixel.
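A minimal sketch of the transmissivity-to-CDF pipeline: the individual transmissivity functions are multiplied into a combined function, tabulated as 1 - T(t) along the ray, normalised into a CDF, and inverse-sampled to pick depths. The function names, the step count, and the uniform-grid tabulation are assumptions for illustration.

```python
import bisect
import random

def combined_transmissivity(transms, t):
    """Product of the individual volumes' transmissivity functions at depth t."""
    p = 1.0
    for f in transms:
        p *= f(t)
    return p

def sample_depths(transms, t_max, n_points, n_steps=256, rng=None):
    """Tabulate 1 - T(t) as a CDF along the ray and inverse-sample depths."""
    rng = rng or random.random
    ts = [t_max * i / n_steps for i in range(n_steps + 1)]
    cdf = [1.0 - combined_transmissivity(transms, t) for t in ts]
    top = cdf[-1] or 1.0
    cdf = [c / top for c in cdf]               # normalise to [0, 1]
    return [ts[min(bisect.bisect_left(cdf, rng()), n_steps)]
            for _ in range(n_points)]
```

Because 1 - T(t) grows fastest where the volumes absorb the most light, the inverse-CDF lookup concentrates sample points in exactly those regions.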
Techniques are disclosed for accounting for features of computer-generated dynamic or simulation models being at different scales. Some examples of dynamic or simulation models may include models representing hair, fur, strings, vines, tails, or the like. In various embodiments, features at different scales in a complex dynamic or simulation model can be treated differently when rendered and/or simulated.
Techniques are disclosed for generating quality renderings of volumes by sampling a volume light by generating and analyzing a sparse voxel octree. In one embodiment, a volumetric light source may be divided into voxels and importance information stored in an octree. An importance value may be determined for each voxel based on the amount of emitted light in the region associated with that voxel. Importance values regarding the individual voxels may be stored in the leaves of the octree. Each interior node may be associated with an importance value equal to the sum of the importance values of its children. The root node may be associated with the total importance of the entire octree.
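The importance octree can be sketched with leaves holding per-voxel importance and interior nodes holding the sum of their children. The `pick_leaf` traversal shows how such a tree supports importance-proportional sampling of the volumetric light, which the summed-importance layout implies but the abstract does not spell out; both names are illustrative.

```python
class OctreeNode:
    """Leaf: one voxel's emitted-light importance.
    Interior node: the sum of its children's importance values."""
    def __init__(self, importance=0.0, children=None):
        self.children = children or []
        self.importance = (sum(c.importance for c in self.children)
                           if self.children else importance)

def pick_leaf(node, u):
    """Descend the octree, choosing children in proportion to importance.

    u is a uniform random number in [0, 1); it is rescaled at each level
    so one random number drives the whole descent.
    """
    while node.children:
        x = u * node.importance
        for child in node.children:
            if x < child.importance:
                u = x / child.importance
                node = child
                break
            x -= child.importance
    return node
```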
Systems and methods for the design and use of models can have models with rigging, controls, avars, etc., but can also have controls that are themselves models, leading to a hierarchy of models and rigging usable for controlling models in an animation system wherein models correspond to elements or objects for which computer-generated graphics are to be rendered.
G06F 3/048 - Interaction techniques based on graphical user interfaces [GUI]
G06F 9/44 - Arrangements for executing specific programs
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
G05B 19/042 - Programme control other than numerical control, i.e. in sequence controllers or logic controllers, using digital processors
G06F 3/0481 - Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
Techniques are disclosed for rendering scene volumes having scene-dependent memory requirements. An image plane used to view a three-dimensional (3D) volume is subdivided into smaller regions of pixels referred to as buckets. The number of pixels in each bucket may be determined based on an estimated number of samples needed to evaluate a pixel. Samples are computed for each pixel in a given bucket. Should the number of samples exceed the estimated maximum sample count, the bucket is subdivided into sub-buckets, each allocated the same amount of memory as was the original bucket. Dividing a bucket in half effectively doubles both the memory available for rendering the resulting sub-buckets and the maximum number of samples which can be collected for each pixel in each sub-bucket. The process of subdividing a bucket continues until all of the pixels in the original bucket are rendered.
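The subdivision rule, where splitting a bucket in half doubles the per-pixel sample budget because each half keeps the full memory allocation, can be sketched as a recursion over per-pixel sample requirements; `render_bucket` and its return shape are illustrative.

```python
def render_bucket(needed, budget):
    """Subdivide a bucket until every pixel fits its sample budget.

    needed: samples required per pixel in this bucket.
    budget: max samples per pixel the bucket's memory allocation can hold.
    Returns the list of (pixels, budget) leaf buckets actually rendered.
    """
    if all(n <= budget for n in needed) or len(needed) == 1:
        return [(needed, budget)]
    mid = len(needed) // 2
    # Half the pixels with the same memory doubles the per-pixel budget.
    return (render_bucket(needed[:mid], budget * 2) +
            render_bucket(needed[mid:], budget * 2))
```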
Computer-generated images based on force field effects are generated by evaluating force field data and animated data. The force field data includes force field directional vectors and the animated data includes density values for an animated model. The force field data and the animated data are splatted onto separate multi-dimensional grids. An animation system determines a vector path, starting at a point in the grid containing the animated model, based on the directional vectors, and the density values along the vector path are integrated to determine an attenuation factor for the point. The attenuation factor provides a value for accurately determining the movement of the animated model at the point when the force field is present.
Surfaces without a global surface coordinate system are divided into surface regions having local surface coordinate systems to enable the caching of surface attribute values. A surface attribute value for a surface region may include contributions from two or more adjacent surfaces. Sample points may be arranged at the corners, rather than centers, of surface regions and include prefiltered values based on two or more surfaces. A renderer may sample the surface attribute function using these prefiltered values without accessing any adjacent surfaces, even if the renderer's filter crosses a surface boundary. A multiresolution cache stores surface attribute values at different resolution levels for surface regions of one or more surfaces, which may be discontiguous. Two or more resolution levels may have the same number of sample points but have values based on filters with different areas and spatial frequency limits. Resolution levels may be selected based on geodesic distance on a surface.
G09G 5/00 - Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
G06T 11/20 - Drawing from basic elements, e.g. lines or circles
A method for a computer system includes determining a plurality of positions of portions of a hand of a user simultaneously placed upon a user interface device of the computer system, retrieving a set of display icons in response to the plurality of positions of the portions of the user hand, displaying the display icons from the set of display icons on a display relative to the plurality of positions of the portions of the user hand; while displaying the display icons on the display, determining a user selection of a display icon from the display icons, and performing a function in response to the user selection of the display icon.
G06F 3/0481 - Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
G06F 3/0484 - Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
G06F 3/042 - Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means, by opto-electronic means
G06F 3/0488 - Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser, using a touch-screen or digitiser, e.g. input of commands through traced gestures
A first color component of a pixel or scene entity is modified using a color correction curve defined at least partly by a second color component of this pixel or entity. Each pixel or entity has its own separate color correction curve, independent of the color correction curves of other pixels or entities. The saturation value of a pixel or scene entity may be modified based on its luminance value. The luminance value determines a saturation gamma function curve, mapping the original saturation value of a pixel or entity to a new saturation value. The unilluminated color of a pixel or of an illuminated entity in a scene being rendered may also be taken into account. This output color may be stored in the appropriate pixel of an image or combined with colors from other portions of the scene being rendered.
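A minimal sketch of the luminance-driven saturation remapping: the pixel's luminance selects a gamma curve that maps its original saturation to a new one, giving every pixel its own independent correction curve. The specific luminance-to-gamma mapping below is invented for illustration; the abstract does not give one.

```python
def adjust_saturation(saturation, luminance):
    """Remap a pixel's saturation through a gamma curve chosen by luminance.

    gamma = 1 + luminance is a hypothetical mapping, not from the source:
    brighter pixels get a stronger gamma and thus a larger saturation drop.
    """
    gamma = 1.0 + luminance
    return saturation ** gamma
```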