A device for graphical rendering includes a memory and processing circuitry. The processing circuitry is configured to receive sample values, transmitted by one or more servers, of samples of an object, wherein the sample values are generated by the one or more servers from inputting coordinates into a trained neural network and outputting, from the trained neural network, the sample values of the samples, store the sample values in the memory, and render image content of the object based on the sample values.
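The client-side rendering step can be illustrated with standard volume-rendering quadrature, under the assumption (borrowed from NeRF-style pipelines, not stated in the abstract) that the received sample values are per-sample densities and colors along a ray; the function name and array shapes are illustrative only.

```python
import numpy as np

def composite_ray(densities, colors, deltas):
    """Alpha-composite per-sample densities/colors received from a server
    into a single pixel color (standard volume-rendering quadrature)."""
    # alpha_i = 1 - exp(-sigma_i * delta_i)
    alphas = 1.0 - np.exp(-densities * deltas)
    # transmittance T_i = product of (1 - alpha_j) for samples j before i
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas[:-1]]))
    weights = alphas * trans
    return (weights[:, None] * colors).sum(axis=0)

# e.g. three samples along one ray: the first is empty space,
# the next two are partially opaque red-free colors
sigma = np.array([0.0, 5.0, 5.0])
rgb = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]])
delta = np.array([0.1, 0.1, 0.1])
pixel = composite_ray(sigma, rgb, delta)
```

The empty first sample contributes nothing, and the second sample occludes part of the third, so the composited pixel is weighted toward green over blue.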
A system includes a storage system configured to store a plurality of images from a plurality of viewpoints in a scene, and processing circuitry coupled to the storage system. The processing circuitry is configured to: generate a point cloud of the scene based on the plurality of images; determine samples on a ray from a viewpoint of the plurality of viewpoints based on the point cloud; and train a neural network based on the determined samples on the ray to generate a trained model, the trained model being configured to generate image content of the scene from a viewpoint different than the plurality of viewpoints.
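As a rough sketch of the sampling step, assuming the point cloud supplies coarse surface depths: find where a ray passes close to cloud points and concentrate the sample depths there. The radius, spread, and uniform fallback below are hypothetical choices, not the disclosed method.

```python
import numpy as np

def samples_from_point_cloud(origin, direction, cloud,
                             n_samples=8, radius=0.05, spread=0.1):
    """Place sample depths along a ray, concentrated near where the ray
    passes close to point-cloud geometry (hypothetical heuristic)."""
    direction = direction / np.linalg.norm(direction)
    # depth of each cloud point's projection onto the ray
    t = (cloud - origin) @ direction
    # perpendicular distance of each cloud point from the ray
    closest = origin + t[:, None] * direction
    dist = np.linalg.norm(cloud - closest, axis=1)
    near = t[(dist < radius) & (t > 0)]
    if near.size == 0:
        # no geometry near this ray: fall back to uniform samples
        return np.linspace(0.1, 1.0, n_samples)
    center = near.min()  # first surface the ray meets
    return np.linspace(center - spread, center + spread, n_samples)
```

Concentrating samples near known geometry is what lets the trained model spend its capacity where the scene actually is, rather than on empty space.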
A system for graphical rendering includes one or more servers configured to determine a mean vector indicative of a ray through one or more conical frustums and a covariance matrix defining lobes in different directions inside the one or more conical frustums to generate an approximation of the one or more conical frustums, generate an input into a trained neural network based on the determined mean vector and the covariance matrix, wherein the trained neural network is trained based on two-dimensional images at different distances from an object and configured to generate sample values of samples within the object, generate the sample values for rendering from the trained neural network based on the input, and output the sample values.
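The mean-and-covariance input can be sketched as an integrated positional encoding in the style of mip-NeRF, where wider frustums (larger variances) attenuate higher-frequency features. Only the covariance diagonal is used here, and the frequency count is an arbitrary example.

```python
import numpy as np

def integrated_pos_enc(mean, cov_diag, num_freqs=4):
    """Integrated positional encoding of a conical-frustum Gaussian
    (mean vector + covariance diagonal), mip-NeRF style: the expected
    sin/cos under a Gaussian is the plain encoding of the mean,
    damped by exp(-var/2) at each frequency."""
    feats = []
    for j in range(num_freqs):
        scale = 2.0 ** j
        atten = np.exp(-0.5 * (scale ** 2) * cov_diag)
        feats.append(np.sin(scale * mean) * atten)
        feats.append(np.cos(scale * mean) * atten)
    return np.concatenate(feats)
```

A frustum far from the camera is wide, its variances are large, and its high-frequency features shrink toward zero, which is what keeps renders anti-aliased across the different training distances.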
H04N 13/117 - Transformation of image signals corresponding to virtual viewpoints, e.g. spatial image interpolation the virtual viewpoint locations being selected by the viewers or determined by viewer tracking
H04N 13/388 - Volumetric displays, i.e. systems where the image is built up from picture elements distributed through a volume
This disclosure describes a method including determining one or more object clusters from a plurality of frames of the video content. At least one of the one or more object clusters is an object cluster with movement through the plurality of frames. The method includes extracting the determined one or more object clusters from the plurality of frames to generate a set of frames having the extracted one or more object clusters and outputting the set of frames having the extracted one or more object clusters. This disclosure also describes a method including receiving the set of frames having the extracted one or more object clusters, rendering one or more of the set of frames in a live camera feed of a device, and generating video content based on the one or more rendered frames and a user interacting with the extracted one or more object clusters.
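One way to sketch the moving-cluster determination, assuming grayscale frames and simple frame differencing (the threshold, connectivity, and minimum-size filter are illustrative stand-ins, not the disclosed method):

```python
import numpy as np

def _connected_regions(mask):
    """4-connected components of a boolean mask via BFS (no dependencies)."""
    seen = np.zeros_like(mask, dtype=bool)
    regions = []
    h, w = mask.shape
    for y in range(h):
        for x in range(w):
            if mask[y, x] and not seen[y, x]:
                stack, region = [(y, x)], []
                seen[y, x] = True
                while stack:
                    cy, cx = stack.pop()
                    region.append((cy, cx))
                    for ny, nx in ((cy-1, cx), (cy+1, cx), (cy, cx-1), (cy, cx+1)):
                        if 0 <= ny < h and 0 <= nx < w and mask[ny, nx] and not seen[ny, nx]:
                            seen[ny, nx] = True
                            stack.append((ny, nx))
                regions.append(region)
    return regions

def extract_moving_clusters(frames, thresh=25, min_pixels=4):
    """Per-frame masks of moving pixel clusters via frame differencing."""
    masks = []
    for prev, cur in zip(frames, frames[1:]):
        diff = np.abs(cur.astype(int) - prev.astype(int)) > thresh
        mask = np.zeros_like(diff)
        for region in _connected_regions(diff):
            if len(region) >= min_pixels:  # drop tiny/noisy clusters
                for y, x in region:
                    mask[y, x] = True
        masks.append(mask)
    return masks
```

The returned masks mark which pixels belong to moving clusters in each frame; a receiving device could then composite those pixels into its live camera feed.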
Techniques are described for virtual representation creation and display. Processing circuitry may identify substantially transparent pixels in a virtual hairstyle, identify a first set of pixels that are away from the identified substantially transparent pixels, increase an opacity level for a second set of pixels that excludes the first set of pixels by a first amount, and generate the virtual hairstyle based on the first set of pixels and the second set of pixels having the increased opacity level. With the generated virtual hairstyle, processing circuitry of a personal computing device may blend, in a first pass, one or more pixels of a version of the generated virtual hairstyle having an opacity level greater than or equal to a threshold opacity level, and blend, in a second pass, one or more pixels of a version of the generated virtual hairstyle having an opacity level less than the threshold opacity level.
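The two-pass blend can be sketched as follows, assuming an RGB hair layer with a per-pixel alpha channel composited over a base image using the standard "over" operator; the 0.9 threshold is an arbitrary example value.

```python
import numpy as np

def two_pass_hair_blend(base, hair_rgb, hair_alpha, threshold=0.9):
    """Blend a hairstyle layer over a base image in two passes:
    first the (near-)opaque pixels, then the translucent ones.
    base, hair_rgb: (H, W, 3) floats; hair_alpha: (H, W) in [0, 1]."""
    out = base.astype(float).copy()
    for mask in (hair_alpha >= threshold, hair_alpha < threshold):
        # zero out alpha for pixels outside this pass
        a = np.where(mask, hair_alpha, 0.0)[..., None]
        out = a * hair_rgb + (1.0 - a) * out  # standard "over" blend
    return out
```

In a real renderer the two passes matter because opaque hair fragments can write depth while translucent wisps are blended afterward without depth writes; this 2D sketch only shows the threshold split itself.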
This disclosure describes example techniques for personalized virtual look, fit, and animation of apparel, accessories, and cosmetics on a virtual representation of a user. The disclosure describes dividing an image to be rendered into n containers, wherein the image represents an avatar of a user, determining a respective category of content, from among a plurality of categories of content, for each of the n containers, determining a respective shader program for respective containers of the n containers based on the respective category of the respective container, independently rendering the n containers using the determined respective shader programs to create the avatar, and displaying the avatar.
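A minimal sketch of the per-container shader dispatch, with made-up categories and string "shaders" standing in for real GPU programs:

```python
# Hypothetical shader routines, one per category of content.
def skin_shader(payload):
    return f"skin({payload})"

def cloth_shader(payload):
    return f"cloth({payload})"

def hair_shader(payload):
    return f"hair({payload})"

# Map each content category to its shader program.
SHADERS = {"skin": skin_shader, "apparel": cloth_shader, "hair": hair_shader}

def render_avatar(containers):
    """containers: list of (category, payload) pairs, one per container.
    Each container is rendered independently with its own shader."""
    return [SHADERS[category](payload) for category, payload in containers]
```

Because each container is rendered independently, a change to one category (say, swapping apparel) only re-runs that container's shader rather than re-rendering the whole avatar.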
This disclosure describes example techniques for personalized virtual look, fit, and animation of apparel, accessories, and cosmetics on a virtual representation of a user. The disclosure describes a way to deliver an immersive 3D digital experience for style and fit. Processing circuitry of one or more computing devices may create the virtual representation of the user in real time based on an image of the face of the user and a selected body type.
A device can be configured to store, in a memory device, a plurality of meshes; obtain, by processing circuitry, an image of an apparel; analyze, by the processing circuitry, the image to determine parameters of the apparel; select, by the processing circuitry, a mesh from the plurality of meshes based on the parameters of the apparel; generate, by the processing circuitry, a swatch based on the image; apply, by the processing circuitry, the swatch to the mesh to generate a virtual apparel; and output, to a display device, graphical information based on the virtual apparel.
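The analyze/select/apply flow might be sketched like this, with a toy mesh library, dictionaries standing in for images, and every name hypothetical:

```python
# Hypothetical mesh library keyed by apparel parameters.
MESH_LIBRARY = {
    ("shirt", "short_sleeve"): "mesh_shirt_ss",
    ("shirt", "long_sleeve"): "mesh_shirt_ls",
    ("dress", "sleeveless"): "mesh_dress_sl",
}

def analyze_image(image):
    """Stand-in for real image analysis; returns apparel parameters."""
    return image["type"], image["sleeve"]

def build_virtual_apparel(image):
    params = analyze_image(image)
    mesh = MESH_LIBRARY[params]             # select a mesh by parameters
    swatch = f"swatch_from_{image['id']}"   # texture swatch cut from the image
    return {"mesh": mesh, "swatch": swatch} # the virtual apparel
```

Selecting from a fixed mesh library means only the 2D swatch is derived per item, which is far cheaper than reconstructing a new 3D garment mesh from each product image.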
The disclosure describes techniques for apparel simulation. For example, processing circuitry may determine a body construct used for generating a shape of a virtual representation of a user, determine that one or more points on a virtual apparel are within the body construct, and determine, for each of the one or more points, a respective normal vector. Each respective normal vector intersects each respective point and is oriented towards the body construct. The processing circuitry may also extend each of the one or more points to corresponding points on the body construct based on each respective normal vector and generate graphical information of the virtual apparel based on the extension of each of the one or more points to the corresponding points on the body construct.
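For a spherical body construct, the point-extension step reduces to projecting interior garment points out to the surface along the outward normal. A minimal sketch, where the sphere is a stand-in for the disclosed body construct and points coincident with the center are assumed not to occur:

```python
import numpy as np

def resolve_penetration(points, center, radius):
    """Move garment points that fall inside a spherical body construct
    out to its surface along the normal at each point.
    points: (N, 3); center: (3,); radius: scalar."""
    out = points.astype(float).copy()
    offsets = out - center
    dists = np.linalg.norm(offsets, axis=1)
    inside = dists < radius                          # points within the body
    normals = offsets[inside] / dists[inside, None]  # unit normals
    out[inside] = center + normals * radius          # snap to the surface
    return out
```

Points already outside the body construct are left untouched, so only the penetrating parts of the garment are deformed.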