A gesture-based wake process for an AR system is described herein. The AR system places a hand-tracking input pipeline of the AR system in a suspended mode. A camera component of the hand-tracking input pipeline detects a possible visual wake command being made by a user of the AR system. On the basis of detecting the possible visual wake command, the AR system wakes the hand-tracking input pipeline and places the camera component in a fully operational mode. If the AR system, using the hand-tracking input pipeline, verifies the possible visual wake command as an actual wake command, the AR system initiates execution of an AR application.
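The staged flow in this abstract (suspended pipeline, low-power detection, wake and verify, then launch) amounts to a small state machine. Below is a minimal Python sketch of that flow; the state names and step function are illustrative assumptions, not the patented system's actual interfaces.

```python
from enum import Enum, auto

class PipelineState(Enum):
    SUSPENDED = auto()   # camera in low-power mode, pipeline parked
    VERIFYING = auto()   # pipeline awake, confirming the wake gesture
    RUNNING = auto()     # AR application launched

def step(state, low_power_hit, verified):
    """Advance the wake process one observation at a time.

    low_power_hit: True if the low-power camera saw a possible wake gesture.
    verified:      True if the fully woken pipeline confirmed the gesture.
    """
    if state is PipelineState.SUSPENDED and low_power_hit:
        return PipelineState.VERIFYING        # wake pipeline, camera fully on
    if state is PipelineState.VERIFYING:
        if verified:
            return PipelineState.RUNNING      # launch the AR application
        return PipelineState.SUSPENDED        # false positive: suspend again
    return state

# Example: a possible gesture is seen, then confirmed.
s = PipelineState.SUSPENDED
s = step(s, low_power_hit=True, verified=False)   # -> VERIFYING
s = step(s, low_power_hit=False, verified=True)   # -> RUNNING
print(s)
```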
A resource-optimized kiosk mode improves the mobile experience for creators and users of mobile devices such as an augmented reality (AR)-enabled wearable eyewear device. An eyewear device enters the kiosk mode by receiving a kiosk mode request for an application and, in response to the request, determining which services and application programming interfaces (APIs) are required to execute the selected application. An identification of the determined services and APIs required to execute the selected application is stored, and the eyewear device is rebooted. After the reboot, the selected application is started and only the identified services and APIs are enabled. To determine which services and APIs are required to execute the selected application, metadata may be associated with the selected application specifying the services and/or APIs that the selected application requires when in operation.
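The kiosk-mode sequence lends itself to a short sketch: read the selected application's metadata, record the required services and APIs, and leave everything else disabled after the reboot. The manifest layout and service names below are invented for illustration; they are not the device's actual configuration schema.

```python
# Services the device could offer; only those named by the app's metadata
# are enabled after the kiosk-mode reboot.
ALL_SERVICES = {"display", "hand_tracking", "wifi", "bluetooth", "telemetry", "voice"}

APP_MANIFESTS = {
    "museum_tour": {"services": {"display", "wifi"}, "apis": {"render", "http"}},
}

def plan_kiosk_boot(app_id):
    """Resolve which services/APIs the selected app needs; the rest stay off."""
    manifest = APP_MANIFESTS[app_id]
    enabled = manifest["services"]
    return {
        "app": app_id,
        "enable": sorted(enabled),
        "enable_apis": sorted(manifest["apis"]),
        "disable": sorted(ALL_SERVICES - enabled),
    }

# The plan would be persisted before reboot and applied on startup.
print(plan_kiosk_boot("museum_tour"))
```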
AR-enabled wearable electronic devices such as smart glasses are adapted for use as an Internet of Things (IoT) remote-control device where the user can control a pointer on a television screen, computer screen, or other IoT-enabled device to select items by looking at them and making selections using gestures. Built-in six-degrees-of-freedom (6DoF) tracking capabilities are used to move the pointer on the screen to facilitate navigation. The display screen is tracked in real-world coordinates to determine the point of intersection of the user's view with the screen using raycasting techniques. Hand and head gesture detection are used to allow the user to execute a variety of control actions by performing different gestures. The techniques are particularly useful for smart displays that offer AR-enhanced content that can be viewed in the displays of the AR-enabled wearable electronic devices.
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
G06T 19/00 - Manipulating 3D models or images for computer graphics
H04L 67/125 - Protocols specially adapted for proprietary or special-purpose networking environments, e.g. medical networks, sensor networks, networks in vehicles or remote metering networks, involving the control of end-device applications over a network
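The raycasting step described in this entry reduces to a standard ray/plane intersection in world coordinates. The following sketch shows that calculation under the assumption that the tracked display is modeled as a plane with a known point and normal; it is illustrative, not the patent's implementation.

```python
# Intersect the wearer's gaze ray with the tracked display plane in world
# coordinates. Pure-Python sketch; vectors are (x, y, z) tuples.
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def raycast_to_plane(origin, direction, plane_point, plane_normal):
    """Return the world-space point where the gaze ray hits the screen
    plane, or None if the ray is parallel to or points away from it."""
    denom = dot(direction, plane_normal)
    if abs(denom) < 1e-9:
        return None                      # gaze parallel to the screen
    t = dot([p - o for p, o in zip(plane_point, origin)], plane_normal) / denom
    if t < 0:
        return None                      # screen is behind the wearer
    return tuple(o + t * d for o, d in zip(origin, direction))

# Head at the origin looking down -z; screen plane at z = -2.
hit = raycast_to_plane((0, 0, 0), (0.1, 0.0, -1.0), (0, 0, -2), (0, 0, 1))
print(hit)  # -> (0.2, 0.0, -2.0); map this point to pixels to move the pointer
```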
Systems, methods, and computer readable media for graphical assistance with tasks using augmented reality (AR) wearable devices are disclosed. Embodiments capture an image of a first user view of a real-world scene and access indications of surfaces and locations of the surfaces detected in the image. The AR wearable device displays indications of the surfaces on a display of the AR wearable device where the locations of the indications are based on the locations of the surfaces and a second user view of the real-world scene. The locations of the surfaces are indicated with 3D world coordinates. The user views are determined based on a location of the user. The AR wearable device enables a user to add graphics to the surfaces and select tasks to perform. Tools such as a bubble level or a measuring tool are available for the user to utilize to perform the task.
A system for hand tracking for an Augmented Reality (AR) system. The AR system uses a camera of the AR system to capture tracking video frame data of a hand of a user of the AR system. The AR system generates a skeletal model based on the tracking video frame data and determines a location of the hand of the user based on the skeletal model. The AR system causes a steerable camera of the AR system to focus on the hand of the user.
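As a rough illustration of the described flow, the sketch below averages skeletal joint positions to localize the hand and converts the result into pan/tilt angles for a steerable camera. The joint names, angle convention, and data shapes are assumptions for illustration.

```python
import math

def hand_centroid(joints):
    """Average the skeletal joint positions to localize the hand."""
    n = len(joints)
    return tuple(sum(j[i] for j in joints.values()) / n for i in range(3))

def steer_angles(camera_pos, target):
    """Pan/tilt (degrees) that aim the steerable camera at the target."""
    dx, dy, dz = (t - c for t, c in zip(target, camera_pos))
    pan = math.degrees(math.atan2(dx, dz))
    tilt = math.degrees(math.atan2(dy, math.hypot(dx, dz)))
    return pan, tilt

# A few joints from the skeletal model (meters, camera-relative).
skeleton = {"wrist": (0.2, -0.1, 0.6), "index_tip": (0.25, 0.0, 0.55),
            "thumb_tip": (0.15, -0.02, 0.58)}
target = hand_centroid(skeleton)
print(steer_angles((0.0, 0.0, 0.0), target))  # angles to focus on the hand
```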
An augmented reality (AR) eyewear device has a lens system which includes an optical screening mechanism that enables switching the lens system between a conventional see-through state and an opaque state in which the lens system screens or functionally blocks out the wearer's view of the external environment. Such a screening mechanism allows for expanded use cases of the AR glasses compared to conventional devices, e.g.: as a sleep mask; to view displayed content like movies or sports events against a visually nondistracting background instead of against the external environment; and/or to enable VR functionality.
A method of generating an image for use in a conversation taking place in a messaging application is disclosed. Conversation input text is received from a user of a portable device that includes a display. Model input text is generated from the conversation input text, which is processed with a text-to-image model to generate an image based on the model input text. The coordinates of a face in the image are determined, and the face of the user or another person is added to the image at that location. The final image is displayed on the portable device, and user input is received to transmit the image to a remote recipient.
An optical device for use in an augmented reality or virtual reality display, comprising: a waveguide; an input diffractive optical element, DOE, configured to receive light from a projector and to couple the received light into the waveguide along a plurality of optical paths; an output DOE offset from the input DOE along a first direction and configured to couple the received light out of the waveguide and towards a viewer; a first turning DOE offset from the input DOE along a second direction different from the first direction; wherein the input DOE is configured to couple a first portion of the received light in the second direction towards the first turning DOE and the first turning DOE is configured to diffract the first portion of the received light towards the output DOE, and the input DOE is configured to couple a second portion of the received light in the first direction towards the output DOE.
An eyewear device including a strain gauge sensor to determine when the eyewear device is manipulated by a user, such as being put on, taken off, and interacted with. A processor identifies a signature event based on sensor signals received from the strain gauge sensor and a data table of strain gauge sensor measurements corresponding to signature events. The processor controls the eyewear device as a function of the identified signature event, such as powering on a display of the eyewear device as the eyewear device is being put on a user's head, and then turning off the display when the eyewear device is removed from the user's head.
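The signature-matching step can be pictured as nearest-template classification against the data table of strain-gauge measurements. The templates, event names, and threshold below are invented for illustration and are not taken from the patent.

```python
# Hypothetical signature table: normalized strain traces per event.
SIGNATURES = {
    "don":    [0.0, 0.8, 1.0, 0.6],   # putting the eyewear on
    "doff":   [0.6, 1.0, 0.3, 0.0],   # taking it off
    "adjust": [0.4, 0.5, 0.5, 0.4],
}

def classify(trace, threshold=0.15):
    """Return the signature event whose template is closest (mean squared
    error) to the measured trace, or None if nothing is close enough."""
    best_event, best_err = None, float("inf")
    for event, template in SIGNATURES.items():
        err = sum((a - b) ** 2 for a, b in zip(trace, template)) / len(template)
        if err < best_err:
            best_event, best_err = event, err
    return best_event if best_err <= threshold else None

event = classify([0.1, 0.7, 0.9, 0.6])
if event == "don":
    print("power on display")    # mirrors the display power-on example
elif event == "doff":
    print("power off display")
```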
A waterproof UAV that records camera footage while traveling through air and while submerged in water. The UAV alters speed and direction of propellers dependent on the medium that the UAV is traveling through to provide control of the UAV. The propellers are capable of spinning in both directions to enable the UAV to change its depth and orientation in water. A machine learning (ML) model is used to identify humans and objects underwater. A housing coupled to the UAV makes the UAV positively buoyant to float in water and to control buoyancy while submerged.
B64U 101/30 - Unmanned aerial vehicles [UAVs] specially adapted for particular uses or applications for imaging, photography or videography
Methods and systems are disclosed for performing real-time deforming operations. The system receives an image that includes a depiction of a real-world object. The system applies a machine learning model to the image to generate a warping field and segmentation mask, the machine learning model trained to establish a relationship between a plurality of training images depicting real-world objects and corresponding ground-truth warping fields and segmentation masks associated with a target shape. The system applies the generated warping field and segmentation mask to the image to warp the real-world object depicted in the image to the target shape.
A three-dimensional (3D) asset reconstruction technique for generating a 3D asset representing an object from images of the object. The images are captured from different viewpoints in a darkroom using one or more light sources having known locations. The system estimates camera poses for each of the captured images and then constructs a 3D surface mesh made up of surfaces using the captured images and their respective estimated camera poses. Texture properties for each of the surfaces of the 3D surface mesh are then refined to generate the 3D asset.
A finger gesture recognition system is provided. The finger gesture recognition system includes one or more audio sensors and one or more optic sensors. The finger gesture recognition system captures, using the one or more audio sensors, audio signal data of a finger gesture being made by a user, and captures, using the one or more optic sensors, optic signal data of the finger gesture. The finger gesture recognition system recognizes the finger gesture based on the audio signal data and the optic signal data and communicates finger gesture data of the recognized finger gesture to an Augmented Reality/Combined Reality/Virtual Reality (XR) application.
A pose tracking system is provided. The pose tracking system includes an EMF tracking system having a user-worn head-mounted EMF source and one or more user-worn EMF tracking sensors attached to the wrists of the user. The EMF source is associated with a VIO tracking system such as AR glasses or the like. The pose tracking system determines a pose of the user's head and a ground plane using the VIO tracking system and a pose of the user's hands using the EMF tracking system to determine a full-body pose for the user. Metal interference with the EMF tracking system is minimized using an IMU mounted with the EMF tracking sensors. Long-term drift in the IMU and the VIO tracking system is minimized using the EMF tracking system.
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
G06F 3/0346 - Pointing devices displaced or positioned by the user; Accessories therefor with detection of the orientation or free movement of the device in a three-dimensional [3D] space, e.g. 3D mice, six-degrees-of-freedom [6-DOF] pointing devices using gyroscopes, accelerometers or tilt sensors
Systems and methods are provided for performing AR button selection operations on an augmented reality (AR) device. The system displays, by an AR device, a plurality of AR objects on a display region that overlaps a first real-world object, each of the plurality of AR objects being associated with an object selection region. The system computes a first spatial relationship factor for a first AR object of the plurality of AR objects based on a position of the first AR object relative to a position of a second real-world object and adjusts the object selection region of the first AR object based on the first spatial relationship factor. The system activates the first AR object in response to determining that the second real-world object overlaps the object selection region of the first AR object.
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
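One plausible reading of the spatial relationship factor is a distance-based weight that expands an AR button's selection region as the second real-world object (e.g., a fingertip) approaches. The exponential falloff and 50% maximum boost below are assumptions for illustration, not the patent's formula.

```python
import math

def spatial_relationship_factor(ar_obj_pos, fingertip_pos, falloff=0.5):
    """1.0 when the fingertip is on the object, decaying with distance."""
    d = math.dist(ar_obj_pos, fingertip_pos)
    return math.exp(-d / falloff)

def adjusted_radius(base_radius, factor, max_boost=0.5):
    """Expand the object selection region by up to max_boost (50%)."""
    return base_radius * (1.0 + max_boost * factor)

def is_activated(ar_obj_pos, fingertip_pos, base_radius):
    """Activate when the fingertip falls inside the adjusted region."""
    f = spatial_relationship_factor(ar_obj_pos, fingertip_pos)
    return math.dist(ar_obj_pos, fingertip_pos) <= adjusted_radius(base_radius, f)

print(is_activated((0, 0, 1), (0.05, 0, 1), base_radius=0.06))  # True
```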
The subject technology detects a location and a position of a representation of a finger. The subject technology generates a first virtual object based on the location and the position of the representation of the finger. The subject technology detects a first collision event. The subject technology, in response to the first collision event, modifies a set of dimensions of the second virtual object to a second set of dimensions. The subject technology detects a second location and a second position of the representation of the finger. The subject technology detects a second collision event. The subject technology modifies a set of dimensions of the third virtual object to a third set of dimensions. The subject technology renders the third virtual object based on the third set of dimensions within a third scene, the third scene comprising a modified scene from a second scene.
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
G06F 3/04815 - Interaction with a metaphor-based environment or interaction object displayed as three-dimensional, e.g. changing the user viewpoint with respect to the environment or object
G06T 19/00 - Manipulating 3D models or images for computer graphics
17.
SHOOTING INTERACTION USING AUGMENTED REALITY CONTENT IN A MESSAGING SYSTEM
The subject technology receives a set of frames. The subject technology detects a first gesture corresponding to an open trigger finger gesture. The subject technology receives a second set of frames. The subject technology detects, from the second set of frames, a second gesture corresponding to a closed trigger finger gesture. The subject technology detects a location and a position of a representation of a finger from the closed trigger finger gesture. The subject technology generates a first virtual object based at least in part on the location and the position of the representation of the finger. The subject technology renders a movement of the first virtual object along a vector away from the location and the position of the representation of the finger within a first scene. The subject technology provides for display the rendered movement of the first virtual object along the vector within the first scene.
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
G06F 3/03 - Arrangements for converting the position or the displacement of a member into a coded form
G06F 3/04845 - Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range for image manipulation, e.g. dragging, rotation, expansion or change of colour
18.
VIRTUAL OBJECT MANIPULATION WITH GESTURES IN A MESSAGING SYSTEM
The subject technology detects a first gesture and a second gesture, each gesture corresponding to an open trigger finger gesture. The subject technology detects a third gesture and a fourth gesture, each gesture corresponding to a closed trigger finger gesture. The subject technology selects a first virtual object in a first scene. The subject technology detects a first location and a first position of a first representation of a first finger from the third gesture and a second location and a second position of a second representation of a second finger from the fourth gesture. The subject technology detects a first change in the first location and the first position and a second change in the second location and the second position. The subject technology modifies a set of dimensions of the first virtual object to a different set of dimensions.
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
G06F 3/04842 - Selection of displayed objects or displayed text elements
G06F 3/04845 - Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range for image manipulation, e.g. dragging, rotation, expansion or change of colour
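A common way to realize the two-handed resizing this abstract describes is to scale the object's dimensions by the ratio of final to initial separation between the two tracked fingers. The sketch below shows that rule; the minimum-scale clamp is an added assumption.

```python
import math

def scale_dimensions(dims, finger_a_start, finger_b_start,
                     finger_a_end, finger_b_end, min_scale=0.1):
    """Scale (w, h, d) by the ratio of final to initial finger separation."""
    d0 = math.dist(finger_a_start, finger_b_start)
    d1 = math.dist(finger_a_end, finger_b_end)
    s = max(d1 / d0, min_scale) if d0 > 0 else 1.0
    return tuple(x * s for x in dims)

# Fingers move apart from 0.10 m to 0.20 m: the object doubles in size.
print(scale_dimensions((1.0, 0.5, 0.25),
                       (0.00, 0, 0.5), (0.10, 0, 0.5),
                       (-0.05, 0, 0.5), (0.15, 0, 0.5)))
# -> (2.0, 1.0, 0.5)
```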
Systems, methods, and computer readable media are described for remotely changing settings on augmented reality (AR) wearable devices. Embodiments are disclosed that enable a user to change settings of an AR wearable device on a user interface (UI) provided by a host client device that can communicate wirelessly with the AR wearable device. The host client device and AR wearable device provide remote procedure calls (RPCs) and an application program interface (API) to access settings and determine if settings have been changed. The API enables the host client device to determine the settings on the AR wearable device without any prior knowledge of the settings on the AR wearable device. The RPCs and the API enable the host client device to automatically update the settings on the AR wearable device when the user changes the settings on the host client device.
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
G06F 1/16 - ELECTRIC DIGITAL DATA PROCESSING - Details not covered by groups G06F 3/00-G06F 13/00 and G06F 21/00 - Constructional details or arrangements
G06T 19/00 - Manipulating 3D models or images for computer graphics
H04W 4/80 - Services using short range communication, e.g. near-field communication [NFC], radio frequency identification [RFID] or low energy communication
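The host-side synchronization loop can be sketched as: enumerate the wearable's settings through the API, diff against the UI state, and push each change over an RPC. The function names below are hypothetical stand-ins, not the actual RPC surface of the device.

```python
def get_device_settings():
    """Stand-in for the API call that enumerates current device settings."""
    return {"brightness": 7, "volume": 3, "voice_control": False}

def diff_settings(ui_state, device_state):
    """Keys whose values differ; unknown device keys are left untouched,
    so no prior knowledge of the settings is required."""
    return {k: v for k, v in ui_state.items()
            if k in device_state and device_state[k] != v}

def push_updates(updates):
    for key, value in updates.items():
        # Stand-in for the per-setting RPC to the wearable.
        print(f"rpc set_setting({key!r}, {value!r})")

ui_state = {"brightness": 9, "volume": 3, "voice_control": True}
push_updates(diff_settings(ui_state, get_device_settings()))
```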
Systems, methods, and computer readable media for selecting a tilt angle of an augmented reality (AR) display of an AR wearable device. Some examples of the present disclosure capture simulation data of gaze fixations while users are performing tasks using applications resident on the AR wearable device. The tilt angle of the AR display is selected so that more gaze fixations fall within the field of view (FOV) of the AR display than outside it. In some examples, an AR wearable device is manufactured with a fixed vertical tilt angle for the AR display. In some examples, the AR wearable device can dynamically adjust the vertical tilt angle of the AR display based on the applications that a user of the AR wearable device is likely to use or is using.
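The selection rule in this abstract, choosing the tilt that keeps the most gaze fixations inside the display's FOV, is a simple argmax over candidate angles. In the sketch below, fixations are reduced to vertical angles in degrees and the FOV half-height is an assumed constant.

```python
def fixations_in_fov(tilt_deg, fixations_deg, half_fov_deg=10.0):
    """Count fixations whose vertical angle lands inside the display FOV."""
    return sum(1 for f in fixations_deg if abs(f - tilt_deg) <= half_fov_deg)

def best_tilt(candidate_tilts, fixations_deg):
    """Pick the candidate tilt capturing the most gaze fixations."""
    return max(candidate_tilts,
               key=lambda t: fixations_in_fov(t, fixations_deg))

# Simulated fixations cluster slightly below the horizon (about -5 degrees).
fixations = [-2, -4, -5, -6, -7, -9, -12, 0, 3]
print(best_tilt(candidate_tilts=[0, -5, -10, -15], fixations_deg=fixations))
# -> -5
```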
The subject technology receives frames of a source media content. The subject technology detects, from the frames of the source media content, a first gesture indicating a cut point at a particular frame of the source media content, the cut point associated with a trimming operation to be performed on the source media content. The subject technology selects a starting frame and an ending frame from the frames based at least in part on the cut point at the particular frame. The subject technology performs the trimming operation based on the starting frame and the ending frame to produce a third set of frames. The subject technology generates a second media content using the third set of frames. The subject technology provides for display at least a portion of the third set of frames of the second media content.
H04N 23/611 - Control of cameras or camera modules based on recognised objects where the recognised objects include parts of the human body
H04N 23/62 - Control of parameters via user interfaces
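The trimming operation itself is ordinary frame slicing around the gesture's cut point. A minimal sketch, with integer indices standing in for decoded video frames and the keep-before/keep-after choice an assumption:

```python
def trim(frames, cut_point, keep="after"):
    """Return the trimmed set of frames around the gesture's cut point."""
    if keep == "after":
        start, end = cut_point, len(frames) - 1
    else:
        start, end = 0, cut_point
    return frames[start:end + 1]

source = list(range(10))                       # ten source frames
second_media = trim(source, cut_point=4)       # frames from the cut onward
print(second_media)                            # -> [4, 5, 6, 7, 8, 9]
```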
22.
CURSOR FUNCTIONALITY FOR AUGMENTED REALITY CONTENT IN MESSAGING SYSTEMS
The subject technology detects a location and a position of a representation of a finger in a set of frames captured by a camera of a client device. The subject technology generates a first virtual object based at least in part on the location and the position of the representation of the finger. The subject technology renders the first virtual object within a first scene. The subject technology detects a first collision event corresponding to a first collider of the first virtual object intersecting with a second collider of a second virtual object. The subject technology modifies a set of dimensions of the second virtual object to a second set of dimensions. The subject technology renders the second virtual object based on the second set of dimensions within a second scene. The subject technology provides for display the rendered second virtual object within the second scene.
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
G06F 3/04815 - Interaction with a metaphor-based environment or interaction object displayed as three-dimensional, e.g. changing the user viewpoint with respect to the environment or object
G06T 19/00 - Manipulating 3D models or images for computer graphics
G06F 3/0482 - Interaction with lists of selectable items, e.g. menus
G06F 3/04842 - Selection of displayed objects or displayed text elements
G06F 3/04845 - Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range for image manipulation, e.g. dragging, rotation, expansion or change of colour
G06F 3/04847 - Interaction techniques to control parameter settings, e.g. interaction with sliders or dials
23.
TRIGGER GESTURE FOR SELECTION OF AUGMENTED REALITY CONTENT IN MESSAGING SYSTEMS
The subject technology detects a first gesture corresponding to an open trigger finger gesture. The subject technology detects a location and a position of a representation of a finger from the open trigger finger gesture. The subject technology generates a first virtual object based at least in part on the location and the position of the representation of the finger. The subject technology detects a first collision event. The subject technology detects a second gesture corresponding to a closed trigger finger gesture. The subject technology selects the second virtual object. The subject technology renders the first virtual object as attached to the second virtual object in response to the selecting. The subject technology provides for display the rendered first virtual object as attached to the second virtual object within a first scene.
The subject technology detects from a set of frames, a first gesture, the first gesture corresponding to a pinch gesture. The subject technology detects a first location and a first position of a first representation of a first finger from the first gesture and a second location and a second position of a second representation of a second finger from the first gesture. The subject technology detects a first collision event corresponding to a first collider and a second collider intersecting with a third collider of a first virtual object. The subject technology detects a first change in the first location and the first position and a second change in the second location and the second position. The subject technology modifies the first virtual object to include an additional augmented reality content based at least in part on the first change and the second change.
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
G06F 3/04815 - Interaction with a metaphor-based environment or interaction object displayed as three-dimensional, e.g. changing the user viewpoint with respect to the environment or object
G06F 3/0482 - Interaction with lists of selectable items, e.g. menus
G06F 3/04842 - Selection of displayed objects or displayed text elements
G06F 3/04845 - Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range for image manipulation, e.g. dragging, rotation, expansion or change of colour
G06F 3/04847 - Interaction techniques to control parameter settings, e.g. interaction with sliders or dials
G06T 19/00 - Manipulating 3D models or images for computer graphics
G06T 19/20 - Manipulating 3D models or images for computer graphics - Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
G06V 40/20 - Movements or behaviour, e.g. gesture recognition
25.
GESTURES TO ENABLE MENUS USING AUGMENTED REALITY CONTENT IN A MESSAGING SYSTEM
The subject technology detects a first location and a first position of a first representation of a first finger and a second location and a second position of a second representation of a second finger. The subject technology detects a first particular location and a first particular position of a first particular representation of a first particular finger and a second particular location and a second particular position of a second particular representation of a second particular finger. The subject technology detects a first change in the first location and the first position and a second change in the second location and the second position. The subject technology detects a first particular change in the first particular location and the first particular position and a second particular change in the second particular location and the second particular position. The subject technology generates a set of virtual objects.
G06F 3/04815 - Interaction with a metaphor-based environment or interaction object displayed as three-dimensional, e.g. changing the user viewpoint with respect to the environment or object
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
G06F 3/04842 - Selection of displayed objects or displayed text elements
26.
REVEALING COLLABORATIVE OBJECT USING COUNTDOWN TIMER
A collaborative session (e.g., a virtual time capsule) in which access to a collaborative object and added virtual content is selectively provided to participants/users. In one example of the collaborative session, a processor provides users with access to a collaborative object using respective physically remote devices, and associates virtual content received from the users with the collaborative object during a collaboration period. The processor maintains a timer including a countdown indicative of when the collaboration period ends for associating virtual content with the collaborative object. The processor provides the users with access to the collaborative object with associated virtual content at the end of the collaboration period.
A collaborative session (e.g., a virtual time capsule) in which access to a collaborative object with an associated material and added virtual content is provided to users. In one example of the collaborative session, a user selects the associated material of the collaborative object. Physical characteristics are assigned to the collaborative object as a function of the associated material to be perceived by the participants when the collaborative object is manipulated. In one example, the material associated with the collaborative object is metal, wherein the interaction between the users and the collaborative object generates a response of the collaborative object that is indicative of the physical properties of metal, such as its inertia, acoustics, and malleability.
G06T 19/00 - Manipulating 3D models or images for computer graphics
G06T 19/20 - Manipulating 3D models or images for computer graphics - Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
G06F 3/04815 - Interaction with a metaphor-based environment or interaction object displayed as three-dimensional, e.g. changing the user viewpoint with respect to the environment or object
G06F 3/0484 - Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
Collaborative sessions in which access to added virtual content is selectively made available to participants/users by a collaborative system. The system receives a request from a user to join a session and associates a timestamp with the user corresponding to receipt of the request. Users can edit the collaborative object if the timestamp is within the collaborative duration period and can view the collaborative object if the timestamp is after the collaborative duration period.
G06T 19/00 - Manipulating 3D models or images for computer graphics
G06T 19/20 - Manipulating 3D models or images for computer graphics - Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
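The timestamp rule in the preceding abstract is a one-line comparison: edit access while the collaboration period is open, view-only afterwards. A minimal sketch:

```python
from datetime import datetime, timedelta

def access_level(join_time, session_start, collaboration_duration):
    """'edit' while the collaboration period is open, 'view' afterwards."""
    if join_time <= session_start + collaboration_duration:
        return "edit"
    return "view"

start = datetime(2024, 1, 1, 12, 0)
period = timedelta(hours=24)
print(access_level(datetime(2024, 1, 1, 18, 0), start, period))  # edit
print(access_level(datetime(2024, 1, 3, 9, 0), start, period))   # view
```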
Collaborative sessions in which access to a collaborative object and added virtual content is selectively provided to participants/users. In one example of the collaborative session, a participant crops media content by use of a hand gesture to produce an image segment that can be associated with the collaborative object. The hand gesture resembles a pair of scissors, and the camera and processor of the client device track a path of the hand gesture to identify an object within a displayed image to create virtual content of the identified object. The virtual content created by the hand gesture is then associated with the collaborative object.
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
G06F 3/04842 - Selection of displayed objects or displayed text elements
G06V 40/20 - Movements or behaviour, e.g. gesture recognition
G06F 3/04845 - Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range for image manipulation, e.g. dragging, rotation, expansion or change of colour
30.
PHYSICAL GESTURE INTERACTION WITH OBJECTS BASED ON INTUITIVE DESIGN
A collaborative session (e.g., a virtual time capsule) in which access to a collaborative object and added virtual content is selectively provided to participants/users. In one example of the collaborative session, a user interacts with the collaborative object using hand gestures. The virtual content associated with the collaborative object can be accessed with an opening hand gesture and the virtual content can be hidden with a closing hand gesture. The hand gestures are detected by cameras of a client device used by the user. The collaborative object can be moved and manipulated using a pointing gesture, wherein the collaborative object can be confirmed at a new position by tilting the client device of the user.
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
G06V 40/20 - Movements or behaviour, e.g. gesture recognition
A collaborative session (e.g., a virtual time capsule) in which access to a collaborative object and added virtual content is selectively provided to participants/users. In one example of the collaborative session, authentication of the collaborative object is performed by all of the users to complete the collaborative session. Each user authenticates the collaborative object, such as using a stamping gesture on a user interface of a client device or in an augmented reality session. User specific data is recorded with the stamping gesture to authenticate the collaborative object and the associated virtual content. In an example, user specific data may include device information, participant profile information, or biometric signal information. Biometric signal information, such as a fingerprint from a mobile device or a heart rate received from a connected smart device, can be used to provide an authenticating signature to the seal.
G06T 19/00 - Manipulating 3D models or images for computer graphics
G06T 19/20 - Manipulating 3D models or images for computer graphics - Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
Collaborative sessions in which access to added virtual content is selectively made available to participants/users. A participant (the host) creates a new session and invites participants to join. The invited participants receive an invitation to join the session. The session creator (i.e., the host) and other approved participants can access the contents of a session. The session identifies a new participant when they join the session, and concurrently notifies the other participants in the session that a new participant is waiting for permission to access the added virtual content. The host or approved participants can set up the new participant with permissions for accessing added virtual content.
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
33.
CHARACTER AND COSTUME ASSIGNMENT FOR CO-LOCATED USERS
Multi-player co-located AR experiences are augmented by assigning characters and costumes to respective participants (a.k.a. "users" of AR-enabled mobile devices) in multi-player AR sessions for storytelling, play acting, and the like. Body tracking technology and augmented reality (AR) software are used to decorate the bodies of the co-located participants with virtual costumes within the context of the multi-player co-located AR experiences. Tracked bodies are distinguished to determine which body belongs to which user and hence which virtual costume belongs to which tracked body so that corresponding costumes may be assigned for display in augmented reality. A host-guest mechanism is used for networked assignment of characters and corresponding costumes in the co-located multi-player AR session. Body tracking technology is used to move the costume with the body as movement of the assigned body is detected.
G06T 19/00 - Manipulating 3D models or images for computer graphics
G06T 13/40 - Three-dimensional [3D] animation of characters, e.g. humans, animals or virtual beings
G06T 19/20 - Manipulating 3D models or images for computer graphics - Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
G06F 3/04815 - Interaction with a metaphor-based environment or interaction object displayed as three-dimensional, e.g. changing the user viewpoint with respect to the environment or object
A two-dimensional element is identified from one or more two-dimensional images. A volumetric content item is generated based on the two-dimensional element identified from the one or more two-dimensional images. A display device presents the volumetric content item overlaid on a real-world environment that is within a field of view of a user of the display device.
G06T 19/00 - Manipulating 3D models or images for computer graphics
H04N 13/388 - Volumetric displays, i.e. systems where the image is built up from picture elements distributed through a volume
G06T 19/20 - Manipulating 3D models or images for computer graphics - Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
G06T 7/70 - Determining position or orientation of objects or cameras
G06F 3/04815 - Interaction with a metaphor-based environment or interaction object displayed as three-dimensional, e.g. changing the user viewpoint with respect to the environment or object
G06F 16/58 - Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
Systems, methods, and computer readable media for voice-controlled user interfaces (UIs) for augmented reality (AR) wearable devices are disclosed. Embodiments are disclosed that enable a user to interact with the AR wearable device without using physical user interface devices. An application has a non-voice-controlled UI mode and a voice-controlled UI mode. The user selects the mode of the UI. The application running on the AR wearable device displays UI elements on a display of the AR wearable device. The UI elements have types. Predetermined actions are associated with each of the UI element types. The predetermined actions are displayed with other information and used by the user to invoke the corresponding UI element.
G06F 3/04817 - Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance, using icons
G06F 3/0482 - Interaction with lists of selectable items, e.g. menus
G06F 1/16 - ELECTRIC DIGITAL DATA PROCESSING - Details not covered by groups G06F 3/00-G06F 13/00 and G06F 21/00 - Constructional details or arrangements
A system captures, via one or more sensors of a computing device, data of an environment observed by the one or more sensors at a first timeslot, and stores the data in a data store as a first portion of a timelapse memory experience. The system also captures, via the one or more sensors of the computing device, data of the environment observed by the one or more sensors at a second timeslot, and stores the data in a data store as a second portion of the timelapse memory experience. The system additionally associates the timelapse memory experience with a memory experience trigger, wherein the memory experience trigger can initiate a presentation of the timelapse memory experience.
The present disclosure relates to methods and systems for providing a touch-based augmented reality (AR) experience. During a capture phase, a first user may grip an object. An intensity of a force applied on the object in the grip and/or a duration of the grip may be recorded. A volumetric representation of the first user holding the object may also be captured. During an experience phase, when a second user touches the object, the object may provide haptic feedback (e.g., a vibration) to the second user at an intensity and a duration corresponding to the intensity of the force applied on the object and the duration of the grip of the object. If a volumetric representation of the first user holding the object is captured, touching the object may also cause a presentation of the first user's volumetric body that holds the object.
An Augmented Reality (AR) system is provided. The AR system uses a combination of gesture and Direct Manipulation of Virtual Objects (DMVO) methodologies to provide for the user's selection and modification of virtual objects of an AR experience. The user indicates that they want to interact with a virtual object of the AR experience by moving their hand to overlap the virtual object. While keeping their hand in an overlapping position, the user makes gestures that cause the user's viewpoint of the virtual object to either zoom in or zoom out. To end the interaction, the user moves their hand such that their hand is no longer overlapping the virtual object.
An Augmented Reality (AR) system is provided. The AR system uses a combination of gesture and Direct Manipulation of Virtual Objects (DMVO) methodologies to provide for the user's selection and modification of virtual objects of an AR experience. The user indicates that they want to interact with a virtual object of the AR experience by moving their hand to overlap the virtual object. While keeping their hand in an overlapping position, the user rotates their wrist and the virtual object is rotated as well. To end the interaction, the user moves their hand such that their hand is no longer overlapping the virtual object.
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
G06F 3/03 - Arrangements for converting the position or the displacement of a member into a coded form
G06F 3/04815 - Interaction with a metaphor-based environment or interaction object displayed as three-dimensional, e.g. changing the user viewpoint with respect to the environment or object
G06F 3/04817 - Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance, using icons
G06F 3/04845 - Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range for image manipulation, e.g. dragging, rotation, expansion or change of colour
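The overlap-gated rotation in the preceding entry can be sketched as: apply wrist-roll deltas to the object only while the hand position falls within the object's bounds. Reducing "overlap" to a sphere test, as below, is an assumption for illustration.

```python
import math

def hand_overlaps(hand_pos, obj_pos, obj_radius):
    """Treat overlap as the hand falling inside the object's bounding sphere."""
    return math.dist(hand_pos, obj_pos) <= obj_radius

def update_object_rotation(obj_rotation_deg, wrist_roll_delta_deg,
                           hand_pos, obj_pos, obj_radius):
    """Apply wrist roll to the object only while the hand overlaps it."""
    if hand_overlaps(hand_pos, obj_pos, obj_radius):
        return (obj_rotation_deg + wrist_roll_delta_deg) % 360.0
    return obj_rotation_deg    # interaction ended: rotation unchanged

rot = 0.0
rot = update_object_rotation(rot, 15.0, (0.0, 0.0, 0.5), (0.0, 0.0, 0.5), 0.1)
rot = update_object_rotation(rot, 15.0, (0.4, 0.0, 0.5), (0.0, 0.0, 0.5), 0.1)
print(rot)   # 15.0: only the overlapping frame rotated the object
```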
40.
EXTENDING USER INTERFACES OF MOBILE APPS TO AR EYEWEAR
An architecture is provided for packaging visual overlay-based user interfaces (UIs) into mobile device applications to work as user interface extensions that allow certain flows and logic to be displayed on an eyewear device when connected to the mobile device application. The extension of the UIs of the mobile device applications to the display of the eyewear device allows for inexpensive experimentation with augmented reality (AR) UIs for eyewear devices and allows business logic to be reused across mobile devices and associated eyewear devices. For example, a mobile device application for maps or navigation may be extended to show directions on an associated eyewear device once the destination is chosen in the navigation application on the mobile device. In this example, the business logic would still live in the navigation application on the mobile device but the user would see AR directions overlaid on a display of the eyewear device.
G06F 3/0484 - Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
H04W 4/80 - Services using short range communication, e.g. near-field communication [NFC], radio frequency identification [RFID] or low energy communication
G06F 1/16 - ELECTRIC DIGITAL DATA PROCESSING - Details not covered by groups G06F 3/00-G06F 13/00 and G06F 21/00 - Constructional details or arrangements
G06F 9/451 - Execution arrangements for user interfaces
Collaborative sessions in which access to added virtual content is selectively made available to participants/users. A participant (the host) creates a new session and invites participants to join. The invited participants receive an invitation to join the session. The session creator (i.e., the host) and other approved participants can access the contents of a session. The session identifies a new participant when they join the session, and concurrently notifies the other participants in the session that a new participant is waiting for permission to access the added virtual content. The host or approved participants can set up the new participant with permissions for accessing added virtual content.
A collaborative session (e.g., a virtual time capsule) in which access to a collaborative object and added virtual content is selectively provided to participants/users. In one example of the collaborative session, a participant (the host) creates a new session and invites participants to join. The session creator (i.e., the host) and other approved participants can access the contents of a session (e.g., which may be recorded using an application such as the Lens Cloud feature available from Snap Inc. of Santa Monica, California). A timestamp is associated with each received virtual content, and the users are provided with a timelapse of the collaborative object as a function of the timestamps.
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
G06F 3/0484 - Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
G06F 3/0481 - Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
A method for detecting full-body gestures by a mobile device includes a host mobile device detecting the tracked body of a co-located participant in a multi-party session. When the participant's tracked body provides a full-body gesture, the host's mobile device recognizes that there is a tracked body providing a full-body gesture. The host mobile device iterates through the list of participants in the multi-party session and finds the closest participant mobile device with respect to the screen-space position of the head of the gesturing participant. The host mobile device then obtains the user ID of the closest participant mobile device and broadcasts the recognized full-body gesture event to all co-located participants in the multi-party session, along with the obtained user ID. Each participant's mobile device may then handle the gesture event as appropriate for the multi-party session. For example, a character or costume may be assigned to a gesturing participant.
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
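The host's attribution step in the preceding entry is a nearest-neighbor lookup in screen space followed by a broadcast. The data shapes and the print-based "network" below are hypothetical stand-ins for the session's real messaging.

```python
import math

def closest_participant(head_screen_pos, participants):
    """participants: {user_id: (x, y) screen-space device position};
    return the user ID nearest the gesturing body's head."""
    return min(participants,
               key=lambda uid: math.dist(participants[uid], head_screen_pos))

def broadcast(event, user_id, participants):
    for uid in participants:
        print(f"to {uid}: {event} by {user_id}")   # stand-in for networking

participants = {"alice": (120, 80), "bob": (400, 90), "carol": (250, 300)}
uid = closest_participant(head_screen_pos=(390, 100), participants=participants)
broadcast("full_body_wave", uid, participants)     # attributed to bob
```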
44.
AUTHORING TOOLS FOR CREATING INTERACTIVE AR EXPERIENCES
Described are authoring tools for creating interactive AR experiences. The story-authoring application enables a user with little or no programming skills to create an interactive story that includes recording voice commands for advancing to the next scene, inserting and manipulating virtual objects in a mixed-reality environment, and recording a variety of interactions with connected IoT devices. The story creation interface is presented on the display as a virtual object in an AR environment.
Described are virtual AR interfaces for generating a virtual rotational interface for the purpose of controlling connected IoT devices using the inertial measurement unit (IMU) of a portable electronic device. The IMU control application enables a user of a portable electronic device to activate a virtual rotational interface overlay on a display and adjust a feature of a connected IoT product by rotating the portable electronic device. The device IMU moves a slider on the virtual rotational interface. The IMU control application sends a control signal to the IoT product which executes an action in accordance with the slider position. The virtual rotational interface is presented on the display as a virtual object in an AR environment. The IMU control application detects the device orientation (in the physical environment) and in response presents a corresponding slider element on the virtual rotational interface (in the AR environment).
H04L 67/125 - Protocols specially adapted for proprietary or special-purpose networking environments, e.g. medical networks, sensor networks, networks in vehicles or remote metering networks, involving the control of end-device applications over a network
G06T 19/00 - Manipulating 3D models or images for computer graphics
G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
46.
INTERACTION RECORDING TOOLS FOR CREATING INTERACTIVE AR STORIES
Recording tools for creating interactive AR experiences. An interaction recording application enables a user with little or no programming skills to perform and record user behaviors that are associated with reactions between story elements such as virtual objects and connected IoT devices. The user behaviors include a range of actions, such as speaking a trigger word and apparently touching a virtual object. The corresponding reactions include starting to record a subsequent scene and executing actions between story elements. The trigger recording interface is presented on the display as an overlay relative to the physical environment.
G06F 3/04815 - Interaction with a metaphor-based environment or interaction object displayed as three-dimensional, e.g. changing the user viewpoint with respect to the environment or object
G10L 15/22 - Procedures used during a speech recognition process, e.g. man-machine dialogue
Described are recording tools for generating following behaviors and creating interactive AR experiences. The following recording application enables a user with little or no programming skills to virtually connect virtual objects to other elements, including virtual avatars representing fellow users, thereby creating an interactive story in which multiple elements are apparently and persistently connected. The following interface includes methods for selecting objects and instructions for connecting a virtual object to a target object. In one example, the recording application presents on the display a virtual tether between the objects until a connecting action is detected. The following interface is presented on the display as an overlay, in the foreground relative to the physical environment.
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
G06F 3/0487 - Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
G06F 3/04815 - Interaction with a metaphor-based environment or interaction object displayed as three-dimensional, e.g. changing the user viewpoint with respect to the environment or object
G06T 19/00 - Manipulating 3D models or images for computer graphics
A virtual interface application presented in augmented reality (AR) is described for controlling Internet of Things (IoT) products. The virtual interface application enables a user of a portable electronic device to activate a virtual control interface overlay on a display, receive a selection from the user using her hands or feet, and send a control signal to a nearby IoT product which executes an action in accordance with the selection. The virtual control interface is presented on the display as a virtual object in an AR environment. The virtual interface application includes a foot tracking tool for detecting an intersection between the foot location (in the physical environment) and the virtual surface position (in the AR environment). When an intersection is detected, the virtual interface application sends a control signal with instructions to the IoT product.
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
G06F 3/04842 - Selection of displayed objects or displayed text elements
G06F 3/0482 - Interaction with lists of selectable items, e.g. menus
G06F 3/04817 - Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance, using icons
G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
H04L 67/125 - Protocols specially adapted for proprietary or special-purpose networking environments, e.g. medical networks, sensor networks, networks in vehicles or remote metering networks, involving the control of end-device applications over a network
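The foot-tracking selection in the preceding entry reduces to a point-in-rectangle test between the tracked foot position and each virtual button on the floor plane, followed by a control message. The button layout and the send call below are invented for illustration.

```python
def foot_hits_button(foot_xy, button):
    """True if the tracked foot position lies inside the button's rectangle."""
    (x0, y0), (x1, y1) = button["min"], button["max"]
    return x0 <= foot_xy[0] <= x1 and y0 <= foot_xy[1] <= y1

def select_with_foot(foot_xy, buttons):
    for button in buttons:
        if foot_hits_button(foot_xy, button):
            # Stand-in for the network call to the nearby IoT product.
            print(f"send {button['command']!r} to {button['device']}")
            return button
    return None

# Two virtual buttons laid out on the floor plane (meters).
buttons = [
    {"min": (0.0, 0.0), "max": (0.3, 0.3), "device": "lamp", "command": "toggle"},
    {"min": (0.4, 0.0), "max": (0.7, 0.3), "device": "fan",  "command": "speed_up"},
]
select_with_foot((0.55, 0.1), buttons)   # steps on the 'fan' button
```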
Input indicative of a selection of volumetric content for presentation is received. The volumetric content comprises a volumetric representation of one or more elements of a real-world three-dimensional space. In response to the input, device state data associated with the volumetric content is accessed. The device state data describes a state of one or more network-connected devices associated with the real-world three-dimensional space. The volumetric content is presented. The presentation of the volumetric content includes presentation of the volumetric representation of the one or more elements overlaid on the real-world three-dimensional space by a display device and configuring the one or more network-connected devices using the device state data.
G06T 19/00 - Transformation de modèles ou d'images tridimensionnels [3D] pour infographie
H04N 13/388 - Affichages volumétriques, c. à d. systèmes où l’image est réalisée à partir d’éléments répartis dans un volume
G06T 19/20 - Transformation de modèles ou d'images tridimensionnels [3D] pour infographie Édition d'images tridimensionnelles [3D], p.ex. modification de formes ou de couleurs, alignement d'objets ou positionnements de parties
G06T 17/20 - Description filaire, p.ex. polygonalisation ou tessellation
A system monitors an environment via one or more sensors included in a computing device and applies a trigger to detect that a memory experience is stored in a data store based on the monitoring. The system creates an augmented reality memory experience, a virtual reality memory experience, or a combination thereof, based on the trigger if the memory experience is detected. The system additionally projects the augmented reality memory experience, the virtual reality memory experience, or the combination thereof, via the computing device.
A system monitors a user environment via one or more sensors included in a computing device and detects, via a trigger, that event data is stored in a data store based on the monitoring. The system further detects one or more participants in the event data and invites the one or more participants to share augmented reality event data and/or virtual reality event data. The system also creates, based on the event data, the augmented reality event data and/or the virtual reality event data, and presents the augmented reality event data and/or the virtual reality event data to the one or more participants in a synchronous mode and/or in an asynchronous mode, via the computing device.
The present disclosure relates to methods and systems for providing a multi-perspective augmented reality experience. A volumetric video of a three-dimensional space is captured. The volumetric video of the three-dimensional space includes a volumetric representation of a first user within the three-dimensional space. The volumetric video is displayed by a display device worn by a second user, and the second user sees the volumetric representation of the first user within the three-dimensional space. Input indicative of an interaction (e.g., entering or leaving) of the second user with the volumetric representation of the first user is detected. Based on detecting the input indicative of the interaction, the display device switches to a display of a recorded perspective of the first user. Thus, by interacting with a volumetric representation of the first user in a volumetric video, the second user views the first user's perspective of the three-dimensional space.
G06F 3/04815 - Interaction s’effectuant dans un environnement basé sur des métaphores ou des objets avec un affichage tridimensionnel, p.ex. modification du point de vue de l’utilisateur par rapport à l’environnement ou l’objet
A display device presents volumetric content comprising a volumetric video. The volumetric video comprises a volumetric representation of one or more elements of a three-dimensional space. Input indicative of a control operation associated with the presentation of the volumetric video is received. The presentation of the volumetric video by the display device is controlled by executing the control operation. While the control operation is being executed, the volumetric representation of the one or more elements of the three-dimensional space is displayed from multiple perspectives based on movement of a user.
G06F 3/0484 - Techniques d’interaction fondées sur les interfaces utilisateur graphiques [GUI] pour la commande de fonctions ou d’opérations spécifiques, p.ex. sélection ou transformation d’un objet, d’une image ou d’un élément de texte affiché, détermination d’une valeur de paramètre ou sélection d’une plage de valeurs
G06F 3/0481 - Techniques d’interaction fondées sur les interfaces utilisateur graphiques [GUI] fondées sur des propriétés spécifiques de l’objet d’interaction affiché ou sur un environnement basé sur les métaphores, p.ex. interaction avec des éléments du bureau telles les fenêtres ou les icônes, ou avec l’aide d’un curseur changeant de comportement
54.
MIXING AND MATCHING VOLUMETRIC CONTENTS FOR NEW AUGMENTED REALITY EXPERIENCES
A volumetric content presentation system includes a head-worn display device, which includes one or more processors, and a memory storing instructions that, when executed by the one or more processors, configure the display device to access AR content items that correspond to either real-world objects or virtual objects, mix and match these AR content items, and present volumetric content that includes these mixed and matched AR content items overlaid on a real-world environment to create a new AR scene that a user can experience.
G06T 19/00 - Transformation de modèles ou d'images tridimensionnels [3D] pour infographie
H04N 13/388 - Affichages volumétriques, c. à d. systèmes où l’image est réalisée à partir d’éléments répartis dans un volume
H04N 13/332 - Affichage pour le visionnement à l’aide de lunettes spéciales ou de visiocasques
G06T 19/20 - Transformation de modèles ou d'images tridimensionnels [3D] pour infographie Édition d'images tridimensionnelles [3D], p.ex. modification de formes ou de couleurs, alignement d'objets ou positionnements de parties
G06F 3/04815 - Interaction s’effectuant dans un environnement basé sur des métaphores ou des objets avec un affichage tridimensionnel, p.ex. modification du point de vue de l’utilisateur par rapport à l’environnement ou l’objet
55.
MULTI-DIMENSIONAL EXPERIENCE PRESENTATION USING AUGMENTED REALITY
The present disclosure relates to methods and systems for providing a presentation of an experience (e.g., a journey) to a user using augmented reality (AR). During a capture phase, persons in the journey may take videos or pictures using their smartphones, GoPros, and/or smart glasses. A drone may also take videos or pictures during the journey. During an experience phase, an AR topographical rendering of the real-world environment of the journey may be rendered on a tabletop, highlighting/animating a path persons took in the journey. The persons may be rendered as miniature avatars/dolls overlaid on the representation of the real-world environment. When the user clicks on a point in the presentation of the journey, a perspective (e.g., the videos or pictures) at that point is presented.
G06F 3/01 - Dispositions d'entrée ou dispositions d'entrée et de sortie combinées pour l'interaction entre l'utilisateur et le calculateur
G06T 19/00 - Transformation de modèles ou d'images tridimensionnels [3D] pour infographie
G06F 3/00 - Dispositions d'entrée pour le transfert de données destinées à être traitées sous une forme maniable par le calculateur; Dispositions de sortie pour le transfert de données de l'unité de traitement à l'unité de sortie, p.ex. dispositions d'interface
H04W 4/80 - Services utilisant la communication de courte portée, p.ex. la communication en champ proche, l'identification par radiofréquence ou la communication à faible consommation d’énergie
A method for carving a 3D space using hands tracking is described. In one aspect, a method includes accessing a first frame from a camera of a display device, tracking, using a hand tracking algorithm operating at the display device, hand pixels corresponding to one or more user hands depicted in the first frame, detecting, using a sensor of the display device, depths of the hand pixels, identifying a 3D region based on the depths of the hand pixels, and applying a 3D reconstruction engine to the 3D region.
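A minimal sketch of the region-identification step, assuming a pinhole camera model and an axis-aligned bounding box around the back-projected hand pixels; the abstract does not specify the region shape, so the box is an assumption.

```python
import numpy as np

def carve_region(hand_pixels, depths, intrinsics, margin=0.05):
    """Back-project hand pixels (N x 2 array of u, v) with per-pixel depths
    (N,) into 3D and return a padded axis-aligned bounding box."""
    fx, fy, cx, cy = intrinsics
    u, v = hand_pixels[:, 0], hand_pixels[:, 1]
    z = depths
    # Standard pinhole back-projection into camera coordinates.
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=1)
    return points.min(axis=0) - margin, points.max(axis=0) + margin
```

The returned box would then be the 3D region handed to the reconstruction engine.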
Systems, methods, and computer readable media for object counting on augmented reality (AR) wearable devices are disclosed. Embodiments are disclosed that enable display of a count of objects as part of a user view. Upon receipt of a request to count objects, the AR wearable device captures an image of the user view. The AR wearable device transmits the image to a backend for processing to determine the objects in the image. The AR wearable device selects a group of objects of the determined objects to count and overlays boundary boxes over counted objects within the user view. The position of the boundary boxes is adjusted to account for movement of the AR wearable device. A hierarchy of objects is used to group together objects that are related but have different labels or names.
G06V 20/70 - RECONNAISSANCE OU COMPRÉHENSION D’IMAGES OU DE VIDÉOS Éléments spécifiques à la scène Étiquetage du contenu de scène, p.ex. en tirant des représentations syntaxiques ou sémantiques
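The hierarchy-based grouping mentioned above could, for example, be realized with a parent-pointer map, as in this hypothetical sketch; the labels and the hierarchy format are assumptions.

```python
# Maps a label to its parent in an assumed object hierarchy.
HIERARCHY = {"golden retriever": "dog", "poodle": "dog", "dog": "animal"}

def canonical(label, level):
    """Walk up the hierarchy until `level` (or a root) is reached."""
    while label != level and label in HIERARCHY:
        label = HIERARCHY[label]
    return label

def count_objects(detected_labels, level="dog"):
    counts = {}
    for label in detected_labels:
        key = canonical(label, level)
        counts[key] = counts.get(key, 0) + 1
    return counts

# count_objects(["poodle", "golden retriever", "cat"]) -> {"dog": 2, "cat": 1}
```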
Systems and methods are provided for performing automated speech recognition. The systems and methods access a language model (LM) that includes a plurality of n-grams, each of the plurality of n-grams comprising a respective sequence of words and corresponding LM score, and receive a list of words associated with a group classification, each word in the list of words being associated with a respective weight. The systems and methods compute, based on the LM scores of the plurality of n-grams, a probability that a given word in the list of words associated with the group classification appears in an n-gram in the LM comprising an individual sequence of words, and add one or more new n-grams to the LM comprising one or more words in the list of words in combination with the individual sequence of words and associated with a particular LM score based on the computed probability.
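A toy sketch of the n-gram addition step, assuming log10 LM scores, a dictionary-backed LM, and per-word class weights; real LM formats (e.g., ARPA files) and the exact scoring rule in the abstract differ in detail.

```python
import math

def augment_lm(lm, class_words, context):
    """Add `context + word` n-grams for each weighted class word, scoring
    them from the context's LM score plus the word's log weight."""
    context_score = lm.get(context)
    if context_score is None:
        return
    for word, weight in class_words.items():
        new_ngram = context + (word,)
        if new_ngram not in lm:
            lm[new_ngram] = context_score + math.log10(weight)

lm = {("call",): -1.2}  # n-gram -> log10 score
augment_lm(lm, {"anya": 0.5, "bo": 0.5}, ("call",))
# The LM now also scores ("call", "anya") and ("call", "bo").
```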
The actual training speedup depends on, e.g., the support of sparse computation, layer type and size, and system overhead. The FLOPs reduction from the frozen layers and shrunken dataset leads to higher actual training acceleration than weight sparsity does.
Systems and methods are provided for performing voice communication operations. The system establishes, by a first augmented reality (AR) device, a voice communication session between a plurality of users. The system displays, by the first AR device of a first user of the plurality of users, an avatar representing a second user of the plurality of users. The system receives, by the first AR device of a first user of the plurality of users, input from the first user that selects a display position for the avatar representing the second user within a real-world environment of the first user. The system animates the avatar representing the second user based on movement information received from a second AR device of the second user.
G06F 3/01 - Dispositions d'entrée ou dispositions d'entrée et de sortie combinées pour l'interaction entre l'utilisateur et le calculateur
G06T 19/00 - Transformation de modèles ou d'images tridimensionnels [3D] pour infographie
G06F 1/16 - TRAITEMENT ÉLECTRIQUE DE DONNÉES NUMÉRIQUES - Détails non couverts par les groupes et - Détails ou dispositions de structure
G06F 3/0484 - Techniques d’interaction fondées sur les interfaces utilisateur graphiques [GUI] pour la commande de fonctions ou d’opérations spécifiques, p.ex. sélection ou transformation d’un objet, d’une image ou d’un élément de texte affiché, détermination d’une valeur de paramètre ou sélection d’une plage de valeurs
Systems and methods are provided for performing operations on an augmented reality (AR) device using an external vision system. The system establishes, by the AR device, a communication with an external client device. The system overlays, by the AR device, a first AR object on a real-world environment being viewed using the AR device. The system receives interaction data from the external client device representing movement of a user determined by the external client device. The system, in response to receiving the interaction data from the external client device, modifies the first AR object by the AR device.
G06T 19/00 - Transformation de modèles ou d'images tridimensionnels [3D] pour infographie
G06T 19/20 - Transformation de modèles ou d'images tridimensionnels [3D] pour infographie Édition d'images tridimensionnelles [3D], p.ex. modification de formes ou de couleurs, alignement d'objets ou positionnements de parties
An Augmented Reality (AR) system (352) provides stabilization of hand-tracking input data. The AR system provides for display a user interface of an AR application (328). The AR system captures, using one or more cameras (326) of the AR system, video frame tracking data of a gesture being made by a user (332) while the user interacts with the AR user interface. The AR system generates skeletal 3D model data of a hand of the user based on the video frame tracking data that includes one or more skeletal 3D model features corresponding to recognized visual landmarks of portions of the hand of the user. The AR system generates targeting data based on the skeletal 3D model data where the targeting data identifies a virtual 3D object of the AR user interface. The AR system filters the targeting data using a targeting filter component and provides the filtered targeting data to the AR application.
G06T 7/246 - Analyse du mouvement utilisant des procédés basés sur les caractéristiques, p.ex. le suivi des coins ou des segments
G06T 7/70 - Détermination de la position ou de l'orientation des objets ou des caméras
G06T 19/20 - Transformation de modèles ou d'images tridimensionnels [3D] pour infographie Édition d'images tridimensionnelles [3D], p.ex. modification de formes ou de couleurs, alignement d'objets ou positionnements de parties
G06V 20/00 - RECONNAISSANCE OU COMPRÉHENSION D’IMAGES OU DE VIDÉOS Éléments spécifiques à la scène
G06V 40/00 - Reconnaissance de formes biométriques, liées aux êtres humains ou aux animaux, dans les données d’image ou vidéo
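The abstract names a targeting filter component without specifying it; an exponential moving average over the targeted 3D point is one simple, plausible stand-in, sketched below under that assumption.

```python
class TargetingFilter:
    """Smooths a stream of 3D target points to suppress hand-tracking jitter."""

    def __init__(self, alpha=0.3):
        self.alpha = alpha   # 0 < alpha <= 1; lower values smooth more
        self._state = None   # last filtered point, or None before first update

    def update(self, point):
        if self._state is None:
            self._state = point
        else:
            self._state = tuple(
                self.alpha * p + (1.0 - self.alpha) * s
                for p, s in zip(point, self._state)
            )
        return self._state
```

A production system would more likely use a speed-adaptive filter (e.g., a One Euro filter) so that slow, precise pointing is smoothed heavily while fast motions stay responsive.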
A method of generating an image for use in a conversation taking place in a messaging application is disclosed. Conversation input text is received from a user of a portable device that includes a display. Model input text is generated from the conversation input text, which is processed with a text-to-image model to generate an image based on the model input text. The generated image is displayed on the portable device, and user input is received to transmit the image to a remote recipient.
At least one unit of a software application is identified. The at least one unit includes source code. The source code of the at least one unit is analyzed to determine a style of the source code. Metadata is extracted from the at least one unit based on the source code analysis. One or more features of the extracted metadata are classified. A template file is modified based on the extracted metadata and the classified features to create a modified template file.
Methods and systems are disclosed for detecting whether a wearable device is being worn by a user. The system transmits a radio signal from a first communication device of a wearable device to a second communication device of the wearable device and measures a signal strength associated with the radio signal received by the second communication device. The system compares the signal strength to a threshold value and generates an indication of a wear status associated with the wearable device based on comparing the signal strength to the threshold value.
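The comparison step reduces to a threshold test. The sketch below assumes RSSI in dBm and that body attenuation between the two radios lowers the received strength when the device is worn; the abstract does not state the comparison direction, so that polarity is an assumption.

```python
WORN_RSSI_THRESHOLD_DBM = -60.0  # assumed calibration value

def wear_status(rssi_dbm):
    """Classify wear status from the measured signal strength."""
    return "worn" if rssi_dbm < WORN_RSSI_THRESHOLD_DBM else "not_worn"
```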
Examples include a wearable device having a frame, a temple and onboard electronics components. The frame can optionally be configured to hold one or more optical elements. The temple can optionally be connected to the frame at a joint such that the temple is disposable between a collapsed condition and a wearable condition in which the wearable device is wearable by a user to hold the one or more optical elements within user view. The onboard electronics components can be carried by at least one of the frame and the temple and can include a first antenna configured for cellular communication carried by the frame and a second antenna configured for cellular communication carried by one of the frame or the temple.
Aspects of the present disclosure involve a system comprising a computer-readable storage medium storing a program and method for interacting with visual codes within a messaging system. The program and method provide for displaying, by a messaging application, captured image data comprising a visual code, the visual code including a custom graphic and being decodable to access a first feature of the messaging application; receiving user input selecting the visual code; displaying an updated version of the custom graphic; providing an animation which depicts the updated version of the custom graphic as moving from the visual code to an interface element comprising a group of icons, each icon within the group of icons being user-selectable to access a respective second feature of the messaging application; and updating the group of icons to include an additional icon which is user-selectable to access the first feature of the messaging application.
H04L 51/52 - Messagerie d'utilisateur à utilisateur dans des réseaux à commutation de paquets, transmise selon des protocoles de stockage et de retransmission ou en temps réel, p.ex. courriel pour la prise en charge des services des réseaux sociaux
Systems and methods are provided for using an external controller with an AR device. The system establishes, by one or more processors of the AR device, a communication with an external client device. The system overlays, by the AR device, a first AR object on a real-world environment being viewed using the AR device. The system receives interaction data from the external client device representing one or more inputs received by the external client device and, in response, modifies the first AR object by the AR device.
A63F 13/213 - Dispositions d'entrée pour les dispositifs de jeu vidéo caractérisées par leurs capteurs, leurs finalités ou leurs types comprenant des moyens de photo-détection, p.ex. des caméras, des photodiodes ou des cellules infrarouges
A63F 13/212 - Dispositions d'entrée pour les dispositifs de jeu vidéo caractérisées par leurs capteurs, leurs finalités ou leurs types utilisant des capteurs portés par le joueur, p.ex. pour mesurer le rythme cardiaque ou l’activité des jambes
A63F 13/211 - Dispositions d'entrée pour les dispositifs de jeu vidéo caractérisées par leurs capteurs, leurs finalités ou leurs types utilisant des capteurs d’inertie, p.ex. des accéléromètres ou des gyroscopes
A63F 13/428 - Traitement des signaux de commande d’entrée des dispositifs de jeu vidéo, p.ex. les signaux générés par le joueur ou dérivés de l’environnement par mappage des signaux d’entrée en commandes de jeu, p.ex. mappage du déplacement d’un stylet sur un écran tactile en angle de braquage d’un véhicule virtuel incluant des signaux d’entrée de mouvement ou de position, p.ex. des signaux représentant la rotation de la manette d’entrée ou les mouvements des bras du joueur détectés par des accéléromètres ou des gyroscopes
A63F 13/426 - Traitement des signaux de commande d’entrée des dispositifs de jeu vidéo, p.ex. les signaux générés par le joueur ou dérivés de l’environnement par mappage des signaux d’entrée en commandes de jeu, p.ex. mappage du déplacement d’un stylet sur un écran tactile en angle de braquage d’un véhicule virtuel incluant des informations de position sur l’écran, p.ex. les coordonnées sur l’écran d’une surface que le joueur vise avec un pistolet optique
A63F 13/22 - Opérations de configuration, p.ex. le calibrage, la configuration des touches ou l’affectation des boutons
A63F 13/214 - Dispositions d'entrée pour les dispositifs de jeu vidéo caractérisées par leurs capteurs, leurs finalités ou leurs types pour localiser des contacts sur une surface, p.ex. des tapis de sol ou des pavés tactiles
A63F 13/537 - Commande des signaux de sortie en fonction de la progression du jeu incluant des informations visuelles supplémentaires fournies à la scène de jeu, p.ex. en surimpression pour simuler un affichage tête haute [HUD] ou pour afficher une visée laser dans un jeu de tir utilisant des indicateurs, p.ex. en montrant l’état physique d’un personnage de jeu sur l’écran
G06F 3/01 - Dispositions d'entrée ou dispositions d'entrée et de sortie combinées pour l'interaction entre l'utilisateur et le calculateur
A projector (10) for an augmented reality or mixed reality headset is disclosed, comprising: a display (20) defining an optical axis (9), configured to receive light (41) to generate first supplied light (40); an exit pupil (80) configured to couple the first supplied light into a waveguide for an augmented reality or mixed reality headset, the exit pupil comprising a first region and a second region; a first optical arrangement (60) configured to couple the first supplied light from the display towards the exit pupil; and a first light blocker (70). A first partial reflection (50), which is an undesirable reflection of the first supplied light, can be reflected back into the light projector. The light projector is configured to separate the supplied and reflected light spatially and, using the light blocker, to prevent the reflected light from being coupled into the waveguide.
G02B 27/00 - Systèmes ou appareils optiques non prévus dans aucun des groupes ,
G02B 27/18 - Systèmes ou appareils optiques non prévus dans aucun des groupes , pour projection optique, p.ex. combinaison de miroir, de condensateur et d'objectif
G02B 27/28 - Systèmes ou appareils optiques non prévus dans aucun des groupes , pour polariser
Aspects of the present disclosure involve a system comprising a computer-readable storage medium storing a program and method for automatic quantization of a floating point model. The program and method provide for providing a floating point model to an automatic quantization library, the floating point model being configured to represent a neural network, and the automatic quantization library being configured to generate a first quantized model based on the floating point model; providing a function to the automatic quantization library, the function being configured to run a forward pass on a given dataset for the floating point model; causing the automatic quantization library to generate the first quantized model based on the floating point model; causing the automatic quantization library to calibrate the first quantized model by running the first quantized model on the function; and converting the calibrated first quantized model to a second quantized model.
Systems, methods, and computer readable media for voice input for augmented reality (AR) wearable devices are disclosed. Embodiments are disclosed that enable a user to interact with the AR wearable device without using physical user interface devices. A keyword is used to indicate that the user is about to speak an action or command. The AR wearable device divides the processing of the audio data into a keyword module that is trained to recognize the keyword and a module to process the audio data after the keyword. In some embodiments, the AR wearable device transmits the audio data after the keyword to a host device to process. The AR wearable device maintains an application registry that associates actions with applications. Applications can be downloaded and the application registry updated, with each application indicating the actions to associate with it.
G06F 3/01 - Dispositions d'entrée ou dispositions d'entrée et de sortie combinées pour l'interaction entre l'utilisateur et le calculateur
G06T 19/00 - Transformation de modèles ou d'images tridimensionnels [3D] pour infographie
G06F 3/00 - Dispositions d'entrée pour le transfert de données destinées à être traitées sous une forme maniable par le calculateur; Dispositions de sortie pour le transfert de données de l'unité de traitement à l'unité de sortie, p.ex. dispositions d'interface
G06F 1/16 - TRAITEMENT ÉLECTRIQUE DE DONNÉES NUMÉRIQUES - Détails non couverts par les groupes et - Détails ou dispositions de structure
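A minimal sketch of the keyword gate and application registry described above; the keyword, the registry schema, and the transcription interface are all assumed for illustration.

```python
ACTION_REGISTRY = {}  # action phrase -> application name

def register_app(app_name, actions):
    """Called when an application is downloaded and declares its actions."""
    for action in actions:
        ACTION_REGISTRY[action.lower()] = app_name

def handle_utterance(transcript, keyword="hey glasses"):
    """Return the app to launch if the utterance starts with the keyword and
    names a registered action; otherwise ignore the audio."""
    text = transcript.lower().strip()
    if not text.startswith(keyword):
        return None  # the keyword module rejected the audio
    command = text[len(keyword):].strip()
    return ACTION_REGISTRY.get(command)

register_app("camera", ["take a photo", "record video"])
assert handle_utterance("Hey glasses take a photo") == "camera"
```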
An interaction system that enables users in mutual affinity relationships to send messages. Interaction applications of two or more users receive notifications of a mutual affinity relationship between the first user and the second user. The interaction applications configure respective mutual affinity widgets by associating the mutual affinity widget with the respective other user. Icons of the mutual affinity widgets are provided on respective home screens of the users. Upon detecting a selection of the mutual affinity widget by a first user, a message creation user interface is provided to the first user and a message is generated based on an image captured by the first user using the message creation user interface. The message is then sent to a second user. The second user uses their own mutual affinity widget to access the message.
G06F 3/04817 - Techniques d’interaction fondées sur les interfaces utilisateur graphiques [GUI] fondées sur des propriétés spécifiques de l’objet d’interaction affiché ou sur un environnement basé sur les métaphores, p.ex. interaction avec des éléments du bureau telles les fenêtres ou les icônes, ou avec l’aide d’un curseur changeant de comportement, utilisant des icônes
G06F 3/0484 - Techniques d’interaction fondées sur les interfaces utilisateur graphiques [GUI] pour la commande de fonctions ou d’opérations spécifiques, p.ex. sélection ou transformation d’un objet, d’une image ou d’un élément de texte affiché, détermination d’une valeur de paramètre ou sélection d’une plage de valeurs
G06F 9/451 - Dispositions d’exécution pour interfaces utilisateur
Methods and systems are disclosed for managing access to encrypted data and encryption keys. The system stores, by a key management server, a first encryption key associated with a first service and a second encryption key associated with a second service. The system prevents, by the key management server, the second service from accessing the second encryption key while the first service is performing a first function using the first encryption key and determines that a first threshold period of time associated with the first function has elapsed. The system, in response to determining that the first threshold period of time associated with the first function has elapsed, prevents, by the key management server, the first service from accessing the first encryption key while the second service is performing a second function using the second encryption key.
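The mutual-exclusion logic might look like the following single-process sketch, in which the threshold period bounds each service's exclusive window; the patent's key management server would enforce the same rule across services, and all names here are assumptions.

```python
import time

class KeyManager:
    def __init__(self, keys, threshold_s):
        self._keys = keys              # service name -> encryption key bytes
        self._threshold_s = threshold_s
        self._holder = None            # service currently holding access
        self._acquired_at = 0.0

    def acquire(self, service):
        """Return the service's key, or None while another service's
        threshold window is still open."""
        now = time.monotonic()
        if self._holder and now - self._acquired_at >= self._threshold_s:
            self._holder = None        # previous window has elapsed
        if self._holder is not None and self._holder != service:
            return None                # blocked: other service still active
        self._holder, self._acquired_at = service, now
        return self._keys.get(service)
```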
A medical image overlay application for use with augmented reality (AR) eyewear devices. The image overlay application enables a user of an eyewear device to activate an image overlay on a display when the eyewear device detects that the camera field of view includes a medical image location. Medical image locations are defined relative to virtual markers. The image overlay includes one or more medical images, presented according to a configurable transparency value. An image registration tool transforms the location and scale of each medical image to the physical environment, such that the medical image as presented on the display closely matches the location and size of real objects.
A61B 90/00 - Instruments, outillage ou accessoires spécialement adaptés à la chirurgie ou au diagnostic non couverts par l'un des groupes , p.ex. pour le traitement de la luxation ou pour la protection de bords de blessures
A61B 90/50 - Supports pour instruments chirurgicaux, p.ex. bras articulés
75.
MAGNIFIED OVERLAYS CORRELATED WITH VIRTUAL MARKERS
A magnification application for use with augmented reality (AR) eyewear devices. The magnification application enables a user of an eyewear device to activate a magnification overlay on a display whenever a camera on the eyewear device detects that the field of view includes a registered virtual marker. The magnified overlay includes one or more frames of the captured video data, presented according to a predefined and configurable magnification power. A pointer including a vector and a visual tether guides the user toward the virtual marker. When the eyewear device location is near a perimeter associated with the virtual marker, the magnified overlay appears in a predefined and configurable frame on the display.
G06F 3/01 - Dispositions d'entrée ou dispositions d'entrée et de sortie combinées pour l'interaction entre l'utilisateur et le calculateur
G06F 3/04845 - Techniques d’interaction fondées sur les interfaces utilisateur graphiques [GUI] pour la commande de fonctions ou d’opérations spécifiques, p.ex. sélection ou transformation d’un objet, d’une image ou d’un élément de texte affiché, détermination d’une valeur de paramètre ou sélection d’une plage de valeurs pour la transformation d’images, p.ex. glissement, rotation, agrandissement ou changement de couleur
G06F 3/00 - Dispositions d'entrée pour le transfert de données destinées à être traitées sous une forme maniable par le calculateur; Dispositions de sortie pour le transfert de données de l'unité de traitement à l'unité de sortie, p.ex. dispositions d'interface
Aspects of the present disclosure involve a system for providing AR experiences. The system accesses, by a messaging application, an image depicting a real-world fashion item of a user and generates a three-dimensional (3D) virtual fashion item based on the real-world fashion item depicted in the image. The system stores the 3D virtual fashion item in a database that includes a virtual wardrobe comprising a plurality of 3D virtual fashion items associated with the user. The system generates, by the messaging application, an augmented reality (AR) experience that allows the user to interact with the virtual wardrobe.
Image augmentation effects are provided on a device that includes a display and a camera. A simplified augmented reality effect is applied to a stream of images captured by the camera, to generate a preview stream of images. The preview stream of images is displayed on the display. A second stream of images corresponding to the first stream of images is saved to an initial video file. A full augmented reality effect, corresponding to the simplified augmented reality effect, is then applied to the second stream of images to generate a fully-augmented stream of images, which are saved to a further video file. The further video file can then be played back on the display to show the final, fully augmented reality effect as applied to the stream of images.
H04N 21/44 - Traitement de flux élémentaires vidéo, p.ex. raccordement d'un clip vidéo récupéré d'un stockage local avec un flux vidéo en entrée ou rendu de scènes selon des graphes de scène MPEG-4
An electronic device with an olfactory detector including an array of olfactory sensors for determining scents. Each sensor in the array is tuned to detect the presence and concentration of specific chemical compounds or molecules. A fan creates airflow of ambient air across the olfactory sensors. An analog to digital (A/D) converter receives and processes the sensor outputs of the olfactory sensors and provides the processed sensor output to a processor. Scent type and intensity can be classified by using the information from the scent sensors as input for a machine learning model, generated through supervised training using labeled example measurements from the sensor array. The processor may display information of the determined scents on a display of a smart device, and the processor can also send information indicative of the determined scents to another device.
A communication link is established between a first mobile device and a second mobile device using communication setup information in a machine-readable code that is displayed on a display of the second mobile device. The first mobile device captures and decodes an image of the machine-readable code to extract dynamically-generated communication setup information. A communication link is then established between the two devices using the communication setup information. The machine-readable code may also be used as a fiducial marker to establish an initial relative pose between the two devices. Pose updates received from the second mobile device can then be used as user-interface inputs to the first mobile device.
An olfactory sticker used in chats between electronic devices to make messaging more immersive, personalized, and authentic by integrating olfactory information with traditional text-based formats and AR messaging. Olfactory stickers are used to indicate to a recipient that a message includes olfactory information. The olfactory sticker illustrates a graphical representation of a particular scent that can be sent and received via a chat or an AR message. Olfactory stickers provide the recipient control of accessing the transmitted scent at a desired time. This is particularly useful since certain olfactory information, i.e., scents, can be very direct and intrusive. Olfactory stickers are activated (i.e., release their scent) when they are tapped or rubbed by the recipient.
G06F 3/01 - Dispositions d'entrée ou dispositions d'entrée et de sortie combinées pour l'interaction entre l'utilisateur et le calculateur
H04M 1/21 - Combinaisons avec un équipement auxiliaire, p.ex. avec pendule ou bloc-notes
A61L 9/12 - Appareils, p.ex. supports, à cet effet
G06V 20/20 - RECONNAISSANCE OU COMPRÉHENSION D’IMAGES OU DE VIDÉOS Éléments spécifiques à la scène dans les scènes de réalité augmentée
B05B 17/00 - Appareils de pulvérisation ou d'atomisation de liquides ou d'autres matériaux fluides, non couverts par les autres groupes de la présente sous-classe
An electronic eyewear device including a spatial dimming pixel panel within an optical assembly that delivers improved backlighting conditions for displayed images. The spatial dimming pixel panel is spatially dimmed where an image is positioned on a display, and the unoccupied area of the display is not dimmed, thereby leaving the real-world view through that portion of the display unaltered. The spatial dimming pixel panel is made of multiple liquid crystal cells arranged in a gridded orientation with a dye-doped or guest-host liquid crystal system having a phase change mode with homeotropic alignment. The spatial dimming pixel panel absorbs nonpolarized light when a voltage is applied across the cells and passes nonpolarized light in the absence of a voltage across the cells.
G09G 3/34 - Dispositions ou circuits de commande présentant un intérêt uniquement pour l'affichage utilisant des moyens de visualisation autres que les tubes à rayons cathodiques pour la présentation d'un ensemble de plusieurs caractères, p.ex. d'une page, en composant l'ensemble par combinaison d'éléments individuels disposés en matrice en commandant la lumière provenant d'une source indépendante
G09G 3/36 - Dispositions ou circuits de commande présentant un intérêt uniquement pour l'affichage utilisant des moyens de visualisation autres que les tubes à rayons cathodiques pour la présentation d'un ensemble de plusieurs caractères, p.ex. d'une page, en composant l'ensemble par combinaison d'éléments individuels disposés en matrice en commandant la lumière provenant d'une source indépendante utilisant des cristaux liquides
G02F 1/137 - Dispositifs ou dispositions pour la commande de l'intensité, de la couleur, de la phase, de la polarisation ou de la direction de la lumière arrivant d'une source lumineuse indépendante, p.ex. commutation, ouverture de porte ou modulation; Optique non linéaire pour la commande de l'intensité, de la phase, de la polarisation ou de la couleur basés sur des cristaux liquides, p.ex. cellules d'affichage individuelles à cristaux liquides caractérisés par l'effet électro-optique ou magnéto-optique, p.ex. transition de phase induite par un champ, effet d'orientation, interaction entre milieu récepteur et matière additive ou diffusion dynamique
G02F 1/139 - Dispositifs ou dispositions pour la commande de l'intensité, de la couleur, de la phase, de la polarisation ou de la direction de la lumière arrivant d'une source lumineuse indépendante, p.ex. commutation, ouverture de porte ou modulation; Optique non linéaire pour la commande de l'intensité, de la phase, de la polarisation ou de la couleur basés sur des cristaux liquides, p.ex. cellules d'affichage individuelles à cristaux liquides caractérisés par l'effet électro-optique ou magnéto-optique, p.ex. transition de phase induite par un champ, effet d'orientation, interaction entre milieu récepteur et matière additive ou diffusion dynamique basés sur des effets d'orientation où les cristaux liquides restent transparents
A system that enables 3D hair reconstruction and rendering from a single reference image by performing a multi-stage process that utilizes both a 3D implicit representation and a 2D parametric embedding space.
Systems and methods are provided for performing automated speech recognition. The systems and methods perform operations comprising: accessing a language model that includes a plurality of n-grams, each of the plurality of n-grams comprising a respective sequence of words and corresponding LM score; selecting a target word to boost in the language model; receiving a boosting factor for the target word; identifying a target n-gram in the language model that includes the target word; identifying a subset of n-grams of the plurality of n-grams that include words in a portion of the target n-gram; and adjusting the LM score of the target n-gram based on the LM scores of the subset of n-grams and the boosting factor.
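A toy sketch of the score adjustment, again over a dictionary-backed LM with log scores. How the subset's scores are combined is not stated in the abstract, so averaging the scores of n-grams that share the target's context is an assumption.

```python
def boost_ngram(lm, target, boosting_factor):
    """Move the target n-gram's score toward the average score of n-grams
    sharing its context, scaled by `boosting_factor` in [0, 1]."""
    context = target[:-1]
    related = [score for ngram, score in lm.items()
               if ngram[:-1] == context and ngram != target]
    if target not in lm or not related:
        return
    average = sum(related) / len(related)
    lm[target] += boosting_factor * (average - lm[target])
```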
A method for generating an updated localizer map of a reference scene is provided. The method may include: acquiring a preliminary localizer map of the reference scene, the preliminary localizer map including preliminary frames, each associated with a set of preliminary data points; capturing a plurality of new data points on the reference scene; determining poses of the capture device related to the capture of the plurality of new data points; creating at least one new frame; selecting one or more target frames from the preliminary frames and the at least one new frame; and generating the updated localizer map based on the one or more target frames.
Aspects of the present disclosure involve a system for providing virtual experiences. The system accesses, by a messaging application, an image depicting a person. The system generates, by the messaging application, a three-dimensional (3D) avatar based on the person depicted in the image. The system receives input that selects a pose for the 3D avatar and one or more fashion items to be worn by the 3D avatar and places, by the messaging application, the 3D avatar in the selected pose and wearing the one or more fashion items in an augmented reality (AR) experience.
G06T 19/20 - Transformation de modèles ou d'images tridimensionnels [3D] pour infographie Édition d'images tridimensionnelles [3D], p.ex. modification de formes ou de couleurs, alignement d'objets ou positionnements de parties
86.
LOW-POWER ARCHITECTURE FOR AUGMENTED REALITY DEVICE
A method for managing power resources in an augmented reality (AR) device is described. In one aspect, the method includes configuring a low-power mode to run on a low-power processor of the AR device using a first set of sensor data, and a high-power mode to run on a high-power processor of the AR device using a second set of sensor data, operating, using the low-power processor, a low-power application in the low-power mode based on the first set of sensor data, detecting a request to operate a high-power application at the AR device, in response to detecting the request, activating a second set of sensors of the AR device corresponding to the high-power mode, and operating, using the high-power processor, the high-power application in the high-power mode based on the second set of sensor data.
G06F 1/3293 - Gestion de l’alimentation, c. à d. passage en mode d’économie d’énergie amorcé par événements Économie d’énergie caractérisée par l'action entreprise par transfert vers un processeur plus économe en énergie, p.ex. vers un sous-processeur
G06F 3/01 - Dispositions d'entrée ou dispositions d'entrée et de sortie combinées pour l'interaction entre l'utilisateur et le calculateur
G06F 1/16 - TRAITEMENT ÉLECTRIQUE DE DONNÉES NUMÉRIQUES - Détails non couverts par les groupes et - Détails ou dispositions de structure
G06F 1/3231 - Surveillance de la présence, de l’absence ou du mouvement des utilisateurs
G06F 1/3234 - Gestion de l’alimentation, c. à d. passage en mode d’économie d’énergie amorcé par événements Économie d’énergie caractérisée par l'action entreprise
G06F 1/3287 - Gestion de l’alimentation, c. à d. passage en mode d’économie d’énergie amorcé par événements Économie d’énergie caractérisée par l'action entreprise par la mise hors tension d’une unité fonctionnelle individuelle dans un ordinateur
G06F 3/0346 - Dispositifs de pointage déplacés ou positionnés par l'utilisateur; Leurs accessoires avec détection de l’orientation ou du mouvement libre du dispositif dans un espace en trois dimensions [3D], p.ex. souris 3D, dispositifs de pointage à six degrés de liberté [6-DOF] utilisant des capteurs gyroscopiques, accéléromètres ou d’inclinaison
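Sketched below is the mode hand-off for the entry above, with the two sensor sets and the processor hand-off represented abstractly; both are assumptions about structure, not the patent's implementation.

```python
LOW_POWER_SENSORS = {"imu"}                       # assumed low-power set
HIGH_POWER_SENSORS = {"imu", "camera", "depth"}   # assumed high-power set

class PowerManager:
    def __init__(self):
        self.active_sensors = set(LOW_POWER_SENSORS)
        self.mode = "low"   # low-power app running on the low-power processor

    def request_high_power_app(self):
        # Activate the extra sensors the high-power mode needs, then run the
        # high-power application on the high-power processor.
        self.active_sensors |= HIGH_POWER_SENSORS
        self.mode = "high"

    def high_power_app_finished(self):
        # Drop back to the low-power set to conserve the battery.
        self.active_sensors = set(LOW_POWER_SENSORS)
        self.mode = "low"
```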
Aspects of the present disclosure involve a system for capturing infrared (IR) images using an RGB camera. The system captures, by a camera of a client device, a first image comprising visible light representing a real-world environment. The system receives a strobe signal for capturing an infrared (IR) image by the camera. In response to receiving the strobe signal, the system switches a filter associated with a lens of the camera to pass IR light to an image sensor of the camera, captures a second image comprising IR illumination of the real-world environment, and switches the filter after the second image is captured to allow visible light to pass through to the image sensor of the camera.
G06T 19/20 - Transformation de modèles ou d'images tridimensionnels [3D] pour infographie Édition d'images tridimensionnelles [3D], p.ex. modification de formes ou de couleurs, alignement d'objets ou positionnements de parties
H04N 23/20 - Caméras ou modules de caméras comprenant des capteurs d'images électroniques; Leur commande pour générer des signaux d'image uniquement à partir d'un rayonnement infrarouge
H04N 23/11 - Caméras ou modules de caméras comprenant des capteurs d'images électroniques; Leur commande pour générer des signaux d'image à partir de différentes longueurs d'onde pour générer des signaux d'image à partir de longueurs d'onde de lumière visible et infrarouge
A closed loop control system actively regulates the battery current paths of physically separated circuits so that the current is approximately the same for each of the circuits regardless of the various system loads. The closed loop control system modulates the current paths by either modulating a high side transistor used to independently limit each battery's current path or by modulating a DC/DC converter's output voltage to independently boost each battery's current path. The closed loop control system is also designed to handle undervoltage lockout (UVLO) situations when one of the batteries is nearing empty, to tilt the power balance in case there is an existing battery charge mismatch so as to support system load bursts, and to turn off the circuit when the system current draw is exceptionally low. A tilting circuit also identifies and discharges the battery with the higher charge until the charge states are substantially equal.
Aspects of the present disclosure involve a system for hiding conversation elements. The system accesses a conversation interface of a messaging application on a web browser and presents the conversation interface in a window associated with the web browser. The conversation interface comprises a plurality of conversation elements. The system determines that one or more inputs received from one or more input devices correspond to a specified combination of inputs. The system, in response to determining that the one or more inputs correspond to the specified combination of inputs, obscures a first subset of the plurality of conversation elements.
G06F 16/955 - Recherche dans le Web utilisant des identifiants d’information, p.ex. des localisateurs uniformisés de ressources [uniform resource locators - URL]
G06F 16/958 - Organisation ou gestion de contenu de sites Web, p.ex. publication, conservation de pages ou liens automatiques
Aspects of the present disclosure involve a system for hiding conversation elements. The system accesses a conversation interface of a messaging application on a web browser and presents the conversation interface in a window associated with the web browser. The conversation interface comprises a plurality of conversation elements. The system accesses a focus status of the window and, in response to determining that the focus status indicates that the window has lost focus, obscures a first subset of the plurality of conversation elements.
G06F 3/048 - Techniques d’interaction fondées sur les interfaces utilisateur graphiques [GUI]
G06F 16/955 - Recherche dans le Web utilisant des identifiants d’information, p.ex. des localisateurs uniformisés de ressources [uniform resource locators - URL]
G06F 16/958 - Organisation ou gestion de contenu de sites Web, p.ex. publication, conservation de pages ou liens automatiques
Aspects of the present disclosure involve providing a platform user notification to users in a chat session. A user device receives, from a server, chat status message data for a chat session. The user device detects a specified platform being used by a user in the chat session based on the chat status message data. The user device provides, in a chat session user interface, a platform presence icon associated with the user, indicating that the user is using the specified platform.
A user input content item is presented in a viewing interface of an interaction application executing on a user device. The user input content item is shared with a viewing user by a sending user via an interaction system. A press and hold operation by the viewing user related to the presentation of the user input content item is detected. Responsive to the detection of the press and hold operation, the interaction application is automatically transitioned to a reply state. Within the reply state, a reply mechanism is activated to enable the viewing user to generate a reply message to the sending user.
G06F 3/0481 - Techniques d’interaction fondées sur les interfaces utilisateur graphiques [GUI] fondées sur des propriétés spécifiques de l’objet d’interaction affiché ou sur un environnement basé sur les métaphores, p.ex. interaction avec des éléments du bureau telles les fenêtres ou les icônes, ou avec l’aide d’un curseur changeant de comportement
G06F 3/04883 - Techniques d’interaction fondées sur les interfaces utilisateur graphiques [GUI] utilisant des caractéristiques spécifiques fournies par le périphérique d’entrée, p.ex. des fonctions commandées par la rotation d’une souris à deux capteurs, ou par la nature du périphérique d’entrée, p.ex. des gestes en fonction de la pression exercée, utilisant un écran tactile ou une tablette numérique, p.ex. entrée de commandes par des tracés gestuels pour l’entrée de données par calligraphie, p.ex. sous forme de gestes ou de texte
G06F 3/04842 - Sélection des objets affichés ou des éléments de texte affichés
G06F 9/451 - Dispositions d’exécution pour interfaces utilisateur
Aspects of the present disclosure involve a system for managing a conversation across multiple windows or tabs. The system accesses a conversation interface of a messaging application on a first web session and presents the conversation interface in the first web session. The system receives a request to access the conversation interface from a second web session. In response to receiving the request, the system transfers the conversation interface from the first web session to the second web session.
In order to guide the user to a target object that is located outside of the field of view of the wearer of an AR computing device, a rotational navigation system displays on a display device an arrow or a pointer, referred to as a direction indicator. The direction indicator is generated based on the angle between the direction of the user's head and the direction of the target object, and a correction coefficient. The correction coefficient is defined such that the greater the angle between the direction of the user's head and the direction of the target object, the greater the horizontal component of the direction indicator.
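Under the stated relationship, one plausible formulation multiplies the signed head-to-target angle by a coefficient that grows with that angle; the linear form of the coefficient below is an assumption, since the abstract does not give the exact function.

```python
import math

def direction_indicator_horizontal(head_yaw_rad, target_yaw_rad, k=0.5):
    """Return the horizontal component of the on-screen direction indicator."""
    # Signed angle from head direction to target, wrapped to [-pi, pi).
    angle = (target_yaw_rad - head_yaw_rad + math.pi) % (2 * math.pi) - math.pi
    correction = 1.0 + k * abs(angle)  # larger angle -> larger coefficient
    return correction * angle
```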
An image copying assistant is a computing application configured to aid users in copying a digital image to a physical canvas using traditional media on the physical canvas. The image copying assistant utilizes augmented reality techniques to present features of the digital image projected onto the physical canvas. The image copying assistant detects previously generated markers in an output of a digital image sensor of a camera of a computing device and uses the detected markers to calculate the plane and boundaries of the surface of the physical canvas. The image copying assistant uses the calculated plane and boundaries to determine a position of the digital image on a display of the computing device.
G06T 19/00 - Transformation de modèles ou d'images tridimensionnels [3D] pour infographie
G06T 19/20 - Transformation de modèles ou d'images tridimensionnels [3D] pour infographie Édition d'images tridimensionnelles [3D], p.ex. modification de formes ou de couleurs, alignement d'objets ou positionnements de parties
G06K 19/06 - Supports d'enregistrement pour utilisation avec des machines et avec au moins une partie prévue pour supporter des marques numériques caractérisés par le genre de marque numérique, p.ex. forme, nature, code
G06F 3/04883 - Techniques d’interaction fondées sur les interfaces utilisateur graphiques [GUI] utilisant des caractéristiques spécifiques fournies par le périphérique d’entrée, p.ex. des fonctions commandées par la rotation d’une souris à deux capteurs, ou par la nature du périphérique d’entrée, p.ex. des gestes en fonction de la pression exercée, utilisant un écran tactile ou une tablette numérique, p.ex. entrée de commandes par des tracés gestuels pour l’entrée de données par calligraphie, p.ex. sous forme de gestes ou de texte
A method for a low-power hand-tracking system is described. In one aspect, a method includes polling a proximity sensor of a wearable device to detect a proximity event, where the wearable device includes a low-power processor and a high-power processor, in response to detecting the proximity event, operating a low-power hand-tracking application on the low-power processor based on proximity data from the proximity sensor, and ending an operation of the low-power hand-tracking application in response to at least one of: detecting and recognizing a gesture based on the proximity data, detecting without recognizing the gesture based on the proximity data, or detecting a lack of activity from the proximity sensor within a timeout period based on the proximity data.
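A polling-loop sketch of the three exit conditions listed above, with the sensor read-out and gesture recognizer passed in as callables; the rates and the timeout value are assumptions.

```python
import time

TIMEOUT_S = 5.0  # assumed inactivity timeout

def run_low_power_hand_tracking(read_proximity, classify):
    """Poll until a gesture is recognized, a detection goes unrecognized, or
    the sensor is inactive for TIMEOUT_S; return the reason for ending."""
    last_activity = time.monotonic()
    while True:
        sample = read_proximity()       # None when nothing is near the sensor
        now = time.monotonic()
        if sample is not None:
            last_activity = now
            result = classify(sample)   # "recognized", "unrecognized", or None
            if result == "recognized":
                return "gesture_recognized"
            if result == "unrecognized":
                return "gesture_detected_not_recognized"
        if now - last_activity > TIMEOUT_S:
            return "timeout"
        time.sleep(0.02)  # low polling rate keeps the power budget small
```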
Volume hologram couplers are multiplexed into a same volume to increase the field of view of components in a waveguide assembly such as an augmented reality (AR) waveguide assembly for use in electronic eyewear devices. The multiplexing can be done in any direction perpendicular to the optical axis. Multiplexing of the volume hologram couplers combines different functions of the waveguide assembly into one diffractive optical element (DOE) in the form of an input coupler or an output coupler. For example, each volume holographic grating of input couplers and output couplers has a different refraction angle, a different periodicity, or both relative to any other volume holographic grating in the same volume. The resulting DOEs reduce reinteraction losses, reduce thickness, reduce the number of DOEs and the number of layers and airgaps, and increase robustness (volume holograms versus surface relief gratings that can scratch or break).
G02B 27/09 - Mise en forme du faisceau, p.ex. changement de la section transversale, non prévue ailleurs
G02B 6/00 - OPTIQUE ÉLÉMENTS, SYSTÈMES OU APPAREILS OPTIQUES - Détails de structure de dispositions comprenant des guides de lumière et d'autres éléments optiques, p.ex. des moyens de couplage
G02B 5/32 - Hologrammes utilisés comme éléments optiques
Aspects of the present disclosure involve a system for filtering conversations. The system generates for display, by a messaging application, a plurality of shortcut options, each of the plurality of shortcut options comprising a respective filtering criterion. In response to receiving input that selects a given shortcut option of the plurality of shortcut options, the system retrieves the filtering criterion associated with the given shortcut option. The system searches a plurality of conversations to identify a subset of conversations that match the filtering criterion. The system generates for display together with the plurality of shortcut options, a plurality of representations of the identified subset of conversations in which one or more messages have been exchanged between a user and one or more friends of the user.
H04L 51/212 - Surveillance ou traitement des messages utilisant un filtrage ou un blocage sélectif
H04L 51/04 - Messagerie en temps réel ou quasi en temps réel, p.ex. messagerie instantanée [IM]
H04L 51/216 - Gestion de l'historique des conversations, p.ex. regroupement de messages dans des sessions ou des fils de conversation
H04L 51/07 - Messagerie d'utilisateur à utilisateur dans des réseaux à commutation de paquets, transmise selon des protocoles de stockage et de retransmission ou en temps réel, p.ex. courriel caractérisée par l'inclusion de contenus spécifiques
H04L 51/52 - Messagerie d'utilisateur à utilisateur dans des réseaux à commutation de paquets, transmise selon des protocoles de stockage et de retransmission ou en temps réel, p.ex. courriel pour la prise en charge des services des réseaux sociaux
99.
BACKGROUND REPLACEMENT USING NEURAL RADIANCE FIELD
Aspects of the present disclosure involve a system for providing virtual experiences. The system accesses an image depicting a person and one or more camera parameters representing a viewpoint associated with a camera used to capture the image. The system extracts a portion of the image comprising the depiction of the person. The system processes, by a neural radiance field (NeRF) machine learning model, the one or more camera parameters to render an estimated depiction of a scene from the viewpoint associated with the camera used to capture the image. The system combines the portion of the image comprising the depiction of the person with the estimated depiction of the scene to generate an output image and causes the output image to be presented on a client device.
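The final compositing step amounts to an alpha blend of the extracted person layer over the NeRF-rendered scene, as in this sketch; the segmentation and rendering stages are stand-ins.

```python
import numpy as np

def composite(person_rgba, background_rgb):
    """Blend an H x W x 4 person layer (float values in [0, 1]) onto the
    H x W x 3 rendered background."""
    alpha = person_rgba[..., 3:4]  # keep the channel axis for broadcasting
    return alpha * person_rgba[..., :3] + (1.0 - alpha) * background_rgb
```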
Methods and systems are disclosed for performing operations for generating a photorealistic rendering of an object. The operations include: accessing a set of albedo textures and a machine learning model associated with a real-world object, the set of albedo textures and the machine learning model having been trained based on a plurality of viewpoints of the real-world object; obtaining a three-dimensional (3D) mesh of the real-world object; receiving input that selects a new viewpoint that differs from the plurality of viewpoints of the real-world object; and generating a photorealistic rendering of the real-world object from the new viewpoint based on the 3D mesh of the real-world object, the set of albedo textures, and the machine learning model associated with the real-world object.