Examples of systems and methods for rendering an avatar in a mixed reality environment are disclosed. The systems and methods may be configured to automatically scale an avatar or to render an avatar based on a determined intention of a user, an interesting impulse, environmental stimuli, or user saccade points. The disclosed systems and methods may apply discomfort curves when rendering an avatar. The disclosed systems and methods may provide a more realistic interaction between a human user and an avatar.
Various techniques pertain to methods, systems, and computer program products for a spatial persistence process that places a virtual object relative to a physical object for an extended-reality display device based at least in part upon a persistent coordinate frame (PCF). A determination is made as to whether a drift is detected for the virtual object relative to the physical object. Upon or after detection of the drift or deviation, the drift or deviation is corrected at least by updating a tracking map into an updated tracking map and further at least by updating the persistent coordinate frame (PCF) based at least in part upon the updated tracking map, wherein the persistent coordinate frame (PCF) comprises six degrees of freedom relative to the map coordinate system.
Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for training an object recognition neural network using multiple data sources. One of the methods includes receiving training data that includes a plurality of training images from a first source and images from a second source. A set of training images are obtained from the training data. For each training image in the set of training images, contrast equalization is applied to the training image to generate a modified image. The modified image is processed using the neural network to generate an object recognition output for the modified image. A loss is determined based on errors between, for each training image in the set, the object recognition output for the modified image generated from the training image and ground-truth annotation for the training image. Parameters of the neural network are updated based on the determined loss.
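For intuition, a minimal sketch of the described training step, assuming a PyTorch classifier; the contrast_equalize stand-in, the training_step helper, and the cross-entropy loss are illustrative assumptions, not the patent's implementation:

```python
import torch
import torch.nn.functional as F

def contrast_equalize(images: torch.Tensor) -> torch.Tensor:
    # Per-image contrast normalization as a stand-in for the
    # contrast-equalization step described in the abstract.
    mean = images.mean(dim=(1, 2, 3), keepdim=True)
    std = images.std(dim=(1, 2, 3), keepdim=True)
    return (images - mean) / (std + 1e-6)

def training_step(model, optimizer, images, annotations):
    modified = contrast_equalize(images)          # generate modified images
    outputs = model(modified)                     # object-recognition output
    loss = F.cross_entropy(outputs, annotations)  # error vs. ground truth
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()                              # update network parameters
    return loss.item()
```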
G06V 10/82 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
G06V 10/764 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
4.
SYSTEMS AND METHODS FOR END TO END SCENE RECONSTRUCTION FROM MULTIVIEW IMAGES
Systems and methods of generating a three-dimensional (3D) reconstruction of a scene or environment surrounding a user of a spatial computing system, such as a virtual reality, augmented reality, or mixed reality system, using only multiview images and without the need for depth sensors or depth data from sensors. Features are extracted from a sequence of frames of RGB images and back-projected, using known camera intrinsics and extrinsics, into a 3D voxel volume, wherein each pixel of the voxel volume is mapped to a ray in the voxel volume. The back-projected features are fused into the 3D voxel volume. The 3D voxel volume is passed through a 3D convolutional neural network to refine the features and regress truncated signed distance function values at each voxel of the 3D voxel volume.
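A minimal NumPy sketch of the back-projection step, under the common convention of projecting each voxel center into the image with intrinsics K and a world-to-camera extrinsic; the function name, the nearest-pixel gather, and the fixed voxel grid are assumptions rather than the patent's method:

```python
import numpy as np

def backproject_features(feat, K, cam_from_world, origin, voxel_size, dims):
    # feat: (C, H, W) image feature map -> (C, X, Y, Z) voxel features.
    C, H, W = feat.shape
    xs, ys, zs = np.meshgrid(*[np.arange(d) for d in dims], indexing="ij")
    world = origin + voxel_size * np.stack([xs, ys, zs], axis=-1)  # voxel centers
    cam = world @ cam_from_world[:3, :3].T + cam_from_world[:3, 3]
    pix = cam @ K.T
    u, v = pix[..., 0] / pix[..., 2], pix[..., 1] / pix[..., 2]
    valid = (cam[..., 2] > 0) & (u >= 0) & (u < W) & (v >= 0) & (v < H)
    ui = np.clip(u.astype(int), 0, W - 1)
    vi = np.clip(v.astype(int), 0, H - 1)
    vol = np.zeros((C,) + tuple(dims), dtype=feat.dtype)
    vol[:, valid] = feat[:, vi[valid], ui[valid]]  # each voxel samples its ray's pixel
    return vol
```

Fusing then amounts to accumulating such volumes (e.g., a running mean) across frames before the 3D CNN.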
An apparatus configured to be head-worn by a user, includes: a screen configured to present graphics for the user; a camera system configured to view an environment in which the user is located; and a processing unit coupled to the camera system, the processing unit configured to: obtain locations of features for an image of the environment, wherein the locations of the features are identified by a neural network; determine a region of interest for one of the features in the image, the region of interest having a size that is less than a size of the image; and perform a corner detection using a corner detection algorithm to identify a corner in the region of interest.
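A hedged sketch of that two-stage refinement, using OpenCV's goodFeaturesToTrack as the classical corner detector inside the network-derived region of interest; the refine_corner helper and the ROI half-width are assumptions:

```python
import cv2
import numpy as np

def refine_corner(image_gray, feature_xy, roi_half=16):
    # image_gray: 8-bit grayscale image; feature_xy: location from the network.
    x, y = map(int, feature_xy)
    h, w = image_gray.shape
    x0, y0 = max(x - roi_half, 0), max(y - roi_half, 0)
    x1, y1 = min(x + roi_half, w), min(y + roi_half, h)
    roi = image_gray[y0:y1, x0:x1]              # region smaller than the image
    corners = cv2.goodFeaturesToTrack(roi, maxCorners=1,
                                      qualityLevel=0.01, minDistance=3)
    if corners is None:
        return None
    cx, cy = corners[0].ravel()
    return (x0 + float(cx), y0 + float(cy))     # corner in image coordinates
```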
G06V 10/25 - Determination of a region of interest [ROI] or a volume of interest [VOI]
G06F 18/214 - Generating training patterns; Bootstrap methods, e.g. "bagging" or "boosting"
G06V 10/764 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
G06V 10/774 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation Bootstrap methods, e.g. "bagging" or "boosting"
G06V 10/82 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
G06V 10/44 - Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
G06V 20/20 - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING Scene-specific elements in augmented reality scenes
A method and system for increasing dynamic digitized wavefront resolution, i.e., the density of output beamlets, can include receiving a single collimated source light beam and producing multiple output beamlets spatially offset when out-coupled from a waveguide. The multiple output beamlets can be obtained by offsetting and replicating a collimated source light beam. Alternatively, the multiple output beamlets can be obtained by using a collimated incoming source light beam having multiple input beams with different wavelengths in the vicinity of the nominal wavelength of a particular color. The collimated incoming source light beam can be in-coupled into the eyepiece designed for the nominal wavelength. The input beams with multiple wavelengths take different paths when they undergo total internal reflection in the waveguide, which produces multiple output beamlets.
In some embodiments, optical systems with a reflector and a lens proximate a light output opening of the reflector provide light output with high spatial uniformity and high efficiency. The reflectors are shaped to provide substantially angularly uniform light output and the lens is configured to transform this angularly uniform light output into spatially uniform light output. The light output may be directed into a spatial light modulator, which modulates the light to project an image.
Head-mounted display systems with power-saving functionality are disclosed. The systems can include a frame configured to be supported on the head of the user. The systems can also include a head-mounted display disposed on the frame, one or more sensors, and processing electronics in communication with the display and the one or more sensors. In some implementations, the processing electronics can be configured to cause the system to reduce power of one or more components based at least in part on a determination that the frame is in a certain position (e.g., upside-down or on top of the head of the user). In some implementations, the processing electronics can be configured to cause the system to reduce power of one or more components based at least in part on a determination that the frame has been stationary for at least a threshold period of time.
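A rough sketch of the two triggers, assuming an IMU-style interface; the thresholds, the gravity-sign convention for detecting an upside-down frame, and the PowerManager API are all assumptions:

```python
import time

STATIONARY_THRESHOLD_S = 30.0   # assumed threshold period

class PowerManager:
    def __init__(self):
        self.last_motion_time = time.monotonic()

    def update(self, accel_z, motion_magnitude, reduce_power):
        if motion_magnitude > 0.05:                  # frame moved recently
            self.last_motion_time = time.monotonic()
        upside_down = accel_z > 0                    # assumed sign convention
        stationary = (time.monotonic() - self.last_motion_time
                      >= STATIONARY_THRESHOLD_S)
        if upside_down or stationary:
            reduce_power()                           # e.g., dim the display
```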
G06F 1/3218 - Monitoring of peripheral devices of display devices
G06F 1/3231 - Monitoring the presence, absence or movement of users
G06F 1/3234 - Power management, i.e. event-based initiation of a power-save mode Power saving characterised by the action undertaken
12.
SYSTEMS AND METHODS FOR TEMPORARILY DISABLING USER CONTROL INTERFACES DURING ATTACHMENT OF AN ELECTRONIC DEVICE
Systems and methods of disabling user control interfaces during attachment of a wearable electronic device to a portion of a user's clothing or accessory are disclosed. The wearable electronic device can include inertial measurement units (IMUs), optical sources, optical sensors or electromagnetic sensors. Based on the information provided by the IMUs, optical sources, optical sensors or electromagnetic sensors, an electrical processing and control system can make a determination that the electronic device is being grasped and picked up for attaching to a portion of a user's clothing or accessory or that the electronic device is in the process of being attached to a portion of a user's clothing or accessory and temporarily disable one or more user control interfaces disposed on the outside of the wearable electronic device.
G06V 20/20 - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING Scene-specific elements in augmented reality scenes
G01R 33/07 - Measuring the direction or magnitude of magnetic fields or magnetic flux using galvano-magnetic devices using Hall-effect devices
Head-mounted virtual and augmented reality display systems include a light projector with one or more emissive micro-displays having a first resolution and a pixel pitch. The projector outputs light forming frames of virtual content having at least a portion associated with a second resolution greater than the first resolution. The projector outputs light forming a first subframe of the rendered frame at the first resolution, and parts of the projector are shifted using actuators, such that physical positions of light output for individual pixels occupy gaps between the old locations of light output for individual pixels. The projector then outputs light forming a second subframe of the rendered frame. The first and second subframes are outputted within the flicker fusion threshold. Advantageously, an emissive micro-display (e.g., micro-LED display) having a low resolution can form a frame having a higher resolution by using the same light emitters to function as multiple pixels of that frame.
G02B 27/09 - Beam shaping, e.g. changing the cross-sectional area, not otherwise provided for
G02B 27/18 - Optical systems or apparatus not provided for by any of the groups G02B 1/00 - G02B 26/00, G02B 30/00 - G02B 30/60, for optical projection, e.g. combination of mirror, condenser and objective
G02B 27/40 - Optical focusing aids
G02B 27/62 - Optical apparatus specially adapted for adjusting optical elements during the assembly of optical systems
G09G 3/00 - Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes
G09G 3/32 - Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes, for the presentation of an assembly of a number of characters, e.g. a page, by composing the assembly by combination of individual elements arranged in a matrix using controlled light sources using semiconductor electroluminescent panels, e.g. using light-emitting diodes [LED]
H02N 2/02 - Electric machines in general using piezoelectric effect, electrostriction or magnetostriction producing linear motion, e.g. actuators; Linear positioners
Antireflection coatings for metasurfaces are described herein. In some embodiments, the metasurface may include a substrate, a plurality of nanostructures thereon, and an antireflection coating disposed over the nanostructures. The antireflection coating may be a transparent polymer, for example a photoresist layer, and may have a refractive index lower than the refractive index of the nanostructures and higher than the refractive index of the overlying medium (e.g., air). Advantageously, the antireflection coatings may reduce or eliminate ghost images in an augmented reality display in which the metasurface is incorporated.
G02B 1/00 - OPTICS OPTICAL ELEMENTS, SYSTEMS OR APPARATUS Optical elements characterised by the material of which they are made; Optical coatings for optical elements
G02B 27/00 - Optical systems or apparatus not provided for by any of the groups G02B 1/00 - G02B 26/00, G02B 30/00 - G02B 30/60
G02B 1/111 - Anti-reflection coatings using layers comprising organic materials
An apparatus for use with an image display device configured to be head-worn by a user includes: a screen; and a processing unit configured to assign a first area of the screen to sense finger-action of the user; wherein the processing unit is configured to generate an electronic signal to cause a change in the content displayed by the display device based on the finger-action of the user sensed by the assigned first area of the screen of the apparatus.
An apparatus configured to be worn on a head of a user, includes: a screen configured to present graphics to the user; a camera system configured to view an environment in which the user is located; and a processing unit configured to determine a map based at least in part on output(s) from the camera system, wherein the map is configured for use by the processing unit to localize the user with respect to the environment; wherein the processing unit of the apparatus is also configured to obtain a metric indicating a likelihood of success to localize the user using the map, and wherein the processing unit is configured to obtain the metric by computing the metric or by receiving the metric.
Systems and methods for adaptive frequency hopping for reducing or avoiding electromagnetic interference between two radios operating in the same radio frequency (RF) band are provided. In one aspect, a host device, which includes first and second wireless radios and a hardware processor, can use adaptive frequency hopping among a plurality of RF channels to reduce interference. The device can control the first wireless radio to establish a first wireless connection with a terminal device via a first subset of channels, determine a set of performance statistics for the channels, and replace at least one of the first subset of the channels with a new channel within the plurality of channels based on the statistics. For example, a channel can be replaced if a packet error rate (PER) exceeds a threshold.
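The replacement rule lends itself to a short sketch; the 10% PER threshold and picking the lowest-PER unused channel are illustrative choices, not necessarily the patented policy:

```python
PER_THRESHOLD = 0.10  # assumed packet-error-rate threshold

def update_channel_map(active_channels, all_channels, per_stats):
    # per_stats: dict mapping channel -> measured packet error rate.
    unused = [c for c in all_channels if c not in active_channels]
    for i, ch in enumerate(active_channels):
        if per_stats.get(ch, 0.0) > PER_THRESHOLD and unused:
            best = min(unused, key=lambda c: per_stats.get(c, 0.0))
            unused.remove(best)
            active_channels[i] = best    # swap the noisy channel out
    return active_channels
```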
An augmented reality display system comprises passable world model data comprising a set of map points corresponding to one or more objects of the real world. The augmented reality system also comprises a processor configured to communicate with one or more individual augmented reality display systems to pass a portion of the passable world model data to the one or more individual augmented reality display systems, wherein the portion of the passable world model data is passed based at least in part on respective locations corresponding to the one or more individual augmented reality display systems.
G06T 19/00 - Manipulating 3D models or images for computer graphics
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
A63F 13/56 - Computing the motion of game characters with respect to other game characters, game objects or elements of the game scene, e.g. for simulating the behaviour of a group of virtual soldiers or for path finding
A63F 13/577 - Simulating properties, behaviour or motion of objects in the game scene, e.g. computing tyre load in a racing car game using determination of contact between game characters or objects, e.g. to avoid collision between virtual racing cars
A63F 13/57 - Simulating properties, behaviour or motion of objects in the game scene, e.g. computing tyre load in a racing car game
G06Q 30/02 - Marketing; Price estimation or determination; Fundraising
G06T 7/70 - Determining position or orientation of objects or cameras
H04S 7/00 - Indicating arrangements; Control arrangements, e.g. balance control
G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from the processing unit to the output unit, e.g. interface arrangements
G06T 11/60 - Editing figures and text; Combining figures or text
G02B 27/00 - Optical systems or apparatus not provided for by any of the groups G02B 1/00 - G02B 26/00, G02B 30/00 - G02B 30/60
G06T 13/40 - 3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings
Systems include three optical elements arranged along an optical axis each having a different cylinder axis and a variable cylinder refractive power. Collectively, the three elements form a compound optical element having an overall spherical refractive power (SPH), cylinder refractive power (CYL), and cylinder axis (Axis) that can be varied according to a prescription (Rx).
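One way to compute the overall SPH/CYL/Axis is the standard thin-lens power-vector method (summing M, J0, J45 components); this is a minimal sketch under that assumption, and the patent may use a different formulation:

```python
import math

def combine_cylinders(elements):
    # elements: iterable of (sphere, cylinder, axis_degrees) triples.
    M = J0 = J45 = 0.0
    for sph, cyl, axis_deg in elements:
        a = math.radians(axis_deg)
        M += sph + cyl / 2.0                 # spherical equivalent
        J0 += -(cyl / 2.0) * math.cos(2 * a)
        J45 += -(cyl / 2.0) * math.sin(2 * a)
    cyl_out = -2.0 * math.hypot(J0, J45)
    sph_out = M - cyl_out / 2.0
    axis_out = math.degrees(0.5 * math.atan2(J45, J0)) % 180.0
    return sph_out, cyl_out, axis_out        # overall SPH, CYL, Axis

# Three equal cylinders with axes 60 degrees apart collapse to a pure sphere:
print(combine_cylinders([(0.0, -1.0, 0), (0.0, -1.0, 60), (0.0, -1.0, 120)]))
```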
Systems and methods for displaying a cursor and a focus indicator associated with real or virtual objects in a virtual, augmented, or mixed reality environment by a wearable display device are disclosed. The system can determine a spatial relationship between a user-movable cursor and a target object within the environment. The system may render a focus indicator (e.g., a halo, shading, or highlighting) around or adjacent objects that are near the cursor. When the cursor overlaps with a target object, the system can render the object in front of the cursor (or not render the cursor at all), so the object is not occluded by the cursor. The object can be rendered closer to the user than the cursor. A group of virtual objects can be scrolled, and a virtual control panel can be displayed indicating objects that are upcoming in the scroll.
G06F 3/04817 - Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance, using icons
G06F 3/04812 - Interaction techniques based on cursor appearance or behaviour, e.g. being affected by the presence of displayed objects
G06F 3/04815 - Interaction with a metaphor-based environment or interaction object displayed as three-dimensional, e.g. changing the user's viewpoint with respect to the environment or object
G06F 3/0485 - Scrolling or panning
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
22.
REAL-TIME PREVIEW OF CONNECTABLE OBJECTS IN A PHYSICALLY-MODELED VIRTUAL SPACE
Virtual reality (VR), augmented reality (AR), and/or mixed reality (MR) systems may enable one or more users to connect two or more connectable objects together. These connectable objects may be real objects from the user's environment, virtual objects, or a combination thereof. A preview system may be included as part of the VR, AR, and/or MR systems that provides a preview of the connection between the connectable objects prior to the user(s) connecting the connectable objects. The preview may include a representation of the connectable objects in a connected state along with an indication of whether the connected state is valid or invalid. The preview system may continuously physically model the connectable objects while simultaneously displaying a preview of the connection process to the user of the VR, AR, or MR system.
G06F 3/04815 - Interaction with a metaphor-based environment or interaction object displayed as three-dimensional, e.g. changing the user's viewpoint with respect to the environment or object
G06F 3/14 - Digital output to display device
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
G06F 3/04845 - Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range, for image manipulation, e.g. dragging, rotation, expansion or change of colour
A semiconductor substrate includes a first semiconductor layer, a first dielectric layer coupled to the first semiconductor layer, and a second semiconductor layer coupled to the first dielectric layer. The second semiconductor layer includes a base portion substantially aligned with the first dielectric layer and a cantilever portion protruding from an end of the first dielectric layer. The cantilever portion includes a tapered surface tapering from a bottom surface of the second semiconductor layer toward a top surface of the second semiconductor layer.
Techniques are described for operating an optical system. In some embodiments, light associated with a world object is received at the optical system. Virtual image light is projected onto an eyepiece of the optical system. A portion of a system field of view of the optical system to be at least partially dimmed is determined based on information detected by the optical system. A plurality of spatially-resolved dimming values for the portion of the system field of view may be determined based on the detected information. The detected information may include light information, gaze information, and/or image information. A dimmer of the optical system may be adjusted to reduce an intensity of light associated with the world object in the portion of the system field of view according to the plurality of dimming values.
Described are improved systems and methods for navigation and manipulation of interactable objects in a 3D mixed reality environment. Improved systems and methods are provided to implement physical manipulation for creation and placement of interactable objects, such as browser windows and wall hangings. A method includes receiving data indicating a selection of an interactable object contained within a first prism at the start of a user interaction. The method also includes receiving data indicating an end of the user interaction with the interactable object. The method further includes receiving data indicating a physical movement of the user corresponding to removing the interactable object from the first prism between the start and the end of the user interaction. Moreover, the method includes creating a second prism to contain the data associated with the interactable object at the end of the user interaction with the interactable object.
G06T 19/20 - Manipulating 3D models or images for computer graphics Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
G06F 3/0346 - Pointing devices displaced or positioned by the user; Accessories therefor with detection of the device orientation or free movement in a 3D space, e.g. 3D mice, 6-DOF [six degrees of freedom] pointers using gyroscopes, accelerometers or tilt sensors
G06T 19/00 - Manipulating 3D models or images for computer graphics
26.
ANGULARLY SELECTIVE ATTENUATION OF LIGHT TRANSMISSION ARTIFACTS IN WEARABLE DISPLAYS
A wearable display system includes an eyepiece stack having a world side and a user side opposite the world side. During use, a user positioned on the user side views displayed images delivered by the wearable display system via the eyepiece stack, which augment the user's field of view of the user's environment. The system also includes an optical attenuator arranged on the world side of the eyepiece stack, the optical attenuator having a layer of a birefringent material having a plurality of domains, each having a principal optic axis oriented in a corresponding direction different from the directions of the other domains. Each domain of the optical attenuator reduces transmission of visible light incident on the optical attenuator for a corresponding different range of angles of incidence.
G02B 27/28 - Optical systems or apparatus not provided for by any of the groups G02B 1/00 - G02B 26/00, G02B 30/00 - G02B 30/60, for polarising
G02F 1/13363 - Structural association of cells with optical devices, e.g. polarisers or reflectors Birefringent elements, e.g. for optical compensation
G02F 1/1337 - Surface-induced orientation of the liquid crystal molecules, e.g. by alignment layers
G02F 1/01 - Devices or arrangements for the control of the intensity, colour, phase, polarisation or direction of light arriving from an independent light source, e.g. switching, gating or modulating; Non-linear optics for the control of the intensity, phase, polarisation or colour
G02F 1/1335 - Structural association of cells with optical devices, e.g. polarisers or reflectors
G02F 1/139 - Devices or arrangements for the control of the intensity, colour, phase, polarisation or direction of light arriving from an independent light source, e.g. switching, gating or modulating; Non-linear optics for the control of the intensity, phase, polarisation or colour based on liquid crystals, e.g. single liquid crystal display cells, characterised by the electro-optical or magneto-optical effect, e.g. field-induced phase transition, orientation effect, guest-host interaction or dynamic scattering based on orientation effects in which the liquid crystals remain transparent
Systems and methods are provided for interpolation of disparate inputs. A radial basis function neural network (RBFNN) may be used to interpolate the pose of a digital character. Input parameters to the RBFNN may be separated by data type (e.g. angular vs. linear) and manipulated within the RBFNN by distance functions specific to the data type (e.g. use an angular distance function for the angular input data). A weight may be applied to each distance to compensate for input data representing different variables (e.g. clavicle vs. shoulder). The output parameters of the RBFNN may be a set of independent values, which may be combined into combination values (e.g. representing x, y, z, w angular value in SO(3) space).
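An illustrative NumPy version of radial-basis blending with type-specific distance functions and per-input weights; the Gaussian kernel and the normalization are assumptions:

```python
import numpy as np

def angular_dist(a, b):
    d = np.abs(a - b) % (2 * np.pi)          # shortest arc between angles
    return np.minimum(d, 2 * np.pi - d)

def linear_dist(a, b):
    return np.abs(a - b)

def rbf_interpolate(query, samples, outputs, kinds, weights, sigma=1.0):
    # samples: (N, D) example inputs; outputs: (N, K) pose values to blend;
    # kinds: per-dimension "angular" or "linear"; weights: per-dimension scale.
    dists = np.zeros(len(samples))
    for j, kind in enumerate(kinds):
        fn = angular_dist if kind == "angular" else linear_dist
        dists += weights[j] * fn(query[j], samples[:, j]) ** 2
    act = np.exp(-dists / (2 * sigma ** 2))  # Gaussian RBF activations
    act /= act.sum()                         # normalized blend weights
    return act @ outputs                     # interpolated output parameters
```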
An example head-mounted display device includes a light projector and an eyepiece. The eyepiece includes a light guiding layer and a first focusing optical element. The first focusing optical element includes a first region having a first optical power, and a second region having a second optical power different from the first optical power. The light guiding layer is configured to: i) receive light from the light projector, ii) direct at least a first portion of the light to a user’s eye through the first region to present a first virtual image to the user at a first focal distance, and iii) direct at least a second portion of the light to the user’s eye through the second region to present a second virtual image to the user at a second focal distance.
A user may interact with and view virtual elements, such as avatars and objects, and/or real-world elements in three-dimensional space in an augmented reality (AR) session. The system may allow one or more spectators to view, from a stationary or dynamic camera, a third-person view of the user's AR session. The third-person view may be synchronized with the user view, and the virtual elements of the user view may be composited onto the third-person view.
Enhanced eye-tracking techniques for augmented or virtual reality display systems. An example method includes obtaining an image of an eye of a user of a wearable system, the image depicting glints on the eye caused by respective light emitters, wherein the image is a low dynamic range (LDR) image; generating a high dynamic range (HDR) image via computation of a forward pass of a machine learning model using the image; determining location information associated with the glints as depicted in the HDR image, wherein the location information is usable to inform an eye pose of the eye.
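A hedged PyTorch sketch of the pipeline shape: one forward pass of a learned LDR-to-HDR model, followed by a naive brightness-threshold localizer standing in for the described glint-location step:

```python
import torch

@torch.no_grad()
def locate_glints(ldr_image, hdr_model, threshold=0.95):
    # ldr_image: (C, H, W) tensor; hdr_model: assumed learned LDR->HDR network.
    hdr = hdr_model(ldr_image.unsqueeze(0)).squeeze(0)   # single forward pass
    intensity = hdr.mean(dim=0)                          # (H, W) luminance
    mask = intensity > threshold * intensity.max()       # bright glint pixels
    ys, xs = torch.nonzero(mask, as_tuple=True)
    return torch.stack([xs, ys], dim=1)                  # candidate locations
```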
Blazed diffraction gratings provide optical elements in head-mounted display systems to, e.g., incouple light into or out-couple light out of a waveguide. These blazed diffraction gratings may be configured to have reduced polarization sensitivity. Such gratings may, for example, incouple or outcouple light of different polarizations with similar level of efficiency. The blazed diffraction gratings and waveguides may be formed in a high refractive index substrate such as lithium niobate. In some implementations, the blazed diffraction gratings may include diffractive features having a feature height of 40 nm to 120 nm, for example, 80 nm. The diffractive features may be etched into the high index substrate, e.g., lithium niobate.
One embodiment is directed to a system for enabling two or more users to interact within a virtual world comprising virtual world data, comprising a computer network comprising one or more computing devices, the one or more computing devices comprising memory, processing circuitry, and software stored at least in part in the memory and executable by the processing circuitry to process at least a portion of the virtual world data; wherein at least a first portion of the virtual world data originates from a first user virtual world local to a first user, and wherein the computer network is operable to transmit the first portion to a user device for presentation to a second user, such that the second user may experience the first portion from the location of the second user, such that aspects of the first user virtual world are effectively passed to the second user.
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
G06T 7/73 - Determining position or orientation of objects or cameras using feature-based methods
H04L 67/131 - Protocols for games, networked simulations or virtual reality
G06T 1/20 - Processor architectures; Processor configuration, e.g. pipelining
G09G 5/00 - Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
G02B 27/00 - Optical systems or apparatus not provided for by any of the groups G02B 1/00 - G02B 26/00, G02B 30/00 - G02B 30/60
H04L 67/10 - Protocols in which an application is distributed across nodes in the network
33.
VIRTUAL, AUGMENTED, AND MIXED REALITY SYSTEMS AND METHODS
A method for determining a focal point depth of a user of a three-dimensional ("3D") display device includes tracking a first gaze path of the user. The method also includes analyzing 3D data to identify one or more virtual objects along the first gaze path of the user. The method further includes, when only one virtual object intersects the first gaze path of the user, identifying a depth of the only one virtual object as the focal point depth of the user.
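A minimal sketch of the single-intersection rule, representing virtual objects as bounding spheres; the geometry representation and the closest-approach test are assumptions:

```python
import numpy as np

def focal_point_depth(gaze_origin, gaze_dir, objects):
    # objects: list of (center, radius) bounding spheres. Returns depth or None.
    hits = []
    d = gaze_dir / np.linalg.norm(gaze_dir)
    for center, radius in objects:
        oc = np.asarray(center) - gaze_origin
        t = float(np.dot(oc, d))                    # closest approach along ray
        if t > 0 and np.linalg.norm(oc - t * d) <= radius:
            hits.append(t)
    return hits[0] if len(hits) == 1 else None      # only-one-object case
```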
Methods and systems for depth-based foveated rendering in the display system are disclosed. The display system may be an augmented reality display system configured to provide virtual content on a plurality of depth planes using different wavefront divergence. Some embodiments include monitoring eye orientations of a user of a display system based on detected sensor information. A fixation point is determined based on the eye orientations, the fixation point representing a three-dimensional location with respect to a field of view. Location information of virtual objects to present is obtained, with the location information indicating three-dimensional positions of the virtual objects. Resolutions of at least one virtual object is adjusted based on a proximity of the at least one virtual object to the fixation point. The virtual objects are presented to a user by display system with the at least one virtual object being rendered according to the adjusted resolution.
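A sketch of the proximity rule in isolation; the particular falloff curve, radius, and resolution floor are assumptions:

```python
import numpy as np

def render_scale(object_pos, fixation_point, full_res_radius=0.1,
                 min_scale=0.25, falloff=2.0):
    # Distance of the virtual object from the 3D fixation point.
    dist = np.linalg.norm(np.asarray(object_pos) - np.asarray(fixation_point))
    if dist <= full_res_radius:
        return 1.0                                  # full resolution near fixation
    scale = 1.0 / (1.0 + falloff * (dist - full_res_radius))
    return max(scale, min_scale)                    # reduced peripheral resolution
```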
G02B 27/00 - Optical systems or apparatus not provided for by any of the groups G02B 1/00 - G02B 26/00, G02B 30/00 - G02B 30/60
G06T 19/00 - Manipulating 3D models or images for computer graphics
H04N 13/279 - Image signal generators from 3D object models, e.g. computer-generated stereoscopic image signals, the virtual viewpoint locations being selected by the viewers or determined by tracking
H04N 13/341 - Displays for viewing with the aid of special glasses or head-mounted displays [HMD] using temporal multiplexing
H04N 13/395 - Volumetric displays, i.e. systems where the image is built up from picture elements distributed through a volume with depth sampling, i.e. the volume being constructed from a stack or sequence of 2D image planes
H04N 13/383 - Tracking viewers for tracking gaze, i.e. detecting the axis of vision of the viewer's eyes
H04N 13/344 - Displays for viewing with the aid of special glasses or head-mounted displays [HMD] with head-mounted left-right displays
A computer implemented method for updating a point map on a system having first and second communicatively coupled hardware components includes the first component performing a first process on the point map, in a first state to generate a first change. The method also includes the second component performing a second process on the point map, in the first state to generate a second change. The method further includes the second component applying the second change to the point map, in the first state to generate a first updated point map in a second state. Moreover, the method includes the first component sending the first change to the second component. In addition, the method includes the second component applying the first change to the first updated point map in the second state to generate a second updated point map in a third state.
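A toy sketch of that state sequence, modeling a change as a dictionary of updated points; the representation and the conflict-free merge are assumptions:

```python
class PointMap:
    def __init__(self, points=None):
        self.points = dict(points or {})   # point id -> 3D position
        self.state = 1

    def apply(self, change):
        self.points.update(change)         # merge changed/added points
        self.state += 1

first_change = {"p1": (0.1, 0.0, 0.0)}     # from the first component, on state 1
second_change = {"p2": (0.0, 0.2, 0.0)}    # from the second component, on state 1

pm = PointMap({"p0": (0.0, 0.0, 0.0)})     # point map in the first state
pm.apply(second_change)                    # second component: state 1 -> state 2
pm.apply(first_change)                     # after the first change is sent: -> 3
assert pm.state == 3
```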
A virtual image generation system for use by an end user comprises memory, a display subsystem, an object selection device configured for receiving input from the end user and persistently selecting at least one object in response to the end user input, and a control subsystem configured for rendering a plurality of image frames of a three-dimensional scene, conveying the image frames to the display subsystem, generating audio data originating from the at least one selected object, and for storing the audio data within the memory.
G06T 19/00 - Manipulating 3D models or images for computer graphics
H04R 1/32 - Arrangements for obtaining desired frequency or directional characteristics for obtaining desired directional characteristic only
G06F 3/04842 - Selection of displayed objects or displayed text elements
G06F 3/04815 - Interaction with a metaphor-based environment or interaction object displayed as three-dimensional, e.g. changing the user's viewpoint with respect to the environment or object
A63F 13/5372 - Controlling the output signals based on the game progress involving additional visual information provided to the game scene, e.g. by overlay to simulate a HUD [head-up display] or to display a laser sight in a shooting game, using indicators, e.g. showing the condition of a game character on screen, for tagging characters, objects or locations in the game scene, e.g. displaying a circle under the character controlled by the player
A63F 13/65 - Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor, automatically by game devices or servers from real world data, e.g. measurement in live racing competition
A63F 13/5255 - Changing parameters of virtual cameras according to dedicated instructions from a player, e.g. using a secondary joystick to rotate the camera around a player's character
G02B 27/00 - Optical systems or apparatus not provided for by any of the groups G02B 1/00 - G02B 26/00, G02B 30/00 - G02B 30/60
A63F 13/00 - Video games, i.e. games using an electronically generated display having two or more dimensions
H04R 1/40 - Arrangements for obtaining desired frequency or directional characteristics for obtaining desired directional characteristic only by combining a number of identical transducers
A63F 13/54 - Controlling the output signals based on the game progress involving acoustic signals, e.g. for simulating revolutions per minute [RPM] dependent engine sounds in a driving game or reverberation against a virtual wall
A63F 13/424 - Processing input control signals of video game devices, e.g. signals generated by the player or derived from the environment, by mapping the input signals into game commands, e.g. mapping the displacement of a stylus on a touch screen to the steering angle of a virtual vehicle, involving acoustic input signals, e.g. by using the results of pitch or rhythm extraction or voice recognition
37.
Display panel or portion thereof with a transitional mixed reality graphical user interface
A method of operating a dynamic eyepiece in an augmented reality headset includes producing first virtual content associated with a first depth plane, coupling the first virtual content into the dynamic eyepiece, and projecting the first virtual content through one or more waveguide layers of the dynamic eyepiece to an eye of a viewer. The one or more waveguide layers are characterized by a first surface profile. The method also includes modifying the one or more waveguide layers to be characterized by a second surface profile different from the first surface profile, producing second virtual content associated with a second depth plane, coupling the second virtual content into the dynamic eyepiece, and projecting the second virtual content through the one or more waveguide layers of the dynamic eyepiece to the eye of the viewer.
G02B 30/52 - Optical systems or apparatus for producing three-dimensional [3D] effects, e.g. stereoscopic images, the image being built up from picture elements distributed over a 3D volume, e.g. voxels, the 3D volume being constructed from a stack or sequence of 2D planes, e.g. depth sampling systems
G02B 27/00 - Optical systems or apparatus not provided for by any of the groups G02B 1/00 - G02B 26/00, G02B 30/00 - G02B 30/60
39.
APPARATUS FOR OPTICAL SEE-THROUGH HEAD MOUNTED DISPLAY WITH MUTUAL OCCLUSION AND OPAQUENESS CONTROL CAPABILITY
The present invention comprises a compact optical see-through head-mounted display capable of combining a see-through image path with a virtual image path such that the opaqueness of the see-through image path can be modulated and the virtual image occludes parts of the see-through image and vice versa.
G02B 26/00 - Optical devices or arrangements for the control of light using movable or deformable optical elements
G02B 13/06 - Panoramic objectives; So-called "sky lenses"
G02B 27/10 - Beam-splitting or beam-combining systems
G02B 27/14 - Beam-splitting or beam-combining systems operating by reflection only
G02B 27/28 - Optical systems or apparatus not provided for by any of the groups G02B 1/00 - G02B 26/00, G02B 30/00 - G02B 30/60, for polarising
G03B 37/02 - Panoramic or wide-screen photography; Photographing extended surfaces, e.g. for surveying; Photographing internal surfaces, e.g. of pipes, with scanning movement of lens or camera
H04N 23/45 - Cameras or camera modules comprising electronic image sensors; Control thereof for generating image signals from two or more image sensors being of different type or operating in different modes, e.g. with a CMOS sensor for moving images in combination with a charge-coupled device [CCD] for still images
H04N 23/698 - Control of cameras or camera modules to achieve an enlarged field of view, e.g. panoramic image capture
An augmented reality display system. The system can include a first eyepiece waveguide with a first input coupling grating (ICG) region. The first ICG region can receive a set of input beams of light corresponding to an input image having a corresponding field of view (FOV), and can in-couple a first subset of the input beams. The first subset of input beams can correspond to a first sub-portion of the FOV. The system can also include a second eyepiece waveguide with a second ICG region. The second ICG region can receive and in-couple at least a second subset of the input beams. The second subset of the input beams can correspond to a second sub-portion of the FOV. The first and second sub-portions of the FOV can be at least partially different but together include the complete FOV of the input image.
A system includes: a screen configured for wear by a user, the screen configured to display a 2-dimensional (2D) element; a processing unit coupled to the screen; and a user input device configured to generate a signal in response to a user input for selecting the 2D element displayed by the screen; wherein the processing unit is configured to obtain a 3-dimensional (3D) model associated with the 2D element in response to the generated signal.
G06F 3/04815 - Interaction with a metaphor-based environment or interaction object displayed as three-dimensional, e.g. changing the user's viewpoint with respect to the environment or object
G06F 3/04842 - Selection of displayed objects or displayed text elements
G06F 3/04812 - Interaction techniques based on cursor appearance or behaviour, e.g. being affected by the presence of displayed objects
G06T 19/00 - Manipulating 3D models or images for computer graphics
G06T 19/20 - Manipulating 3D models or images for computer graphics Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
G06F 9/451 - Execution arrangements for user interfaces
G06T 17/00 - Three-dimensional [3D] modelling for computer graphics
42.
METHODS, DEVICES, AND SYSTEMS FOR ILLUMINATING SPATIAL LIGHT MODULATORS
An optical device may include a light turning element. The optical device can include a first surface that is parallel to a horizontal axis and a second surface opposite to the first surface. The optical device may include a light module that includes a plurality of light emitters. The light module can be configured to combine light from the plurality of emitters. The optical device can further include a light input surface that is between the first and the second surfaces and is disposed with respect to the light module to receive light emitted from the plurality of emitters. The optical device may include an end reflector that is disposed on a side opposite the light input surface. The light coupled into the light turning element may be reflected by the end reflector and/or reflected from the second surface towards the first surface.
G02B 27/14 - Beam-splitting or beam-combining systems operating by reflection only
G03B 21/00 - Projectors or projection-type viewers; Accessories therefor
G06T 19/00 - Manipulating 3D models or images for computer graphics
G09G 3/24 - Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes, for the presentation of an assembly of a number of characters, e.g. a page, by composing the assembly by combination of individual elements arranged in a matrix using controlled light sources using incandescent filaments
G02B 30/26 - Optical systems or apparatus for producing three-dimensional [3D] effects, e.g. stereoscopic images, by providing first and second parallax images to an observer's left and right eyes, of the autostereoscopic type
G02B 27/00 - Optical systems or apparatus not provided for by any of the groups G02B 1/00 - G02B 26/00, G02B 30/00 - G02B 30/60
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
G02B 5/30 - OPTICS OPTICAL ELEMENTS, SYSTEMS OR APPARATUS Optical elements other than lenses Polarising elements
F21V 8/00 - Use of light guides, e.g. optical fibre devices, in lighting devices or systems
G02F 1/137 - Devices or arrangements for the control of the intensity, colour, phase, polarisation or direction of light arriving from an independent light source, e.g. switching, gating or modulating; Non-linear optics for the control of the intensity, phase, polarisation or colour based on liquid crystals, e.g. single liquid crystal display cells, characterised by the electro-optical or magneto-optical effect, e.g. field-induced phase transition, orientation effect, guest-host interaction or dynamic scattering
Systems and methods for providing accurate and independent control of reverberation properties are disclosed. In some embodiments, a system may include a reverberation processing system, a direct processing system, and a combiner. The reverberation processing system can include a reverb initial power (RIP) control system and a reverberator. The RIP control system can include a reverb initial gain (RIG) and a RIP corrector. The RIG can be configured to apply a RIG value to the input signal, and the RIP corrector can be configured to apply a RIP correction factor to the signal from the RIG. The reverberator can be configured to apply reverberation effects to the signal from the RIP control system. In some embodiments, one or more values and/or correction factors can be calculated and applied such that the signal output from a component in the reverberation processing system is normalized to a predetermined value (e.g., unity (1.0)).
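A hedged sketch of the RIG/RIP chain normalizing the reverberator input to unit power; the power estimate and the correction rule are assumptions, not the patented formulation:

```python
import numpy as np

def rip_control(signal, rig_value):
    x = rig_value * signal                     # apply the RIG value
    power = np.mean(x ** 2)
    correction = 1.0 / np.sqrt(power) if power > 0 else 1.0
    return correction * x                      # normalized reverb input

x = np.random.randn(48000).astype(np.float32)  # one second of audio at 48 kHz
y = rip_control(x, rig_value=0.5)
print(round(float(np.mean(y ** 2)), 3))        # ~1.0 after the RIP correction
```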
A method of operating a virtual image generation system comprises allowing an end user to interact with a three-dimensional environment comprising at least one virtual object, presenting a stimulus to the end user in the context of the three-dimensional environment, sensing at least one biometric parameter of the end user in response to the presentation of the stimulus to the end user, generating biometric data for each of the sensed biometric parameter(s), determining if the end user is in at least one specific emotional state based on the biometric data for each of the sensed biometric parameter(s), and performing an action discernible to the end user to facilitate a current objective based at least in part on whether the end user is determined to be in the specific emotional state(s).
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
G06F 16/56 - Information retrieval; Database structures therefor; File system structures therefor of still image data in vector format
G06F 16/58 - Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
A63F 13/212 - Input arrangements for video game devices characterised by their sensors, purposes or types using sensors worn by the player, e.g. for measuring heart beat or leg activity
G06F 3/03 - Arrangements for converting the position or the displacement of a member into a coded form
G06V 20/20 - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING Scene-specific elements in augmented reality scenes
G06V 40/16 - Human faces, e.g. facial parts, sketches or expressions
A63F 13/21 - Input arrangements for video game devices characterised by their sensors, purposes or types
A63F 13/52 - Controlling the output signals based on the game progress involving aspects of the displayed game scene
A63F 13/65 - Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor, automatically by game devices or servers from real world data, e.g. measurement in live racing competition
G06T 19/00 - Manipulating 3D models or images for computer graphics
A method is disclosed, the method comprising the steps of identifying a first real object in a mixed reality environment, the mixed reality environment having a user; identifying a second real object in the mixed reality environment; generating, in the mixed reality environment, a first virtual object corresponding to the second real object; identifying, in the mixed reality environment, a collision between the first real object and the first virtual object; determining a first attribute associated with the collision; determining, based on the first attribute, a first audio signal corresponding to the collision; and presenting to the user, via one or more speakers, the first audio signal.
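As a toy illustration of the attribute-to-audio step, a lookup from a collision attribute (here, a material label, which is an assumption) to an audio signal and gain:

```python
def audio_for_collision(collision_attribute, impact_speed):
    table = {
        "wood": "thud.wav",      # hypothetical sample names
        "metal": "clang.wav",
        "glass": "chime.wav",
    }
    sample = table.get(collision_attribute, "generic_impact.wav")
    gain = min(1.0, impact_speed / 5.0)   # louder for faster impacts
    return sample, gain                   # handed to the speaker path

print(audio_for_collision("metal", impact_speed=2.0))
```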
G06T 19/00 - Manipulating 3D models or images for computer graphics
G06T 7/70 - Determining position or orientation of objects or cameras
G10H 1/00 - ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE Details of electrophonic musical instruments
H04S 7/00 - Indicating arrangements; Control arrangements, e.g. balance control
46.
INTERACTIONS WITH 3D VIRTUAL OBJECTS USING POSES AND MULTIPLE-DOF CONTROLLERS
A wearable system can comprise a display system configured to present virtual content in a three-dimensional space, a user input device configured to receive a user input, and one or more sensors configured to detect a user's pose. The wearable system can support various user interactions with objects in the user's environment based on contextual information. As an example, the wearable system can adjust the size of an aperture of a virtual cone during a cone cast (e.g., with the user's poses) based on the contextual information. As another example, the wearable system can adjust the amount of movement of virtual objects associated with an actuation of the user input device based on the contextual information.
G06T 19/00 - Manipulating 3D models or images for computer graphics
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
G06F 3/0346 - Pointing devices displaced or positioned by the user; Accessories therefor with detection of the device orientation or free movement in a 3D space, e.g. 3D mice, 6-DOF [six degrees of freedom] pointers using gyroscopes, accelerometers or tilt sensors
A virtual image generation system comprises a planar optical waveguide having opposing first and second faces, an in-coupling (IC) element configured for optically coupling a collimated light beam from an image projection assembly into the planar optical waveguide as an in-coupled light beam, a first orthogonal pupil expansion (OPE) element associated with the first face of the planar optical waveguide for splitting the in-coupled light beam into a first set of orthogonal light beamlets, a second orthogonal pupil expansion (OPE) element associated with the second face of the planar optical waveguide for splitting the in-coupled light beam into a second set of orthogonal light beamlets, and an exit pupil expansion (EPE) element associated with the planar optical waveguide for splitting the first and second sets of orthogonal light beamlets into an array of out-coupled light beamlets that exit the planar optical waveguide.
G02B 26/08 - Optical devices or arrangements for the control of light using movable or deformable optical elements for controlling the direction of light
48.
GHOST IMAGE MITIGATION IN SEE-THROUGH DISPLAYS WITH PIXEL ARRAYS
A head-mounted apparatus includes an eyepiece that includes a variable dimming assembly, and a frame mounting the eyepiece so that a user side of the eyepiece faces towards a user and a world side of the eyepiece opposite the user side faces away from the user. The dynamic dimming assembly selectively modulates an intensity of light transmitted parallel to an optical axis from the world side to the user side during operation. The dynamic dimming assembly includes: a variable birefringence cell having multiple pixels each having an independently variable birefringence; a first linear polarizer arranged on the user side of the variable birefringence cell, the first linear polarizer being configured to transmit light propagating parallel to the optical axis linearly polarized along a pass axis of the first linear polarizer orthogonal to the optical axis; a quarter wave plate arranged between the variable birefringence cell and the first linear polarizer, a fast axis of the quarter wave plate being arranged relative to the pass axis of the first linear polarizer to transform linearly polarized light transmitted by the first linear polarizer into circularly polarized light; and a second linear polarizer on the world side of the variable birefringence cell.
G02B 27/28 - Optical systems or apparatus not provided for by any of the groups G02B 1/00 - G02B 26/00, G02B 30/00 - G02B 30/60, for polarising
G02F 1/1335 - Structural association of cells with optical devices, e.g. polarisers or reflectors
G02F 1/13363 - Structural association of cells with optical devices, e.g. polarisers or reflectors Birefringent elements, e.g. for optical compensation
G02F 1/139 - Devices or arrangements for the control of the intensity, colour, phase, polarisation or direction of light arriving from an independent light source, e.g. switching, gating or modulating; Non-linear optics for the control of the intensity, phase, polarisation or colour based on liquid crystals, e.g. single liquid crystal display cells, characterised by the electro-optical or magneto-optical effect, e.g. field-induced phase transition, orientation effect, guest-host interaction or dynamic scattering based on orientation effects in which the liquid crystals remain transparent
49.
THERMAL MANAGEMENT SYSTEM FOR PORTABLE ELECTRONIC DEVICES
A wearable electronic device is disclosed. The device can include a support structure and an electronic component disposed in or on the support structure. A heat exchanger element can be thermally coupled with the electronic component, the heat exchanger element comprising a fluid inlet port and a fluid outlet port. A first conduit can be fluidly connected to the fluid inlet port of the heat exchanger, the first conduit configured to convey, to the heat exchanger, liquid at a first temperature. A second conduit can be fluidly connected to the fluid outlet port of the heat exchanger, the second conduit configured to convey, away from the heat exchanger, liquid at a second temperature different from the first temperature.
Disclosed herein are systems and methods for storing, organizing, and maintaining acoustic data for mixed reality systems. A system may include one or more sensors of a head-wearable device, a speaker of the head-wearable device, and one or more processors. A method performed by the one or more processors may include receiving a request to present an audio signal. An environment may be identified via the one or more sensors of the head-wearable device. One or more audio model components associated with the environment may be retrieved. A first audio model may be generated based on the audio model components. A second audio model may be generated based on the first audio model. A modified audio signal may be determined based on the second audio model and based on the request to present an audio signal. The modified audio signal may be presented via the speaker of the head-wearable device.
A diffractive waveguide stack includes first, second, and third diffractive waveguides for guiding light in first, second, and third visible wavelength ranges, respectively. The first diffractive waveguide includes a first material having first refractive index at a selected wavelength and a first target refractive index at a midpoint of the first visible wavelength range. The second diffractive waveguide includes a second material having a second refractive index at the selected wavelength and a second target refractive index at a midpoint of the second visible wavelength range. The third diffractive waveguide includes a third material having a third refractive index at the selected wavelength and a third target refractive index at a midpoint of the third visible wavelength range. A difference between any two of the first target refractive index, the second target refractive index, and the third target refractive index is less than 0.005 at the selected wavelength.
Systems, methods, and computer program products for displaying virtual content with a wearable display device. In response to identifying a first change from a first field of view to a second field of view, the device determines, based at least in part upon one or more attributes or criteria, one or more first virtual content elements and one or more second virtual content elements from the set that match one or more first surfaces within the first field of view; determines one or more second surfaces within the second field of view based at least in part upon the attributes or criteria, the second surfaces matching the second virtual content elements; moves the first virtual content elements from the first surfaces to the second surfaces; and maintains the second virtual content elements with respect to the first surfaces while the first field of view has been changed into the second field of view.
G06F 3/147 - Digital output to display device using display panels
G09G 5/38 - Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators characterised by display of individual graphic patterns using a bit-mapped memory with means for controlling the display position
G09G 5/373 - Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators characterised by display of individual graphic patterns using a bit-mapped memory; Details of the processing of graphic patterns for modifying the size of the graphic pattern
G06T 7/70 - Determining position or orientation of objects or cameras
G06V 10/22 - Image preprocessing by selection of a specific region containing or referencing a pattern; Locating or processing of specific regions to guide the detection or recognition
G06V 10/94 - Hardware or software architectures specially adapted for image or video understanding
G06V 20/20 - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING; Scene-specific elements in augmented reality scenes
Devices are described for high-accuracy displacement of tools. In particular, embodiments provide a device for adjusting a position of a tool, such as a camera. The device includes a threaded shaft having a first end, a second end, and a shaft axis extending from the first end to the second end, and a motor that actuates the threaded shaft to move in a direction of the shaft axis. In some examples, the motor is operatively coupled to the threaded shaft. The device includes a carriage coupled to the camera, and a bearing assembly coupled to the threaded shaft and the carriage. In some examples, the bearing assembly permits movement of the carriage with respect to the threaded shaft. The movement of the carriage allows the position of the camera to be adjusted.
H04N 23/695 - Control of camera direction for changing a field of view, e.g. by panning, tilting or based on tracking of objects
A distributed cross reality system efficiently and accurately compares location information that includes image frames. Each of the frames may be represented as a numeric descriptor that enables identification of frames with similar content. The resolution of the descriptors may vary for different computing devices in the distributed system based on the degree of ambiguity in image comparisons and/or the computing resources of the device. A cloud-based component operating on maps of large areas, where comparisons can result in ambiguous identification of multiple image frames, may use high resolution descriptors; high resolution descriptors reduce computationally intensive disambiguation processing. A portable device, which is more likely to operate on smaller maps and less likely to have the computational resources to compute a high resolution descriptor, may use a lower resolution descriptor.
G06V 10/46 - Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
G06V 20/20 - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING; Scene-specific elements in augmented reality scenes
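The resolution trade-off described in this entry can be made concrete. The sketch below is a minimal Python illustration; the two resolutions, the map-size threshold, and the function names are assumptions for exposition, not values from the patent.

    import numpy as np

    def descriptor_resolution(is_cloud_component: bool, map_frame_count: int) -> int:
        # Assumed policy: large maps risk ambiguous matches, so cloud
        # components searching them use high resolution descriptors, while
        # resource-constrained portable devices use lower resolution ones.
        if is_cloud_component or map_frame_count > 10_000:
            return 1024
        return 256

    def closest_frame(query: np.ndarray, stored: np.ndarray) -> int:
        # Identify the stored frame whose numeric descriptor is most
        # similar to the query descriptor (smallest Euclidean distance).
        return int(np.argmin(np.linalg.norm(stored - query, axis=1)))

Higher-dimensional descriptors separate visually similar frames more cleanly, which is why the sketch reserves them for the component that searches the largest candidate set.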
55.
LIGHT-EMITTING USER INPUT DEVICE FOR CALIBRATION OR PAIRING
A light emitting user input device can include a touch sensitive portion configured to accept user input (e.g., from a user's thumb) and a light emitting portion configured to output a light pattern. The light pattern can be used to assist the user in interacting with the user input device. Examples include emulating a multi-degree-of-freedom controller, indicating scrolling or swiping actions, indicating presence of objects nearby the device, indicating receipt of notifications, assisting pairing the user input device with another device, or assisting calibrating the user input device. The light emitting user input device can be used to provide user input to a wearable device, such as, e.g., a head mounted display device.
G06F 3/04815 - Interaction taking place within an environment based on metaphors or objects with a three-dimensional display, e.g. changing the user's viewpoint with respect to the environment or object
G06T 19/00 - Manipulating 3D models or images for computer graphics
G06F 3/041 - Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means
G06F 3/038 - Control and interface arrangements therefor, e.g. drivers or device-incorporated interface circuits
G06V 10/145 - Optical characteristics of the apparatus performing the acquisition or of the illumination arrangements; Illumination specially adapted for pattern recognition, e.g. using gratings
G06F 3/04886 - Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. gestures as a function of the pressure exerted, using a touch-screen or digitiser, e.g. input of commands through traced gestures, by partitioning the display area of the touch-screen or the surface of the digitising tablet into independently controllable areas, e.g. virtual keyboards or menus
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
G06V 20/20 - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING; Scene-specific elements in augmented reality scenes
G06F 3/0354 - Pointing devices displaced or positioned by the user; Accessories therefor with detection of relative movements in two dimensions [2D] between the pointing device, or an operating part thereof, and a plane or surface, e.g. 2D mice, trackballs, pens or pucks
G06F 3/02 - Input arrangements using manually operated switches, e.g. keyboards or dials
G06V 40/18 - Eye characteristics, e.g. of the iris
G09G 3/32 - Control arrangements or circuits, of interest only to the display using visual indicators other than cathode-ray tubes, for presentation of an assembly of a number of characters, e.g. a page, by composing the assembly by combination of individual elements arranged in a matrix, using controlled light sources, using semiconductor electroluminescent panels, e.g. using light-emitting diodes [LED]
G06F 3/04847 - Interaction techniques to control parameter settings, e.g. interaction with sliders or dials
G06F 3/03 - Arrangements for converting the position or the displacement of a member into a coded form
G06F 3/0346 - Pointing devices displaced or positioned by the user; Accessories therefor with detection of the device orientation or free movement in a 3D space, e.g. 3D mice, 6-DOF [six degrees of freedom] pointing devices, using gyroscopes, accelerometers or tilt sensors
G06F 3/04883 - Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. gestures as a function of the pressure exerted, using a touch-screen or digitiser, e.g. input of commands through traced gestures, for entering handwritten data, e.g. gestures or text
H04W 4/80 - Services using short-range communication, e.g. near-field communication, radio-frequency identification or low-energy communication
An eyepiece for use in front of an eye of a viewer includes a waveguide having a surface and a diffractive optical element (DOE) optically coupled to the waveguide. The DOE includes a plurality of first ridges protruding from the surface of the waveguide and arranged as a periodic array having a period, wherein each respective first ridge has a first height and a respective first width. The DOE also includes a plurality of second ridges, each respective second ridge protruding from a respective first ridge and having a second height greater than the first height and a respective second width less than the respective first width. At least one of the respective first width, the respective second width, or a respective ratio between the respective first width and the respective second width varies as a function of a distance from a first edge of the DOE.
A method for measuring performance of a head-mounted display module, the method including arranging the head-mounted display module relative to a plenoptic camera assembly so that an exit pupil of the head-mounted display module coincides with a pupil of the plenoptic camera assembly; emitting light from the head-mounted display module while the head-mounted display module is arranged relative to the plenoptic camera assembly; filtering the light at the exit pupil of the head-mounted display module; acquiring, with the plenoptic camera assembly, one or more light field images projected from the head-mounted display module with the filtered light; and determining information about the performance of the head-mounted display module based on the acquired light field images.
Disclosed herein are systems and methods for presenting an audio signal associated with presentation of a virtual object colliding with a surface. The virtual object and the surface may be associated with a mixed reality environment. Generation of the audio signal may be based on at least one of an audio stream from a microphone and a video stream from a sensor. In some embodiments, the collision between the virtual object and the surface is associated with a footstep on the surface.
G10L 25/57 - Speech or voice analysis techniques not restricted to a single one of the groups, specially adapted for particular use, for comparison or discrimination, for processing of video signals
H04S 7/00 - Indicating arrangements; Control arrangements, e.g. balance control
H04S 3/00 - Systems employing more than two channels, e.g. quadraphonic
G06V 20/40 - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING; Scene-specific elements in video content
A display system can include a head-mounted display configured to project light to an eye of a user to display virtual image content at different amounts of divergence and collimation. The display system can include an inward-facing imaging system, possibly comprising a plurality of cameras, that images the user's eye and glints formed thereon, and processing electronics that are in communication with the inward-facing imaging system and that are configured to obtain an estimate of a center of the cornea of the user's eye using data derived from the glint images. The display system may use spherical and aspheric cornea models to estimate a location of the corneal center of the user's eye.
A61B 3/113 - Apparatus for optical examination of the eyes; Apparatus for clinical examination of the eyes of the objective-measurement type, i.e. instruments for examining the eyes independently of the patient's perceptions or reactions, for determining or recording eye movement
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
A time of flight based depth detection system is disclosed that includes a projector configured to sequentially emit multiple complementary illumination patterns. A sensor of the depth detection system is configured to capture the light from the illumination patterns reflecting off objects within the sensor's field of view. The data captured by the sensor can be used to filter out erroneous readings caused by light reflecting off multiple surfaces prior to returning to the sensor.
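As a rough illustration of the complementary-pattern idea in this entry, the following Python sketch differences two captures; the premise (multipath light contributing a common-mode offset to both frames) is a simplification of the filtering the abstract describes.

    import numpy as np

    def direct_signal(capture_a: np.ndarray, capture_b: np.ndarray) -> np.ndarray:
        # capture_a and capture_b are sensor frames taken under complementary
        # illumination patterns. Light that bounced off several surfaces
        # arrives diffused and adds a similar offset to both frames;
        # differencing cancels that common-mode term and keeps the
        # pattern-dependent direct reflection.
        return capture_a.astype(float) - capture_b.astype(float)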
Disclosed herein are systems and methods for generating and presenting virtual audio for mixed reality systems. A method may include determining a collision between a first object and a second object, wherein the first object comprises a first virtual object. A memory storing one or more audio models can be accessed. It can be determined whether the one or more audio models stored in the memory comprise an audio model corresponding to the first object. In accordance with a determination that the one or more audio models comprise an audio model corresponding to the first object, an audio signal can be synthesized, wherein the audio signal is based on the collision and the audio model corresponding to the first object, and the audio signal can be presented to a user via a speaker of a head-wearable device. In accordance with a determination that the one or more audio models do not comprise an audio model corresponding to the first object, an acoustic property of the first object can be determined, a custom audio model based on the acoustic property of the first object can be generated, an audio signal can be synthesized, wherein the audio signal is based on the collision and the custom audio model, and the audio signal can be presented, via a speaker of the head-wearable device, to a user.
A63F 13/577 - Simulating properties, behaviour or motion of objects in the game, e.g. computing tyre load in a car racing game, using determination of contact between game characters or objects, e.g. to avoid collision between virtual racing cars
A63F 13/285 - Generating tactile feedback signals via the game input device, e.g. force feedback
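The branch structure of this entry (use a stored model if present, otherwise derive a custom one and cache it) can be sketched as follows. All three callables are hypothetical placeholders for the abstract's steps, not APIs from the patent.

    class CollisionAudioBank:
        """Cache of per-object audio models with a fallback synthesis path."""

        def __init__(self, measure_acoustics, build_model, synthesize):
            # The three callables stand in for determining an acoustic
            # property, generating a custom model, and synthesizing a signal.
            self.models = {}
            self.measure_acoustics = measure_acoustics
            self.build_model = build_model
            self.synthesize = synthesize

        def signal_for(self, object_id, collision):
            model = self.models.get(object_id)
            if model is None:
                # No stored model for this object: derive one from its
                # acoustic property and cache it for later collisions.
                model = self.build_model(self.measure_acoustics(object_id))
                self.models[object_id] = model
            return self.synthesize(model, collision)

Caching the derived model means the expensive acoustic-property measurement runs at most once per object, which matters when collisions (e.g., footsteps) repeat on the same surface.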
62.
DISPLAY DEVICE WITH DIFFRACTION GRATING HAVING REDUCED POLARIZATION SENSITIVITY
Diffraction gratings provide optical elements in head-mounted display systems to, e.g., in-couple light into or out-couple light out of a waveguide. These diffraction gratings may be configured to have reduced polarization sensitivity. Such gratings may, for example, in-couple or out-couple light of different polarizations with a similar level of efficiency. The diffraction gratings and waveguides may include a transmissive layer and a metal layer. The diffraction grating may comprise a blazed grating.
Examples of systems and methods are disclosed for interacting with content and updating the location and orientation of that content using a single controller. The system may allow a user to use the same controller for moving content around the room and for interacting with that content by tracking a range of the motion of the controller.
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
G06F 3/0346 - Pointing devices displaced or positioned by the user; Accessories therefor with detection of the device orientation or free movement in a 3D space, e.g. 3D mice, 6-DOF [six degrees of freedom] pointing devices, using gyroscopes, accelerometers or tilt sensors
G02B 27/00 - Optical systems or apparatus not provided for by any of the groups
An augmented reality (AR) display device can display a virtual assistant character that interacts with the user of the AR device. The virtual assistant may be represented by a robot (or other) avatar that assists the user with contextual objects and suggestions depending on what virtual content the user is interacting with. Animated images may be displayed above the robot's head to display its intents to the user. For example, the robot can run up to a menu and suggest an action and show the animated images. The robot can materialize virtual objects that appear on its hands. The user can remove such an object from the robot's hands and place it in the environment. If the user does not interact with the object, the robot can dematerialize it. The robot can rotate its head to keep looking at the user and/or an object that the user has picked up.
G06T 19/00 - Manipulating 3D models or images for computer graphics
G02B 27/00 - Optical systems or apparatus not provided for by any of the groups
G06T 13/40 - 3D [three-dimensional] animation of characters, e.g. humans, animals or virtual beings
G06F 3/04815 - Interaction taking place within an environment based on metaphors or objects with a three-dimensional display, e.g. changing the user's viewpoint with respect to the environment or object
65.
OPTICAL EYEPIECE USING SINGLE-SIDED PATTERNING OF GRATING COUPLERS
An eyepiece includes a substrate and an in-coupling grating patterned on a single side of the substrate. A first grating coupler is patterned on the single side of the substrate and has a first grating pattern. The first grating coupler is optically coupled to the in-coupling grating. A second grating coupler is patterned on the single side of the substrate adjacent to the first grating coupler. The second grating coupler has a second grating pattern different from the first grating pattern. The second grating coupler is optically coupled to the in-coupling grating.
G02B 6/293 - Optical coupling means having data bus means, i.e. plural waveguides interconnected and providing an inherently bidirectional system by mixing and splitting signals, with wavelength selective means
66.
EXPANDABLE BAND SYSTEM FOR SPATIAL COMPUTING HEADSET OR OTHER WEARABLE DEVICE
Expandable band systems for wearable devices, such as spatial computing headsets, are provided which have flexible and conformable form factors that enable users to reliably secure such wearable devices in place. Further, in the context of spatial computing headsets with an optics assembly supported by opposing temple arms, the expandable band systems provide protection against over-extension of the temple arms or extreme deflections that may otherwise arise from undesirable torsional loading of the temple arms.
A cross reality system enables any of multiple devices to efficiently and accurately access previously stored maps and render virtual content specified in relation to those maps. Both stored maps and tracking maps used by portable devices may have wireless fingerprints associated with them. The portable devices may maintain wireless fingerprints based on wireless scans performed repetitively, based on one or more trigger conditions, as the devices move around the physical world. The wireless information obtained from these scans may be used to create or update wireless fingerprints associated with locations in a tracking map on the devices. One or more of these wireless fingerprints may be used when a previously stored map is to be selected based on its coverage of an area in which the portable device is operating. Maintaining wireless fingerprints in this way provides a reliable and low latency mechanism for performing map-related operations.
H04W 24/02 - Arrangements for optimising operational condition
G06T 19/00 - Manipulating 3D models or images for computer graphics
H04W 4/029 - Location-based management or tracking services
G01S 5/02 - Position-fixing by co-ordinating two or more direction or position-line determinations; Position-fixing by co-ordinating two or more distance determinations, using radio waves
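One plausible way to score how well a stored map's wireless fingerprint covers a device's current location is a set overlap over the visible access points. The sketch below is an assumption for illustration; the patent does not specify this particular measure.

    def fingerprint_overlap(fp_a: dict, fp_b: dict) -> float:
        # fp_a and fp_b map access point identifiers (e.g. BSSIDs) to
        # observed signal strengths, as gathered by the repeated wireless
        # scans described above. Jaccard overlap of the visible access
        # points gives a cheap, low-latency similarity score for deciding
        # whether a stored map covers the device's current area.
        union = set(fp_a) | set(fp_b)
        if not union:
            return 0.0
        return len(set(fp_a) & set(fp_b)) / len(union)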
68.
WAVEGUIDE LIGHT MULTIPLEXER USING CROSSED GRATINGS
A two-dimensional waveguide light multiplexer is described herein that can efficiently multiplex and distribute a light signal in two dimensions. An example of a two-dimensional waveguide light multiplexer can include a waveguide, a first diffraction grating, and a second diffraction grating disposed above the first diffraction grating and arranged such that the grating direction of the first diffraction grating is perpendicular to the grating direction of the second diffraction grating. Methods of fabricating a two-dimensional waveguide light multiplexer are also disclosed.
G02F 1/00 - Devices or arrangements for the control of the intensity, colour, phase, polarisation or direction of light arriving from an independent light source, e.g. switching, gating or modulating; Non-linear optics
G02F 1/1335 - Structural association of cells with optical devices, e.g. polarisers or reflectors
69.
CAMERA EXTRINSIC CALIBRATION VIA RAY INTERSECTIONS
Embodiments provide image display systems and methods for extrinsic calibration of one or more cameras. More specifically, embodiments are directed to a camera extrinsic calibration approach based on determining the intersections of rays projecting from the optical centers of the one or more cameras and a reference camera. Embodiments determine the relative position and orientation of one or more cameras, given image(s) of the same target object from each camera, by projecting measured image points into 3D rays in the real world. The extrinsic parameters are found by minimizing the distances between the expected 3D intersections of those rays and the known 3D target points.
G06T 7/80 - Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
G06V 10/74 - Image or video pattern matching; Proximity measures in feature spaces
H04N 23/695 - Control of camera direction for changing a field of view, e.g. by panning, tilting or based on tracking of objects
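A minimal numerical version of the described approach, assuming matched back-projected rays and known target points and using SciPy's least-squares solver, might look like this; the rotation parameterization and point-to-ray cost are illustrative choices rather than the patent's exact formulation.

    import numpy as np
    from scipy.optimize import least_squares
    from scipy.spatial.transform import Rotation

    def calibrate_extrinsics(ray_dirs: np.ndarray, target_pts: np.ndarray):
        # ray_dirs: (N, 3) unit rays back-projected from measured image
        # points, expressed in the camera frame through its optical center.
        # target_pts: (N, 3) matching known 3D target points in the
        # reference frame. Returns the rotation and translation that move
        # target points into the camera frame.
        def residuals(x):
            rot, t = Rotation.from_rotvec(x[:3]), x[3:]
            p = rot.apply(target_pts) + t                  # points in camera frame
            along = (p * ray_dirs).sum(axis=1, keepdims=True) * ray_dirs
            return (p - along).ravel()                     # point-to-ray offsets
        sol = least_squares(residuals, np.zeros(6))
        return Rotation.from_rotvec(sol.x[:3]), sol.x[3:]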
An electronic device is disclosed. The electronic device comprises a first clock configured to operate at a frequency. First circuitry of the electronic device is configured to synchronize with the first clock. Second circuitry is configured to determine a second clock based on the first clock. The second clock is configured to operate at the frequency of the first clock, and is further configured to operate with a phase shift with respect to the first clock. Third circuitry is configured to synchronize with the second clock.
G06F 1/12 - Synchronisation of different clock signals
H03L 7/081 - Automatic control of frequency or phase; Synchronisation using a reference signal applied to a frequency- or phase-locked loop; Details of the phase-locked loop with an additional controlled phase shifter
A method of performing localization of a handheld device with respect to a wearable device includes capturing, by a first imaging device mounted to the handheld device, a fiducial image containing a number of fiducials affixed to the wearable device and capturing, by a second imaging device mounted to the handheld device, a world image containing one or more features surrounding the handheld device. The method also includes obtaining, by a sensor mounted to the handheld device, handheld data indicative of movement of the handheld device, determining the number of fiducials contained in the fiducial image, and updating a position and an orientation of the handheld device using at least one of the fiducial image or the world image and the handheld data.
An augmented reality display system is configured to direct a plurality of parallactically-disparate intra-pupil images into a viewer's eye. The parallactically-disparate intra-pupil images provide different parallax views of a virtual object, and impinge on the pupil from different angles. In the aggregate, the wavefronts of light forming the images approximate a continuous divergent wavefront and provide selectable accommodation cues for the user, depending on the amount of parallax disparity between the intra-pupil images. The amount of parallax disparity is selected using a light source that outputs light for different images from different locations, with spatial differences in the locations of the light output providing differences in the paths that the light takes to the eye, which in turn provide different amounts of parallax disparity. Advantageously, the wavefront divergence, and the accommodation cue provided to the eye of the user, may be varied by appropriate selection of parallax disparity, which may be set by selecting the amount of spatial separation between the locations of light output.
G02B 30/24 - Optical systems or apparatus for producing three-dimensional [3D] effects, e.g. stereoscopic images, by providing first and second parallax images to an observer's left and right eyes, of the stereoscopic type, involving temporal multiplexing, e.g. using sequentially activated left and right shutters
G02B 30/34 - Stereoscopes providing a stereoscopic pair of separated images corresponding to parallelly displaced views of the same object, e.g. 3D slide viewers
H04N 13/128 - Adjusting depth or disparity
H04N 13/383 - Tracking of viewers for gaze tracking, i.e. detecting the line of sight of the viewer's eyes
H04N 13/344 - Displays for viewing with the aid of special glasses or head-mounted displays [HMD] with head-mounted left-right displays
H04N 13/341 - Displays for viewing with the aid of special glasses or head-mounted displays [HMD] using temporal multiplexing
H04N 13/339 - Displays for viewing with the aid of special glasses or head-mounted displays [HMD] using spatial multiplexing
Configurations are disclosed for presenting virtual reality and augmented reality experiences to users. An augmented reality display system comprises a handheld component housing an electromagnetic field emitter that emits a known magnetic field; a head mounted component, for which a head pose is known, coupled to one or more electromagnetic sensors that detect the magnetic field emitted by the electromagnetic field emitter housed in the handheld component; and a controller communicatively coupled to the handheld component and the head mounted component, the controller receiving magnetic field data from the handheld component and sensor data from the head mounted component, and determining a hand pose based at least in part on the received magnetic field data and the received sensor data.
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
G06F 3/0346 - Pointing devices displaced or positioned by the user; Accessories therefor with detection of the device orientation or free movement in a 3D space, e.g. 3D mice, 6-DOF [six degrees of freedom] pointing devices, using gyroscopes, accelerometers or tilt sensors
An eye tracking system can include a first camera configured to capture a first plurality of visual data of a right eye at a first sampling rate. The system can include a second camera configured to capture a second plurality of visual data of a left eye at a second sampling rate. The second plurality of visual data can be captured during different sampling times than the first plurality of visual data. The system can estimate, based on at least some visual data of the first and second pluralities of visual data, visual data of at least one of the right or left eye at a sampling time during which visual data of that eye are not being captured. Eye movements of the eye can then be determined based on at least some of the estimated visual data and at least some visual data of the first or second plurality of visual data.
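Since the two cameras sample in alternating slots, the estimation step can be as simple as interpolating one eye's samples at the other eye's sampling time. A linear-interpolation sketch, purely illustrative:

    import numpy as np

    def estimate_gaze(times: np.ndarray, gazes: np.ndarray, t: float) -> np.ndarray:
        # times: sorted timestamps at which one eye was actually imaged;
        # gazes: (N, 3) gaze vectors captured at those timestamps. Because
        # the two cameras expose in alternating slots, t for one eye falls
        # between samples of the other; linear interpolation fills the gap.
        i = int(np.searchsorted(times, t))
        i = min(max(i, 1), len(times) - 1)
        w = (t - times[i - 1]) / (times[i] - times[i - 1])
        w = min(max(w, 0.0), 1.0)
        return (1.0 - w) * gazes[i - 1] + w * gazes[i]

Interleaving the cameras this way effectively doubles the combined sampling rate without doubling per-camera bandwidth, at the cost of estimated rather than measured samples in the gaps.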
The invention relates generally to a user interaction system having a head unit for a user to wear and a totem that the user holds in their hand, the system determining the location of a virtual object that is seen by the user. A fusion routine generates a fused location of the totem in a world frame based on a combination of EM wave data and totem IMU data. The fused pose may drift over time due to sensor model mismatch. An unfused pose determination modeler routinely establishes an unfused pose of the totem relative to the world frame. A drift is declared when a difference between the fused pose and the unfused pose is more than a predetermined maximum distance.
G06F 3/0346 - Pointing devices displaced or positioned by the user; Accessories therefor with detection of the device orientation or free movement in a 3D space, e.g. 3D mice, 6-DOF [six degrees of freedom] pointing devices, using gyroscopes, accelerometers or tilt sensors
G06F 3/03 - Arrangements for converting the position or the displacement of a member into a coded form
G06T 7/70 - Determining position or orientation of objects or cameras
G09G 5/38 - Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators characterised by display of individual graphic patterns using a bit-mapped memory with means for controlling the display position
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
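The drift test in this entry reduces to a distance comparison between the fused and unfused totem positions. A minimal sketch, with the threshold value assumed:

    import numpy as np

    MAX_DRIFT = 0.05  # meters: stands in for the abstract's
                      # "predetermined maximum distance" (value assumed)

    def drift_declared(fused_position, unfused_position, limit=MAX_DRIFT) -> bool:
        # Compare the EM+IMU fused totem position against the routinely
        # re-established unfused position, both in the world frame.
        delta = np.asarray(fused_position, float) - np.asarray(unfused_position, float)
        return float(np.linalg.norm(delta)) > limit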
76.
PRIVACY PRESERVING EXPRESSION GENERATION FOR AUGMENTED OR VIRTUAL REALITY CLIENT APPLICATIONS
Wearable systems for privacy preserving expression generation for augmented or virtual reality client applications. An example method includes receiving, by an expression manager configured to communicate expression information to client applications, a request from a client application for access to the expression information. The expression information reflects information derived from one or more sensors of the wearable system, with the client application being configured to present virtual content including an avatar rendered based on the expression information. A user interface is output for presentation which requests user authorization for the client application to access the expression information. In response to receiving user input indicating user authorization, access to the expression information is enabled. The client application obtains periodic updates to the expression information, and the avatar is rendered based on the periodic updates.
An electronics system has a board with a thermal interface having an exposed surface. A thermoelectric device is placed against the thermal interface to heat the board. Heat transfers through the board from a first region where the thermal interface is located to a second region where an electronics device is mounted. The electronics device has a temperature sensor that detects the temperature of the electronics device. The temperature of the electronics device is used to calibrate an accelerometer and a gyroscope in the electronics device. Calibration data includes a temperature and a corresponding acceleration offset and a corresponding angle offset. A field computer simultaneously senses a temperature, an acceleration and an angle from the temperature sensor, accelerometer and gyroscope and adjusts the measured data with the offset data at the same temperature. The field computer provides corrected data to a controlled system.
G01C 25/00 - Manufacture, calibration, cleaning, or repairing of instruments or devices referred to in the other groups of this subclass
B81B 7/02 - Microstructural systems containing distinct electrical or optical devices of particular relevance for their function, e.g. microelectro-mechanical systems [MEMS]
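The field-side correction in this entry amounts to a temperature-indexed table lookup with interpolation between calibration points. A sketch under that assumption; the table layout and class name are illustrative:

    import numpy as np

    class ThermalImuCalibration:
        def __init__(self, table):
            # table rows: (temperature, acceleration offset, angle offset),
            # recorded while the thermoelectric device sweeps the board
            # through its temperature range during bench calibration.
            t = np.asarray(table, dtype=float)
            order = np.argsort(t[:, 0])
            self.temps = t[order, 0]
            self.accel_offsets = t[order, 1]
            self.angle_offsets = t[order, 2]

        def correct(self, temperature, accel, angle):
            # Interpolate the offsets stored for the sensed temperature and
            # remove them from the simultaneously sensed measurements.
            a_off = np.interp(temperature, self.temps, self.accel_offsets)
            g_off = np.interp(temperature, self.temps, self.angle_offsets)
            return accel - a_off, angle - g_off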
78.
INTERAURAL TIME DIFFERENCE CROSSFADER FOR BINAURAL AUDIO RENDERING
Examples of the disclosure describe systems and methods for presenting an audio signal to a user of a wearable head device. In an example, a received first input audio signal is processed to generate a left output audio signal and a right output audio signal presented to ears of the user. Processing the first input audio signal comprises applying a delay process to the first input audio signal to generate a left audio signal and a right audio signal; adjusting gains of the left audio signal and the right audio signal; applying head-related transfer functions (HRTFs) to the left and right audio signals to generate the left and right output audio signals. Applying the delay process to the first input audio signal comprises applying an interaural time delay (ITD) to the first input audio signal, the ITD determined based on the source location.
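The delay-and-gain stage described in this entry, leaving out the subsequent HRTF filtering, can be sketched as follows; the sign convention for the ITD is an assumption made for illustration.

    import numpy as np

    def apply_itd_and_gains(signal, sample_rate, itd_seconds, gain_left, gain_right):
        # Positive itd_seconds means the source is to the left, so the
        # right ear's copy is delayed; negative means the opposite. HRTF
        # filtering, applied after this stage above, is omitted here.
        n = int(round(abs(itd_seconds) * sample_rate))
        left = signal.astype(float).copy()
        right = signal.astype(float).copy()
        delayed = np.concatenate([np.zeros(n), signal.astype(float)])[: len(signal)]
        if itd_seconds > 0:
            right = delayed
        elif itd_seconds < 0:
            left = delayed
        return gain_left * left, gain_right * right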
An imaging system includes a light source configured to generate a light beam. The system also includes first and second light guiding optical elements having respective first and second entry portions, and configured to propagate at least respective first and second portions of the light beam by total internal reflection. The system further includes a light distributor having a light distributor entry portion, a first exit portion, and a second exit portion. The light distributor is configured to direct the first and second portions of the light beam toward the first and second entry portions, respectively. The light distributor entry portion and the first exit portion are aligned along a first axis. The light distributor entry portion and the second exit portion are aligned along a second axis different from the first axis.
Display devices include waveguides with metasurfaces as in-coupling and/or out-coupling optical elements. The metasurfaces may be formed on a surface of the waveguide and may include a plurality or an array of sub-wavelength-scale (e.g., nanometer-scale) protrusions. Individual protrusions may include horizontal and/or vertical layers of different materials which may have different refractive indices, allowing for enhanced manipulation of light redirecting properties of the metasurface. Some configurations and combinations of materials may advantageously allow for broadband metasurfaces. Manufacturing methods described herein provide for vertical and/or horizontal layers of different materials in a desired configuration or profile.
An augmented reality device may communicate with a map server via an API interface to provide mapping data that may be implemented into a canonical map, and may also receive map data from the map server. A visualization of map quality, including quality indicators for multiple cells of the environment, may be provided to the user as an overlay to the current real-world environment seen through the AR device. These visualizations may include, for example, a map quality minimap and/or a map quality overlay. The visualizations provide guidance to the user that allows more efficient updates to the map, thereby improving map quality and localization of users into the map.
A visual perception device has a look-up table stored in a laser driver chip. The look-up table includes relational gain data to compensate for brighter areas of a laser pattern, where pixels are located more closely together than in areas where the pixels are farther apart, and to compensate for differences in intensity of individual pixels when the intensities of pixels are altered due to design characteristics of an eyepiece.
G09G 3/00 - Control arrangements or circuits, of interest only to the display using visual indicators other than cathode-ray tubes
G06V 40/18 - Eye characteristics, e.g. of the iris
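The relational gain lookup in this entry is, in essence, interpolation into the stored curve followed by a multiply. A sketch, with the curve contents and parameter names assumed:

    import numpy as np

    def corrected_intensity(intensity, local_pixel_density, lut_density, lut_gain):
        # lut_density/lut_gain hold the driver chip's relational gain
        # curve: regions of the laser pattern where pixels crowd together
        # appear brighter, so their commanded intensity is attenuated.
        # The curve values are whatever was characterized for the eyepiece.
        gain = np.interp(local_pixel_density, lut_density, lut_gain)
        return intensity * gain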
83.
SYSTEMS AND METHODS FOR PRESENTING PERSPECTIVE VIEWS OF AUGMENTED REALITY VIRTUAL OBJECT
Examples of the disclosure describe systems and methods for sharing perspective views of virtual content. In an example method, a virtual object is presented, via a display, to a first user. A first perspective view of the virtual object is determined, wherein the first perspective view is based on a position of the virtual object and a position of the first user. The virtual object is presented, via a display, to a second user, wherein the virtual object is presented to the second user according to the first perspective view. A second perspective view of the virtual object is determined, wherein the second perspective view is based on an input from the first user. The virtual object is presented, via a display, to the second user, wherein presenting the virtual object to the second user comprises presenting a transition from the first perspective view to the second perspective view.
An augmented reality (AR) device can be configured to generate a virtual representation of a user's physical environment. The AR device can capture images of the user's physical environment to generate a mesh map. The AR device can project graphics at designated locations on a virtual bounding box to guide the user to capture images of the user's physical environment. The AR device can provide visual, audible, or haptic guidance to direct the user of the AR device to look toward waypoints to generate the mesh map of the user's environment.
The present disclosure relates to display systems and, more particularly, to augmented reality display systems. A diffraction grating includes a plurality of different diffracting zones having a periodically repeating lateral dimension corresponding to a grating period adapted for light diffraction. The diffraction grating additionally includes a plurality of different liquid crystal layers corresponding to the different diffracting zones. The different liquid crystal layers have liquid crystal molecules that are aligned differently, such that the different diffracting zones have different optical properties associated with light diffraction.
A multiple-input, multiple-output (MIMO) transceiver comprises a plurality of RF chains, a plurality of antennas, a plurality of switching components, and control circuitry operatively coupled to the plurality of switching components. In some examples, a total quantity of RF chains included in the plurality of RF chains is equal to a first value, and a total quantity of antennas included in the plurality of antennas is equal to a second value that is less than the first value.
H04L 5/14 - Two-way operation using the same type of signal, i.e. duplex
H04B 7/08 - Diversity systems; Multi-antenna systems, i.e. transmission or reception using multiple antennas, using a plurality of spaced independent antennas at the receiving station
H04B 7/06 - Diversity systems; Multi-antenna systems, i.e. transmission or reception using multiple antennas, using a plurality of spaced independent antennas at the transmitting station
87.
GESTURE RECOGNITION SYSTEM AND METHOD OF USING SAME
A method of executing a gesture command includes identifying a hand centroid of a hand. The method also includes identifying a first fingertip of a first finger on the hand. The method further includes identifying a thumb tip of a thumb on the hand. Moreover, the method includes determining a surface normal relationship between the hand centroid, the first fingertip, and the thumb tip.
A wearable device may include a head-mounted display (HMD) for rendering a three-dimensional (3D) virtual object which appears to be located in an ambient environment of a user of the display. The relative positions of the HMD and one or more eyes of the user may not be in desired positions to receive, or register, image information outputted by the HMD. For example, the HMD-to-eye alignment may vary for different users and may change over time (e.g., as a given user moves around or as the HMD slips or otherwise becomes displaced). The wearable device may determine a relative position or alignment between the HMD and the user's eyes by determining whether features of the eye are at certain vertical positions relative to the HMD. Based on the relative positions, the wearable device may determine whether it is properly fitted to the user, provide feedback on the quality of the fit to the user, and take actions to reduce or minimize effects of any misalignment.
A method of culling parts of a 3D reconstruction volume is provided. The method makes fresh, accurate, and comprehensive 3D reconstruction data available to a wide variety of mobile XR applications with low usage of computational resources and storage space. The method includes culling parts of the 3D reconstruction volume against a depth image. The depth image has a plurality of pixels, each of which represents a distance to a surface in a scene. In some embodiments, the method includes culling parts of the 3D reconstruction volume against a frustum. The frustum is derived from a field of view of an image sensor, from which image data to create the 3D reconstruction is obtained.
An audio system and method of spatially rendering audio signals using modified virtual speaker panning are disclosed. The audio system may include a fixed number F of virtual speakers, and the modified virtual speaker panning may dynamically select and use a subset P of the fixed virtual speakers. The subset P of virtual speakers may be selected using a low energy speaker detection and culling method, a source geometry-based culling method, or both. One or more processing blocks in the decoder/virtualizer may be bypassed based on the energy level of the associated audio signal or the location of the sound source relative to the user/listener, respectively. In some embodiments, a virtual speaker that is designated as an active virtual speaker at a first time may also be designated as an active virtual speaker at a second time to ensure the processing completes.
H04S 7/00 - Indicating arrangements; Control arrangements, e.g. balance control
G10L 19/008 - Multi-channel audio signal coding or decoding using interchannel correlation to reduce redundancy, e.g. joint-stereo, intensity-coding or matrixing
G10L 25/21 - Speech or voice analysis techniques not restricted to a single one of the groups, characterised by the type of parameters extracted, the extracted parameters being power information
H04S 3/00 - Systems employing more than two channels, e.g. quadraphonic
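The low-energy culling path, including the one-block grace period the abstract mentions for speakers that were just active, might look like the following sketch; the energy floor is an assumed value.

    import numpy as np

    def select_active_speakers(blocks: dict, previously_energetic: set,
                               floor: float = 1e-6):
        # blocks maps virtual speaker ids to their panned signal blocks.
        # Speakers below the energy floor are culled from further decoding;
        # a speaker energetic in the previous block stays active one more
        # block so its in-flight processing can complete.
        energetic = {sid for sid, x in blocks.items()
                     if float(np.mean(np.square(x))) >= floor}
        active = energetic | (previously_energetic & set(blocks))
        return active, energetic   # pass `energetic` back on the next call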
A method of presenting a signal to a speech processing engine is disclosed. According to an example of the method, an audio signal is received via a microphone. A portion of the audio signal is identified, and a probability is determined that the portion comprises speech directed by a user of the speech processing engine as input to the speech processing engine. In accordance with a determination that the probability exceeds a threshold, the portion of the audio signal is presented as input to the speech processing engine. In accordance with a determination that the probability does not exceed the threshold, the portion of the audio signal is not presented as input to the speech processing engine.
G10L 15/22 - Procedures used during a speech recognition process, e.g. man-machine dialogue
G10L 15/30 - Distributed recognition, e.g. in client-server systems, for mobile telephony or network applications
G10L 15/14 - Speech classification or search using statistical models, e.g. hidden Markov models [HMMs]
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
G10L 15/25 - Speech recognition using non-acoustical features, using position of the lips, movement of the lips or face analysis
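The gating logic in this entry reduces to a probability threshold. A sketch, with the threshold value and the scoring callable both assumed:

    def route_to_speech_engine(segment, directed_probability, threshold=0.8):
        # directed_probability scores how likely this portion of the audio
        # signal is speech the user aimed at the engine; the 0.8 threshold
        # stands in for the patent's unspecified threshold.
        if directed_probability(segment) > threshold:
            return segment   # forwarded as input to the speech engine
        return None          # withheld: ambient or non-directed speech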
93.
SYSTEMS AND METHODS FOR OPERATING A HEAD-MOUNTED DISPLAY SYSTEM BASED ON USER IDENTITY
Systems and methods for depth plane selection in display systems, such as augmented reality display systems (including mixed reality display systems), are disclosed. A display(s) may present virtual image content via image light to an eye(s) of a user. The display(s) may output the image light to the eye(s) of the user, the image light to have different amounts of wavefront divergence corresponding to different depth planes at different distances away from the user. A camera(s) may capture images of the eye(s). An indication may be generated based on obtained images of the eye(s), indicating whether the user is identified. The display(s) may be controlled to output the image light to the eye(s) of the user, the image light to have the different amounts of wavefront divergence based at least in part on the generated indication indicating whether the user is identified.
An exit pupil expander (EPE) has entrance and exit pupils, a back surface adjacent to the entrance pupil, and an opposed front surface. In one embodiment the EPE is geometrically configured such that light defining a center wavelength that enters at the entrance pupil perpendicular to the back surface experiences angularly varying total internal reflection between the front and back surfaces such that the light exiting the optical channel perpendicular to the exit pupil is at a wavelength shifted from the center wavelength. In another embodiment a first distance at the entrance pupil between the front and back surfaces is different from a second distance at the exit pupil between the front and back surfaces. The EPE may be deployed in a head-wearable imaging device (e.g., virtual or augmented reality) where the entrance pupil in-couples light from a micro display and the exit pupil out-couples light from the EPE.
Disclosed herein are systems and methods for presenting audio content in mixed reality environments. A method may include receiving a first input from an application program; in response to receiving the first input, receiving, via a first service, an encoded audio stream; generating, via the first service, a decoded audio stream based on the encoded audio stream; receiving, via a second service, the decoded audio stream; receiving a second input from one or more sensors of a wearable head device; receiving, via the second service, a third input from the application program, wherein the third input corresponds to a position of one or more virtual speakers; generating, via the second service, a spatialized audio stream based on the decoded audio stream, the second input, and the third input; presenting, via one or more speakers of the wearable head device, the spatialized audio stream.
A cross reality system enables any of multiple devices to efficiently and accurately access previously stored maps and render virtual content specified in relation to those maps. The cross reality system may include a cloud-based localization service that responds to requests from devices to localize with respect to a stored map. The request may include one or more sets of feature descriptors extracted from an image of the physical world around the device. Those features may be posed relative to a coordinate frame used by the local device. The localization service may identify one or more stored maps with a matching set of features. Based on a transformation required to align the features from the device with the matching set of features, the localization service may compute and return to the device a transformation to relate its local coordinate frame to a coordinate frame of the stored map.
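Computing the transform that aligns the device's posed features with the matching stored-map features is a classic rigid-alignment problem; one standard solution (the Kabsch/Procrustes fit) is sketched below. The patent does not name this specific algorithm, so treat it as an illustrative stand-in.

    import numpy as np

    def align_to_stored_map(device_pts: np.ndarray, map_pts: np.ndarray):
        # Least-squares rigid transform taking matched device-frame feature
        # positions onto stored-map positions, i.e. map_pt ~= R @ device_pt + t,
        # relating the device's local coordinate frame to the map's frame.
        dc, mc = device_pts.mean(axis=0), map_pts.mean(axis=0)
        H = (device_pts - dc).T @ (map_pts - mc)
        U, _, Vt = np.linalg.svd(H)
        d = np.sign(np.linalg.det(Vt.T @ U.T))      # guard against reflections
        R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
        t = mc - R @ dc
        return R, t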
An eyepiece includes a planar waveguide having a front surface and a back surface. The eyepiece also includes a grating coupled to the back surface of the planar waveguide and configured to diffract a first portion of the light propagating in the planar waveguide out of a plane of the planar waveguide toward a first direction and to diffract a second portion of the light propagating in the planar waveguide out of the plane of the planar waveguide toward a second direction opposite to the first direction and a wavelength-selective reflector coupled to the front surface of the planar waveguide. The wavelength-selective reflector comprises a multilevel metasurface comprising a plurality of spaced apart protrusions having a pitch and formed of a first optically transmissive material and a second optically transmissive material disposed between the spaced apart protrusions.
F21V 8/00 - Use of light guides, e.g. fibre optic devices, in lighting devices or systems
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
G06F 3/147 - Digital output to display device using display panels
G09G 3/00 - Control arrangements or circuits, of interest only to the display using visual indicators other than cathode-ray tubes
G09G 3/20 - Control arrangements or circuits, of interest only to the display using visual indicators other than cathode-ray tubes, for presentation of an assembly of a number of characters, e.g. a page, by composing the assembly by combination of individual elements arranged in a matrix
Disclosed herein are systems and methods for calculating angular acceleration based on inertial data using two or more inertial measurement units (IMUs). The calculated angular acceleration may be used to estimate a position of a wearable head device comprising the IMUs. Virtual content may be presented based on the position of the wearable head device. In some embodiments, a first IMU and a second IMU share a coincident measurement axis.
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
G01P 15/08 - Measuring acceleration; Measuring deceleration; Measuring shock, i.e. sudden change of acceleration, by making use of inertia forces with conversion into electric or magnetic values
G01P 15/16 - Measuring acceleration; Measuring deceleration; Measuring shock, i.e. sudden change of acceleration, by evaluating the time derivative of a measured speed signal
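With two IMUs on a rigid mount separated by a known lever arm, angular acceleration follows from rigid-body kinematics. A sketch of that calculation; note that only the component of angular acceleration perpendicular to the lever arm is observable from a single baseline:

    import numpy as np

    def angular_acceleration(accel_1, accel_2, omega, lever_arm):
        # Rigid-body kinematics for two accelerometers separated by r:
        #     a2 - a1 = alpha x r + omega x (omega x r)
        # Subtract the centripetal term, then invert the cross product:
        #     alpha_perp = (r x lhs) / |r|^2
        r = np.asarray(lever_arm, dtype=float)
        lhs = (np.asarray(accel_2, float) - np.asarray(accel_1, float)
               - np.cross(omega, np.cross(omega, r)))
        return np.cross(r, lhs) / float(np.dot(r, r))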
100.
AMBIENT LIGHT MANAGEMENT SYSTEMS AND METHODS FOR WEARABLE DEVICES
Techniques are described for operating an optical system. In some embodiments, light associated with a world object is received at the optical system. Virtual image light is projected onto an eyepiece of the optical system. A portion of a system field of view of the optical system to be at least partially dimmed is determined based on information detected by the optical system. A plurality of spatially-resolved dimming values for the portion of the system field of view may be determined based on the detected information. The detected information may include light information, gaze information, and/or image information. A dimmer of the optical system may be adjusted to reduce an intensity of light associated with the world object in the portion of the system field of view according to the plurality of dimming values.
G02F 1/1335 - Structural association of cells with optical devices, e.g. polarisers or reflectors
G02F 1/13363 - Structural association of cells with optical devices, e.g. polarisers or reflectors; Birefringent elements, e.g. for optical compensation
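One assumed policy for turning detected light information into spatially resolved dimming values is to attenuate each dimmer pixel toward a target luminance. The sketch below illustrates that policy only, not the patented determination, which may also weigh gaze and image information.

    import numpy as np

    def dimming_values(world_luminance: np.ndarray, target_luminance: float):
        # Attenuate each dimmer pixel just enough to pull the detected
        # world luminance down to the target level; darker regions are
        # left fully transmissive.
        ratio = np.clip(target_luminance / np.maximum(world_luminance, 1e-9),
                        0.0, 1.0)
        return 1.0 - ratio   # 0 = fully transmissive, 1 = fully dimmed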