According to an example method, it is determined whether a difference between first acoustic data and second acoustic data exceeds a threshold. The first acoustic data is associated with a first client application in communication with an audio service. The second acoustic data is associated with a second client application. A first input audio signal associated with the first client application is received via the audio service. In accordance with a determination that the difference does not exceed the threshold, the second acoustic data is applied to the first input audio signal to produce a first output audio signal. In accordance with a determination that the difference exceeds the threshold, the first acoustic data is applied to the first input audio signal to produce the first output audio signal. The first output audio signal is presented to a user of a wearable head device in communication with the audio service.
G06T 19/00 - Manipulating 3D models or images for computer graphics
A63F 13/87 - Communicating with other players, e.g. by e-mail or instant messaging
A63F 13/215 - Input arrangements for video game devices characterised by their sensors, purposes or types comprising means for detecting acoustic signals, e.g. using a microphone
A63F 13/54 - Controlling the output signals based on the game progress involving acoustic signals, e.g. for simulating revolutions per minute [RPM] dependent engine sounds in a driving game or reverberation against a virtual wall
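As an illustration only, a minimal sketch of the thresholded selection described in the first abstract above, assuming acoustic data can be modeled as impulse responses and the difference as an L2 distance; the names and the convolution model are hypothetical, not the patented method:

    import numpy as np

    def select_acoustic_data(first_ir, second_ir, threshold):
        # Pad to a common length, then measure the difference between the
        # two sets of acoustic data as an L2 distance (one plausible metric).
        n = max(len(first_ir), len(second_ir))
        a = np.pad(np.asarray(first_ir, dtype=float), (0, n - len(first_ir)))
        b = np.pad(np.asarray(second_ir, dtype=float), (0, n - len(second_ir)))
        difference = np.linalg.norm(a - b)
        # Within the threshold: the second client's acoustic data may be
        # reused for the first client's input signal; otherwise fall back
        # to the first client's own acoustic data.
        return b if difference <= threshold else a

    def produce_output(input_signal, acoustic_data):
        # Applying acoustic data is modeled here as convolution with an
        # impulse response.
        return np.convolve(input_signal, acoustic_data)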
According to an example method, a location of a first virtual speaker array is determined. A first virtual speaker density is determined. Based on the first virtual speaker density, a location of a second virtual speaker of the first virtual speaker array is determined. A source location in a virtual environment is determined for an audio signal. A virtual speaker of the first virtual speaker array is selected based on the source location and based further on a position or an orientation of a listener in the virtual environment. A head-related transfer function (HRTF) is identified that corresponds to the selected virtual speaker of the first virtual speaker array. The HRTF is applied to the audio signal to produce a first filtered audio signal. The first filtered audio signal is presented to the listener via a first speaker.
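For illustration, a sketch of the virtual speaker selection step under stated assumptions: speakers, source, and listener are 3-D points, the listener's orientation is a world-to-listener rotation matrix, and the nearest speaker by angle is chosen. All names are hypothetical:

    import numpy as np

    def select_virtual_speaker(speaker_positions, source_position,
                               listener_position, listener_rotation):
        # Direction from the listener to the source, expressed in the
        # listener's frame (listener_rotation: 3x3 world-to-listener).
        to_source = listener_rotation @ (source_position - listener_position)
        to_source = to_source / np.linalg.norm(to_source)
        # Directions to each virtual speaker in the same frame.
        dirs = (speaker_positions - listener_position) @ listener_rotation.T
        dirs = dirs / np.linalg.norm(dirs, axis=1, keepdims=True)
        # Nearest speaker by angle, i.e. maximum cosine similarity.
        return int(np.argmax(dirs @ to_source))

    def apply_hrtf(audio_signal, hrtf_impulse_response):
        # The HRTF corresponding to the selected speaker, applied as a
        # time-domain convolution (a real renderer filters per ear).
        return np.convolve(audio_signal, hrtf_impulse_response)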
An augmented reality device includes a projector, projector optics optically coupled to the projector, and an eyepiece optically coupled to the projector optics. The eyepiece includes an eyepiece waveguide characterized by lateral dimensions and an optical path length difference as a function of one or more of the lateral dimensions.
An extended reality (XR) system includes an XR display configured to be movably disposed in a line of sight of a user. The system also includes a vision correction component configured to be disposed in the line of sight of the user. The system further includes a displacement mechanism configured to guide the XR display out of the line of sight of the user while the vision correction component remains in the line of sight of the user, and limit relative positions of the XR display and the vision correction component.
The disclosure describes an improved drop-on-demand, controlled-volume technique for dispensing resist onto a substrate, which is then imprinted to create a patterned optical device suitable for use in optical applications such as augmented reality and/or mixed reality systems. The technique enables dispensing drops of resist at precise locations on the substrate, with precisely controlled drop volume corresponding to an imprint template having different zones associated with different total resist volumes. Controlled drop size and placement also provide substantially less variation in residual layer thickness across the surface of the substrate after imprinting, compared to previously available techniques. The technique employs resist having a refractive index closer to that of the substrate, reducing optical artifacts in the device. To ensure reliable dispensing of the higher-index, higher-viscosity resist in smaller drop sizes, the dispensing system can continuously circulate the resist.
B29C 59/02 - Surface shaping, e.g. embossing; Apparatus therefor by mechanical means, e.g. pressing
B05C 9/00 - Apparatus or plant for applying liquid or other fluent material to surfaces by means not covered by any one of the groups , or in which the means of applying the liquid or other fluent material is not important
F04D 7/00 - Pumps adapted for handling specific fluids, e.g. by selection of specific materials for the pumps or pump parts
G02B 1/04 - OPTICS - OPTICAL ELEMENTS, SYSTEMS OR APPARATUS - Optical elements characterised by the material of which they are made; Optical coatings for optical elements made of organic materials, e.g. plastics
An image projection system includes an illumination source, a linear polarizer, and an eyepiece waveguide including a plurality of diffractive in-coupling optical elements. The eyepiece waveguide includes a region operable to transmit illumination light from the illumination source. The image projection system also includes a polarizing beamsplitter, a reflective structure, a quarter waveplate disposed between the polarizing beamsplitter and the reflective structure, and a reflective spatial light modulator.
An image projection system includes an illumination source and an eyepiece waveguide including a plurality of diffractive incoupling optical elements. The eyepiece waveguide includes a region operable to transmit light from the illumination source. The image projection system also includes a first optical element including a reflective polarizer, a second optical element including a partial reflector, a first quarter waveplate disposed between the first optical element and the second optical element, a reflective spatial light modulator, and a second quarter waveplate disposed between the second optical element and the reflective spatial light modulator.
G02F 1/1334 - Constructional arrangements based on polymer-dispersed liquid crystals, e.g. microencapsulated liquid crystals
G02F 1/313 - Digital deflection devices in an optical waveguide structure
8.
AREA SPECIFIC COLOR ABSORPTION IN NANOIMPRINT LITHOGRAPHY
An eyepiece includes an optical waveguide, a transmissive input coupler at a first end of the optical waveguide, an output coupler at a second end of the optical waveguide, and a polymeric color-absorbing region along a portion of the optical waveguide between the transmissive input coupler and the output coupler. The transmissive input coupler is configured to couple incident visible light to the optical waveguide, and the color-absorbing region is configured to absorb a component of the visible light as the visible light propagates through the optical waveguide.
G02B 6/293 - Optical coupling means having data bus means, i.e. plural waveguides interconnected and providing an inherently bidirectional system by mixing and splitting signals, with wavelength-selective means
G02B 6/34 - Optical coupling means utilising prisms or gratings
A waveguide stack having color-selective regions on one or more waveguides is disclosed. The color-selective regions are configured to absorb incident light of a first wavelength range in such a way as to reduce or prevent the incident light of the first wavelength range from coupling into a waveguide configured to transmit light of a second wavelength range.
G02F 1/01 - Devices or arrangements for the control of the intensity, colour, phase, polarisation or direction of light arriving from an independent light source, e.g. switching, gating or modulating; Non-linear optics for the control of the intensity, phase, polarisation or colour
Systems, methods, and computer program products for displaying virtual contents using a wearable electronic device determine the location of the sighting centers of both eyes of a user wearing the wearable electronic device and estimate an error or precision for these sighting centers. A range of operation may be determined for a focal distance or a focal plane at the focal distance based at least in part upon the error or the precision and a criterion pertaining to vergence and accommodation of binocular vision of the virtual contents with the wearable electronic device. A virtual content may be adjusted into an adjusted virtual content for presentation with respect to the focal plane or the focal distance based at least in part upon the range of operation. The adjusted virtual content may be presented to the user with respect to the focal distance or the focal plane.
A61B 3/10 - Apparatus for testing the eyes; Instruments for examining the eyes of objective type, i.e. instruments for examining the eyes independently of the patient's perceptions or reactions
A61B 3/113 - Apparatus for testing the eyes; Instruments for examining the eyes of objective type, i.e. instruments for examining the eyes independently of the patient's perceptions or reactions, for determining or recording eye movement
G02B 30/20 - Optical systems or apparatus for producing three-dimensional [3D] effects, e.g. stereoscopic effects, by providing first and second parallax images to each of an observer's left and right eyes
H04N 13/344 - Displays for viewing with the aid of special glasses or head-mounted displays, with head-mounted left-right displays
Examples of the disclosure describe systems and methods for reducing the audio effects of fan noise, specifically for a wearable system. An example method comprises: operating a fan of a wearable head device; detecting, with a microphone of the wearable head device, noise generated by the fan; generating a fan reference signal, wherein the fan reference signal represents at least one of a speed of the fan, a mode of the fan, a power output of the fan, and a phase of the fan; deriving a transfer function based on the fan reference signal and based further on the detected noise of the fan; generating a compensation signal based on the transfer function; and, while operating the fan of the wearable head device, outputting, by a speaker of the wearable head device, an anti-noise signal, wherein the anti-noise signal is based on the compensation signal.
G10K 11/178 - Methods or devices for protecting against, or for damping, noise or other acoustic waves in general using interference effects; Masking sound by electro-acoustically regenerating the original acoustic waves in anti-phase
F01N 1/06 - Silencers characterised by their mode of operation using interference effects
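A minimal sketch of the transfer-function and anti-noise steps in the abstract above, assuming a scalar fan reference signal and a standard averaged cross-spectrum (H1) estimator; the block sizes and function names are hypothetical simplifications:

    import numpy as np

    def estimate_transfer_function(fan_reference, mic_noise, nfft=256):
        # H1 estimator: averaged cross-spectrum of reference and detected
        # fan noise divided by the reference auto-spectrum.
        segments = len(fan_reference) // nfft
        Sxy = np.zeros(nfft, dtype=complex)
        Sxx = np.zeros(nfft)
        for i in range(segments):
            x = np.fft.fft(fan_reference[i * nfft:(i + 1) * nfft])
            y = np.fft.fft(mic_noise[i * nfft:(i + 1) * nfft])
            Sxy += np.conj(x) * y
            Sxx += np.abs(x) ** 2
        return Sxy / np.maximum(Sxx, 1e-12)

    def anti_noise_block(fan_reference_block, transfer_function):
        # Compensation signal: the reference shaped by the transfer
        # function; the anti-noise output is its phase inverse.
        X = np.fft.fft(fan_reference_block[:len(transfer_function)])
        compensation = np.real(np.fft.ifft(X * transfer_function))
        return -compensation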
12.
MAPPING OF ENVIRONMENTAL AUDIO RESPONSE ON MIXED REALITY DEVICE
This disclosure relates in general to augmented reality (AR), mixed reality (MR), or extended reality (XR) environmental mapping. Specifically, this disclosure relates to AR, MR, or XR audio mapping in an AR, MR, or XR environment. In some embodiments, the disclosed systems and methods allow the environment to be mapped based on a recording. In some embodiments, the audio mapping information is associated with voxels located in the environment.
Embodiments of the present disclosure are directed to an acoustic waveguide for presenting an audio signal. An apparatus in accordance with embodiments of this disclosure can include a waveguide member comprising a hollow body having a first end and a second end. The apparatus can further include a sound source disposed at the first end of the waveguide configured to emit at least a first sound wave. The apparatus can further include a plurality of acoustic vents disposed on a lower surface of the body of the waveguide, wherein each of the plurality of acoustic vents is configured to receive the first sound wave and further configured to emit a respective sound wave based on the first sound wave, wherein each respective sound wave corresponds to a respective point sound source.
H04R 1/34 - Arrangements for obtaining the desired frequency or directional characteristics for obtaining the desired directional characteristic only by using a single transducer with sound-reflecting, diffracting, directing or guiding means
Embodiments of the present disclosure can provide systems and methods for presenting audio signals based on an analysis of a voice of a speaker in an augmented reality or mixed reality environment. Methods according to embodiments of this disclosure can include receiving audio data from a microphone of a first wearable head device, the first wearable head device in communication with a virtual environment, the audio data comprising speech data. In some examples, the methods can include identifying a voice parameter based on the audio data. In some examples, the methods can include determining an acoustic parameter based on the voice parameter. In some examples, the methods can include applying the acoustic parameter to the audio data to generate a spatialized audio signal. In some examples, the methods can include presenting the spatialized audio signal to a second wearable head device in communication with the virtual environment.
This disclosure is related to systems and methods for rendering audio for a mixed reality environment. Methods according to embodiments of this disclosure include receiving an input audio signal, via a wearable device in communication with a mixed reality environment, the input audio signal corresponding to a sound source originating from a real environment. In some embodiments, the system can determine one or more acoustic properties associated with the mixed reality environment. In some embodiments, the system can determine a signal modification parameter based on the one or more acoustic properties associated with the mixed reality environment. In some embodiments, the system can apply the signal modification parameter to the input audio signal to determine a second audio signal. The system can present the second audio signal to the user.
G10L 25/54 - Speech or voice analysis techniques not restricted to a single one of the groups , specially adapted for particular use, for comparison or discrimination, for retrieval
G10L 25/84 - Detection of the presence or absence of voice signals for discriminating speech from noise
G10L 25/84 - Detection of the presence or absence of voice signals for discriminating speech from noise
H04R 1/40 - Arrangements for obtaining the desired frequency or directional characteristics for obtaining the desired directional characteristic only by combining a number of identical transducers
Disclosed herein are systems and methods for capturing a sound field, in particular, using a mixed reality device. In some embodiments, a method comprises: detecting, with a microphone of a first wearable head device, a sound of an environment; determining a digital audio signal based on the detected sound, the digital audio signal associated with a sphere having a position in the environment; detecting, concurrently with detecting the sound, a microphone movement with respect to the environment; and adjusting the digital audio signal, wherein the adjusting comprises adjusting the position of the sphere based on the detected microphone movement.
A method of forming a waveguide for an eyepiece for a display system to reduce optical degradation of the waveguide during segmentation is disclosed herein. The method includes providing a substrate having top and bottom major surfaces and a plurality of surface features, and using a laser beam to cut out a waveguide from said substrate by cutting along a path contacting and/or proximal to said plurality of surface features. The waveguide has edges formed by the laser beam and a main region and a peripheral region surrounding the main region. The peripheral region is surrounded by the edges.
Methods, systems, and apparatus, including computer programs encoded on computer storage media, for using depth data to update camera calibration data. In some implementations, a frame of data is captured including (i) depth data from a depth sensor of a device, and (ii) image data from a camera of the device. Selected points from the depth data are transformed, using camera calibration data for the camera, to a three-dimensional space that is based on the image data. The transformed points are projected onto the two-dimensional image data from the camera. Updated camera calibration data is generated based on differences between (i) the locations of the projected points and (ii) the locations at which features representing the selected points appear in the two-dimensional image data from the camera. The updated camera calibration data can be used in a simultaneous localization and mapping process.
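A sketch of the projection-and-residual step just described, assuming a pinhole camera model with intrinsics K and depth-to-camera extrinsics (R, t); all shapes and names are hypothetical:

    import numpy as np

    def project_points(points_3d, K, R, t):
        # Transform selected depth points into the camera frame with the
        # current calibration (R, t), then apply pinhole intrinsics K.
        p_cam = points_3d @ R.T + t
        p_img = p_cam @ K.T
        return p_img[:, :2] / p_img[:, 2:3]

    def calibration_residuals(depth_points, observed_features, K, R, t):
        # Differences between where the depth points project into the
        # image and where the corresponding features were detected; an
        # update to the calibration data would minimize these residuals.
        return project_points(depth_points, K, R, t) - observed_features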
In an example method for forming a variable optical viewing optics assembly (VOA) for a head mounted display, a prepolymer is deposited onto a substrate having a first optical element for the VOA. Further, a mold is applied to the prepolymer to conform the prepolymer to a curved surface of the mold on a first side of the prepolymer and to conform the prepolymer to a surface of the substrate on a second side of the prepolymer opposite the first side. Further, the prepolymer is exposed to actinic radiation sufficient to form a solid polymer from the prepolymer, such that the solid polymer forms an ophthalmic lens having a curved surface corresponding to the curved surface of the mold, and the substrate and the ophthalmic lens form an integrated optical component. The mold is released from the solid polymer, and the VOA is assembled using the integrated optical component.
H04N 5/64 - PICTORIAL COMMUNICATION, e.g. TELEVISION - Details of television systems - Constructional details of receivers, e.g. cabinets or dust covers
G02B 5/00 - OPTICS - OPTICAL ELEMENTS, SYSTEMS OR APPARATUS - Optical elements other than lenses
G02B 27/14 - Beam splitting or combining systems operating by reflection only
21.
METHOD OF FABRICATING MOLDS FOR FORMING WAVEGUIDES AND RELATED SYSTEMS AND METHODS USING THE WAVEGUIDES
Methods are disclosed for fabricating molds for forming waveguides with integrated spacers for forming eyepieces. The molds are formed by etching features (e.g., 1 µm to 1000 µm deep) into a substrate comprising single crystalline material using an anisotropic wet etch. The etch masks for defining the large features may comprise a plurality of holes, wherein the size and shape of each hole at least partially determine the depth of the corresponding large feature. The holes may be aligned along a crystal axis of the substrate and the etching may automatically stop due to the crystal structure of the substrate. The patterned substrate may be utilized as a mold onto which a flowable polymer may be introduced and allowed to harden. Hardened polymer in the holes may form a waveguide with integrated spacers. The mold may also be used to fabricate a platform comprising a plurality of vertically extending microstructures of precise heights, to test the curvature or flatness of a sample, e.g., based on the amount of contact between the microstructures and the sample.
A method of generating foveated rendering using temporal multiplexing includes generating a first spatial profile for an FOV by dividing the FOV into a first foveated zone and a first peripheral zone. The first foveated zone will be rendered at a first pixel resolution, and the first peripheral zone will be rendered at a second pixel resolution lower than the first pixel resolution. The method further includes generating a second spatial profile for the FOV by dividing the FOV into a second foveated zone and a second peripheral zone, the second foveated zone being spatially offset from the first foveated zone. The second foveated zone and the second peripheral zone will be rendered at the first pixel resolution and the second pixel resolution, respectively. The method further includes multiplexing the first spatial profile and the second spatial profile temporally in a sequence of frames.
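As an illustration of the temporal multiplexing described above, a sketch that alternates two spatial profiles whose foveated zones are offset from one another; the circular zone shape, default resolutions, and offset are hypothetical choices:

    import numpy as np

    def spatial_profile(fov_shape, fovea_center, fovea_radius,
                        res_high=1.0, res_low=0.25):
        # Per-pixel resolution map: the foveated zone renders at the high
        # resolution, the peripheral zone at the lower resolution.
        ys, xs = np.indices(fov_shape)
        dist = np.hypot(ys - fovea_center[0], xs - fovea_center[1])
        return np.where(dist <= fovea_radius, res_high, res_low)

    def multiplexed_profiles(fov_shape, n_frames, fovea_radius, offset=(8, 0)):
        # Temporal multiplexing: even frames use the first profile, odd
        # frames use a second profile whose foveated zone is spatially
        # offset from the first.
        c0 = (fov_shape[0] // 2, fov_shape[1] // 2)
        c1 = (c0[0] + offset[0], c0[1] + offset[1])
        return [spatial_profile(fov_shape, c0 if f % 2 == 0 else c1,
                                fovea_radius)
                for f in range(n_frames)]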
An optical device includes one or more volume phase holographic gratings, each of which includes a photosensitive layer whose optical properties are spatially modulated. The spatial modulation of optical properties is recorded in the photosensitive layer by generating an optical interference pattern using a beam of light and one or more liquid crystal master gratings. The volume phase holograms may be configured to redirect light of visible or infrared wavelengths propagating in free space or through a waveguide. Advantageously, fabricating the volume phase holographic gratings using a liquid crystal master grating allows independent control of the optical function and the selectivity of the volume phase holographic grating during the fabrication process.
G03H 1/04 - Processes or apparatus for producing holograms
G03H 1/00 - Holographic processes or apparatus using light, infrared or ultraviolet waves for obtaining holograms or for obtaining an image from them; Details peculiar thereto
G02B 27/00 - Optical systems or apparatus not provided for by any of the groups ,
22.
A multiplicity of polymer layers is cut with a laser to yield a multilayer optical component having a first surface, a second surface opposite the first surface, and a blackened edge along a perimeter of the multilayer optical component. The multiplicity of polymer layers is sealed along the blackened edge. The resulting multilayer optical component includes a multiplicity of polymer layers and a blackened edge seal around the multiplicity of polymer layers. The blackened edge seal includes polymer melt from the multiplicity of polymer layers.
B23K 26/324 - Bonding taking account of the properties of the material involved involving non-metallic parts
B23K 26/354 - Working by laser beam, e.g. welding, cutting or boring, for surface treatment by melting
B23K 26/38 - Removing material by boring or cutting
B23K 26/402 - Removing material taking account of the properties of the material to be removed involving non-metallic material, e.g. isolators
B23K 103/00 - Materials to be soldered, welded or cut
25.
THIN ILLUMINATION LAYER WAVEGUIDE AND METHODS OF FABRICATION
Disclosed herein are systems and methods for displays, such as for a head wearable device. An example display can include an infrared illumination layer, the infrared illumination layer including a waveguide having a first face and a second face, the first face disposed opposite the second face. The illumination layer may also include an in-coupling grating disposed on the first face, the in-coupling grating configured to couple light into the waveguide to generate internally reflected light propagating in a first direction. The illumination layer may also include a plurality of out-coupling gratings disposed on at least one of the first face and the second face, the plurality of out-coupling gratings configured to receive the internally reflected light and couple the internally reflected light out of the waveguide.
G02B 6/00 - OPTICS - OPTICAL ELEMENTS, SYSTEMS OR APPARATUS - Details of structure of arrangements comprising light guides and other optical elements, e.g. couplings
G02F 1/01 - Devices or arrangements for the control of the intensity, colour, phase, polarisation or direction of light arriving from an independent light source, e.g. switching, gating or modulating; Non-linear optics for the control of the intensity, phase, polarisation or colour
26.
IMPRINT LITHOGRAPHY PROCESS AND METHODS ON CURVED SURFACES
Methods for creating a pattern on a curved surface and an optical structure (e.g., curved waveguide, a lens having an antireflective feature, an optical structure of a wearable head device) are disclosed. In some embodiments, the method comprises: depositing a patterning material on a curved surface; positioning a superstrate over the patterning material, the superstrate comprising a template for creating the pattern; applying, using the patterning material, a force between the curved surface and the superstrate; curing the patterning material, wherein the cured patterning material comprises the pattern; and removing the superstrate. In some embodiments, the method comprises forming the optical structure using the pattern.
B81C 1/00 - Manufacture or treatment of devices or systems in or on a substrate
B29C 37/00 - SHAPING OR JOINING OF PLASTICS; SHAPING OF SUBSTANCES IN A PLASTIC STATE, NOT OTHERWISE PROVIDED FOR; AFTER-TREATMENT OF THE SHAPED PRODUCTS, e.g. REPAIRING - Component parts, details, accessories or auxiliary operations not covered by group or
Disclosed herein are systems and methods for fabricating nano-structures on a substrate that can be used in eyepieces for displays, e.g., in head wearable devices. Fabricating and/or etching such a substrate can include submerging the substrate in a first bath and applying ultrasonication to the first bath for a first time period. The ultrasonication applied to the first bath can agitate the fluid to provide a substantially uniform first reactive environment across the surface of the substrate. The substrate can be submerged in a second bath and ultrasonication can be applied to the second bath for a second time period. The ultrasonication applied to the second bath can agitate the fluid to provide a substantially uniform second reactive environment across the surface of the substrate. A predetermined amount of material can be removed from the surface of the substrate during the second time period to produce an etched substrate.
Structures for forming an optical feature and methods for forming the optical feature are disclosed. In some embodiments, the structure comprises a patterned layer comprising a pattern corresponding to the optical feature; a base layer; and an intermediate layer bonded to the patterned layer and the base layer.
G02B 1/00 - OPTICS - OPTICAL ELEMENTS, SYSTEMS OR APPARATUS - Optical elements characterised by the material of which they are made; Optical coatings for optical elements
G02B 6/00 - OPTICS - OPTICAL ELEMENTS, SYSTEMS OR APPARATUS - Details of structure of arrangements comprising light guides and other optical elements, e.g. couplings
29.
NANOPATTERN ENCAPSULATION FUNCTION, METHOD AND PROCESS IN COMBINED OPTICAL COMPONENTS
Disclosed herein are systems and methods for displays, such as for a head wearable device. An example display can include an infrared illumination layer, the infrared illumination layer including a substrate, one or more LEDs disposed on a first surface of the substrate, and a first encapsulation layer disposed on the first surface of the substrate, where the encapsulation layer can include a nano-patterned surface. In some examples, the nano-patterned surface can be configured to improve a visible light transmittance of the illumination layer. In one or more examples, embodiments disclosed herein may provide a robust illumination layer that can reduce the haze associated with an illumination layer.
G02B 6/124 - Geodesic lenses or integrated gratings
G02B 15/04 - Optical objectives with means for varying the magnification by changing, adding or subtracting a part of the objective, e.g. convertible objective
G06F 3/14 - Digital output to display device
30.
COVER ARCHITECTURES IN CURVED EYEPIECE STACKS FOR MIXED REALITY APPLICATIONS
Eyepieces and methods of fabricating the eyepieces are disclosed. In some embodiments, the eyepiece comprises a curved cover layer and a waveguide layer for propagating light. In some embodiments, the curved cover layer comprises an antireflective feature.
This disclosure describes in-plane switching mode liquid crystal geometric phase tunable lenses that can be integrated into an eyepiece of an optical device for the correction of non-emmetropic vision, such as in an augmented reality display system. The eyepiece can include an integrated, field-configurable optic arranged with respect to a waveguide used to project digital imagery to the user, the optic being capable of providing a tunable Rx for the user including variable spherical refractive power (SPH), cylinder refractive power, and cylinder axis values. In certain configurations, each tunable eyepiece includes two variable compound lenses: one on the user side of the waveguide with variable SPH, cylinder power, and axis values; and a second on the world side of the waveguide with variable SPH.
G02F 1/133 - Constructional arrangements; Operation of liquid crystal cells; Circuit arrangements
G02F 1/137 - Devices or arrangements for the control of the intensity, colour, phase, polarisation or direction of light arriving from an independent light source, e.g. switching, gating or modulating; Non-linear optics for the control of the intensity, phase, polarisation or colour based on liquid crystals, e.g. single liquid crystal display cells, characterised by the electro-optical or magneto-optical effect, e.g. field-induced phase transition, orientation effect, guest-host interaction or dynamic scattering
32.
METHOD AND APPARATUS FOR IMPROVED SPEAKER IDENTIFICATION AND SPEECH ENHANCEMENT
A headwear device comprises a frame structure configured for being worn on the head of a user, a vibration voice pickup (VVPU) sensor affixed to the frame structure for capturing vibration originating from a voiced sound of the user and generating a vibration signal, at least one microphone affixed to the frame structure for capturing voiced sound from the user and ambient noise, and at least one processor configured for performing an analysis of the vibration signal and determining that the user has generated the voiced sound based on the analysis of the vibration signal.
An eyewear device for being worn on a head of a user comprises an optics system and a frame front operatively coupled to the optics system for presenting virtual content to a user wearing the eyewear device. The eyewear device further comprises left and right opposing temple arms affixed to the frame front, and a torsion band assembly having opposing ends that connect the left and right opposing temple arms together. The eyewear device further comprises at least a first floating boss that protrudes partially into one of the left and right opposing temple arms, such that the first floating boss moves within that temple arm along one or more axes in a constrained manner.
Embodiments of this disclosure provide systems and methods for displays. In embodiments, a display system includes a frame, an eyepiece coupled to the frame, and a first adhesive bond disposed between the frame and the eyepiece. The eyepiece can include a light input region and a light output region. The first adhesive bond can be disposed along a first portion of a perimeter of the eyepiece, where the first portion of the perimeter of the eyepiece borders the light input region such that the first adhesive bond is configured to maintain a position of the light input region relative to the frame.
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
G02B 6/00 - OPTICS - OPTICAL ELEMENTS, SYSTEMS OR APPARATUS - Details of structure of arrangements comprising light guides and other optical elements, e.g. couplings
G02B 27/00 - Optical systems or apparatus not provided for by any of the groups ,
Disclosed herein are systems and methods for mapping environment information. In some embodiments, the systems and methods are configured for mapping information in a mixed reality environment. In some embodiments, the system is configured to perform a method including: scanning an environment, including capturing, with a sensor, a plurality of points of the environment; tracking a plane of the environment; updating observations associated with the environment by inserting a keyframe into the observations; determining whether the plane is coplanar with a second plane of the environment; in accordance with a determination that the plane is coplanar with the second plane, performing planar bundle adjustment on the observations associated with the environment; and in accordance with a determination that the plane is not coplanar with the second plane, performing planar bundle adjustment on a portion of the observations associated with the environment.
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
G01C 21/20 - Instruments for performing navigational calculations
G01S 7/48 - DETERMINING DIRECTION BY RADIO; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES - Details of systems according to groups , , of systems according to group
G01S 17/42 - Simultaneous measurement of distance and other co-ordinates
G01S 17/89 - Lidar systems, specially adapted for specific applications, for mapping or imaging
This document describes scene understanding for cross reality systems using occupancy grids. In one aspect, a method includes recognizing one or more objects in a model of a physical environment generated using images of the physical environment. For each object, a bounding box is fit around the object. An occupancy grid that includes multiple cells is generated within the bounding box around the object. A value is assigned to each cell of the occupancy grid based on whether the cell includes a portion of the object. An object representation that includes information describing the occupancy grid for the object is generated. The object representations are sent to one or more devices.
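A minimal sketch of the occupancy-grid construction just described, assuming the object is represented by 3-D points inside an axis-aligned bounding box; the cell count and point-based occupancy test are hypothetical:

    import numpy as np

    def occupancy_grid(object_points, bbox_min, bbox_max, cells=(8, 8, 8)):
        # Divide the object's bounding box into cells and mark a cell
        # occupied when at least one recognized object point falls inside.
        grid = np.zeros(cells, dtype=np.uint8)
        cell_size = (np.asarray(bbox_max, float) -
                     np.asarray(bbox_min, float)) / np.asarray(cells)
        idx = ((np.asarray(object_points) - bbox_min) / cell_size).astype(int)
        idx = np.clip(idx, 0, np.asarray(cells) - 1)
        grid[idx[:, 0], idx[:, 1], idx[:, 2]] = 1
        return grid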
Some embodiments herein are directed to head-mounted Virtual Retina Display (VRD) systems with actuated reflective pupil steering. The display systems include a projection system for generating image content and an actuated reflective optical architecture, which may be part of an optical combiner, that reflects light from the projection system into the user's eyes. The display systems are configured to track the position of a user's eyes and to actuate the reflective optical architecture to change the direction of reflected light so that the reflected light is directed into the user's eyes. The VRDs described herein may be highly efficient, and may have improved size, weight, and luminance such that they are capable of all-day, everyday use.
Embodiments of this disclosure provide systems and methods for displays. In embodiments, a display system includes a light source configured to emit a first light, a lens configured to receive the first light, and an image generator configured to receive the first light and emit a second light. The display system further includes a plurality of waveguides, where at least two of the plurality of waveguides include an in-coupling grating configured to selectively couple the second light. In some embodiments, the light source can comprise a single pupil light source having a reflector and a micro-LED array disposed in the reflector.
G02B 6/00 - OPTICS - OPTICAL ELEMENTS, SYSTEMS OR APPARATUS - Details of structure of arrangements comprising light guides and other optical elements, e.g. couplings
A head mounted display can include a frame, an eyepiece, an image injection device, a sensor array, a reflector, and an off-axis optical element. The frame can be configured to be supported on the head of the user. The eyepiece can be coupled to the frame and configured to be disposed in front of an eye of the user. The eyepiece can include a plurality of layers. The image injection device can be configured to provide image content to the eyepiece for viewing by the user. The sensor array can be integrated in or on the eyepiece. The reflector can be disposed in or on the eyepiece and configured to reflect light received from an object for imaging by the sensor array. The off-axis optical element can be disposed in or on the eyepiece. The off-axis optical element can be configured to receive light reflected from the reflector and direct at least a portion of the light toward the sensor array.
Techniques for addressing deformations in a virtual or augmented reality headset are described. In some implementations, cameras in a headset can obtain image data at different times as the headset moves through a series of poses. One or more miscalibration conditions for the headset that have occurred as the headset moved through the series of poses can be detected. The series of poses can be divided into groups of poses based on the one or more miscalibration conditions, and bundle adjustment for the groups of poses can be performed using a separate set of camera calibration data for each group. The bundle adjustment for the poses in each group is performed using a same set of calibration data for the group. The camera calibration data for each group is estimated jointly with bundle adjustment estimation for the poses in the group.
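For illustration, a sketch of the grouping step under the assumption that miscalibration events are detected at known indices in the pose sequence; the event representation and the bundle-adjustment call in the comment are hypothetical:

    def group_poses_by_miscalibration(poses, event_indices):
        # Split the ordered pose sequence at each detected miscalibration
        # event; each group is then bundle-adjusted with its own set of
        # camera calibration data.
        groups, start = [], 0
        for idx in sorted(event_indices):
            groups.append(poses[start:idx])
            start = idx
        groups.append(poses[start:])
        return [g for g in groups if g]

    # Hypothetical per-group usage: jointly estimate poses and a separate
    # calibration variable for each group.
    # for group in group_poses_by_miscalibration(poses, events):
    #     bundle_adjust(group, calibration=new_calibration_estimate())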
In some embodiments, a near-eye display system comprises a stack of waveguides having pillars in a central, active portion of the waveguides. The active portion may include light outcoupling optical elements configured to outcouple image light from the waveguides towards the eye of a viewer. The pillars extend between and separate neighboring ones of the waveguides. The light outcoupling optical elements may include diffractive optical elements that are formed simultaneously with the pillars, for example, by imprinting or casting. The pillars are disposed on one or more major surfaces of each of the waveguides. The pillars may define a distance between two adjacent waveguides of the stack of waveguides. The pillars may be bonded to adjacent waveguides using one or more of the systems, methods, or devices described herein. The bonding provides a high level of thermal stability to the waveguide stack, to resist deformation as temperatures change.
C09J 5/02 - Adhesive processes in general; Adhesive processes not provided for elsewhere, e.g. relating to primers, involving pretreatment of the surfaces to be joined
Methods, systems, and apparatus for performing bundle adjustment using epipolar constraints. A method includes receiving image data from a headset for a particular pose. The image data includes a first image from a first camera of the headset and a second image from a second camera of the headset. The method includes identifying at least one key point in a three-dimensional model of an environment at least partly represented in the first image and the second image and performing bundle adjustment. Bundle adjustment is performed by jointly optimizing a reprojection error for the at least one key point and an epipolar error for the at least one key point. Results of the bundle adjustment are used to perform at least one of (i) updating the three-dimensional model, (ii) determining a position of the headset at the particular pose, or (iii) determining extrinsic parameters of the first camera and second camera.
G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from the processing unit to the output unit, e.g. interface arrangements
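A sketch of the joint objective named in the abstract above, assuming an essential matrix E between the two cameras, homogeneous image points, and a caller-supplied projection callback; the weights and all names are hypothetical:

    import numpy as np

    def epipolar_error(x1, x2, E):
        # Algebraic epipolar residual x2^T E x1 for a key point observed
        # as homogeneous image points x1, x2 in the two cameras.
        return float(x2 @ E @ x1)

    def joint_cost(points_3d, observations, project_fn, E,
                   w_reproj=1.0, w_epi=1.0):
        # Joint objective: squared reprojection errors in both images plus
        # a squared epipolar term, summed over key points.
        cost = 0.0
        for X, (uv1, uv2, x1, x2) in zip(points_3d, observations):
            r1 = project_fn(X, cam=0) - uv1
            r2 = project_fn(X, cam=1) - uv2
            cost += w_reproj * (r1 @ r1 + r2 @ r2)
            cost += w_epi * epipolar_error(x1, x2, E) ** 2
        return cost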
43.
MISCALIBRATION DETECTION FOR VIRTUAL REALITY AND AUGMENTED REALITY SYSTEMS
Methods, systems, and apparatus, including computer programs encoded on computer storage media, for performing miscalibration detection. One of the methods includes receiving sensor data from each of multiple sensors of a device in a system configured to provide augmented reality or mixed reality output to a user. Feature values are determined based on the sensor data for a predetermined set of features. The determined feature values are processed using a miscalibration detection model that has been trained, based on examples of captured sensor data from one or more devices, to predict whether a miscalibration condition of one or more of the multiple sensors has occurred. Based on the output of the miscalibration detection model, the system determines whether to initiate recalibration of extrinsic parameters for at least one of the multiple sensors or to bypass recalibration.
G06F 3/04815 - Interaction taking place within an environment based on metaphors or objects with a three-dimensional display, e.g. changing the user's viewpoint with respect to the environment or object
A computer-implemented method includes receiving gaze information about an observer of a video stream; determining a video compression spatial map for the video stream based on the received gaze information and performance characteristics of a network connection with the observer; compressing the video stream according to the video compression spatial map; and sending the compressed video stream to the observer.
H04B 1/66 - TRANSMISSION - Details of transmission systems not characterised by the medium used for transmission, for improving the efficiency of transmission
H04N 7/12 - Systems in which the television signal is transmitted via one channel or a plurality of parallel channels, the bandwidth of each channel being less than the bandwidth of the television signal
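A minimal sketch of a gaze-driven compression spatial map as described in the abstract above; the exponential falloff, the full-quality bandwidth, and the quality bounds are all hypothetical choices, not the patented mapping:

    import numpy as np

    def compression_quality_map(frame_shape, gaze_xy, bandwidth_mbps,
                                full_rate_mbps=50.0, q_best=1.0, q_worst=0.1):
        # Quality is highest at the gaze point and falls off with distance;
        # the whole map is scaled down as the measured connection
        # bandwidth drops below the assumed full-quality rate.
        ys, xs = np.indices(frame_shape)
        dist = np.hypot(xs - gaze_xy[0], ys - gaze_xy[1])
        falloff = np.exp(-dist / (0.25 * max(frame_shape)))
        budget = np.clip(bandwidth_mbps / full_rate_mbps, 0.1, 1.0)
        return np.clip(q_worst + (q_best - q_worst) * falloff * budget,
                       q_worst, q_best)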
45.
TRANSMODAL INPUT FUSION FOR MULTI-USER GROUP INTENT PROCESSING IN VIRTUAL ENVIRONMENTS
This document describes imaging and visualization systems in which the intent of a group of users in a shared space is determined and acted upon. In one aspect, a method includes identifying, for a group of users in a shared virtual space, a respective objective for each of two or more of the users in the group of users. For each of the two or more users, a determination is made, based on inputs from multiple sensors having different input modalities, of a respective intent of the user. At least a portion of the multiple sensors are sensors of a device of the user that enables the user to participate in the shared virtual space. A determination is made, based on the respective intent, whether the user is performing the respective objective for the user. Output data is generated and provided based on the respective objectives and respective intents.
G06F 3/0481 - Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
G06F 3/0484 - Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or transforming a displayed object, image or text element, setting a parameter value or selecting a range of values
G06T 13/40 - Three-dimensional [3D] animation of characters, e.g. humans, animals or virtual beings
G06T 19/00 - Manipulating 3D models or images for computer graphics
46.
BRAGG GRATINGS FOR AN AUGMENTED REALITY DISPLAY SYSTEM
A head-mounted display system can include a head-mountable frame, a light projection system configured to output light to provide image content to a user's eye, and a waveguide supported by the frame. The waveguide can be configured to guide at least a portion of the light from the light projection system coupled into the waveguide to present the image content to the user's eye. The system can include a grating that includes a first reflective diffractive optical element and a second reflective diffractive optical element. The combination of the first and second reflective diffractive optical elements can operate as a transmissive diffractive optical element. The first reflective diffractive optical element can be a volume phase holographic grating. The second reflective diffractive optical element can be a liquid crystal polarization grating.
G02B 5/30 - OPTICS - OPTICAL ELEMENTS, SYSTEMS OR APPARATUS - Optical elements other than lenses - Polarising elements
G02B 26/08 - Optical devices or arrangements for the control of light using movable or deformable optical elements for controlling the direction of light
G02B 30/00 - Optical systems or apparatus for producing three-dimensional [3D] effects, e.g. stereoscopic effects
G02F 1/133 - Constructional arrangements; Operation of liquid crystal cells; Circuit arrangements
G02F 1/295 - Devices or arrangements for the control of the intensity, colour, phase, polarisation or direction of light arriving from an independent light source, e.g. switching, gating or modulating; Non-linear optics for the control of the position or the direction of light beams, i.e. deflection, in an optical waveguide structure
Systems and methods for presenting audio using an audio system supporting multiple modes of operation are disclosed. In some embodiments, elements of the audio system are configured to operate in the different modes. For example, the audio system is configured to operate in a first mode and a second mode. The audio system may operate in the first mode or the second mode based on an application running on a system or a signal generated by the system.
A voice user interface (VUI) and methods for operating the VUI are disclosed. In some embodiments, the VUI is configured to receive and process linguistic and non-linguistic inputs. For example, the VUI receives an audio signal, and the VUI determines whether the audio signal comprises a linguistic and/or a non-linguistic input. In accordance with a determination that the audio signal comprises a non-linguistic input, the VUI causes a system to perform an action associated with the non-linguistic input.
Methods, systems, and apparatus, including computer programs encoded on computer storage media, for calibrating an augmented reality device using camera and inertial measurement unit data. In some implementations, a bundle adjustment process jointly optimizes or estimates states of the augmented reality device. The process can use, as input, visual and inertial measurements as well as factory-calibrated sensor extrinsic parameters. The process performs bundle adjustment and uses non-linear optimization of estimated states constrained by the measurements and the factory calibrated extrinsic parameters. The process can jointly optimize inertial constraints, IMU calibration, and camera calibrations. Output of the process can include most likely estimated states, such as data for a 3D map of an environment, a trajectory of the device, and/or updated extrinsic parameters of the visual and inertial sensors (e.g., cameras and IMUs).
An eyepiece waveguide for an augmented reality display system is disclosed. The eyepiece waveguide can include an optically transmissive substrate with an input coupling grating (ICG) region. The ICG region can receive a beam of light and couple the beam into the substrate in a guided propagation mode. The eyepiece waveguide can also include a combined pupil expander-extractor (CPE) grating region that receives the beam of light from the ICG region and alters the propagation direction of the beam with a first interaction and out-couples the beam with a second interaction. The diffractive features of the CPE grating region can be arranged in rows and columns of alternating higher and lower quadrilateral surfaces, or the diffractive features can comprise diamond-shaped raised ridges. The eyepiece waveguide can also include one or more recycler grating regions.
Methods, systems, and apparatus, including computer programs encoded on computer storage media, for camera calibration during bundle adjustment. One of the methods includes maintaining a three-dimensional model of an environment and a plurality of image data clusters that each include data generated from images captured by two or more cameras included in a device. The method includes jointly determining, for a three-dimensional point represented by an image data cluster, (i) the newly estimated coordinates for the three-dimensional point for an update to the three-dimensional model or a trajectory of the device, and (ii) the newly estimated calibration data that represents the spatial relationship between the two or more cameras.
Systems include three optical elements arranged along an optical axis each having a different cylinder axis and a variable cylinder refractive power. Collectively, the three elements form a compound optical element having an overall spherical refractive power (SPH), cylinder refractive power (CYL), and cylinder axis (Axis) that can be varied according to a prescription (Rx).
Systems and methods for managing multi-objective alignments in imprinting (e.g., single-sided or double-sided) are provided. An example system includes rollers for moving a template roll, a stage for holding a substrate, a dispenser for dispensing resist on the substrate, a light source for curing the resist to form an imprint on the substrate when a template of the template roll is pressed into the resist on the substrate, a first inspection system for registering a fiducial mark of the template to determine a template offset, a second inspection system for registering the imprint on the substrate to determine a wafer registration offset between a target location and an actual location of the imprint, and a controller for moving the substrate with the resist below the template based on the template offset and determining an overlay bias of the imprint on the substrate based on the wafer registration offset.
Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for training an object recognition neural network using multiple data sources. One of the methods includes receiving training data that includes a plurality of training images from a first source and images from a second source. A set of training images are obtained from the training data. For each training image in the set of training images, contrast equalization is applied to the training image to generate a modified image. The modified image is processed using the neural network to generate an object recognition output for the modified image. A loss is determined based on errors between, for each training image in the set, the object recognition output for the modified image generated from the training image and ground-truth annotation for the training image. Parameters of the neural network are updated based on the determined loss.
G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
G06K 9/32 - Aligning or centring of the image sensor or the image area
G06T 19/00 - Manipulating 3D models or images for computer graphics
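For illustration, a sketch of one plausible form of the contrast equalization applied before training, namely histogram equalization of 8-bit grayscale images; the training-step comment is hypothetical pseudostructure, not the patented pipeline:

    import numpy as np

    def contrast_equalize(image):
        # Histogram equalization of an 8-bit grayscale image: remap pixel
        # values through the normalized cumulative histogram.
        hist = np.bincount(image.ravel(), minlength=256)
        cdf = hist.cumsum().astype(float)
        cdf = (cdf - cdf.min()) / max(cdf.max() - cdf.min(), 1.0)
        lut = (255.0 * cdf).astype(np.uint8)
        return lut[image]

    # Hypothetical training step over images from either source:
    # for image, annotation in training_batch:
    #     output = network(contrast_equalize(image))
    #     loss = loss + error(output, annotation)
    # ...then update the network parameters from the loss.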
55.
CAMERA EXTRINSIC CALIBRATION VIA RAY INTERSECTIONS
Embodiments provide image display systems and methods for extrinsic calibration of one or more cameras. More specifically, embodiments are directed to a camera extrinsic calibration approach based on determining intersections of rays projecting from the optical centers of a camera and a reference camera. Embodiments determine the relative position and orientation of one or more cameras, given one or more images of the same target object from each camera, by projecting measured image points into 3D rays in the real world. The extrinsic parameters are found by minimizing the difference between the expected 3D intersections of those rays and the known 3D target points.
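A sketch of the ray-intersection computation this approach relies on: the least-squares point closest to a set of rays, each defined by an origin (a camera's optical center) and a direction through a measured image point. The function name is hypothetical; the math is the standard closed form:

    import numpy as np

    def nearest_point_to_rays(origins, directions):
        # Least-squares 3-D point minimizing the distance to all rays,
        # i.e. the expected intersection that would be compared against
        # the known 3-D target points.
        A = np.zeros((3, 3))
        b = np.zeros(3)
        for o, d in zip(origins, directions):
            d = np.asarray(d, float)
            d = d / np.linalg.norm(d)
            P = np.eye(3) - np.outer(d, d)  # projector orthogonal to the ray
            A += P
            b += P @ np.asarray(o, float)
        return np.linalg.solve(A, b)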
An eye tracking system can include a first camera configured to capture a first plurality of visual data of a right eye at a first sampling rate. The system can include a second camera configured to capture a second plurality of visual data of a left eye at a second sampling rate. The second plurality of visual data can be captured during different sampling times than the first plurality of visual data. Based on at least some visual data of the first and second pluralities of visual data, the system can estimate visual data of at least one of the right or left eye at a sampling time during which visual data of that eye are not being captured. Eye movements of the eye can then be determined based on at least some of the estimated visual data and at least some visual data of the first or second plurality of visual data.
A61B 3/113 - Apparatus for testing the eyes; Instruments for examining the eyes of objective type, i.e. instruments for examining the eyes independently of the patient's perceptions or reactions, for determining or recording eye movement
A61B 3/14 - Arrangements specially adapted for eye photography
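A minimal sketch of estimating one eye's visual data at a sampling time when only the other eye was captured, assuming interleaved sampling ticks and simple linear interpolation; a real system could use richer models and both eyes' data:

    import numpy as np

    def estimate_between_samples(t, t_prev, x_prev, t_next, x_next):
        # Linear interpolation of one eye's visual data at a time t,
        # bracketed by that eye's neighboring samples at t_prev and t_next.
        w = (t - t_prev) / float(t_next - t_prev)
        return (1.0 - w) * np.asarray(x_prev) + w * np.asarray(x_next)

    # With the right eye sampled at even ticks and the left eye at odd
    # ticks, the left eye's state at an even tick t can be estimated from
    # its samples at t - 1 and t + 1.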
A display system can include a head-mounted display configured to project light to an eye of a user to display virtual image content at different amounts of divergence and collimation. The display system can include an inward-facing imaging system, possibly comprising a plurality of cameras, that images the user's eye and glints thereon, and processing electronics that are in communication with the inward-facing imaging system and that are configured to obtain an estimate of a center of the cornea of the user's eye using data derived from the glint images. The display system may use spherical and aspheric cornea models to estimate a location of the corneal center of the user's eye.
A61B 3/107 - Apparatus for testing the eyes; Instruments for examining the eyes of objective type, i.e. instruments for examining the eyes independently of the patient's perceptions or reactions, for determining the shape or measuring the curvature of the cornea
A61B 5/00 - Measuring for diagnostic purposes; Identification of persons
A61B 5/1171 - Identification of persons based on the shape or appearance of their bodies or parts thereof
A method for measuring performance of a head-mounted display module, the method including arranging the head-mounted display module relative to a plenoptic camera assembly so that an exit pupil of the head-mounted display module coincides with a pupil of the plenoptic camera assembly; emitting light from the head-mounted display module while the head-mounted display module is arranged relative to the plenoptic camera assembly; filtering the light at the exit pupil of the head-mounted display module; acquiring, with the plenoptic camera assembly, one or more light field images projected from the head-mounted display module with the filtered light; and determining information about the performance of the head-mounted display module based on the acquired light field images.
A method, includes providing a wafer including a first surface grating extending over a first area of a surface of the wafer and a second surface grating extending over a second area of the surface of the wafer; de-functionalizing a portion of the surface grating in at least one of the first surface grating area and the second surface grating area; and singulating an eyepiece from the wafer, the eyepiece including a portion of the first surface grating area and a portion of the second surface grating area. The first surface grating in the eyepiece corresponds to an input coupling grating for a head-mounted display and the second surface grating corresponds to a pupil expander grating for the head-mounted display.
H01L 21/66 - Test ou mesure durant la fabrication ou le traitement
H01L 21/67 - Appareils spécialement adaptés pour la manipulation des dispositifs à semi-conducteurs ou des dispositifs électriques à l'état solide pendant leur fabrication ou leur traitement; Appareils spécialement adaptés pour la manipulation des plaquettes pendant la fabrication ou le traitement des dispositifs à semi-conducteurs ou des dispositifs électriques à l'état solide ou de leurs composants
H01L 21/68 - Appareils spécialement adaptés pour la manipulation des dispositifs à semi-conducteurs ou des dispositifs électriques à l'état solide pendant leur fabrication ou leur traitement; Appareils spécialement adaptés pour la manipulation des plaquettes pendant la fabrication ou le traitement des dispositifs à semi-conducteurs ou des dispositifs électriques à l'état solide ou de leurs composants pour le positionnement, l'orientation ou l'alignement
A method for displaying an image using a wearable display system including directing display light from a display towards a user through an eyepiece to project images in the user's field of view, determining a relative location between an ambient light source and the eyepiece, and adjusting an attenuation of ambient light from the ambient light source through the eyepiece depending on the relative location between the ambient light source and the eyepiece.
Disclosed are techniques for improving the color uniformity of a display of a display device. A plurality of images of the display are captured using an image capture device. The plurality of images are captured in a color space, with each image corresponding to one of a plurality of color channels. A global white balance is performed on the plurality of images to obtain a plurality of normalized images. A local white balance is performed on the plurality of normalized images to obtain a plurality of correction matrices. Performing the local white balance includes defining a set of weighting factors based on a figure of merit and computing a plurality of weighted images based on the plurality of normalized images and the set of weighting factors. The plurality of correction matrices are computed based on the plurality of weighted images.
G09G 5/00 - Dispositions ou circuits de commande de l'affichage communs à l'affichage utilisant des tubes à rayons cathodiques et à l'affichage utilisant d'autres moyens de visualisation
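As a rough illustration of the two-stage correction above, the sketch below takes the per-pixel weighting factors as given and uses Gaussian smoothing for the local stage; the actual figure of merit, weighting scheme, and correction form are not specified in the abstract:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def global_white_balance(channels):
    """channels: dict color -> 2D image of measured display intensity.
    Scale each channel so its mean matches the overall mean (global WB)."""
    target = np.mean([img.mean() for img in channels.values()])
    return {c: img * (target / img.mean()) for c, img in channels.items()}

def local_correction(normalized, weights):
    """weights: per-pixel weighting factors from a figure of merit
    (assumed given). Returns per-channel correction matrices that,
    multiplied into the drive values, flatten local nonuniformity."""
    out = {}
    for c, img in normalized.items():
        weighted = gaussian_filter(img * weights, sigma=8)  # weighted image
        norm = gaussian_filter(weights, sigma=8).clip(min=1e-6)
        local_target = weighted / norm        # smooth, weighted reference
        out[c] = local_target / img.clip(min=1e-6)
    return out

rng = np.random.default_rng(0)
imgs = {c: 0.5 + 0.1 * rng.random((32, 32)) for c in "RGB"}
corr = local_correction(global_white_balance(imgs), np.ones((32, 32)))
```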
62.
OBJECT RECOGNITION NEURAL NETWORK FOR AMODAL CENTER PREDICTION
Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for predicting the amodal center of an object with an object recognition neural network. One of the methods includes receiving an image of an object captured by a camera. The image of the object is processed using an object recognition neural network that is configured to generate an object recognition output. The object recognition output includes data defining a predicted two-dimensional amodal center of the object, wherein the predicted two-dimensional amodal center of the object is a projection of a predicted three-dimensional center of the object under the pose of the camera that captured the image.
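The two-dimensional amodal center is the pinhole projection of the predicted 3D center under the camera pose, which may land on an occluded or truncated part of the object. A minimal sketch (the intrinsics and pose values are illustrative):

```python
import numpy as np

def project_amodal_center(center_3d, R, t, K):
    """Project a predicted 3D object center into the image under the
    camera pose (R, t) and intrinsics K, giving the 2D amodal center."""
    p_cam = R @ center_3d + t          # world -> camera coordinates
    uvw = K @ p_cam                    # pinhole projection
    return uvw[:2] / uvw[2]

K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])
R, t = np.eye(3), np.zeros(3)
print(project_amodal_center(np.array([0.2, -0.1, 2.0]), R, t, K))
```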
Embodiments provide image display systems and methods for calibration of one or more cameras using a two-sided diffractive optical element (DOE). More specifically, embodiments are directed to determining intrinsic parameters of one or more cameras using a single image obtained using a two-sided DOE. The two-sided DOE has a first pattern on a first surface and a second pattern on a second surface. Each of the first and second patterns may be formed by repeating sub-patterns that are aligned when tiled on each surface. The patterns on the two-sided DOE are formed such that the brightness of the central intensity peak on the image of the pattern formed by the DOE is reduced to a predetermined amount.
Techniques for tracking eye movement in an augmented reality system identify a plurality of base images of an object or a portion thereof. A search image may be generated based at least in part upon at least some of the plurality of base images. A deep learning result may be generated at least by performing a deep learning process on a base image using a neural network in a deep learning mode. A captured image may be localized at least by performing an image registration process on the captured image and the search image using a Kalman filter model and the deep learning result.
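A standard Kalman measurement update, shown here fusing a network-predicted 2D position into a constant-velocity track; the abstract's actual filter model and state layout are not given, so this is only a sketch with assumed noise values:

```python
import numpy as np

def kalman_update(x, P, z, H, R):
    """Standard Kalman measurement update: fuse a deep-learning
    localization result z (e.g., a 2D registration offset) into the
    tracked state x with covariance P."""
    S = H @ P @ H.T + R                     # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)          # Kalman gain
    x = x + K @ (z - H @ x)
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

# state: [u, v, du, dv]; measurement: the network's predicted (u, v)
x = np.array([100.0, 80.0, 0.0, 0.0])
P = np.eye(4)
H = np.array([[1, 0, 0, 0], [0, 1, 0, 0]], dtype=float)
R = 4.0 * np.eye(2)                         # measurement noise (assumed)
x, P = kalman_update(x, P, z=np.array([102.0, 79.0]), H=H, R=R)
print(x)
```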
Enhanced eye-tracking techniques for augmented or virtual reality display systems. An example method includes obtaining an image of an eye of a user of a wearable system, the image depicting glints on the eye caused by respective light emitters, wherein the image is a low dynamic range (LDR) image; generating a high dynamic range (HDR) image via computation of a forward pass of a machine learning model using the image; determining location information associated with the glints as depicted in the HDR image, wherein the location information is usable to inform an eye pose of the eye.
G06F 3/00 - Dispositions d'entrée pour le transfert de données destinées à être traitées sous une forme maniable par le calculateur; Dispositions de sortie pour le transfert de données de l'unité de traitement à l'unité de sortie, p.ex. dispositions d'interface
G06F 3/01 - Dispositions d'entrée ou dispositions d'entrée et de sortie combinées pour l'interaction entre l'utilisateur et le calculateur
G06F 3/048 - Techniques d’interaction fondées sur les interfaces utilisateur graphiques [GUI]
G06F 3/0481 - Techniques d’interaction fondées sur les interfaces utilisateur graphiques [GUI] fondées sur des propriétés spécifiques de l’objet d’interaction affiché ou sur un environnement basé sur les métaphores, p.ex. interaction avec des éléments du bureau telles les fenêtres ou les icônes, ou avec l’aide d’un curseur changeant de comportement
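For the LDR-to-HDR glint pipeline above, a sketch of the inference path, with a placeholder callable standing in for the trained model and a simple peak picker for glint localization (both are assumptions, not the source's method):

```python
import numpy as np

def to_hdr(ldr, model):
    """One forward pass of a learned LDR->HDR mapping. `model` is any
    callable image -> image; a trained network would be used in practice."""
    return model(ldr)

def glint_locations(hdr, n_glints, radius=5):
    """Pick the n brightest, mutually separated peaks as glint centers."""
    img = np.asarray(hdr, dtype=float).copy()
    pts = []
    for _ in range(n_glints):
        y, x = np.unravel_index(np.argmax(img), img.shape)
        pts.append((x, y))
        # suppress this peak's neighborhood so the next pick is distinct
        img[max(0, y - radius):y + radius + 1,
            max(0, x - radius):x + radius + 1] = -np.inf
    return pts

rng = np.random.default_rng(1)
ldr = rng.random((48, 64)).astype(np.float32)
hdr = to_hdr(ldr, model=lambda x: x ** 2.2)   # stand-in "network"
print(glint_locations(hdr, n_glints=4))
```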
Wearable and optical display systems and methods for operation thereof incorporating monovision display techniques are disclosed. A wearable device may include left and right optical stacks configured to switch between displaying virtual content at a first focal plane or a second focal plane. The wearable device may determine whether or not an activation condition is satisfied. In response to determining that the activation condition is satisfied, a monovision display mode associated with the wearable device may be activated, which may include causing the left optical stack to display the virtual content at the first focal plane and causing the right optical stack to display the virtual content at the second focal plane.
Disclosed herein are systems and methods for presenting an audio signal associated with presentation of a virtual object colliding with a surface. The virtual object and the surface may be associated with a mixed reality environment. Generation of the audio signal may be based on at least one of an audio stream from a microphone and a video stream from a sensor. In some embodiments, the collision between the virtual object and the surface is associated with a footstep on the surface.
Disclosed herein are systems and methods for calculating angular acceleration based on inertial data using two or more inertial measurement units (IMUs). The calculated angular acceleration may be used to estimate a position of a wearable head device comprising the IMUs. Virtual content may be presented based on the position of the wearable head device. In some embodiments, a first IMU and a second IMU share a coincident measurement axis.
G09G 5/00 - Dispositions ou circuits de commande de l'affichage communs à l'affichage utilisant des tubes à rayons cathodiques et à l'affichage utilisant d'autres moyens de visualisation
G06T 1/00 - Traitement de données d'image, d'application générale
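The two-IMU estimate above follows directly from rigid-body kinematics. The sketch below assumes a known lever arm between the accelerometers and a gyroscope-supplied angular rate, and recovers only the component of angular acceleration perpendicular to the lever arm (which is why the measurement-axis geometry, such as a shared coincident axis, matters):

```python
import numpy as np

def angular_acceleration(a1, a2, r, omega):
    """Estimate angular acceleration of a rigid body (e.g., a headset)
    from two accelerometers separated by lever arm r, with angular rate
    omega from a gyroscope.

    Rigid-body kinematics: a2 - a1 = alpha x r + omega x (omega x r).
    Solving gives the component of alpha perpendicular to r; the
    component along r is unobservable from this sensor pair alone."""
    da = np.asarray(a2) - np.asarray(a1)
    c = da - np.cross(omega, np.cross(omega, r))  # remove centripetal term
    return np.cross(r, c) / np.dot(r, r)          # alpha (perp. to r)

a1 = np.array([0.0, 0.0, 9.81])
a2 = np.array([0.0, 0.5, 9.81])
r = np.array([0.1, 0.0, 0.0])                     # 10 cm baseline (assumed)
omega = np.array([0.0, 0.0, 1.0])                 # rad/s from the gyro
print(angular_acceleration(a1, a2, r, omega))     # ~[0, 0, 5] rad/s^2
```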
69.
PIECEWISE PROGRESSIVE AND CONTINUOUS CALIBRATION WITH COHERENT CONTEXT
A piecewise progressive and continuous calibration method with context coherence is utilized to improve the display of virtual content. When a set of frames is rendered to depict a virtual image, the virtual and augmented reality (VAR) system may identify a location of the virtual content in the frames. The system may convolve a test pattern at the location of the virtual content to generate a calibration frame. The calibration frame is inserted within the set of frames in a manner that is imperceptible to the user.
G09G 5/00 - Dispositions ou circuits de commande de l'affichage communs à l'affichage utilisant des tubes à rayons cathodiques et à l'affichage utilisant d'autres moyens de visualisation
70.
AUGMENTED AND VIRTUAL REALITY DISPLAY SYSTEMS WITH CORRELATED IN-COUPLING AND OUT-COUPLING OPTICAL REGIONS
Augmented reality and virtual reality display systems and devices are configured for efficient use of projected light. In some aspects, a display system includes a light projection system and a head-mounted display configured to project light into an eye of a user to display virtual image content. The head-mounted display includes at least one waveguide comprising a plurality of in-coupling regions, each configured to receive, from the light projection system, light corresponding to a portion of the user's field of view and to in-couple the light into the waveguide; and a plurality of out-coupling regions configured to out-couple the light out of the waveguide to display the virtual content, wherein each of the out-coupling regions is configured to receive light from a different one of the in-coupling regions. In some implementations, each in-coupling region has a one-to-one correspondence with a unique corresponding out-coupling region.
An eyepiece waveguide for an augmented reality display system includes a substrate having a first surface and a second surface and a diffractive input coupling element. The diffractive input coupling element is configured to receive an input beam of light and to couple the input beam into the substrate as a guided beam. The eyepiece waveguide also includes a diffractive combined pupil expander-extractor (CPE) element formed on or in the first surface or the second surface of the substrate. The diffractive CPE element includes a first portion and a second portion divided by an axis. A first set of diffractive optical elements is disposed in the first portion and oriented at a positive angle with respect to the axis and a second set of diffractive optical elements is disposed in the second portion and oriented at a negative angle with respect to the axis.
G02B 6/00 - OPTIQUE ÉLÉMENTS, SYSTÈMES OU APPAREILS OPTIQUES - Détails de structure de dispositions comprenant des guides de lumière et d'autres éléments optiques, p.ex. des moyens de couplage
Embodiments transform an image frame based on a position of the viewer's pupils to eliminate visual artefacts on image frames displayed by a scanning-type display device. An MR system obtains a first image frame corresponding to a first view perspective associated with a first pupil position. The system receives data from an eye tracking device, determines a second pupil position, and generates a second image frame corresponding to a second view perspective associated with the second pupil position. A first set of pixels of the second image frame is shifted by a first shift value, and a second set of pixels of the second image frame is shifted by a second shift value, where the shift values are calculated based on at least the second pupil position. The system transmits the second image frame to a near-eye display device to be displayed thereon.
Embodiments shift the color fields of a rendered image frame based on eye tracking data (e.g., the position of the user's pupils). An MR device obtains a first image frame having a set of color fields. The first image frame corresponds to a first position of the pupils of the viewer. The MR device then determines a second position of the pupils of the viewer based on, for example, data received from an eye tracking device coupled to the MR device. The MR device generates, based on the first image frame, a second image frame corresponding to the second position of the pupils. The second image frame is generated by shifting the color fields by a shift value based on the second position of the pupils of the viewer. The MR device transmits the second image frame to a display device of the MR device to be displayed thereon.
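A toy version of the color-field shifting, assuming three sequentially flashed fields whose shifts grow with their position in the flash sequence; the gains and integer-pixel np.roll shifts are illustrative, not values from the source:

```python
import numpy as np

def shift_color_fields(frame_rgb, pupil_delta, gains=(0.0, 1 / 3, 2 / 3)):
    """Shift each sequentially flashed color field by an amount derived
    from the pupil's motion over the frame period. Later fields get
    larger shifts because the pupil has moved further by the time they
    are displayed.

    frame_rgb  : (H, W, 3) frame whose R, G, B fields flash in sequence
    pupil_delta: (dx, dy) pupil motion in display pixels per frame
    """
    out = np.empty_like(frame_rgb)
    for i, g in enumerate(gains):
        dx = int(round(g * pupil_delta[0]))
        dy = int(round(g * pupil_delta[1]))
        out[..., i] = np.roll(frame_rgb[..., i], shift=(dy, dx), axis=(0, 1))
    return out

frame = np.zeros((4, 6, 3)); frame[2, 3, :] = 1.0
print(shift_color_fields(frame, pupil_delta=(3.0, 0.0))[2])
```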
A method for fabricating a cantilever having a device surface, a tapered surface, and an end region includes providing a semiconductor substrate having a first side and a second side opposite to the first side and etching a predetermined portion of the second side to form a plurality of recesses in the second side. Each of the plurality of recesses comprises an etch termination surface. The method also includes anisotropically etching the etch termination surface to form the tapered surface of the cantilever and etching a predetermined portion of the device surface to release the end region of the cantilever.
H01L 21/31 - Traitement des corps semi-conducteurs en utilisant des procédés ou des appareils non couverts par les groupes pour former des couches isolantes en surface, p.ex. pour masquer ou en utilisant des techniques photolithographiques; Post-traitement de ces couches; Emploi de matériaux spécifiés pour ces couches
H01L 21/469 - Traitement de corps semi-conducteurs en utilisant des procédés ou des appareils non couverts par les groupes pour changer les caractéristiques physiques ou la forme de leur surface, p.ex. gravure, polissage, découpage pour y former des couches isolantes, p.ex. pour masquer ou en utilisant des techniques photolithographiques; Post-traitement de ces couches
75.
COMPUTATIONALLY EFFICIENT METHOD FOR COMPUTING A COMPOSITE REPRESENTATION OF A 3D ENVIRONMENT
Methods and apparatus for providing a representation of an environment, for example, in an XR system or in any suitable computer vision or robotics application. A representation of an environment may include one or more planar features. The representation of the environment may be provided by jointly optimizing the plane parameters of the planar features and the sensor poses from which the planar features are observed. The joint optimization may be based on a reduced matrix and a reduced residual vector in lieu of the full Jacobian matrix and the original residual vector.
G06K 9/00 - Méthodes ou dispositions pour la lecture ou la reconnaissance de caractères imprimés ou écrits ou pour la reconnaissance de formes, p.ex. d'empreintes digitales
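A reduced matrix and reduced residual of this kind are conventionally obtained by a Schur complement on the Gauss-Newton normal equations, eliminating the plane-parameter block so only the smaller pose block is solved directly; whether this matches the exact reduction used here is an assumption:

```python
import numpy as np

def reduced_system(J_plane, J_pose, residual, damping=1e-6):
    """Schur-complement reduction of the normal equations for a joint
    plane-parameter / pose optimization. Returns the pose update dx and
    the back-substituted plane update dp."""
    Hpp = J_plane.T @ J_plane + damping * np.eye(J_plane.shape[1])
    Hpx = J_plane.T @ J_pose
    Hxx = J_pose.T @ J_pose
    bp = -J_plane.T @ residual
    bx = -J_pose.T @ residual
    Hpp_inv = np.linalg.inv(Hpp)
    H_red = Hxx - Hpx.T @ Hpp_inv @ Hpx      # reduced matrix
    b_red = bx - Hpx.T @ Hpp_inv @ bp        # reduced residual vector
    dx = np.linalg.solve(H_red + damping * np.eye(H_red.shape[0]), b_red)
    dp = Hpp_inv @ (bp - Hpx @ dx)           # back-substitute plane update
    return dx, dp

rng = np.random.default_rng(0)
dx, dp = reduced_system(rng.standard_normal((40, 8)),
                        rng.standard_normal((40, 6)),
                        rng.standard_normal(40))
```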
A pupil separation system includes an input surface, a central portion including a set of dichroic mirrors, a first reflective surface disposed laterally with respect to the central portion, and a second reflective surface disposed laterally with respect to the central portion. The pupil separation system also includes an exit face including a central surface operable to transmit light in a first wavelength range, a first peripheral surface adjacent the central surface and operable to transmit light in a second wavelength range, and a second peripheral surface adjacent the central surface and opposite to the first peripheral surface. The second peripheral surface is operable to transmit light in a third wavelength range.
G02B 27/14 - Systèmes divisant ou combinant des faisceaux fonctionnant uniquement par réflexion
G02B 6/00 - OPTIQUE ÉLÉMENTS, SYSTÈMES OU APPAREILS OPTIQUES - Détails de structure de dispositions comprenant des guides de lumière et d'autres éléments optiques, p.ex. des moyens de couplage
G02B 27/00 - Systèmes ou appareils optiques non prévus dans aucun des groupes ,
G02B 27/09 - Mise en forme du faisceau, p.ex. changement de la section transversale, non prévue ailleurs
H04N 13/344 - Affichage pour le visionnement à l’aide de lunettes spéciales ou de visiocasques avec des visiocasques portant des affichages gauche et droit
77.
METHOD AND SYSTEM FOR INTEGRATION OF REFRACTIVE OPTICS WITH A DIFFRACTIVE EYEPIECE WAVEGUIDE DISPLAY
An eyepiece waveguide includes a set of waveguide layers having a world side and a user side. The eyepiece waveguide also includes a first cover plate having a first optical power and disposed adjacent the world side of the set of waveguide layers and a second cover plate having a second optical power and disposed adjacent the user side of the set of waveguide layers.
Techniques for calibrating cameras and displays are disclosed. An image of a target is captured using a camera. The target includes a tessellation having a repeated structure of tiles, with unique patterns superimposed onto the tessellation. Matrices are formed based on pixel intensities within the captured image, each matrix including values corresponding to the pixel intensities within one of the tiles. The matrices are convolved with kernels to generate intensity maps, where each kernel is generated based on a corresponding one of the unique patterns. An extrema value is identified in each of the intensity maps. A location of each of the unique patterns within the image is determined based on the extrema value for each of the intensity maps. A device calibration is performed using the location of each of the unique patterns.
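A compact sketch of the pattern-location step above: the per-tile intensity matrix is correlated against one kernel per unique pattern, and the extremum of each response map gives that pattern's tile coordinates (the kernel construction from the patterns is assumed):

```python
import numpy as np
from scipy.signal import correlate2d

def locate_patterns(tile_matrix, kernels):
    """tile_matrix: 2D array of per-tile intensity values sampled from
    the captured image. Each kernel encodes one unique superimposed
    pattern; the extremum of its correlation map is that pattern's
    location in tile coordinates."""
    locations = {}
    for name, k in kernels.items():
        # zero-mean the kernel so uniform tessellation regions score ~0
        resp = correlate2d(tile_matrix, k - k.mean(), mode="valid")
        locations[name] = np.unravel_index(np.argmax(np.abs(resp)),
                                           resp.shape)
    return locations

rng = np.random.default_rng(0)
tiles = rng.random((40, 60))
k = rng.random((5, 5))
tiles[10:15, 20:25] += k              # plant one unique pattern
print(locate_patterns(tiles, {"A": k}))  # expect a hit near (10, 20)
```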
Techniques are disclosed for using and training a descriptor network. An image may be received and provided to the descriptor network. The descriptor network may generate an image descriptor based on the image. The image descriptor may include a set of elements distributed between a major vector comprising a first subset of the set of elements and a minor vector comprising a second subset of the set of elements. The second subset of the set of elements may include more elements than the first subset of the set of elements. A hierarchical normalization may be imposed onto the image descriptor by normalizing the major vector to a major normalization amount and normalizing the minor vector to a minor normalization amount. The minor normalization amount may be less than the major normalization amount.
G06K 9/00 - Méthodes ou dispositions pour la lecture ou la reconnaissance de caractères imprimés ou écrits ou pour la reconnaissance de formes, p.ex. d'empreintes digitales
G06K 9/36 - Prétraitement de l'image, c. à d. traitement de l'information image sans se préoccuper de l'identité de l'image
G06K 9/46 - Extraction d'éléments ou de caractéristiques de l'image
G06K 9/68 - Méthodes ou dispositions pour la reconnaissance utilisant des moyens électroniques utilisant des comparaisons successives des signaux images avec plusieurs références, p.ex. mémoire adressable
H04N 5/445 - Circuits de réception pour visualisation d'information additionnelle
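The hierarchical normalization itself is simple to state; in the sketch below the split point and the two normalization amounts are illustrative, with the minor amount smaller than the major amount as described:

```python
import numpy as np

def hierarchical_normalize(desc, n_major, major_amount=0.7, minor_amount=0.3):
    """Split a descriptor into a short major vector and a longer minor
    vector, then L2-normalize each part to its own budget, with the
    minor budget smaller than the major one."""
    major, minor = desc[:n_major], desc[n_major:]
    major = major_amount * major / np.linalg.norm(major)
    minor = minor_amount * minor / np.linalg.norm(minor)
    return np.concatenate([major, minor])

# minor vector (224 elements) has more elements than major (32), as described
d = hierarchical_normalize(np.random.randn(256), n_major=32)
print(np.linalg.norm(d[:32]), np.linalg.norm(d[32:]))  # 0.7, 0.3
```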
A cross reality system enables portable devices to access stored maps and efficiently and accurately render virtual content specified in relation to those maps. The system may process images acquired with a portable device to quickly and accurately localize the portable device to the persisted maps by constraining the result of localization based on the estimated direction of gravity of a persisted map and the coordinate frame in which data in a localization request is posed. The system may actively align the data in the localization request with an estimated direction of gravity during localization processing. Alternatively or additionally, a portable device may establish a coordinate frame, aligned with an estimated direction of gravity, in which the data in the localization request is posed, such that subsequently acquired data for inclusion in a localization request is passively aligned with the estimated direction of gravity when posed in that coordinate frame.
G06T 19/20 - Transformation de modèles ou d'images tridimensionnels [3D] pour infographie Édition d'images tridimensionnelles [3D], p.ex. modification de formes ou de couleurs, alignement d'objets ou positionnements de parties
G06F 3/00 - Dispositions d'entrée pour le transfert de données destinées à être traitées sous une forme maniable par le calculateur; Dispositions de sortie pour le transfert de données de l'unité de traitement à l'unité de sortie, p.ex. dispositions d'interface
G06T 7/73 - Détermination de la position ou de l'orientation des objets ou des caméras utilisant des procédés basés sur les caractéristiques
Techniques are disclosed for training a machine learning model to predict user expression. A plurality of images are received, each of the plurality of images containing at least a portion of a user's face. A plurality of values for a movement metric are calculated based on the plurality of images, each of the plurality of values for the movement metric being indicative of movement of the user's face. A plurality of values for an expression unit are calculated based on the plurality of values for the movement metric, each of the plurality of values for the expression unit corresponding to an extent to which the user's face is producing the expression unit. The machine learning model is trained using the plurality of images and the plurality of values for the expression unit.
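A deliberately naive end-to-end sketch of the training pipeline, with frame differencing standing in for the movement metric and a normalized-magnitude heuristic for the expression-unit extent; neither the metric, the mapping, nor the ridge-regression model is specified in the abstract:

```python
import numpy as np
from sklearn.linear_model import Ridge

def movement_metric(frames):
    """Mean absolute frame-to-frame difference over the face crop: a
    simple stand-in for the movement metric."""
    f = np.asarray(frames, dtype=float)
    return np.abs(np.diff(f, axis=0)).mean(axis=(1, 2))

def expression_unit_values(motion, scale=None):
    """Map movement-metric values to a 0..1 extent for one expression
    unit; a normalized-magnitude heuristic, purely illustrative."""
    scale = motion.max() if scale is None else scale
    return np.clip(motion / max(scale, 1e-9), 0.0, 1.0)

# Train a model to predict the expression-unit extent from image features.
frames = np.random.rand(11, 64, 64)                   # stand-in face crops
y = expression_unit_values(movement_metric(frames))   # 10 derived labels
X = frames[1:].reshape(10, -1)                        # naive image features
model = Ridge(alpha=1.0).fit(X, y)
```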
Described herein are techniques and technologies to identify encrypted content within a field of view of a user of a VR/AR system and to process the encrypted content appropriately. The user of the VR/AR technology may have protected content in their field of view. Encrypted content is mapped to one or more protected surfaces on a display device. Content mapped to a protected surface may be rendered on the display device but prevented from being replicated from the display device.
Devices are described for high-accuracy displacement of tools, such as cameras. In particular, embodiments provide a device for adjusting a position of such a tool. The device includes a threaded shaft having a first end, a second end, and a shaft axis extending from the first end to the second end, and a motor that actuates the threaded shaft to move in a direction of the shaft axis. In some examples, the motor is operatively coupled to the threaded shaft. The device includes a carriage coupled to the camera, and a bearing assembly coupled to the threaded shaft and the carriage. In some examples, the bearing assembly permits movement of the carriage with respect to the threaded shaft. The movement of the carriage allows the position of the camera to be adjusted.
A wearable display system includes one or more nanowire LED micro-displays, which may be monochrome or full-color. The nanowire LEDs forming these displays may have an advantageously narrow angular emission profile and high light output. Where a plurality of nanowire LED micro-displays is utilized, the micro-displays may be positioned at different sides of an optical combiner, for example, an X-cube prism which receives light rays from different micro-displays and outputs the light rays from the same face of the cube. The optical combiner directs the light to projection optics, which outputs the light to an eyepiece that relays the light to a user's eye. The eyepiece may output the light to the user's eye with different amounts of wavefront divergence, to place virtual content on different depth planes.
Examples of systems and methods for rendering an avatar in a mixed reality environment are disclosed. The systems and methods may be configured to automatically select avatar characteristics that optimize gaze perception by the user, based on context parameters associated with the virtual environment.
G06F 17/18 - Opérations mathématiques complexes pour l'évaluation de données statistiques
G06K 9/00 - Méthodes ou dispositions pour la lecture ou la reconnaissance de caractères imprimés ou écrits ou pour la reconnaissance de formes, p.ex. d'empreintes digitales
In some implementations, an optical device includes a one-way mirror formed by a polarization selective mirror and an absorptive polarizer. The absorptive polarizer has a transmission axis aligned with the transmission axis of the polarization selective mirror (a reflective polarizer). The one-way mirror may be provided on the world side of a head-mounted display system. Advantageously, the one-way mirror may reflect light from the world, which provides privacy and may improve the cosmetics of the display. In some implementations, the one-way mirror may include one or more of a depolarizer and a pair of opposing waveplates to improve alignment tolerances and reduce reflections to a viewer. In some implementations, the one-way mirror may form a compact integrated structure with a dimmer for reducing light transmitted to the viewer from the world.
An apparatus for providing a virtual or augmented reality experience, includes: a screen, wherein the screen is at least partially transparent for allowing a user of the apparatus to view an object in an environment surrounding the user; a surface detector configured to detect a surface of the object; an object identifier configured to obtain an orientation and/or an elevation of the surface of the object, and to make an identification for the object based on the orientation and/or the elevation of the surface of the object; and a graphic generator configured to generate an identifier indicating the identification for the object for display by the screen, wherein the screen is configured to display the identifier.
G09G 5/00 - Dispositions ou circuits de commande de l'affichage communs à l'affichage utilisant des tubes à rayons cathodiques et à l'affichage utilisant d'autres moyens de visualisation
88.
SYSTEMS AND METHODS FOR RETINAL IMAGING AND TRACKING
A head mounted display system configured to project light to an eye of a user to display augmented reality images can include a frame configured to be supported on a head of the user, a camera disposed temporally on said frame, an eyepiece configured to direct light into said user's eye to display augmented reality image content to the user's vision field, a reflective element disposed on the frame, and at least one VCSEL disposed to illuminate said eye, wherein the camera is disposed with respect to the reflective element such that light from the VCSEL is reflected from the user's eye to the reflective element and is reflected from the reflective element to the camera to form images of the eye that are captured by the camera.
In an example method of forming an optical film for an eyepiece, a curable material is dispensed into a space between a first and a second mold surface. A position of the first mold surface relative to the second mold surface is measured using a plurality of sensors. Each sensor measures a respective relative distance along a respective measurement axis between a respective point on a planar portion of the first mold surface and a respective point on a planar portion of the second mold surface. The measurement axes are parallel to each other, and the points define corresponding triangles on the first and second mold surfaces, respectively. The position of the first mold surface is adjusted relative to the second mold surface based on the measured position, and the curable material is cured to form the optical film.
G02B 1/04 - OPTIQUE ÉLÉMENTS, SYSTÈMES OU APPAREILS OPTIQUES Éléments optiques caractérisés par la substance dont ils sont faits; Revêtements optiques pour éléments optiques faits de substances organiques, p.ex. plastiques
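Three parallel axial distance measurements at non-collinear points determine the relative plane of the two mold surfaces exactly, which is what the triangle of sensing points buys. A sketch, with illustrative sensor positions and units:

```python
import numpy as np

def mold_gap_plane(xy_points, gaps):
    """Fit the gap between two mold surfaces as a plane g(x, y) = ax+by+c
    from three axial distance sensors at the given (x, y) positions.
    The tilt (a, b) and offset c tell the actuators how to correct the
    relative mold position before curing; a = b = 0 means the planar
    portions are parallel."""
    A = np.column_stack([xy_points[:, 0], xy_points[:, 1], np.ones(3)])
    a, b, c = np.linalg.solve(A, gaps)
    return a, b, c

pts = np.array([[0.0, 0.0], [100.0, 0.0], [50.0, 80.0]])  # sensor layout (mm)
print(mold_gap_plane(pts, gaps=np.array([50.0, 52.0, 51.0])))  # gaps in um
```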
An apparatus for providing virtual content in an environment in which first and second users can interact with each other, comprising: a communication interface configured to communicate with a first display screen worn by the first user and/or a second display screen worn by the second user; and a processing unit configured to: obtain a first position of the first user, determine a first set of anchor point(s) based on the first position of the first user, obtain a second position of the second user, determine a second set of anchor point(s) based on the second position of the second user, determine one or more common anchor points that are in both the first set and the second set, and provide the virtual content for experience by the first user and/or the second user based on at least one of the one or more common anchor points.
G09G 5/00 - Dispositions ou circuits de commande de l'affichage communs à l'affichage utilisant des tubes à rayons cathodiques et à l'affichage utilisant d'autres moyens de visualisation
91.
EFFICIENT LOCALIZATION BASED ON MULTIPLE FEATURE TYPES
A method of efficiently and accurately computing a pose of an image with respect to other image information. The image may be acquired with a camera on a portable device and the other information may be a map, such that the computation of pose localizes the device relative to the map. Such a technique may be applied in a cross reality system to enable devices to efficiently and accurately access previously persisted maps. Localizing with respect to a map may enable multiple cross reality devices to render virtual content at locations specified in relation to those maps, providing an enhanced experience for users of the system. The method may be used in other devices and for other purposes, such as for navigation of autonomous vehicles.
Disclosed herein are systems and methods for presenting audio content in mixed reality environments. A method may include receiving a first input from an application program; in response to receiving the first input, receiving, via a first service, an encoded audio stream; generating, via the first service, a decoded audio stream based on the encoded audio stream; receiving, via a second service, the decoded audio stream; receiving a second input from one or more sensors of a wearable head device; receiving, via the second service, a third input from the application program, wherein the third input corresponds to a position of one or more virtual speakers; generating, via the second service, a spatialized audio stream based on the decoded audio stream, the second input, and the third input; presenting, via one or more speakers of the wearable head device, the spatialized audio stream.
Methods, systems, and wearable extended reality devices for generating a floorplan of an indoor scene are provided. A room classification of a room and a wall classification of a wall for the room may be determined from an input image of the indoor scene. A floorplan may be determined based at least in part upon the room classification and the wall classification without constraining a total number of rooms in the indoor scene or a size of the room.
A wearable display system includes an eyepiece stack having a world side and a user side opposite the world side. During use, a user positioned on the user side views displayed images delivered by the wearable display system via the eyepiece stack, which augment the user's field of view of the user's environment. The system also includes an optical attenuator arranged on the world side of the eyepiece stack, the optical attenuator comprising a layer of birefringent material having a plurality of domains, each with a principal optic axis oriented in a direction different from that of the other domains. Each domain of the optical attenuator reduces transmission of visible light incident on the optical attenuator for a corresponding different range of angles of incidence.
Systems and methods for estimating depths of features in a scene or environment surrounding a user of a spatial computing system, such as a virtual reality, augmented reality or mixed reality (collectively, cross reality) system, in an end-to-end process. The estimated depths can be utilized by a spatial computing system, for example, to provide an accurate and effective 3D cross reality experience.
Systems and methods of generating a three-dimensional (3D) reconstruction of a scene or environment surrounding a user of a spatial computing system, such as a virtual reality, augmented reality or mixed reality system, using only multiview RGB images, without the need for depth sensors or depth data from sensors. Features are extracted from a sequence of RGB frames and back-projected, using known camera intrinsics and extrinsics, into a 3D voxel volume, wherein each image pixel maps to a ray in the voxel volume. The back-projected features are fused into the 3D voxel volume. The 3D voxel volume is passed through a 3D convolutional neural network to refine the fused features and regress truncated signed distance function values at each voxel of the 3D voxel volume.
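A minimal sketch of the back-projection step, gathering a frame's 2D features into voxels using known intrinsics and extrinsics (a nearest-pixel gather; the multi-frame fusion and the 3D CNN are omitted, and all names are illustrative):

```python
import numpy as np

def backproject_features(feat, K, cam_T_world, volume_origin, voxel_size, dims):
    """Accumulate a 2D feature map into a 3D voxel volume: every voxel
    center is projected into the image with the known intrinsics and
    extrinsics and gathers the feature of the pixel its ray hits.

    feat: (H, W, C) features from one RGB frame; dims: (X, Y, Z) voxels.
    """
    H, W, C = feat.shape
    vol = np.zeros((*dims, C))
    idx = np.indices(dims).reshape(3, -1).T                 # voxel indices
    world = volume_origin + (idx + 0.5) * voxel_size        # voxel centers
    cam = cam_T_world[:3, :3] @ world.T + cam_T_world[:3, 3:4]
    uvw = K @ cam
    z = uvw[2]
    u = np.round(uvw[0] / z).astype(int)
    v = np.round(uvw[1] / z).astype(int)
    ok = (z > 0) & (u >= 0) & (u < W) & (v >= 0) & (v < H)
    vol.reshape(-1, C)[ok] = feat[v[ok], u[ok]]             # gather features
    return vol

K = np.array([[300.0, 0, 160.0], [0, 300.0, 120.0], [0, 0, 1.0]])
vol = backproject_features(np.random.rand(240, 320, 8), K, np.eye(4),
                           volume_origin=np.array([-1.0, -1.0, 0.5]),
                           voxel_size=0.05, dims=(40, 40, 40))
```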
A cross reality system enables any of multiple devices to efficiently and accurately access previously persisted maps, even maps of very large environments, and render virtual content specified in relation to those maps. The cross reality system may quickly process a batch of images acquired with a portable device to determine whether there is sufficient consistency across the batch in the computed localization. Processing on at least one image from the batch may determine a rough localization of the device to the map. This rough localization result may be used in a refined localization process for the image for which it was generated. The rough localization result may also be selectively propagated to a refined localization process for other images in the batch, enabling rough localization processing to be skipped for the other images.
G06T 19/00 - Transformation de modèles ou d'images tridimensionnels [3D] pour infographie
G06F 15/16 - Associations de plusieurs calculateurs numériques comportant chacun au moins une unité arithmétique, une unité programme et un registre, p.ex. pour le traitement simultané de plusieurs programmes
A system includes: a screen configured to be worn by a user, the screen configured to display a 2-dimensional (2D) element; a processing unit coupled to the screen; and a user input device configured to generate a signal in response to a user input for selecting the 2D element displayed by the screen; wherein the processing unit is configured to obtain a 3-dimensional (3D) model associated with the 2D element in response to the generated signal.
G06T 17/00 - Modélisation tridimensionnelle [3D] pour infographie
G06T 19/00 - Transformation de modèles ou d'images tridimensionnels [3D] pour infographie
G06T 19/20 - Transformation de modèles ou d'images tridimensionnels [3D] pour infographie Édition d'images tridimensionnelles [3D], p.ex. modification de formes ou de couleurs, alignement d'objets ou positionnements de parties
Techniques are disclosed for allowing a user's hands to interact with virtual objects. An image of at least one hand may be received from an image capture device. A plurality of keypoints associated with at least one hand may be detected. In response to determining that a hand is making or is transitioning into making a particular gesture, a subset of the plurality of keypoints may be selected. An interaction point may be registered to a particular location relative to the subset of the plurality of keypoints based on the particular gesture. A proximal point may be registered to a location along the user's body. A ray may be cast from the proximal point through the interaction point. A multi-DOF controller for interacting with the virtual objects may be formed based on the ray.
G06F 3/00 - Dispositions d'entrée pour le transfert de données destinées à être traitées sous une forme maniable par le calculateur; Dispositions de sortie pour le transfert de données de l'unité de traitement à l'unité de sortie, p.ex. dispositions d'interface
G06T 19/00 - Transformation de modèles ou d'images tridimensionnels [3D] pour infographie
G06T 19/20 - Transformation de modèles ou d'images tridimensionnels [3D] pour infographie Édition d'images tridimensionnelles [3D], p.ex. modification de formes ou de couleurs, alignement d'objets ou positionnements de parties
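A sketch of forming the ray-based controller above, with an illustrative equal-weight blend of the gesture's keypoint subset for the interaction point (the blend weights and example coordinates are assumptions):

```python
import numpy as np

def interaction_ray(keypoints, proximal_point, weights=None):
    """Register an interaction point to a weighted combination of the
    gesture-specific keypoint subset (e.g., thumb and index tips for a
    pinch), then cast a ray from a point on the body through it.
    Returns the ray origin and unit direction."""
    kp = np.asarray(keypoints, dtype=float)
    w = np.full(len(kp), 1.0 / len(kp)) if weights is None \
        else np.asarray(weights, dtype=float)
    interaction = (w[:, None] * kp).sum(axis=0)
    direction = interaction - proximal_point
    direction /= np.linalg.norm(direction)
    return proximal_point, direction

# pinch gesture: thumb tip and index tip; proximal point near the shoulder
origin, d = interaction_ray([[0.30, 1.20, 0.40], [0.32, 1.22, 0.42]],
                            proximal_point=np.array([0.15, 1.45, 0.05]))
print(origin, d)
```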
100.
CROSS REALITY SYSTEM WITH BUFFERING FOR LOCALIZATION ACCURACY
A system for localizing an electronic device with dynamic buffering identifies, from a buffer, a first set of features extracted from a first image captured by the electronic device and receives a second set of features extracted from a second image captured by the electronic device. The system further determines a first characteristic for the first set of features and a second characteristic for the second set of features, and determines whether a triggering condition for dynamically changing a size of the buffer is satisfied based at least in part upon the first characteristic for the first set of features and the second characteristic for the second set of features.