This disclosure describes techniques for manufacturing waveguides that include spacer(s) on at least one surface of the waveguide, such that the spacers maintain mechanical stability and separation between the waveguides when the waveguides are assembled into a waveguide stack that is usable as an optical device. The disclosure also describes various implementations of waveguides and optical devices that include spacers. The spacers may be created using a drop dispenser, in which drops of a (e.g., polymer) fluid are dispensed onto at least one surface of a substrate to be used as a waveguide. After being dispensed, the fluid drops can be cured to create the final, solidified spacers. Curing may also be performed in-flight before the drops reach the surface of the substrate. Partially cured drops may be stacked to create spacers of a particular height.
G02B 6/10 - OPTICS - OPTICAL ELEMENTS, SYSTEMS OR APPARATUS - Structural details of arrangements comprising light guides and other optical elements, e.g. coupling means - of the optical waveguide type
A method includes patterning a plurality of first trenches in a surface of a substrate; and etching the plurality of first trenches with an etchant having an etch rate for a first crystalline plane of the substrate that is greater than for a second crystalline plane of the substrate. The etching forms a slanted grating in the substrate.
An aligning extended-reality (XR) system may present a first target at a closer, first location and a second target at a farther, second location to a user of the XR device, align the first and second targets to each other with an alignment process, and determine a nodal point for an eye of the user based at least in part upon the first and second targets. An aligning XR system may also spatially register a set of targets in a display portion of a user interface of the XR device comprising an adjustment mechanism that is used to adjust a relative position of the XR device to a user, trigger execution of a device fit process in response to receiving a device fit check signal, and adjust the relative position of the XR device to the user based on the device fit process.
An apparatus includes a set of three illumination sources disposed in a first plane. Each of the set of three illumination sources is disposed at a position in the first plane offset from others of the set of three illumination sources by 120 degrees measured in polar coordinates. The apparatus also includes a set of three waveguide layers disposed adjacent the set of three illumination sources. Each of the set of three waveguide layers includes an incoupling diffractive element disposed at a lateral position offset by 180 degrees from a corresponding illumination source of the set of three illumination sources.
An eyepiece waveguide for augmented reality applications includes a substrate and a set of incoupling diffractive optical elements coupled to the substrate. A first subset of the set of incoupling diffractive optical elements is operable to diffract light into the substrate along a first range of propagation angles and a second subset of the set of incoupling diffractive optical elements is operable to diffract light into the substrate along a second range of propagation angles. The eyepiece waveguide also includes a combined pupil expander diffractive optical element coupled to the substrate.
An eyepiece includes a substrate, an input coupling grating on a first side of the substrate, and a morphed grating comprising characteristics of both a primary grating and a secondary grating on at least the first side of the substrate. The primary grating and the secondary grating may differ in pitch, orientation, and dimensions.
An eyewear device for being worn on a head of a user for presenting virtual content to a user comprises a frame structure having a frame front, and an optical assembly having a first rigidity. The optical assembly has a chassis and a plurality of optical components affixed to the chassis. The eyewear device further comprises a plurality of mounts mechanically coupling the chassis of the optical assembly to the frame front, at least one of the plurality of mounts having a second rigidity less than the first rigidity, such that the mount(s) is configured for preventing at least a portion of a first static deformation load applied to the frame front from being mechanically communicated to the optical assembly.
Systems and methods for performing gait analysis, diagnosis, and rehabilitation with real-time feedback for treating gait disorders using extended reality, such as augmented reality. Image data is captured using native camera(s) on an extended reality headset worn by a subject while walking. The images are analyzed using a localization and mapping algorithm to determine head-pose data regarding a position and orientation of the head of the subject. One or more gait attributes of a gait of the subject are determined by analyzing the head-pose data using a gait-metric prediction algorithm. The gait attributes are analyzed to determine gait disorders, rehabilitation treatment, rehabilitation assessments, and/or rehabilitation feedback.
G06F 1/16 - ELECTRIC DIGITAL DATA PROCESSING - Details not covered by groups and - Constructional details or arrangements
G06F 1/3234 - Power management, i.e. event-based initiation of a power-saving mode - Power saving characterised by the action undertaken
H04N 5/232 - Devices for controlling television cameras, e.g. remote control
9.
OPTICAL LAYERS TO IMPROVE PERFORMANCE OF EYEPIECES FOR USE WITH VIRTUAL AND AUGMENTED REALITY DISPLAY SYSTEMS
Improved diffractive optical elements for use in an eyepiece for an extended reality system. The diffractive optical elements comprise a diffraction structure having a waveguide substrate, a surface grating positioned on a first side of the waveguide substrate, and one or more optical layer pairs disposed between the waveguide substrate and the surface grating. Each optical layer pair comprises a low-index layer and a high-index layer disposed directly on an exterior side of the low-index layer.
G02B 6/12 - OPTICS - OPTICAL ELEMENTS, SYSTEMS OR APPARATUS - Structural details of arrangements comprising light guides and other optical elements, e.g. coupling means - of the optical waveguide type - of the integrated circuit kind
G02B 6/122 - Basic optical elements, e.g. light-guiding paths
10.
METHOD AND SYSTEM FOR PERFORMING EYE TRACKING IN AUGMENTED REALITY DEVICES
An augmented reality display includes an eyepiece operable to output virtual content from an output region and a plurality of illumination sources. At least some of the plurality of illumination sources overlap with the output region. The system includes cameras on a user-facing side and a world-facing side to record content and to record a user's eyes for eye tracking. The set of cameras is positioned farther from the eyepiece than the plurality of illumination sources and is disposed within a width and a height of the output region.
G06F 1/16 - ELECTRIC DIGITAL DATA PROCESSING - Details not covered by groups and - Constructional details or arrangements
G06F 3/04815 - Interaction taking place in an environment based on metaphors or objects with a three-dimensional display, e.g. changing the viewpoint of the user with respect to the environment or object
Techniques are described for operating an optical system and for reducing angular transmittance variations in operation of a segmented dimmer by applying voltages to pixels of the segmented dimmer that are different than the voltages that would be nominally used to achieve a particular transmittance level, such as for on-axis transmission of light. The voltages applied to the pixels of the segmented dimmer can be selected so as to achieve a target observed transmittance level using a set of preconfigured and/or calibrated voltages that take into consideration the angular dependence of transmittance for the dimmer.
G09G 3/18 - Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes, for presentation of a single character, either by selecting a single character from among several or by composing the character by combining individual elements, e.g. segments - by controlling light from an independent source - using liquid crystals
G09G 3/06 - Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes, for presentation of a single character, either by selecting a single character from among several or by composing the character by combining individual elements, e.g. segments - using controlled light sources
G09G 3/296 - Driving circuits for producing the waveforms applied to the driving electrodes
G09G 3/3258 - Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes, for presentation of an array of several characters, e.g. a page, by composing the array by combining individual elements arranged in a matrix - using controlled light sources - using semiconductor electroluminescent panels, e.g. using organic light-emitting diodes [OLED] - using an active matrix with a pixel circuit for controlling the voltage across the light-emitting element
12.
COMPENSATING THICKNESS VARIATIONS IN SUBSTRATES FOR OPTICAL DEVICES
This disclosure describes techniques for fabrication of waveguides as optical devices or for use in optical devices, with the waveguides customized to have a desired thickness variation. Techniques can employ inkjet-based lithography to compensate for thickness variations in the substrate used to manufacture the optical devices, and/or create custom variations in the thickness to achieve various optical properties in the resulting device. In some implementations, a curvature can also be applied to one or both surfaces of the substrate, to achieve desired optical performance and/or enhance fit of a wearable optical device. The optical devices created using the techniques described herein are suitable for use in virtual reality, augmented reality, and/or other suitable optical applications. The optical devices may be created on flexible (e.g., polymer) or more rigid (e.g., glass) substrates, with the thickness of the substrate being customizable using a jettable and curable polymer resin or photoresist.
An extended reality (XR) system comprises a head-mounted display (HMD) configured for displaying virtual content to a user, a first altimeter carried by the HMD, a hand-held control, and a second altimeter carried by the hand-held control. The first altimeter is configured for outputting first atmospheric pressure data indicative of an elevation of the HMD, while the second altimeter is configured for outputting second atmospheric pressure data indicative of an elevation of the hand-held control. The XR system further comprises at least one processor configured for determining a relative elevation between the first altimeter and the second altimeter based on the first atmospheric pressure data and the second atmospheric pressure data.
G01C 5/06 - Measuring heights; Measuring distances transverse to the line of sight; Levelling between separated points; Surveyors' levels - by using barometric means
G06V 20/20 - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING - Scene-specific elements in augmented reality scenes
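The relative-elevation determination described above can be sketched using the international barometric formula; the standard-atmosphere constants, function names, and the choice of formula below are illustrative assumptions, not details taken from the disclosure:

```python
# Standard-atmosphere constants (assumed; not specified by the disclosure).
SEA_LEVEL_PRESSURE_PA = 101_325.0
SCALE_HEIGHT_M = 44_330.0
EXPONENT = 1.0 / 5.255

def pressure_to_altitude(pressure_pa: float) -> float:
    """Convert an atmospheric pressure reading (Pa) to altitude above sea level (m)."""
    return SCALE_HEIGHT_M * (1.0 - (pressure_pa / SEA_LEVEL_PRESSURE_PA) ** EXPONENT)

def relative_elevation(hmd_pressure_pa: float, control_pressure_pa: float) -> float:
    """Elevation of the HMD above the hand-held control, in meters."""
    return pressure_to_altitude(hmd_pressure_pa) - pressure_to_altitude(control_pressure_pa)
```

Near sea level, atmospheric pressure falls by roughly 12 Pa per meter of altitude, so a 10 Pa difference between the two altimeters corresponds to roughly 0.8 m of relative elevation.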
A computer implemented method for generating audio for use with a head-mounted display system includes obtaining a frequency response data of a speaker coupled to the head-mounted display system. The method also includes comparing the frequency response data of the speaker with a target speaker response. The method further includes computing a coefficient for a filter system based on a result of comparing the frequency response data of the speaker with the target speaker response. Moreover, the method includes generating the audio using the filter system and the coefficient to compensate for a characteristic of the speaker.
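One minimal way to realize the comparison-and-coefficient step above is a per-band gain correction, where each coefficient offsets the difference between the measured and target responses; the band representation and function names are assumptions for illustration, since the abstract does not specify the filter design:

```python
def compute_band_gains(measured_db: list[float], target_db: list[float]) -> list[float]:
    """Per-band correction gains (dB) that map the measured speaker response onto the target."""
    return [t - m for m, t in zip(measured_db, target_db)]

def db_to_linear(gain_db: float) -> float:
    """Convert a dB gain into the linear multiplier applied by the filter system."""
    return 10.0 ** (gain_db / 20.0)
```

A speaker measuring 3 dB low in a given band would thus receive a +3 dB (about 1.41x linear) correction in that band to compensate for its characteristic.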
This disclosure generally describes methods and systems for fabrication of high-quality surface relief waveguides for eyepieces. In particular, this disclosure describes techniques for manufacturing waveguides having surface relief features, such as diffractive gratings to achieve various optical effects, using nanolithographic imprinting techniques that reduce or eliminate the presence of gaps in the imprinted features through use of optimized drop patterns for dispensing photoresist. Moreover, the disclosure also describes techniques for manufacturing surface relief waveguides having a gradation, e.g., a substantially continuous grade or slope, between zones that have different residual layer thicknesses of the dispensed photoresist, and/or between zones having surface features of different height (or depth). Such gradation can reduce or eliminate adverse optical effects that may be caused by a more abrupt transition between zones, and increase the optical efficiency of the completed waveguide.
A head-mounted display system includes a head-mountable frame; a light projection system configured to output light to provide image content; a waveguide supported by the frame, the waveguide configured to guide at least a portion of the light from the light projection system coupled into the waveguide; a diffractive structure optically coupled to the waveguide, the diffractive structure being configured to couple light guided by the waveguide out of the waveguide towards a user side of the head-mounted display, the diffractive structure having a grating layer with multiple ridges each having a side face that is slanted or stepped with respect to a plane of the waveguide. The diffractive structure directs at least 25% more light guided by the waveguide towards the user side than the world side.
Wearable systems, and methods for operation thereof, incorporating headset and controller localization using headset cameras and controller fiducials are disclosed. A wearable system may include a headset and a controller. The wearable system may alternate between performing headset tracking and performing controller tracking by repeatedly capturing images using a headset camera of the headset during headset tracking frames and controller tracking frames. The wearable system may cause the headset camera to capture a first exposure image having an exposure above a threshold and cause the headset camera to capture a second exposure image having an exposure below the threshold. The wearable system may determine a fiducial interval during which fiducials of the controller are to flash at a fiducial frequency and a fiducial period. The wearable system may cause the fiducials to flash during the fiducial interval in accordance with the fiducial frequency and the fiducial period.
Wearable systems, and methods for operation thereof, incorporating headset and controller inside-out tracking are disclosed. A wearable system may include a headset and a controller. The wearable system may cause fiducials of the controller to flash. The wearable system may track a pose of the controller by capturing headset images using a headset camera, identifying the fiducials in the headset images, and tracking the pose of the controller based on the identified fiducials in the headset images and based on a pose of the headset. While tracking the pose of the controller, the wearable system may capture controller images using a controller camera. The wearable system may identify two-dimensional feature points in each controller image and determine three-dimensional map points based on the two-dimensional feature points and the pose of the controller.
A head-mounted display system includes a waveguide configured to guide light from a light projection system coupled into the waveguide; a grating structure optically coupled to the waveguide, the grating structure being configured to couple light from the light projection system into the waveguide. The grating structure includes a grating layer having a grating with multiple ridges having a blaze profile in at least one cross-section, the blaze profile having an anti-blaze angle of 85° or less; and one or more additional layers on the grating layer, the additional layers including a first layer of a material having a refractive index of 1.5 or less at an operative wavelength of the head-mounted display, the first layer being an outermost layer of the grating structure.
G02B 6/10 - OPTICS - OPTICAL ELEMENTS, SYSTEMS OR APPARATUS - Structural details of arrangements comprising light guides and other optical elements, e.g. coupling means - of the optical waveguide type
20.
SYSTEMS AND METHODS FOR PERFORMING A MOTOR SKILLS NEUROLOGICAL TEST USING AUGMENTED OR VIRTUAL REALITY
Systems and methods for performing a motor skills neurological test using augmented reality that provide an objective assessment of the results of the test. A virtual target is displayed to a user in an AR field of view of an AR system at a target location. The movement of a body part (e.g., a finger) of the user is tracked as the user moves the body part from a starting location to the target location. A total traveled distance of the body part in moving from the starting location to the target location is determined based on the tracking. A linear distance between the starting location and the target location is determined. An efficiency index is then determined, which represents an overall quality of movement of the body part from the starting location to the target location, based on the total traveled distance and the linear distance.
A61B 5/00 - Measuring for diagnostic purposes; Identification of persons
G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from the processing unit to the output unit, e.g. interface arrangements
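The abstract does not give a formula for the efficiency index; a natural reading is the ratio of the linear (straight-line) distance to the total traveled distance, which is 1.0 for a perfectly direct movement and approaches 0 for an inefficient one. A sketch under that assumption:

```python
import math

def path_length(points: list[tuple[float, float, float]]) -> float:
    """Total traveled distance along a sequence of 3-D tracking samples."""
    return sum(math.dist(a, b) for a, b in zip(points, points[1:]))

def efficiency_index(points: list[tuple[float, float, float]]) -> float:
    """Ratio of straight-line distance to total traveled distance (1.0 = perfectly direct)."""
    total = path_length(points)
    return math.dist(points[0], points[-1]) / total if total > 0 else 0.0
```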
Described herein are systems and methods that provide localized dimming of world light emanating from world light sources. An optical system can include left and right dimmers. The optical system can also include left and right cameras configured to capture left and right brightness images. The optical system can generate a 3D brightness source map based on the left and right brightness images, and generate left and right 2D brightness maps based on the 3D brightness source map. The optical system can compute left and right dimming values for the left and right dimmers based on the left and right 2D brightness maps, and adjust the left and right dimmers to reduce an intensity of the world light.
According to an example method, a location of a first virtual speaker array is determined. A first virtual speaker density is determined. Based on the first virtual speaker density, a location of a second virtual speaker of the first virtual speaker array is determined. A source location in a virtual environment is determined for an audio signal. A virtual speaker of the first virtual speaker array is selected based on the source location and based further on a position or an orientation of a listener in the virtual environment. A head-related transfer function (HRTF) is identified that corresponds to the selected virtual speaker of the first virtual speaker array. The HRTF is applied to the audio signal to produce a first filtered audio signal. The first filtered audio signal is presented to the listener via a first speaker.
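The speaker-selection step above can be sketched as choosing the virtual speaker whose direction, relative to the listener's position and orientation, is angularly closest to the source; the 2-D geometry and function names below are simplifying assumptions, and the HRTF lookup and filtering themselves are not shown:

```python
import math

def relative_azimuth(source_xy, listener_xy, listener_yaw):
    """Azimuth of the source relative to the listener's facing direction, in radians."""
    dx = source_xy[0] - listener_xy[0]
    dy = source_xy[1] - listener_xy[1]
    angle = math.atan2(dy, dx) - listener_yaw
    return math.atan2(math.sin(angle), math.cos(angle))  # wrap into (-pi, pi]

def select_speaker(speaker_azimuths, source_xy, listener_xy, listener_yaw):
    """Index of the virtual speaker angularly closest to the source direction."""
    az = relative_azimuth(source_xy, listener_xy, listener_yaw)
    def gap(s):
        d = s - az
        return abs(math.atan2(math.sin(d), math.cos(d)))  # shortest angular distance
    return min(range(len(speaker_azimuths)), key=lambda i: gap(speaker_azimuths[i]))
```

Because the selection depends on the listener's yaw, turning the head re-selects the speaker (and hence the HRTF) even when the source itself is stationary.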
According to an example method, it is determined whether a difference between first acoustic data and second acoustic data exceeds a threshold. The first acoustic data is associated with a first client application in communication with an audio service. The second acoustic data is associated with a second client application. A first input audio signal associated with the first client application is received via the audio service. In accordance with the determination that the difference does not exceed the threshold, the second acoustic data is applied to the first input audio signal to produce a first output audio signal. In accordance with a determination that the difference exceeds the threshold, the first acoustic data is applied to the first input audio signal to produce the first output audio signal. The first output audio signal is presented to a user of a wearable head device in communication with the audio service.
G06T 19/00 - Manipulating 3D models or images for computer graphics
A63F 13/87 - Communicating with other players, e.g. by e-mail or instant messaging
A63F 13/215 - Input arrangements for video game devices characterised by their sensors, purposes or types comprising means for detecting acoustic signals, e.g. using a microphone
A63F 13/54 - Controlling the output signals based on the game progress involving acoustic signals, e.g. for simulating revolutions per minute [RPM] dependent engine sounds in a driving game or reverberation against a virtual wall
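The threshold logic above reads as a simple branch; representing each client application's acoustic data as a numeric parameter vector and measuring the difference as the largest per-component gap are assumptions for illustration:

```python
def select_acoustic_data(first: list[float], second: list[float], threshold: float) -> list[float]:
    """Choose which client's acoustic data the audio service applies, per the threshold rule."""
    difference = max(abs(a - b) for a, b in zip(first, second))
    # Difference exceeds the threshold: apply the first client's own acoustic data;
    # otherwise reuse the second client's data, as the method describes.
    return first if difference > threshold else second
```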
24.
METHOD AND SYSTEM FOR VARIABLE OPTICAL THICKNESS WAVEGUIDES FOR AUGMENTED REALITY DEVICES
An augmented reality device includes a projector, projector optics optically coupled to the projector, and an eyepiece optically coupled to the projector optics. The eyepiece includes an eyepiece waveguide characterized by lateral dimensions and an optical path length difference as a function of one or more of the lateral dimensions.
G02B 6/00 - OPTICS - OPTICAL ELEMENTS, SYSTEMS OR APPARATUS - Structural details of arrangements comprising light guides and other optical elements, e.g. coupling means
G02B 6/10 - OPTICS - OPTICAL ELEMENTS, SYSTEMS OR APPARATUS - Structural details of arrangements comprising light guides and other optical elements, e.g. coupling means - of the optical waveguide type
An extended reality (XR) system includes an XR display configured to be movably disposed in a line of sight of a user. The system also includes a vision correction component configured to be disposed in the line of sight of the user. The system further includes a displacement mechanism configured to guide the XR display out of the line of sight of the user while the vision correction component remains in the line of sight of the user, and limit relative positions of the XR display and the vision correction component.
The disclosure describes an improved drop-on-demand, controlled volume technique for dispensing resist onto a substrate, which is then imprinted to create a patterned optical device suitable for use in optical applications such as augmented reality and/or mixed reality systems. The technique enables the dispensation of drops of resist at precise locations on the substrate, with precisely controlled drop volume corresponding to an imprint template having different zones associated with different total resist volumes. Controlled drop size and placement also provides for substantially less variation in residual layer thickness across the surface of the substrate after imprinting, compared to previously available techniques. The technique employs resist having a refractive index closer to that of the substrate index, reducing optical artifacts in the device. To ensure reliable dispensing of the higher index and higher viscosity resist in smaller drop sizes, the dispensing system can continuously circulate the resist.
B29C 59/02 - Surface shaping, e.g. embossing; Apparatus therefor - by mechanical means, e.g. pressing
B05C 9/00 - Apparatus or plant for applying liquid or other fluent material to surfaces by means not covered by any of the preceding groups, or in which the means of applying the liquid or other fluent material is not important
F04D 7/00 - Pumps adapted for handling specific fluids, e.g. by selection of specific materials for the pumps or pump parts
G02B 1/04 - OPTICS - OPTICAL ELEMENTS, SYSTEMS OR APPARATUS - Optical elements characterised by the material of which they are made; Optical coatings for optical elements - made of organic materials, e.g. plastics
An image projection system includes an illumination source, a linear polarizer, and an eyepiece waveguide including a plurality of diffractive in-coupling optical elements. The eyepiece waveguide includes a region operable to transmit illumination light from the illumination source. The image projection system also includes a polarizing beamsplitter, a reflective structure, a quarter waveplate disposed between the polarizing beamsplitter and the reflective structure, and a reflective spatial light modulator.
An image projection system includes an illumination source and an eyepiece waveguide including a plurality of diffractive incoupling optical elements. The eyepiece waveguide includes a region operable to transmit light from the illumination source. The image projection system also includes a first optical element including a reflective polarizer, a second optical element including a partial reflector, a first quarter waveplate disposed between the first optical element and the second optical element, a reflective spatial light modulator, and a second quarter waveplate disposed between the second optical element and the reflective spatial light modulator.
G02F 1/1334 - Constructional arrangements based on polymer-dispersed liquid crystals, e.g. microencapsulated liquid crystals
G02F 1/313 - Digital deflection devices in an optical waveguide structure
29.
AREA SPECIFIC COLOR ABSORPTION IN NANOIMPRINT LITHOGRAPHY
An eyepiece includes an optical waveguide, a transmissive input coupler at a first end of the optical waveguide, an output coupler at a second end of the optical waveguide, and a polymeric color-absorbing region along a portion of the optical waveguide between the transmissive input coupler and the output coupler. The transmissive input coupler is configured to couple incident visible light to the optical waveguide, and the color-absorbing region is configured to absorb a component of the visible light as the visible light propagates through the optical waveguide.
G02B 6/293 - Optical coupling means having data bus means, i.e. plural waveguides interconnected and providing an inherently bidirectional system by mixing and splitting signals - with wavelength selective means
G02B 6/34 - Optical coupling means utilising prisms or gratings
A waveguide stack having color-selective regions on one or more waveguides. The color-selective regions are configured to absorb incident light of a first wavelength range in such a way as to reduce or prevent the incident light of the first wavelength range from coupling into a waveguide configured to transmit a light of a second wavelength range.
G02F 1/01 - Devices or arrangements for the control of the intensity, colour, phase, polarisation or direction of light arriving from an independent light source, e.g. switching, gating or modulating; Non-linear optics - for the control of the intensity, phase, polarisation or colour
Systems, methods, and computer program products for displaying virtual contents using a wearable electronic device determine the location of the sighting centers of both eyes of a user wearing the wearable electronic device and estimate an error or precision for these sighting centers. A range of operation may be determined for a focal distance or a focal plane at the focal distance based at least in part upon the error or the precision and a criterion pertaining to vergence and accommodation of binocular vision of the virtual contents with the wearable electronic device. A virtual content may be adjusted into an adjusted virtual content for presentation with respect to the focal plane or the focal distance based at least in part upon the range of operation. The adjusted virtual content may be presented to the user with respect to the focal distance or the focal plane.
A61B 3/10 - Apparatus for the optical examination of the eyes; Apparatus for the clinical examination of the eyes - of the objective measurement type, i.e. instruments for examining the eyes independently of the patient's perceptions or reactions
A61B 3/113 - Apparatus for the optical examination of the eyes; Apparatus for the clinical examination of the eyes - of the objective measurement type, i.e. instruments for examining the eyes independently of the patient's perceptions or reactions - for determining or recording eye movement
G02B 30/20 - Optical systems or apparatus for producing three-dimensional [3D] effects, e.g. stereoscopic effects - by providing first and second parallax images to each of an observer's left and right eyes
H04N 13/344 - Displays for viewing with the aid of special glasses or head-mounted displays [HMD] - with head-mounted left-right displays
Examples of the disclosure describe systems and methods for reducing audio effects of fan noise, specifically for a wearable system. A method includes: operating a fan of a wearable head device; detecting, with a microphone of the wearable head device, noise generated by the fan; generating a fan reference signal, wherein the fan reference signal represents at least one of a speed of the fan, a mode of the fan, a power output of the fan, and a phase of the fan; deriving a transfer function based on the fan reference signal and based further on the detected noise of the fan; generating a compensation signal based on the transfer function; and, while operating the fan of the wearable head device, outputting, by a speaker of the wearable head device, an anti-noise signal, wherein the anti-noise signal is based on the compensation signal.
G10K 11/178 - Methods or devices for protecting against, or for damping, noise or other acoustic waves in general - using interference effects; Masking sound - by electro-acoustically regenerating the original acoustic waves in anti-phase
F01N 1/06 - Silencers characterised by their operating principle - using interference effects
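Deriving the transfer function from the fan reference and the detected noise can be sketched with a least-mean-squares (LMS) adaptive FIR filter, a common technique for this kind of problem; the tap count, step size, and the LMS choice itself are assumptions, as the abstract does not name an algorithm:

```python
import math

def estimate_and_cancel(reference, mic, taps=4, mu=0.05):
    """Adapt an FIR estimate of the fan-to-microphone transfer function (LMS)
    and emit an anti-noise sample (the inverted noise prediction) at each step."""
    weights = [0.0] * taps
    history = [0.0] * taps
    anti_noise = []
    for ref_sample, mic_sample in zip(reference, mic):
        history = [ref_sample] + history[:-1]        # newest reference sample first
        prediction = sum(w * x for w, x in zip(weights, history))
        error = mic_sample - prediction              # residual fan noise at the mic
        weights = [w + mu * error * x for w, x in zip(weights, history)]
        anti_noise.append(-prediction)               # drives the speaker in anti-phase
    return weights, anti_noise
```

In a real device the secondary path from speaker to microphone would also need modeling (e.g., filtered-x LMS); this sketch shows only the reference-driven estimation step.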
33.
MAPPING OF ENVIRONMENTAL AUDIO RESPONSE ON MIXED REALITY DEVICE
This disclosure relates in general to augmented reality (AR), mixed reality (MR), or extended reality (XR) environmental mapping. Specifically, this disclosure relates to AR, MR, or XR audio mapping in an AR, MR, or XR environment. In some embodiments, the disclosed systems and methods allow the environment to be mapped based on a recording. In some embodiments, the audio mapping information is associated with voxels located in the environment.
Embodiments of the present disclosure are directed to an acoustic waveguide for presenting an audio signal. An apparatus in accordance with embodiments of this disclosure can include a waveguide member comprising a hollow body having a first end and a second end. The apparatus can further include a sound source disposed at the first end of the waveguide configured to emit at least a first sound wave. The apparatus can further include a plurality of acoustic vents disposed on a lower surface of the body of the waveguide, wherein each of the plurality of acoustic vents is configured to receive the first sound wave and further configured to emit a respective sound wave based on the first sound wave, wherein each respective sound wave corresponds to a respective point sound source.
H04R 1/34 - Arrangements for obtaining desired frequency or directional characteristics for obtaining desired directional characteristic only by using a single transducer with sound reflecting, diffracting, directing or guiding means
Embodiments of the present disclosure can provide systems and methods for presenting audio signals based on an analysis of a voice of a speaker in an augmented reality or mixed reality environment. Methods according to embodiments of this disclosure can include receiving audio data from a microphone of a first wearable head device, the first wearable head device in communication with a virtual environment, the audio data comprising speech data. In some examples, the methods can include identifying a voice parameter based on the audio data. In some examples, the methods can include determining an acoustic parameter based on the voice parameter. In some examples, the methods can include applying the acoustic parameter to the audio data to generate a spatialized audio signal. In some examples, the methods can include presenting the spatialized audio signal to a second wearable head device in communication with the virtual environment.
This disclosure is related to systems and methods for rendering audio for a mixed reality environment. Methods according to embodiments of this disclosure include receiving an input audio signal, via a wearable device in communication with a mixed reality environment, the input audio signal corresponding to a sound source originating from a real environment. In some embodiments, the system can determine one or more acoustic properties associated with the mixed reality environment. In some embodiments, the system can determine a signal modification parameter based on the one or more acoustic properties associated with the mixed reality environment. In some embodiments, the system can apply the signal modification parameter to the input audio signal to determine a second audio signal. The system can present the second audio signal to the user.
G10L 25/54 - Speech or voice analysis techniques not restricted to a single one of the groups, specially adapted for particular use, for comparison or discrimination, for retrieval
G10L 25/84 - Detection of presence or absence of voice signals for discriminating voice from noise
H04R 1/40 - Arrangements for obtaining desired frequency or directional characteristics for obtaining desired directional characteristic only by combining a number of identical transducers
Disclosed herein are systems and methods for capturing a sound field, in particular, using a mixed reality device. In some embodiments, a method comprises: detecting, with a microphone of a first wearable head device, a sound of an environment; determining a digital audio signal based on the detected sound, the digital audio signal associated with a sphere having a position in the environment; detecting, concurrently with detecting the sound, a microphone movement with respect to the environment; and adjusting the digital audio signal, wherein the adjusting comprises adjusting the position of the sphere based on the detected microphone movement.
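Adjusting the captured sound field for microphone movement can be illustrated with a first-order ambisonic (B-format) counter-rotation; restricting to yaw about the vertical axis and the sign convention below are simplifying assumptions.

```python
import numpy as np

def rotate_foa_yaw(wxyz, yaw):
    """Rotate a first-order ambisonic (B-format) frame [W, X, Y, Z] about
    the vertical axis by `yaw` radians; only X and Y mix, W and Z are
    rotation-invariant about this axis."""
    w, x, y, z = wxyz
    c, s = np.cos(yaw), np.sin(yaw)
    return np.array([w, c * x - s * y, s * x + c * y, z])

def compensate_mic_yaw(wxyz, mic_yaw):
    """Counter-rotate the captured sound field by the detected microphone
    yaw, so the audio 'sphere' keeps its position in the environment."""
    return rotate_foa_yaw(wxyz, -mic_yaw)
```

By construction the compensation is the exact inverse of the microphone's rotation, so a yawed capture followed by compensation recovers the world-fixed field.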
A method of forming a waveguide for an eyepiece for a display system to reduce optical degradation of the waveguide during segmentation is disclosed herein. The method includes providing a substrate having top and bottom major surfaces and a plurality of surface features, and using a laser beam to cut out a waveguide from said substrate by cutting along a path contacting and/or proximal to said plurality of surface features. The waveguide has edges formed by the laser beam and a main region and a peripheral region surrounding the main region. The peripheral region is surrounded by the edges.
Methods, systems, and apparatus, including computer programs encoded on computer storage media, for using depth data to update camera calibration data. In some implementations, a frame of data is captured including (i) depth data from a depth sensor of a device, and (ii) image data from a camera of the device. Selected points from the depth data are transformed, using camera calibration data for the camera, to a three-dimensional space that is based on the image data. The transformed points are projected onto the two-dimensional image data from the camera. Updated camera calibration data is generated based on differences between (i) the locations of the projected points and (ii) the locations at which features representing the selected points appear in the two-dimensional image data from the camera. The updated camera calibration data can be used in a simultaneous localization and mapping process.
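The project-and-compare step can be sketched with a pinhole camera model; the four-parameter intrinsics `(fx, fy, cx, cy)` and the example points are assumptions for illustration, not the patented pipeline.

```python
import numpy as np

def project(points_cam, fx, fy, cx, cy):
    """Pinhole projection of camera-frame 3-D points to pixel coordinates."""
    p = np.asarray(points_cam, float)
    return np.stack([fx * p[:, 0] / p[:, 2] + cx,
                     fy * p[:, 1] / p[:, 2] + cy], axis=1)

def calibration_residuals(depth_points, observed_px, intrinsics):
    """Differences between where the depth points project under the current
    calibration and where the matching image features were observed; a
    calibration update would minimise these (e.g. by least squares)."""
    return project(depth_points, *intrinsics) - np.asarray(observed_px, float)

# With correct calibration the residuals vanish; a shifted principal point
# shows up directly as a constant pixel offset.
pts = [[0.1, -0.2, 2.0], [0.5, 0.3, 4.0]]
true_intr = (500.0, 500.0, 320.0, 240.0)
observed = project(pts, *true_intr)
```

A miscalibrated `cx` of 322 instead of 320, for instance, produces a uniform 2-pixel horizontal residual that the update step would correct.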
In an example method for forming a variable optical viewing optics assembly (VOA) for a head mounted display, a prepolymer is deposited onto a substrate having a first optical element for the VOA. Further, a mold is applied to the prepolymer to conform the prepolymer to a curved surface of the mold on a first side of the prepolymer and to conform the prepolymer to a surface of the substrate on a second side of the prepolymer opposite the first side. Further, the prepolymer is exposed to actinic radiation sufficient to form a solid polymer from the prepolymer, such that the solid polymer forms an ophthalmic lens having a curved surface corresponding to the curved surface of the mold, and the substrate and the ophthalmic lens form an integrated optical component. The mold is released from the solid polymer, and the VOA is assembled using the integrated optical component.
H04N 5/64 - PICTORIAL COMMUNICATION, e.g. TELEVISION - Details of television systems - Constructional details of receivers, e.g. cabinets or dust covers
G02B 5/00 - OPTICS - OPTICAL ELEMENTS, SYSTEMS OR APPARATUS - Optical elements other than lenses
G02B 27/14 - Beam splitting or combining systems operating by reflection only
42.
METHOD OF FABRICATING MOLDS FOR FORMING WAVEGUIDES AND RELATED SYSTEMS AND METHODS USING THE WAVEGUIDES
Methods are disclosed for fabricating molds for forming waveguides with integrated spacers for forming eyepieces. The molds are formed by etching features (e.g., 1µm to 1000µm deep) into a substrate comprising single crystalline material using an anisotropic wet etch. The etch masks for defining the features may comprise a plurality of holes, wherein the size and shape of each hole at least partially determine the depth of the corresponding feature. The holes may be aligned along a crystal axis of the substrate and the etching may automatically stop due to the crystal structure of the substrate. The patterned substrate may be utilized as a mold onto which a flowable polymer may be introduced and allowed to harden. Hardened polymer in the holes may form a waveguide with integrated spacers. The mold may also be used to fabricate a platform comprising a plurality of vertically extending microstructures of precise heights, to test the curvature or flatness of a sample, e.g., based on the amount of contact between the microstructures and the sample.
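The claim that hole size partly determines feature depth, with the etch stopping automatically, matches the geometry of anisotropic wet etching of (100) silicon: slow-etching {111} sidewalls meet the surface at about 54.74° and self-terminate a V-groove. A quick calculation under that standard assumption:

```python
import math

def v_groove_depth_um(opening_um, plane_angle_deg=54.74):
    """Terminal depth of a self-terminating V-groove from an anisotropic
    wet etch of (100) silicon: the {111} sidewalls meet the surface at
    ~54.74 degrees, so the etch stops at depth = (w/2) * tan(angle),
    fixed by the mask opening width w."""
    return opening_um / 2.0 * math.tan(math.radians(plane_angle_deg))

# A 100 um mask opening self-terminates at roughly 70.7 um depth.
depth = v_groove_depth_um(100.0)
```

So, consistent with the abstract, choosing the hole width in the etch mask sets the spacer-mold depth without timing the etch.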
A method of generating foveated rendering using temporal multiplexing includes generating a first spatial profile for an FOV by dividing the FOV into a first foveated zone and a first peripheral zone. The first foveated zone will be rendered at a first pixel resolution, and the first peripheral zone will be rendered at a second pixel resolution lower than the first pixel resolution. The method further includes generating a second spatial profile for the FOV by dividing the FOV into a second foveated zone and a second peripheral zone, the second foveated zone being spatially offset from the first foveated zone. The second foveated zone and the second peripheral zone will be rendered at the first pixel resolution and the second pixel resolution, respectively. The method further includes multiplexing the first spatial profile and the second spatial profile temporally in a sequence of frames.
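The two offset spatial profiles and their temporal multiplexing can be sketched as follows; the tile grid, the zone placements, and the 0.25 peripheral resolution factor are assumed values for illustration.

```python
def spatial_profile(cols, rows, fovea):
    """Per-tile resolution map over the FOV: 1.0 (full resolution) inside
    the foveated zone, 0.25 (reduced resolution) in the peripheral zone.
    `fovea` is (col0, row0, width, height) in tiles."""
    c0, r0, w, h = fovea
    return [[1.0 if c0 <= c < c0 + w and r0 <= r < r0 + h else 0.25
             for c in range(cols)] for r in range(rows)]

def multiplexed_profiles(n_frames, cols=8, rows=6):
    """Alternate two profiles whose foveated zones are spatially offset,
    so successive frames render different regions at full resolution."""
    a = spatial_profile(cols, rows, (2, 2, 3, 2))   # assumed zone placement
    b = spatial_profile(cols, rows, (4, 3, 3, 2))   # offset foveated zone
    return [a if f % 2 == 0 else b for f in range(n_frames)]

frames = multiplexed_profiles(4)
```

Over a pair of frames the union of the two foveated zones is rendered at full resolution, which is the point of the temporal multiplexing.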
An optical device includes one or more volume phase holographic gratings each of which includes a photosensitive layer whose optical properties are spatially modulated. The spatial modulation of optical properties is recorded in the photosensitive layer by generating an optical interference pattern using a beam of light and one or more liquid crystal master gratings. The volume phase holograms may be configured to redirect light of visible or infrared wavelengths propagating in free space or through a waveguide. Advantageously, fabricating the volume phase holographic gratings using liquid crystal master gratings allows independent control of the optical function and the selectivity of the volume phase holographic grating during the fabrication process.
G03H 1/04 - Processes or apparatus for producing holograms
G03H 1/00 - Holographic processes or apparatus using light, infrared or ultraviolet waves for obtaining holograms or for obtaining an image from them; Details peculiar thereto
G02B 27/00 - Optical systems or apparatus not provided for by any of the groups
laser to yield a multilayer optical component having a first surface, a second surface opposite the first surface, and a blackened edge along a perimeter of the multilayer optical component. The multiplicity of polymer layers is sealed along the blackened edge. The resulting multilayer optical component includes a multiplicity of polymer layers and a blackened edge seal around the multiplicity of polymer layers. The blackened edge seal includes polymer melt from the multiplicity of polymer layers.
B23K 26/324 - Bonding taking account of the properties of the material involved involving non-metallic parts
B23K 26/354 - Working by laser beam, e.g. welding, cutting or boring, for surface treatment by melting
B23K 26/38 - Removing material by boring or cutting
B23K 26/402 - Removing material taking account of the properties of the material involved involving non-metallic material, e.g. isolators
B23K 103/00 - Materials to be soldered, welded or cut
46.
THIN ILLUMINATION LAYER WAVEGUIDE AND METHODS OF FABRICATION
Disclosed herein are systems and methods for displays, such as for a head wearable device. An example display can include an infrared illumination layer, the infrared illumination layer including a waveguide having a first face and a second face, the first face disposed opposite the second face. The illumination layer may also include an in-coupling grating disposed on the first face, the in-coupling grating configured to couple light into the waveguide to generate internally reflected light propagating in a first direction. The illumination layer may also include a plurality of out-coupling gratings disposed on at least one of the first face and the second face, the plurality of out-coupling gratings configured to receive the internally reflected light and couple the internally reflected light out of the waveguide.
G02B 6/00 - OPTICS - OPTICAL ELEMENTS, SYSTEMS OR APPARATUS - Structural details of arrangements comprising light guides and other optical elements, e.g. coupling means
G02F 1/01 - Devices or arrangements for the control of the intensity, colour, phase, polarisation or direction of light arriving from an independent light source, e.g. switching, gating or modulating; Non-linear optics for the control of the intensity, phase, polarisation or colour
47.
IMPRINT LITHOGRAPHY PROCESS AND METHODS ON CURVED SURFACES
Methods for creating a pattern on a curved surface and an optical structure (e.g., curved waveguide, a lens having an antireflective feature, an optical structure of a wearable head device) are disclosed. In some embodiments, the method comprises: depositing a patterning material on a curved surface; positioning a superstrate over the patterning material, the superstrate comprising a template for creating the pattern; applying, using the patterning material, a force between the curved surface and the superstrate; curing the patterning material, wherein the cured patterning material comprises the pattern; and removing the superstrate. In some embodiments, the method comprises forming the optical structure using the pattern.
B81C 1/00 - Manufacture or treatment of devices or systems in or on a substrate
B29C 37/00 - SHAPING OR JOINING OF PLASTICS; SHAPING OF MATERIAL IN A PLASTIC STATE, NOT OTHERWISE PROVIDED FOR; AFTER-TREATMENT OF SHAPED PRODUCTS, e.g. REPAIRING - Component parts, details, accessories or auxiliary operations, not covered by group or
Disclosed herein are systems and methods for fabricating nano-structures on a substrate that can be used in eyepieces for displays, e.g., in head wearable devices. Fabricating and/or etching such a substrate can include submerging the substrate in a first bath and applying ultrasonication to the first bath for a first time period. The ultrasonication applied to the first bath can agitate the fluid to provide a substantially uniform first reactive environment across the surface of the substrate. The substrate can be submerged in a second bath and ultrasonication can be applied to the second bath for a second time period. The ultrasonication applied to the second bath can agitate the fluid to provide a substantially uniform second reactive environment across the surface of the substrate. A predetermined amount of material can be removed from the surface of the substrate during the second time period to produce an etched substrate.
Structures for forming an optical feature and methods for forming the optical feature are disclosed. In some embodiments, the structure comprises a patterned layer comprising a pattern corresponding to the optical feature; a base layer; and an intermediate layer bonded to the patterned layer and the base layer.
G02B 1/00 - OPTICS - OPTICAL ELEMENTS, SYSTEMS OR APPARATUS - Optical elements characterised by the material of which they are made; Optical coatings for optical elements
G02B 6/00 - OPTICS - OPTICAL ELEMENTS, SYSTEMS OR APPARATUS - Structural details of arrangements comprising light guides and other optical elements, e.g. coupling means
50.
NANOPATTERN ENCAPSULATION FUNCTION, METHOD AND PROCESS IN COMBINED OPTICAL COMPONENTS
Disclosed herein are systems and methods for displays, such as for a head wearable device. An example display can include an infrared illumination layer, the infrared illumination layer including a substrate, one or more LEDs disposed on a first surface of the substrate, and a first encapsulation layer disposed on the first surface of the substrate, where the encapsulation layer can include a nano-patterned surface. In some examples, the nano-patterned surface can be configured to improve a visible light transmittance of the illumination layer. In one or more examples, embodiments disclosed herein may provide a robust illumination layer that can reduce the haze associated with an illumination layer.
G02B 6/124 - Geodesic lenses or integrated gratings
G02B 15/04 - Optical objectives with means for varying magnification by changing, adding or subtracting a part of the objective, e.g. convertible objective
G06F 3/14 - Digital output to display device
51.
COVER ARCHITECTURES IN CURVED EYEPIECE STACKS FOR MIXED REALITY APPLICATIONS
Eyepieces and methods of fabricating the eyepieces are disclosed. In some embodiments, the eyepiece comprises a curved cover layer and a waveguide layer for propagating light. In some embodiments, the curved cover layer comprises an antireflective feature.
This disclosure describes in-plane switching mode liquid crystal geometric phase tunable lenses that can be integrated into an eyepiece of an optical device for the correction of non-emmetropic vision, such as in an augmented reality display system. The eyepiece can include an integrated, field-configurable optic arranged with respect to a waveguide used to project digital imagery to the user, the optic being capable of providing a tunable prescription (Rx) for the user including variable spherical refractive power (SPH), cylinder refractive power, and cylinder axis values. In certain configurations, each tunable eyepiece includes two variable compound lenses: one on the user side of the waveguide with variable SPH, cylinder power, and axis values; and a second on the world side of the waveguide with variable SPH.
G02F 1/133 - Constructional arrangements; Operation of liquid crystal cells; Circuit arrangements
G02F 1/137 - Devices or arrangements for the control of the intensity, colour, phase, polarisation or direction of light arriving from an independent light source, e.g. switching, gating or modulating; Non-linear optics for the control of the intensity, phase, polarisation or colour, based on liquid crystals, e.g. single liquid crystal display cells, characterised by the electro-optical or magneto-optical effect, e.g. field-induced phase transition, orientation effect, guest-host interaction or dynamic scattering
53.
METHOD AND APPARATUS FOR IMPROVED SPEAKER IDENTIFICATION AND SPEECH ENHANCEMENT
A headwear device comprises a frame structure configured for being worn on the head of a user, a vibration voice pickup (VVPU) sensor affixed to the frame structure for capturing vibration originating from a voiced sound of a user and generating a vibration signal, at least one microphone affixed to the frame structure for capturing voiced sound from the user and ambient noise, and at least one processor configured for performing an analysis of the vibration signal, and determining that the user has generated the voiced sound based on the analysis of the vibration signal.
An eyewear device for being worn on a head of a user for presenting virtual content to the user comprises an optics system and a frame front operatively coupled to the optics system for presenting virtual content to a user wearing the eyewear device. The eyewear device further comprises left and right opposing temple arms affixed to the frame front, and a torsion band assembly having opposing ends that connect the left and right opposing temple arms together. The eyewear device further comprises at least a first floating boss that protrudes partially into one of the left and right opposing temple arms, such that the first floating boss moves within the one of the left and right opposing temple arms in one or more axes in a constrained manner.
Embodiments of this disclosure provide systems and methods for displays. In embodiments, a display system includes a frame, an eyepiece coupled to the frame, and a first adhesive bond disposed between the frame and the eyepiece. The eyepiece can include a light input region and a light output region. The first adhesive bond can be disposed along a first portion of a perimeter of the eyepiece, where the first portion of the perimeter of the eyepiece borders the light input region such that the first adhesive bond is configured to maintain a position of the light input region relative to the frame.
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
G02B 6/00 - OPTICS - OPTICAL ELEMENTS, SYSTEMS OR APPARATUS - Structural details of arrangements comprising light guides and other optical elements, e.g. coupling means
G02B 27/00 - Optical systems or apparatus not provided for by any of the groups
Disclosed herein are systems and methods for mapping environment information. In some embodiments, the systems and methods are configured for mapping information in a mixed reality environment. In some embodiments, the system is configured to perform a method including scanning an environment including capturing, with a sensor, a plurality of points of the environment; tracking a plane of the environment; updating observations associated with the environment by inserting a keyframe into the observations; determining whether the plane is coplanar with a second plane of the environment; in accordance with a determination that the plane is coplanar with the second plane, performing planar bundle adjustment on the observations associated with the environment; and in accordance with a determination that the plane is not coplanar with the second plane, performing planar bundle adjustment on a portion of the observations associated with the environment.
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
G01C 21/20 - Instruments for performing navigational calculations
G01S 7/48 - RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES - Details of systems according to groups, of systems according to group
G01S 17/42 - Simultaneous measurement of distance and other co-ordinates
G01S 17/89 - Lidar systems, specially adapted for specific applications, for mapping or imaging
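The coplanarity decision that selects between full and partial planar bundle adjustment can be sketched as a point-to-plane distance test; the SVD plane fit and the tolerance value are illustrative stand-ins for the disclosed method.

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane through 3-D points: returns a unit normal n and
    offset d such that n . p + d is ~0 for points p on the plane."""
    pts = np.asarray(points, float)
    centroid = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - centroid)
    n = vt[-1]                       # direction of least variance
    return n, float(-n @ centroid)

def coplanar(points_a, points_b, tol=0.01):
    """Treat a second tracked plane as coplanar with the first when every
    point of B lies within `tol` of the plane fitted to A."""
    n, d = fit_plane(points_a)
    dist = np.abs(np.asarray(points_b, float) @ n + d)
    return bool(np.all(dist < tol))

# Two patches of the same floor are coplanar; a tabletop 0.7 m up is not.
floor_a = [[0, 0, 0], [1, 0, 0], [0, 1, 0], [1, 1, 0]]
floor_b = [[2, 2, 0.001], [3, 2, -0.001], [2, 3, 0.0]]
table = [[0, 0, 0.7], [1, 0, 0.7], [0, 1, 0.7]]
```

In the abstract's terms, a `True` result would route the observations into a joint planar bundle adjustment, while `False` restricts the adjustment to a portion of the observations.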
This document describes scene understanding for cross reality systems using occupancy grids. In one aspect, a method includes recognizing one or more objects in a model of a physical environment generated using images of the physical environment. For each object, a bounding box is fit around the object. An occupancy grid that includes a multiple cells is generated within the bounding box around the object. A value is assigned to each cell of the occupancy grid based on whether the cell includes a portion of the object. An object representation that includes information describing the occupancy grid for the object is generated. The object representations are sent to one or more devices.
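The per-object occupancy grid can be sketched directly; the 4×4×4 cell count and example points are assumed parameters.

```python
import numpy as np

def occupancy_grid(object_points, bbox_min, bbox_max, cells=(4, 4, 4)):
    """Build an occupancy grid inside an object's bounding box: each cell
    is assigned 1 if any of the object's points fall inside it, else 0."""
    pts = np.asarray(object_points, float)
    lo = np.asarray(bbox_min, float)
    hi = np.asarray(bbox_max, float)
    n = np.asarray(cells)
    idx = np.floor((pts - lo) / (hi - lo) * n).astype(int)
    idx = np.clip(idx, 0, n - 1)      # points on the far face stay in range
    grid = np.zeros(cells, dtype=np.uint8)
    grid[idx[:, 0], idx[:, 1], idx[:, 2]] = 1
    return grid

# A thin object along one edge of its box occupies only cells on that edge.
pts = [[0.1, 0.1, 0.1], [0.9, 0.1, 0.1], [1.0, 0.1, 0.1]]
grid = occupancy_grid(pts, (0, 0, 0), (1, 1, 1))
```

The resulting grid, serialized with the bounding box, is the kind of compact object representation the abstract describes sending to other devices.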
Some embodiments herein are directed to head-mounted Virtual Retina Display (VRD) systems with actuated reflective pupil steering. The display systems include a projection system for generating image content and an actuated reflective optical architecture, which may be part of an optical combiner, that reflects light from the projection system into the user's eyes. The display systems are configured to track the position of a user's eyes and to actuate the reflective optical architecture to change the direction of reflected light so that the reflected light is directed into the user's eyes. The VRDs described herein may be highly efficient, and may have improved size, weight, and luminance such that they are capable of all-day, everyday use.
Embodiments of this disclosure provide systems and methods for displays. In embodiments, a display system includes a light source configured to emit a first light, a lens configured to receive the first light, and an image generator configured to receive the first light and emit a second light. The display system further includes a plurality of waveguides, where at least two of the plurality of waveguides include an in-coupling grating configured to selectively couple the second light. In some embodiments, the light source can comprise a single pupil light source having a reflector and a micro-LED array disposed in the reflector.
G02B 6/00 - OPTICS - OPTICAL ELEMENTS, SYSTEMS OR APPARATUS - Structural details of arrangements comprising light guides and other optical elements, e.g. coupling means
A head mounted display can include a frame, an eyepiece, an image injection device, a sensor array, a reflector, and an off-axis optical element. The frame can be configured to be supported on the head of the user. The eyepiece can be coupled to the frame and configured to be disposed in front of an eye of the user. The eyepiece can include a plurality of layers. The image injection device can be configured to provide image content to the eyepiece for viewing by the user. The sensor array can be integrated in or on the eyepiece. The reflector can be disposed in or on the eyepiece and configured to reflect light received from an object for imaging by the sensor array. The off-axis optical element can be disposed in or on the eyepiece. The off-axis optical element can be configured to receive light reflected from the reflector and direct at least a portion of the light toward the sensor array.
Techniques are described for addressing deformations in a virtual or augmented reality headset. In some implementations, cameras in a headset can obtain image data at different times as the headset moves through a series of poses of the headset. One or more miscalibration conditions for the headset that have occurred as the headset moved through the series of poses can be detected. The series of poses can be divided into groups of poses based on the one or more miscalibration conditions, and bundle adjustment for the groups of poses can be performed using a separate set of camera calibration data. The bundle adjustment for the poses in each group is performed using a same set of calibration data for the group. The camera calibration data for each group is estimated jointly with bundle adjustment estimation for the poses in the group.
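Dividing the series of poses into groups at miscalibration events, so that each group gets its own calibration data during bundle adjustment, can be sketched as:

```python
def split_poses_into_groups(n_poses, miscalibration_events):
    """Split pose indices 0..n_poses-1 into contiguous groups, starting a
    new group at each detected miscalibration event; each group would then
    be bundle-adjusted with its own set of camera calibration data."""
    cuts = sorted({e for e in miscalibration_events if 0 < e < n_poses})
    groups, start = [], 0
    for cut in cuts + [n_poses]:
        groups.append(list(range(start, cut)))
        start = cut
    return groups

# Miscalibration detected before poses 4 and 7 yields three groups, each
# sharing one calibration set within the group.
groups = split_poses_into_groups(10, [7, 4])
```

A headset that never miscalibrates collapses to a single group, i.e. ordinary bundle adjustment with one calibration set.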
In some embodiments, a near-eye display system comprises a stack of waveguides having pillars in a central, active portion of the waveguides. The active portion may include light outcoupling optical elements configured to outcouple image light from the waveguides towards the eye of a viewer. The pillars extend between and separate neighboring ones of the waveguides. The light outcoupling optical elements may include diffractive optical elements that are formed simultaneously with the pillars, for example, by imprinting or casting. The pillars are disposed on one or more major surfaces of each of the waveguides. The pillars may define a distance between two adjacent waveguides of the stack of waveguides. The pillars may be bonded to adjacent waveguides using one or more of the systems, methods, or devices described herein. The bonding provides a high level of thermal stability to the waveguide stack, to resist deformation as temperatures change.
C09J 5/02 - Adhesive processes in general; Adhesive processes not provided for elsewhere, e.g. relating to primers, involving pretreatment of the surfaces to be joined
Methods, systems, and apparatus for performing bundle adjustment using epipolar constraints. A method includes receiving image data from a headset for a particular pose. The image data includes a first image from a first camera of the headset and a second image from a second camera of the headset. The method includes identifying at least one key point in a three-dimensional model of an environment at least partly represented in the first image and the second image and performing bundle adjustment. Bundle adjustment is performed by jointly optimizing a reprojection error for the at least one key point and an epipolar error for the at least one key point. Results of the bundle adjustment are used to perform at least one of (i) updating the three-dimensional model, (ii) determining a position of the headset at the particular pose, or (iii) determining extrinsic parameters of the first camera and second camera.
G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
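The epipolar residual that is jointly optimized with reprojection error can be written down from the essential matrix; the camera convention (P2 = R·P1 + t), the example pose, and the key point below are assumptions for illustration.

```python
import numpy as np

def skew(t):
    """Cross-product matrix: skew(t) @ v equals np.cross(t, v)."""
    return np.array([[0.0, -t[2], t[1]],
                     [t[2], 0.0, -t[0]],
                     [-t[1], t[0], 0.0]])

def epipolar_error(x1, x2, R, t):
    """Algebraic epipolar residual x2^T E x1 with essential matrix
    E = [t]x R, for homogeneous normalised points x1 (first camera) and
    x2 (second camera), under the convention P2 = R @ P1 + t."""
    return float(np.asarray(x2) @ (skew(t) @ R) @ np.asarray(x1))

def joint_cost(reproj_residuals, epipolar_residuals, w_epi=1.0):
    """Scalar least-squares cost combining both residual types, as in a
    bundle adjustment that optimises them jointly."""
    r = np.concatenate([np.ravel(reproj_residuals),
                        w_epi * np.ravel(epipolar_residuals)])
    return float(r @ r)

# A perfect stereo correspondence gives ~zero epipolar error.
P1 = np.array([0.4, -0.1, 3.0])               # key point, first camera frame
a = 0.1
R = np.array([[np.cos(a), -np.sin(a), 0.0],
              [np.sin(a), np.cos(a), 0.0],
              [0.0, 0.0, 1.0]])               # small yaw between cameras
t = np.array([0.06, 0.0, 0.0])                # stereo baseline
P2 = R @ P1 + t
x1 = P1 / P1[2]
x2 = P2 / P2[2]
```

A correspondence that violates the geometry (e.g. a vertically shifted match) produces a nonzero residual, which the joint optimization penalises alongside reprojection error.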
64.
MISCALIBRATION DETECTION FOR VIRTUAL REALITY AND AUGMENTED REALITY SYSTEMS
Methods, systems, and apparatus, including computer programs encoded on computer storage media, for performing miscalibration detection. One of the methods includes receiving sensor data from each of multiple sensors of a device in a system configured to provide augmented reality or mixed reality output to a user. Feature values are determined based on the sensor data for a predetermined set of features. The determined feature values are processed using a miscalibration detection model that has been trained, based on examples of captured sensor data from one or more devices, to predict whether a miscalibration condition of one or more of the multiple sensors has occurred. Based on the output of the miscalibration detection model, the system determines whether to initiate recalibration of extrinsic parameters for at least one of the multiple sensors or to bypass recalibration.
G06F 3/04815 - Interaction with a metaphor-based environment or interaction object displayed as three-dimensional, e.g. changing the viewpoint of the user with respect to the environment or object
A computer-implemented method includes receiving gaze information about an observer of a video stream; determining a video compression spatial map for the video stream based on the received gaze information and performance characteristics of a network connection with the observer; compressing the video stream according to the video compression spatial map; and sending the compressed video stream to the observer.
H04B 1/66 - TRANSMISSION - Details of transmission systems not characterised by the medium used for transmission, for improving efficiency of transmission
H04N 7/12 - Systems in which the television signal is transmitted via one channel or a plurality of parallel channels, the bandwidth of each channel being less than the bandwidth of the television signal
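One way to sketch a video compression spatial map driven by gaze and connection performance is below; the distance falloff curve and the 5 Mbps reference rate are invented for illustration, not the claimed method.

```python
import numpy as np

def compression_quality_map(width, height, gaze_xy, bandwidth_kbps,
                            tile=8, max_q=50.0, ref_kbps=5000.0):
    """Per-tile quality map for gaze-driven video compression: quality is
    highest at the gaze point, falls off with distance from it, and is
    scaled down when the connection's bandwidth is below a reference rate."""
    cols, rows = width // tile, height // tile
    r, c = np.mgrid[0:rows, 0:cols]
    cx = c * tile + tile / 2.0                  # tile centre x, pixels
    cy = r * tile + tile / 2.0                  # tile centre y, pixels
    dist = np.hypot(cx - gaze_xy[0], cy - gaze_xy[1]) / np.hypot(width, height)
    bw_scale = min(1.0, bandwidth_kbps / ref_kbps)
    return np.clip(max_q * bw_scale * (1.0 - 0.8 * dist), 1.0, max_q)

qmap = compression_quality_map(640, 480, gaze_xy=(100, 100),
                               bandwidth_kbps=5000)
```

An encoder would then spend bits according to this map, so the foveal region survives compression while the periphery and low-bandwidth sessions are compressed harder.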
66.
TRANSMODAL INPUT FUSION FOR MULTI-USER GROUP INTENT PROCESSING IN VIRTUAL ENVIRONMENTS
This document describes imaging and visualization systems in which the intent of a group of users in a shared space is determined and acted upon. In one aspect, a method includes identifying, for a group of users in a shared virtual space, a respective objective for each of two or more of the users in the group of users. For each of the two or more users, a respective intent of the user is determined based on inputs from multiple sensors having different input modalities. At least a portion of the multiple sensors are sensors of a device of the user that enables the user to participate in the shared virtual space. A determination is made, based on the respective intent, whether the user is performing the respective objective for the user. Output data is generated and provided based on the respective objectives and respective intents.
G06F 3/0481 - Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or on a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
G06F 3/0484 - Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range of values
G06T 13/40 - 3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings
G06T 19/00 - Manipulating 3D models or images for computer graphics
67.
BRAGG GRATINGS FOR AN AUGMENTED REALITY DISPLAY SYSTEM
A head-mounted display system can include a head-mountable frame, a light projection system configured to output light to provide image content to a user's eye, and a waveguide supported by the frame. The waveguide can be configured to guide at least a portion of the light from the light projection system coupled into the waveguide to present the image content to the user's eye. The system can include a grating that includes a first reflective diffractive optical element and a second reflective diffractive optical element. The combination of the first and second reflective diffractive optical elements can operate as a transmissive diffractive optical element. The first reflective diffractive optical element can be a volume phase holographic grating. The second reflective diffractive optical element can be a liquid crystal polarization grating.
G02B 5/30 - OPTICS; OPTICAL ELEMENTS, SYSTEMS OR APPARATUS - Optical elements other than lenses - Polarising elements
G02B 26/08 - Optical devices or arrangements for the control of light using movable or deformable optical elements for controlling the direction of light
G02B 30/00 - Optical systems or apparatus for producing three-dimensional [3D] effects, e.g. stereoscopic images
G02F 1/133 - Constructional arrangements; Operation of liquid crystal cells; Circuit arrangements
G02F 1/295 - Devices or arrangements for the control of the intensity, colour, phase, polarisation or direction of light arriving from an independent light source, e.g. switching, gating or modulating; Non-linear optics for the control of the position or the direction of light beams, i.e. deflection, in an optical waveguide structure
Systems and methods for presenting audio using an audio system supporting multiple modes of operation are disclosed. In some embodiments, elements of the audio system are configured to operate in the different modes. For example, the audio system is configured to operate in a first mode and a second mode. The audio system may operate in the first mode or the second mode based on an application running on a system or a signal generated by the system.
A voice user interface (VUI) and methods for operating the VUI are disclosed. In some embodiments, the VUI is configured to receive and process linguistic and non-linguistic inputs. For example, the VUI receives an audio signal and determines whether the audio signal comprises a linguistic and/or a non-linguistic input. In accordance with a determination that the audio signal comprises a non-linguistic input, the VUI causes a system to perform an action associated with the non-linguistic input.
Methods, systems, and apparatus, including computer programs encoded on computer storage media, for calibrating an augmented reality device using camera and inertial measurement unit data. In some implementations, a bundle adjustment process jointly optimizes or estimates states of the augmented reality device. The process can use, as input, visual and inertial measurements as well as factory-calibrated sensor extrinsic parameters. The process performs bundle adjustment and uses non-linear optimization of estimated states constrained by the measurements and the factory calibrated extrinsic parameters. The process can jointly optimize inertial constraints, IMU calibration, and camera calibrations. Output of the process can include most likely estimated states, such as data for a 3D map of an environment, a trajectory of the device, and/or updated extrinsic parameters of the visual and inertial sensors (e.g., cameras and IMUs).
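As an informal sketch of the joint optimization described above (not code from the publication; the scene geometry, pinhole camera model, and prior weight are invented for illustration), reprojection errors can be minimized together with a penalty that constrains the extrinsics to stay near their factory-calibrated values:

```python
import numpy as np

def gauss_newton(res_fn, x0, iters=15, eps=1e-6):
    """Tiny Gauss-Newton solver with a finite-difference Jacobian."""
    x = x0.astype(float)
    for _ in range(iters):
        r = res_fn(x)
        J = np.empty((r.size, x.size))
        for k in range(x.size):
            dx = np.zeros_like(x)
            dx[k] = eps
            J[:, k] = (res_fn(x + dx) - r) / eps
        x = x - np.linalg.lstsq(J, r, rcond=None)[0]
    return x

def project(point, cam_t, f=500.0):
    """Pinhole projection with identity rotation (illustrative only)."""
    p = point - cam_t
    return f * p[:2] / p[2]

factory_t = np.array([[0.0, 0.0, 0.0], [0.5, 0.0, 0.0]])  # factory extrinsics
true_point = np.array([0.1, -0.2, 5.0])
observations = [project(true_point, t) for t in factory_t]

def residuals(x, prior_weight=10.0):
    """Reprojection errors plus a prior pulling extrinsics toward factory values."""
    point, cam_ts = x[:3], x[3:].reshape(-1, 3)
    r = [project(point, t) - z for t, z in zip(cam_ts, observations)]
    r.append(prior_weight * (cam_ts - factory_t).ravel())  # calibration constraint
    return np.concatenate(r)

x0 = np.concatenate([[0.0, 0.0, 4.0], (factory_t + 0.01).ravel()])
x_opt = gauss_newton(residuals, x0)
refined_point = x_opt[:3]
```

The prior term is what keeps the estimated extrinsics anchored to the factory calibration while the visual measurements refine the map point.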
An eyepiece waveguide for an augmented reality display system. The eyepiece waveguide can include an optically transmissive substrate with an input coupling grating (ICG) region. The ICG region can receive a beam of light and couple the beam into the substrate in a guided propagation mode. The eyepiece waveguide can also include a combined pupil expander-extractor (CPE) grating region that receives the beam of light from the ICG region and alters the propagation direction of the beam with a first interaction and out-couples the beam with a second interaction. The diffractive features of the CPE grating region can be arranged in rows and columns of alternating higher and lower quadrilateral surfaces, or the diffractive features can comprise diamond-shaped raised ridges. The eyepiece waveguide can also include one or more recycler grating regions.
Methods, systems, and apparatus, including computer programs encoded on computer storage media, for camera calibration during bundle adjustment. One of the methods includes maintaining a three-dimensional model of an environment and a plurality of image data clusters that each include data generated from images captured by two or more cameras included in a device. The method includes jointly determining, for a three-dimensional point represented by an image data cluster: (i) the newly estimated coordinates for the three-dimensional point for an update to the three-dimensional model or a trajectory of the device, and (ii) the newly estimated calibration data that represents the spatial relationship between the two or more cameras.
Systems include three optical elements arranged along an optical axis each having a different cylinder axis and a variable cylinder refractive power. Collectively, the three elements form a compound optical element having an overall spherical refractive power (SPH), cylinder refractive power (CYL), and cylinder axis (Axis) that can be varied according to a prescription (Rx).
Systems and methods for managing multi-objective alignments in imprinting (e.g., single-sided or double-sided) are provided. An example system includes rollers for moving a template roll, a stage for holding a substrate, a dispenser for dispensing resist on the substrate, a light source for curing the resist to form an imprint on the substrate when a template of the template roll is pressed into the resist on the substrate, a first inspection system for registering a fiducial mark of the template to determine a template offset, a second inspection system for registering the imprint on the substrate to determine a wafer registration offset between a target location and an actual location of the imprint, and a controller for moving the substrate with the resist below the template based on the template offset and for determining an overlay bias of the imprint on the substrate based on the wafer registration offset.
Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for training an object recognition neural network using multiple data sources. One of the methods includes receiving training data that includes a plurality of training images from a first source and images from a second source. A set of training images are obtained from the training data. For each training image in the set of training images, contrast equalization is applied to the training image to generate a modified image. The modified image is processed using the neural network to generate an object recognition output for the modified image. A loss is determined based on errors between, for each training image in the set, the object recognition output for the modified image generated from the training image and ground-truth annotation for the training image. Parameters of the neural network are updated based on the determined loss.
G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
G06K 9/32 - Aligning or centring of the image pick-up or image-field
G06T 19/00 - Manipulating 3D models or images for computer graphics
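The contrast equalization applied to each training image in the method above is not specified in detail; one common realization is plain histogram equalization, sketched here as an assumption:

```python
import numpy as np

def equalize_contrast(image):
    """Histogram equalization for a uint8 grayscale image -- one plausible
    form of the contrast-equalization step (an assumption, not the
    publication's exact procedure)."""
    hist = np.bincount(image.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]
    # Map each gray level through the normalized cumulative distribution.
    lut = np.clip(
        np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255), 0, 255
    ).astype(np.uint8)
    return lut[image]

# A low-contrast image occupying only gray levels 100-120:
rng = np.random.default_rng(0)
img = rng.integers(100, 121, size=(32, 32), dtype=np.uint8)
out = equalize_contrast(img)
```

After equalization the modified image spans the full dynamic range, which helps a network trained on images from heterogeneous sources see comparable inputs.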
76.
CAMERA EXTRINSIC CALIBRATION VIA RAY INTERSECTIONS
Embodiments provide image display systems and methods for extrinsic calibration of one or more cameras. More specifically, embodiments are directed to a camera extrinsic calibration approach based on determining the intersection of rays projecting from the optical centers of the camera and a reference camera. Embodiments determine the relative position and orientation of one or more cameras, given image(s) of the same target object from each camera, by projecting measured image points into 3D rays in the real world. The extrinsic parameters are found by minimizing the error between the expected 3D intersections of those rays and the known 3D target points.
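A minimal sketch of the ray-intersection idea (invented geometry, not the publication's algorithm): the 3D point closest to a bundle of rays is the solution of a small linear least-squares system, and a calibration can compare such intersections against known target points:

```python
import numpy as np

def nearest_point_to_rays(origins, directions):
    """Least-squares 3D point minimizing distance to a bundle of rays.
    origins: (N, 3) optical centers; directions: (N, 3) unit ray directions."""
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for o, d in zip(origins, directions):
        P = np.eye(3) - np.outer(d, d)  # projector onto the plane normal to d
        A += P
        b += P @ o
    return np.linalg.solve(A, b)

# Two cameras whose measured rays both pass through the target point (2, 1, 5):
target = np.array([2.0, 1.0, 5.0])
origins = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
dirs = np.array([(target - o) / np.linalg.norm(target - o) for o in origins])
point = nearest_point_to_rays(origins, dirs)
```

With noiseless rays the recovered intersection matches the target exactly; with noise, the residual between intersections and known target points is the quantity an extrinsic solver would minimize.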
An eye tracking system can include a first camera configured to capture a first plurality of visual data of a right eye at a first sampling rate. The system can include a second camera configured to capture a second plurality of visual data of a left eye at a second sampling rate. The second plurality of visual data can be captured during different sampling times than the first plurality of visual data. The system can estimate, based on at least some visual data of the first and second plurality of visual data, visual data of at least one of the right or left eye at a sampling time during which visual data of that eye are not being captured. Eye movements of the eye can be determined based on at least some of the estimated visual data and at least some visual data of the first or second plurality of visual data.
A61B 3/113 - Apparatus for the optical examination of the eyes; Apparatus for the clinical examination of the eyes of the objective type, i.e. instruments for examining the eyes independently of the patient's perceptions or reactions, for determining or recording eye movement
A61B 3/14 - Arrangements specially adapted for eye photography
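Because the two eyes are sampled at interleaved times, the estimation step can be as simple as temporal interpolation between neighboring captures. A sketch with invented gaze values (the actual estimator may be more sophisticated):

```python
import numpy as np

# Right eye sampled at even times; the left eye would be sampled in between.
right_times = np.array([0.0, 2.0, 4.0, 6.0])
right_gaze_x = np.array([1.0, 1.2, 1.4, 1.6])  # hypothetical gaze coordinate

def estimate_at(t, times, values):
    """Estimate an eye signal at a time it was not sampled
    (linear interpolation, a simple stand-in for the estimation step)."""
    return np.interp(t, times, values)

# Estimate the right eye's signal at a left-eye sampling time:
est = estimate_at(3.0, right_times, right_gaze_x)
```

This effectively doubles the temporal resolution of binocular eye-movement estimates without requiring both cameras to run at the combined rate.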
A display system can include a head-mounted display configured to project light to an eye of a user to display virtual image content at different amounts of divergence and collimation. The display system can include an inward-facing imaging system, possibly comprising a plurality of cameras, that images the user's eye and glints formed thereon, and processing electronics that are in communication with the inward-facing imaging system and that are configured to obtain an estimate of a center of the cornea of the user's eye using data derived from the glint images. The display system may use spherical and aspheric cornea models to estimate a location of the corneal center of the user's eye.
A61B 3/107 - Apparatus for the optical examination of the eyes; Apparatus for the clinical examination of the eyes of the objective type, i.e. instruments for examining the eyes independently of the patient's perceptions or reactions, for determining the shape or measuring the curvature of the cornea
A61B 5/00 - Measuring for diagnostic purposes; Identification of persons
A61B 5/1171 - Identification of persons based on the shape or appearance of their bodies or parts thereof
A method for measuring performance of a head-mounted display module, the method including arranging the head-mounted display module relative to a plenoptic camera assembly so that an exit pupil of the head-mounted display module coincides with a pupil of the plenoptic camera assembly; emitting light from the head-mounted display module while the head-mounted display module is arranged relative to the plenoptic camera assembly; filtering the light at the exit pupil of the head-mounted display module; acquiring, with the plenoptic camera assembly, one or more light field images projected from the head-mounted display module with the filtered light; and determining information about the performance of the head-mounted display module based on the acquired light field images.
A method includes providing a wafer including a first surface grating extending over a first area of a surface of the wafer and a second surface grating extending over a second area of the surface of the wafer; de-functionalizing a portion of the surface grating in at least one of the first surface grating area and the second surface grating area; and singulating an eyepiece from the wafer, the eyepiece including a portion of the first surface grating area and a portion of the second surface grating area. The first surface grating in the eyepiece corresponds to an input coupling grating for a head-mounted display and the second surface grating corresponds to a pupil expander grating for the head-mounted display.
H01L 21/66 - Testing or measuring during manufacture or treatment
H01L 21/67 - Apparatus specially adapted for handling semiconductor or electric solid state devices during manufacture or treatment thereof; Apparatus specially adapted for handling wafers during manufacture or treatment of semiconductor or electric solid state devices or components
H01L 21/68 - Apparatus specially adapted for handling semiconductor or electric solid state devices during manufacture or treatment thereof; Apparatus specially adapted for handling wafers during manufacture or treatment of semiconductor or electric solid state devices or components, for positioning, orientation or alignment purposes
A method for displaying an image using a wearable display system includes directing display light from a display towards a user through an eyepiece to project images in the user's field of view, determining a relative location between an ambient light source and the eyepiece, and adjusting an attenuation of ambient light from the ambient light source through the eyepiece depending on the relative location between the ambient light source and the eyepiece.
Disclosed are techniques for improving the color uniformity of a display of a display device. A plurality of images of the display are captured using an image capture device. The plurality of images are captured in a color space, with each image corresponding to one of a plurality of color channels. A global white balance is performed on the plurality of images to obtain a plurality of normalized images. A local white balance is performed on the plurality of normalized images to obtain a plurality of correction matrices. Performing the local white balance includes defining a set of weighting factors based on a figure of merit and computing a plurality of weighted images based on the plurality of normalized images and the set of weighting factors. The plurality of correction matrices are computed based on the plurality of weighted images.
G09G 5/00 - Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
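As a toy illustration of the global white-balance step (hypothetical luminance values; the figure of merit and weighting used for the local step are not reproduced here), each captured color-channel image can be scaled so that its mean matches a common target:

```python
import numpy as np

# Per-channel captured luminance maps of the display (hypothetical values).
channels = {
    'R': np.array([[0.9, 1.1], [1.0, 1.0]]),
    'G': np.array([[1.8, 2.2], [2.0, 2.0]]),
    'B': np.array([[0.45, 0.55], [0.5, 0.5]]),
}

# Global white balance: scale each channel so its mean matches a common
# target, yielding the normalized images (one simple reading of the step).
target = 1.0
normalized = {c: img * (target / img.mean()) for c, img in channels.items()}
```

The normalized images then share a common overall level, so the subsequent local white balance only has to correct spatial non-uniformity.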
83.
OBJECT RECOGNITION NEURAL NETWORK FOR AMODAL CENTER PREDICTION
Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for an object recognition neural network for amodal center prediction. One of the methods includes receiving an image of an object captured by a camera. The image of the object is processed using an object recognition neural network that is configured to generate an object recognition output. The object recognition output includes data defining a predicted two-dimensional amodal center of the object, wherein the predicted two-dimensional amodal center of the object is a projection of a predicted three-dimensional center of the object under a camera pose of the camera that captured the image.
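The 2D amodal center is defined as the projection of the 3D object center under the camera pose; with a pinhole model this relationship can be sketched as follows (the intrinsics and pose values are invented for illustration):

```python
import numpy as np

def project_amodal_center(center_3d, R, t, fx, fy, cx, cy):
    """Project a 3D object center into the image under camera pose (R, t)
    with pinhole intrinsics -- the geometric relationship that the
    network's 2D amodal-center target encodes."""
    p_cam = R @ center_3d + t           # world -> camera coordinates
    u = fx * p_cam[0] / p_cam[2] + cx   # perspective divide + intrinsics
    v = fy * p_cam[1] / p_cam[2] + cy
    return np.array([u, v])

center = np.array([0.5, -0.25, 0.0])
R = np.eye(3)                  # camera aligned with world axes
t = np.array([0.0, 0.0, 4.0])  # object 4 m in front of the camera
uv = project_amodal_center(center, R, t, fx=600, fy=600, cx=320, cy=240)
```

Note that this projected center can fall inside the occluded portion of an object, which is exactly why the target is called "amodal" rather than the centroid of the visible pixels.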
Embodiments provide image display systems and methods for one or more camera calibration using a two-sided diffractive optical element (DOE). More specifically, embodiments are directed to determining intrinsic parameters of one or more cameras using a single image obtained using a two-sided DOE. The two-sided DOE has a first pattern on a first surface and a second pattern on a second surface. Each of the first and second patterns may be formed by repeating sub-patterns that are aligned when tiled on each surface. The patterns on the two-sided DOE are formed such that the brightness of the central intensity peak on the image of the image pattern formed by the DOE is reduced to a predetermined amount.
Techniques for tracking eye movement in an augmented reality system identify a plurality of base images of an object or a portion thereof. A search image may be generated based at least in part upon at least some of the plurality of base images. A deep learning result may be generated at least by performing a deep learning process on a base image using a neural network in a deep learning mode. A captured image may be localized at least by performing an image registration process on the captured image and the search image using a Kalman filter model and the deep learning result.
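The localization step above combines a Kalman filter model with the deep learning result; a scalar predict/update cycle, simplified to one dimension with invented noise parameters, looks like this:

```python
def kalman_update(x, P, z, R=1.0, Q=0.01):
    """One scalar predict/update step: fuse a new measurement z (e.g. a
    deep-learning registration result) into the running estimate x with
    variance P. R and Q are illustrative noise parameters."""
    P = P + Q                  # predict: process noise grows uncertainty
    K = P / (P + R)            # Kalman gain
    x = x + K * (z - x)        # correct toward the measurement
    P = (1.0 - K) * P          # updated uncertainty
    return x, P

# Track a hypothetical 1-D eye-feature coordinate from noisy localizations.
x, P = 0.0, 1.0
for z in [0.9, 1.1, 1.0, 1.05, 0.95]:
    x, P = kalman_update(x, P, z)
```

The filtered estimate moves toward the measurements while its variance shrinks, which is the smoothing behavior a registration pipeline relies on between deep-learning updates.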
Enhanced eye-tracking techniques for augmented or virtual reality display systems. An example method includes obtaining an image of an eye of a user of a wearable system, the image depicting glints on the eye caused by respective light emitters, wherein the image is a low dynamic range (LDR) image; generating a high dynamic range (HDR) image via computation of a forward pass of a machine learning model using the image; determining location information associated with the glints as depicted in the HDR image, wherein the location information is usable to inform an eye pose of the eye.
G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from the processing unit to the output unit, e.g. interface arrangements
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
G06F 3/048 - Interaction techniques based on graphical user interfaces [GUI]
G06F 3/0481 - Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
Wearable and optical display systems and methods for operation thereof incorporating monovision display techniques are disclosed. A wearable device may include left and right optical stacks configured to switch between displaying virtual content at a first focal plane or a second focal plane. The wearable device may determine whether or not an activation condition is satisfied. In response to determining that the activation condition is satisfied, a monovision display mode associated with the wearable device may be activated, which may include causing the left optical stack to display the virtual content at the first focal plane and causing the right optical stack to display the virtual content at the second focal plane.
Disclosed herein are systems and methods for presenting an audio signal associated with presentation of a virtual object colliding with a surface. The virtual object and the surface may be associated with a mixed reality environment. Generation of the audio signal may be based on at least one of an audio stream from a microphone and a video stream from a sensor. In some embodiments, the collision between the virtual object and the surface is associated with a footstep on the surface.
Disclosed herein are systems and methods for calculating angular acceleration based on inertial data using two or more inertial measurement units (IMUs). The calculated angular acceleration may be used to estimate a position of a wearable head device comprising the IMUs. Virtual content may be presented based on the position of the wearable head device. In some embodiments, a first IMU and a second IMU share a coincident measurement axis.
G09G 5/00 - Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
G06T 1/00 - General purpose image data processing
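One way to read the two-IMU computation above (a sketch under the rigid-body assumption; not the publication's exact formulation): the difference between the two accelerometer readings determines the component of angular acceleration perpendicular to the IMU separation vector r.

```python
import numpy as np

def angular_accel_perp(a_A, a_B, omega, r):
    """Component of angular acceleration perpendicular to r, from the
    rigid-body relation a_B - a_A = alpha x r + omega x (omega x r).
    The component of alpha along r is unobservable from this pair alone."""
    d = (a_B - a_A) - np.cross(omega, np.cross(omega, r))
    return np.cross(r, d) / np.dot(r, r)

# Synthetic check: generate accelerations from a known alpha and omega.
r = np.array([0.1, 0.0, 0.0])           # IMU separation on the head device
omega = np.array([0.0, 0.0, 2.0])       # angular velocity
alpha_true = np.array([0.0, 3.0, 0.0])  # chosen perpendicular to r
a_A = np.zeros(3)
a_B = a_A + np.cross(alpha_true, r) + np.cross(omega, np.cross(omega, r))
alpha_est = angular_accel_perp(a_A, a_B, omega, r)
```

A shared (coincident) measurement axis, as the abstract mentions, removes one source of inter-IMU misalignment when forming the difference a_B - a_A.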
90.
PIECEWISE PROGRESSIVE AND CONTINUOUS CALIBRATION WITH COHERENT CONTEXT
A piecewise progressive continuous calibration method with context coherence is utilized to improve display of virtual content. When a set of frames are rendered to depict a virtual image, the VAR system may identify a location of the virtual content in the frames. The system may convolve a test pattern at the location of the virtual content to generate a calibration frame. The calibration frame is inserted within the set of frames in a manner that is imperceptible to the user.
G09G 5/00 - Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
91.
AUGMENTED AND VIRTUAL REALITY DISPLAY SYSTEMS WITH CORRELATED IN-COUPLING AND OUT-COUPLING OPTICAL REGIONS
Augmented reality and virtual reality display systems and devices are configured for efficient use of projected light. In some aspects, a display system includes a light projection system and a head-mounted display configured to project light into an eye of the user to display virtual image content. The head-mounted display includes at least one waveguide comprising a plurality of in-coupling regions each configured to receive, from the light projection system, light corresponding to a portion of the user's field of view and to in-couple the light into the waveguide; and a plurality of out-coupling regions configured to out-couple the light out of the waveguide to display the virtual content, wherein each of the out-coupling regions is configured to receive light from a different one of the in-coupling regions. In some implementations, each in-coupling region has a one-to-one correspondence with a unique corresponding out-coupling region.
An eyepiece waveguide for an augmented reality display system includes a substrate having a first surface and a second surface and a diffractive input coupling element. The diffractive input coupling element is configured to receive an input beam of light and to couple the input beam into the substrate as a guided beam. The eyepiece waveguide also includes a diffractive combined pupil expander-extractor (CPE) element formed on or in the first surface or the second surface of the substrate. The diffractive CPE element includes a first portion and a second portion divided by an axis. A first set of diffractive optical elements is disposed in the first portion and oriented at a positive angle with respect to the axis and a second set of diffractive optical elements is disposed in the second portion and oriented at a negative angle with respect to the axis.
G02B 6/00 - OPTICS; OPTICAL ELEMENTS, SYSTEMS OR APPARATUS - Structural details of arrangements comprising light guides and other optical elements, e.g. coupling means
Embodiments transform an image frame based on a position of pupils of a viewer to eliminate visual artefacts formed on an image frame displayed on a scanning-type display device. An MR system obtains a first image frame corresponding to a first view perspective associated with a first pupil position. The system receives data from an eye tracking device, determines a second pupil position, and generates a second image frame corresponding to a second view perspective associated with the second pupil position. A first set of pixels of the second image frame are shifted by a first shift value, and a second set of pixels of the second image frame are shifted by a second shift value, where the shift values are calculated based on at least the second pupil position. The system transmits the second image frame to a near-eye display device to be displayed thereon.
Embodiments shift the color fields of a rendered image frame based on eye tracking data (e.g., the position of the user's pupils). An MR device obtains a first image frame having a set of color fields. The first image frame corresponds to a first position of the pupils of the viewer. The MR device then determines a second position of the pupils of the viewer based on, for example, data received from an eye tracking device coupled to the MR device. The MR device generates, based on the first image frame, a second image frame corresponding to the second position of the pupils. The second image frame is generated by shifting color fields by a shift value based on the second position of the pupils of the viewer. The MR device transmits the second image frame to a display device of the MR device to be displayed thereon.
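A toy version of the per-color-field shift (hypothetical shift values; in practice each shift would be derived from the updated pupil position and the field's display time):

```python
import numpy as np

def shift_color_fields(fields, shifts):
    """Shift each sequentially displayed color field horizontally by its
    own pixel offset, e.g. to compensate pupil motion between fields."""
    return [np.roll(f, s, axis=1) for f, s in zip(fields, shifts)]

# Three 4x4 color fields (R, G, B) of a rendered frame, hypothetical shifts.
frame = [np.arange(16).reshape(4, 4) for _ in range(3)]
shifted = shift_color_fields(frame, shifts=[0, 1, 2])
```

Because the color fields are displayed sequentially, giving each field its own shift is what prevents the color-separation artifacts that a single whole-frame shift cannot correct.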
A method for fabricating a cantilever having a device surface, a tapered surface, and an end region includes providing a semiconductor substrate having a first side and a second side opposite to the first side and etching a predetermined portion of the second side to form a plurality of recesses in the second side. Each of the plurality of recesses comprises an etch termination surface. The method also includes anisotropically etching the etch termination surface to form the tapered surface of the cantilever and etching a predetermined portion of the device surface to release the end region of the cantilever.
H01L 21/31 - Treatment of semiconductor bodies using processes or apparatus not covered by the groups, to form insulating layers thereon, e.g. for masking or by using photolithographic techniques; After-treatment of these layers; Use of specified materials for these layers
H01L 21/469 - Treatment of semiconductor bodies using processes or apparatus not covered by the groups, to change the physical characteristics or the shape of their surface, e.g. etching, polishing, cutting, to form insulating layers thereon, e.g. for masking or by using photolithographic techniques; After-treatment of these layers
96.
COMPUTATIONALLY EFFICIENT METHOD FOR COMPUTING A COMPOSITE REPRESENTATION OF A 3D ENVIRONMENT
Methods and apparatus for providing a representation of an environment, for example, in an XR system or any suitable computer vision or robotics application. A representation of an environment may include one or more planar features. The representation of the environment may be provided by jointly optimizing plane parameters of the planar features and the sensor poses at which the planar features are observed. The joint optimization may be based on a reduced matrix and a reduced residual vector in lieu of the Jacobian matrix and the original residual vector.
G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
A pupil separation system includes an input surface, a central portion including a set of dichroic mirrors, a first reflective surface disposed laterally with respect to the central portion, and a second reflective surface disposed laterally with respect to the central portion. The pupil separation system also includes an exit face including a central surface operable to transmit light in a first wavelength range, a first peripheral surface adjacent the central surface and operable to transmit light in a second wavelength range, and a second peripheral surface adjacent the central surface and opposite to the first peripheral surface. The second peripheral surface is operable to transmit light in a third wavelength range.
G02B 27/14 - Beam splitting or combining systems operating by reflection only
G02B 6/00 - OPTICS; OPTICAL ELEMENTS, SYSTEMS OR APPARATUS - Structural details of arrangements comprising light guides and other optical elements, e.g. coupling means
G02B 27/00 - Optical systems or apparatus not provided for by any of the groups
G02B 27/09 - Beam shaping, e.g. changing the cross-sectional area, not otherwise provided for
H04N 13/344 - Displays for viewing with the aid of special glasses or head-mounted displays [HMD] with head-mounted left-right displays
98.
METHOD AND SYSTEM FOR INTEGRATION OF REFRACTIVE OPTICS WITH A DIFFRACTIVE EYEPIECE WAVEGUIDE DISPLAY
An eyepiece waveguide includes a set of waveguide layers having a world side and a user side. The eyepiece waveguide also includes a first cover plate having a first optical power and disposed adjacent the world side of the set of waveguide layers and a second cover plate having a second optical power and disposed adjacent the user side of the set of waveguide layers.
Techniques for calibrating cameras and displays are disclosed. An image of a target is captured using a camera. The target includes a tessellation having a repeated structure of tiles. The target further includes unique patterns superimposed onto the tessellation. Matrices are formed based on pixel intensities within the captured image. Each of the matrices includes values each corresponding to the pixel intensities within one of the tiles. The matrices are convolved with kernels to generate intensity maps. Each of the kernels is generated based on a corresponding unique pattern of the unique patterns. An extrema value is identified in each of the intensity maps. A location of each of the unique patterns within the image is determined based on the extrema value for each of the intensity maps. A device calibration is performed using the location of each of the unique patterns.
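The correlate-and-find-extrema step above can be sketched with a per-tile intensity matrix and one pattern kernel (the 8x8 grid, the 2x2 pattern, and its location are invented for the example):

```python
import numpy as np

def locate_pattern(tile_values, kernel):
    """Cross-correlate the per-tile intensity matrix with a pattern kernel
    and return the row/column of the strongest response (the extrema step)."""
    kh, kw = kernel.shape
    th, tw = tile_values.shape
    response = np.empty((th - kh + 1, tw - kw + 1))
    for i in range(response.shape[0]):
        for j in range(response.shape[1]):
            response[i, j] = np.sum(tile_values[i:i + kh, j:j + kw] * kernel)
    return np.unravel_index(np.argmax(response), response.shape)

# Tile intensities with a hypothetical 2x2 unique pattern embedded at (3, 5).
tiles = np.zeros((8, 8))
pattern = np.array([[1.0, -1.0], [-1.0, 1.0]])
tiles[3:5, 5:7] = pattern
loc = locate_pattern(tiles, pattern)
```

Repeating this with one kernel per unique pattern yields the set of pattern locations from which the calibration is computed.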
Techniques are disclosed for using and training a descriptor network. An image may be received and provided to the descriptor network. The descriptor network may generate an image descriptor based on the image. The image descriptor may include a set of elements distributed between a major vector comprising a first subset of the set of elements and a minor vector comprising a second subset of the set of elements. The second subset of the set of elements may include more elements than the first subset of the set of elements. A hierarchical normalization may be imposed onto the image descriptor by normalizing the major vector to a major normalization amount and normalizing the minor vector to a minor normalization amount. The minor normalization amount may be less than the major normalization amount.
G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
G06K 9/36 - Image preprocessing, i.e. processing the image information without deciding about the identity of the image
G06K 9/46 - Extraction of features or characteristics of the image
G06K 9/68 - Methods or arrangements for recognition using electronic means, using sequential comparisons of the image signals with a plurality of references, e.g. addressable memory
H04N 5/445 - Receiver circuitry for displaying additional information
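The hierarchical normalization applied to the image descriptor above can be sketched as follows (the split point and the two normalization amounts are invented for illustration; note the minor vector has more elements but is normalized to the smaller amount, as the abstract describes):

```python
import numpy as np

def hierarchical_normalize(descriptor, n_major, major_amt=0.8, minor_amt=0.6):
    """Split a descriptor into a major vector (first n_major elements) and a
    minor vector (the rest), then L2-normalize each to its own amount,
    with minor_amt < major_amt."""
    major, minor = descriptor[:n_major], descriptor[n_major:]
    major = major / np.linalg.norm(major) * major_amt
    minor = minor / np.linalg.norm(minor) * minor_amt
    return np.concatenate([major, minor])

desc = np.array([3.0, 4.0, 1.0, 2.0, 2.0, 4.0, 1.0, 1.0])
out = hierarchical_normalize(desc, n_major=2)
```

A consequence of the two fixed amounts is that the whole descriptor also has a fixed L2 norm (sqrt(major_amt**2 + minor_amt**2)), so descriptor distances are dominated by the major vector while the minor vector refines them.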