An augmented reality system includes a light source configured to generate a virtual light beam. The system also includes a light guiding optical element having an entry portion, an exit portion, and a surface having a diverter disposed adjacent thereto. The light source and the light guiding optical element are configured such that the virtual light beam enters the light guiding optical element through the entry portion, propagates through the light guiding optical element by at least partially reflecting off of the surface, and exits the light guiding optical element through the exit portion. The light guiding optical element is transparent to a first real-world light beam. The diverter is configured to modify a light path of a second real-world light beam at the surface.
G02B 30/52 - Optical systems or apparatus for producing three-dimensional [3D] effects, e.g. stereoscopic effects, the image being built up from picture elements distributed over a 3D volume, e.g. voxels, the 3D volume being constructed from a stack or sequence of 2D planes, e.g. depth sampling systems
An augmented reality (AR) device can be configured to generate a virtual representation of a user's physical environment. The AR device can capture images of the user's physical environment to generate a mesh map. The AR device can project graphics at designated locations on a virtual bounding box to guide the user to capture images of the user's physical environment. The AR device can provide visual, audible, or haptic guidance to direct the user of the AR device to look toward waypoints to generate the mesh map of the user's environment.
Systems and methods for compressing dynamic unstructured point clouds. A dynamic unstructured point cloud can be mapped to a skeletal system of a subject to form one or more structured point cloud representations. One or more sequences of the structured point cloud representations can be formed. The one or more sequences of structured point cloud representations can then be compressed.
Controllable three-dimensional (3D) virtual dioramas in a rendered 3D environment, such as a virtual reality or augmented reality environment including one or more rendered objects. The 3D diorama is associated with a spatial computing content item, such as a downloadable application executable by a computing device. 3D diorama assets may include visual and/or audio content and are presented with rendered 3D environment objects in a composite view, which is presented to a user through a display of the computing device. The 3D diorama is rotatable in the composite view, and at least one 3D diorama asset at least partially occludes, or is at least partially occluded by, at least one rendered 3D environment object. The 3D diorama may depict or provide a preview of a spatial computing user experience generated by the downloadable application.
G06T 19/20 - Manipulating 3D models or images for computer graphics; Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
G06F 3/04815 - Interaction with a metaphor-based environment or interaction object displayed as three-dimensional, e.g. changing the user viewpoint with respect to the environment or object
G06T 19/00 - Manipulating 3D models or images for computer graphics
H04N 13/279 - Image signal generators from 3D object models, e.g. computer-generated stereoscopic image signals, the virtual viewpoint locations being selected by the viewers or determined by viewer tracking
Disclosed herein are systems and methods for presenting and annotating virtual content. According to an example method, a virtual object is presented to a first user at a first position via a transmissive display of a wearable device. A first input is received from the first user. In response to receiving the first input, a virtual annotation is presented at a first displacement from the first position. A first data is transmitted to a second user, the first data associated with the virtual annotation and the first displacement. A second input is received from the second user. In response to receiving the second input, the virtual annotation is presented to the first user at a second displacement from the first position. Second data is transmitted to a remote server, the second data associated with the virtual object, the virtual annotation, the second displacement, and the first position.
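The displacement-based sharing described above lends itself to a short sketch: the annotation is anchored by an offset from the virtual object's position, and only the displacement is transmitted when it changes. All names (`Vec3`, `Annotation`) and the vector arithmetic are illustrative assumptions, not the patent's implementation.

```python
from dataclasses import dataclass

@dataclass
class Vec3:
    x: float
    y: float
    z: float

    def add(self, other: "Vec3") -> "Vec3":
        return Vec3(self.x + other.x, self.y + other.y, self.z + other.z)

@dataclass
class Annotation:
    anchor: Vec3        # first position of the virtual object
    displacement: Vec3  # offset of the annotation from the anchor

    def world_position(self) -> Vec3:
        # The annotation is always resolved relative to the object,
        # so transmitting a new displacement repositions it for all users.
        return self.anchor.add(self.displacement)

# First user places an annotation at a first displacement from the object.
a = Annotation(anchor=Vec3(0, 1, 2), displacement=Vec3(0.1, 0, 0))
# Second user's input supplies a second displacement; only this offset
# (not an absolute pose) needs to be shared or stored on the server.
a.displacement = Vec3(0, 0.2, 0)
```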
An extended reality display system includes a display subsystem configured to present an image corresponding to image data to a user. The display subsystem includes an optical component that introduces a non-uniformity to the image, a segmented illumination light source, and a spatial light modulator (SLM) configured to receive light from the segmented illumination light source. The system also includes a display controller configured to control the segmented illumination light source. The display controller includes a memory for storing non-uniformity correction information, and a processor to control the segmented illumination light source based on the non-uniformity correction information. The segmented illumination light source is configured to differentially illuminate first and second portions of the SLM using respective first and second portions of the segmented illumination light source.
7.
METHODS AND APPARATUSES FOR CASTING POLYMER PRODUCTS
In an example method of forming a waveguide part having a predetermined shape, a photocurable material is dispensed into a space between a first mold portion and a second mold portion opposite the first mold portion. A relative separation between a surface of the first mold portion with respect to a surface of the second mold portion opposing the surface of the first mold portion is adjusted to fill the space between the first and second mold portions. The photocurable material in the space is irradiated with radiation suitable for photocuring the photocurable material to form a cured waveguide film so that different portions of the cured waveguide film have different rigidity. The cured waveguide film is separated from the first and second mold portions. The waveguide part is singulated from the cured waveguide film. The waveguide part corresponds to portions of the cured waveguide film having a higher rigidity than other portions of the cured waveguide film.
A fan assembly is disclosed. The fan assembly can include a first support frame. The fan assembly can comprise a shaft assembly having a first end coupled with the first support frame and a second end disposed away from the first end. A second support frame can be coupled with the first support frame and disposed at or over the second end of the shaft assembly. An impeller can have fan blades coupled with a hub, the hub being disposed over the shaft assembly for rotation between the first and second support frames about a longitudinal axis. Transverse loading on the shaft assembly can be controlled by the first and second support frames.
A wearable ophthalmic device may include a head-mounted light field display configured to generate a physical light field comprising a beam of light. Camera(s) on or in communication with the device may receive light from the surroundings, and a light field processor may determine, based on the light, left and right numerical light field image data describing image(s) to be displayed to the left and right eyes respectively. The left and/or right numerical light field image data can be modified to computationally introduce a shift based on a determined convergence point of the eyes, and the physical light field presented to the user can be generated corresponding to the modified numerical light field image data, e.g., to correct for a convergence deficiency of the eye(s).
A61B 3/00 - Apparatus for testing the eyes; Instruments for examining the eyes
A61B 3/028 - Apparatus for testing the eyes; Instruments for examining the eyes; subjective types, i.e. testing apparatus requiring the active assistance of the patient, for determination of refraction, e.g. phoropters
A61B 3/10 - Apparatus for testing the eyes; Instruments for examining the eyes; objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions
Systems and methods for displaying a virtual reticle in an augmented or virtual reality environment by a wearable device are described. The environment can include real or virtual objects that may be interacted with by the user through a variety of poses, such as, e.g., head pose, eye pose or gaze, or body pose. The user may select objects by pointing the virtual reticle toward a target object by changing pose or gaze. The wearable device can recognize that an orientation of a user's head or eyes is outside of a range of acceptable or comfortable head or eye poses and accelerate the movement of the reticle away from a default position and toward a position in the direction of the user's head or eye movement, which can reduce the amount of movement by the user to align the reticle and target.
G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from the processing unit to the output unit, e.g. interface arrangements
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
G06F 3/04812 - Interaction techniques based on cursor appearance or behaviour, e.g. being affected by the presence of displayed objects
G06F 3/04815 - Interaction with a metaphor-based environment or interaction object displayed as three-dimensional, e.g. changing the user viewpoint with respect to the environment or object
G06T 19/00 - Manipulating 3D models or images for computer graphics
G06V 20/20 - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING - Scene-specific elements in augmented reality scenes
G06V 40/18 - Eye characteristics, e.g. of the iris
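The pose-dependent reticle acceleration described in the abstract above can be illustrated with a minimal sketch: when the head pose leaves a comfortable range, the reticle's speed is boosted in proportion to the excess. The comfort band, gain, and function names are assumptions for illustration, not the patent's specification.

```python
# Assumed comfortable band of head pitch, in degrees.
COMFORT_MIN, COMFORT_MAX = -20.0, 20.0

def reticle_speed(base_speed: float, head_pitch_deg: float,
                  gain_per_deg: float = 0.05) -> float:
    """Return the reticle movement speed, accelerated in proportion to
    how far the head pitch lies outside the comfortable range, so the
    user needs less head movement to align the reticle with a target."""
    if head_pitch_deg > COMFORT_MAX:
        excess = head_pitch_deg - COMFORT_MAX
    elif head_pitch_deg < COMFORT_MIN:
        excess = COMFORT_MIN - head_pitch_deg
    else:
        excess = 0.0  # inside the comfort band: no acceleration
    return base_speed * (1.0 + gain_per_deg * excess)
```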
11.
EYEPIECE FOR HEAD-MOUNTED DISPLAY AND METHOD FOR MAKING THE SAME
A method includes providing a wafer including a first surface grating extending over a first area of a surface of the wafer and a second surface grating extending over a second area of the surface of the wafer; de-functionalizing a portion of the surface grating in at least one of the first surface grating area and the second surface grating area; and singulating an eyepiece from the wafer, the eyepiece including a portion of the first surface grating area and a portion of the second surface grating area. The first surface grating in the eyepiece corresponds to an input coupling grating for a head-mounted display and the second surface grating corresponds to a pupil expander grating for the head-mounted display.
Disclosed herein are systems and methods for setting, accessing, and modifying user privacy settings using a distributed ledger. In an aspect, a system can search previously stored software contracts to locate an up-to-date version of a software contract associated with a user based on a request for access to user data for the particular user. Then, the system determines that the user data is permitted to be shared. The system transmits, to a data virtualization platform, instructions to extract encrypted user data from a data platform. The system can then make available, to a data verification system, a private encryption key and details associated with the software contract to verify that the private encryption key and the user data match. Then the system transmits, to the data virtualization platform, the private encryption key so that the data virtualization platform can decrypt the encrypted user data.
H04L 9/00 - Arrangements for secret or secure communications; Network security protocols
G06F 1/16 - ELECTRIC DIGITAL DATA PROCESSING - Details not covered by groups and - Constructional details or arrangements
G06F 21/62 - Protecting access to data via a platform, e.g. using keys or access control rules
H04L 9/30 - Public key, i.e. encryption algorithm being computationally infeasible to invert and users' encryption keys not requiring secrecy
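The control flow described in the abstract above, locating the current software contract, checking the sharing permission, and releasing the key only after verification, can be sketched as follows. All names and the contract record layout are hypothetical; the actual system involves a distributed ledger, a data virtualization platform, and a data verification system.

```python
def latest_contract(contracts, user_id):
    """Return the most recent contract version stored for the user,
    or None if the user has no contract on record."""
    mine = [c for c in contracts if c["user"] == user_id]
    return max(mine, key=lambda c: c["version"]) if mine else None

def handle_access_request(contracts, user_id):
    """Decide how a request for a user's data is handled."""
    contract = latest_contract(contracts, user_id)
    if contract is None or not contract["sharing_permitted"]:
        return "denied"
    # In the described system, the remaining steps would be:
    # 1. instruct the data virtualization platform to extract the
    #    encrypted user data from the data platform;
    # 2. hand the private key and contract details to the data
    #    verification system to confirm key and data match;
    # 3. on a match, transmit the key so the platform can decrypt.
    return "key-released"
```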
Neutral avatars are neutral with respect to physical characteristics of the corresponding user, such as weight, ethnicity, gender, or even identity. Thus, neutral avatars may be desirable in various copresence environments where the user wishes to maintain privacy with respect to the above-noted characteristics. Neutral avatars may be configured to convey, in real time, actions and behaviors of the corresponding user without using literal forms of those actions and behaviors.
G06T 13/40 - 3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings
G06T 19/20 - Manipulating 3D models or images for computer graphics; Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
G06V 20/20 - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING - Scene-specific elements in augmented reality scenes
G06V 40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
G06V 40/18 - Eye characteristics, e.g. of the iris
Systems, apparatus, and methods for double-sided imprinting are provided. An example system includes first rollers for moving a first web including a first template having a first imprinting feature, second rollers for moving a second web including a second template having a second imprinting feature, dispensers for dispensing resist, a locating system for locating reference marks on the first and second webs for aligning the first and second templates, a light source for curing the resist, such that a cured first resist has a first imprinted feature corresponding to the first imprinting feature on one side of the substrate and a cured second resist has a second imprinted feature corresponding to the second imprinting feature on the other side of the substrate, and a moving system for feeding in the substrate between the first and second templates and unloading the double-imprinted substrate from the first and second webs.
B29C 59/04 - Surface shaping, e.g. embossing; Apparatus therefor; by mechanical means, e.g. pressing, using rollers or endless belts
B29C 43/22 - Compression moulding, i.e. applying external pressure to flow the moulding material; Apparatus therefor; for making articles of indefinite length
B29C 43/28 - Compression moulding, i.e. applying external pressure to flow the moulding material; Apparatus therefor; for making articles of indefinite length incorporating preformed parts or layers, e.g. compression moulding around inserts or for coating articles
B29C 43/30 - Making multilayered or multicoloured articles
B29C 43/34 - Feeding the material to the mould or the compression means
B29C 51/26 - Shaping by thermoforming, e.g. shaping sheets in matched moulds or by deep-drawing; Apparatus therefor - Component parts, details or accessories; Auxiliary operations
G03F 7/00 - Photomechanical, e.g. photolithographic, production of textured or patterned surfaces, e.g. printing surfaces; Materials therefor, e.g. comprising photoresists; Apparatus specially adapted therefor
15.
UV AND VISIBLE LIGHT EXIT GRATING FOR EYEPIECE FABRICATION AND OPERATION
A method of forming a waveguide for an eyepiece for a display system to reduce optical degradation of the waveguide during segmentation is disclosed herein. The method includes providing a substrate having top and bottom major surfaces and a plurality of surface features, and using a laser beam to cut out a waveguide from said substrate by cutting along a path contacting and/or proximal to said plurality of surface features. The waveguide has edges formed by the laser beam and a main region and a peripheral region surrounding the main region. The peripheral region is surrounded by the edges.
Techniques are disclosed for operating a time-of-flight (TOF) sensor. The TOF sensor may be operated in a low power mode by repeatedly performing a low power mode sequence, which may include performing a depth frame by emitting light pulses, detecting reflected light pulses, and computing a depth map based on the detected reflected light pulses. Performing the low power mode sequence may also include performing an amplitude frame at least one time by emitting a light pulse, detecting a reflected light pulse, and computing an amplitude map based on the detected reflected light pulse. In response to determining that an activation condition is satisfied, the TOF sensor may be switched to operate in a high accuracy mode by repeatedly performing a high accuracy mode sequence, which may include performing the depth frame multiple times.
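The two-mode operating loop described above reduces to a small state machine: stay in the low power mode until an activation condition fires, then run in the high accuracy mode. The amplitude-threshold activation test, names, and input format below are assumptions for illustration, not the patent's specification.

```python
def run_tof(amplitude_frames, activation_threshold=0.5):
    """Consume a stream of per-frame amplitude readings and return the
    mode the sensor would operate in for each frame. A reading above
    the threshold (e.g. something entering the scene) is treated as
    the activation condition that triggers the high accuracy mode."""
    mode = "low_power"
    modes = []
    for amplitude in amplitude_frames:
        if mode == "low_power" and amplitude > activation_threshold:
            mode = "high_accuracy"  # activation condition satisfied
        modes.append(mode)
    return modes
```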
Examples of systems and methods for rendering an avatar in a mixed reality environment are disclosed. The systems and methods may be configured to automatically select avatar characteristics that optimize gaze perception by the user, based on context parameters associated with the virtual environment.
A display assembly suitable for use with a virtual or augmented reality headset is described and includes the following: an input coupling grating; a scanning mirror configured to rotate about two or more different axes of rotation; an optical element; and optical fibers, each of which have a light emitting end disposed between the input coupling grating and the scanning mirror and oriented such that light emitted from the light emitting end is refracted through at least a portion of the optical element, reflected off the scanning mirror, refracted back through the optical element and into the input coupling grating. The scanning mirror can be built upon a MEMS type architecture.
G02B 26/08 - Optical devices or arrangements for the control of light using movable or deformable optical elements for controlling the direction of light
An augmented reality (AR) system includes a handheld device comprising handheld fiducials affixed to the handheld device. The AR system also includes a wearable device comprising a display operable to display virtual content and an imaging device mounted to the wearable device and having a field of view that at least partially includes the handheld fiducials and a hand of a user. The AR system also includes a computing apparatus configured to receive hand pose data associated with the hand based on an image captured by the imaging device and receive handheld device pose data associated with the handheld device based on the image captured by the imaging device. The computing apparatus is also configured to determine a pose discrepancy between the hand pose data and the handheld device pose data and perform an operation to fuse the hand pose data with the handheld device pose data.
Disclosed herein are systems and methods for colocating virtual content. A method may include receiving first persistent coordinate data, second persistent coordinate data, and relational data. A third persistent coordinate data and a fourth persistent coordinate data may be determined based on input received via one or more sensors of a head-wearable device. It can be determined whether the first persistent coordinate data corresponds to the third persistent coordinate data. In accordance with a determination that the first persistent coordinate data corresponds to the third persistent coordinate data, it can be determined whether the second persistent coordinate data corresponds to the fourth persistent coordinate data. In accordance with a determination that the second persistent coordinate data corresponds to the fourth persistent coordinate data, a virtual object can be displayed using the relational data and the second persistent coordinate data via a display of the head-wearable device. In accordance with a determination that the second persistent coordinate data does not correspond to the fourth persistent coordinate data, the virtual object can be displayed using the relational data and the first persistent coordinate data via the display of the head-wearable device. In accordance with a determination that the first persistent coordinate data does not correspond to the third persistent coordinate data, the method may forgo displaying the virtual object via the display of the head-wearable device.
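The nested determinations above reduce to a short decision function: compare the received coordinate data against the locally determined data and pick the anchor for the virtual object accordingly. Names are illustrative ("pcd" = persistent coordinate data); the comparison here is simple equality, standing in for whatever correspondence test the system uses.

```python
def choose_anchor(first_pcd, second_pcd, third_pcd, fourth_pcd):
    """Return the persistent coordinate data to anchor the virtual
    object to, or None to forgo displaying it entirely."""
    if first_pcd != third_pcd:
        # First data does not correspond to the locally determined
        # third data: forgo displaying the virtual object.
        return None
    if second_pcd == fourth_pcd:
        # Both correspondences hold: display using the second data.
        return second_pcd
    # Only the first correspondence holds: fall back to the first data.
    return first_pcd
```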
An electronic device is disclosed. The electronic device comprises a first clock configured to operate at a frequency. First circuitry of the electronic device is configured to synchronize with the first clock. Second circuitry is configured to determine a second clock based on the first clock. The second clock is configured to operate at the frequency of the first clock, and is further configured to operate with a phase shift with respect to the first clock. Third circuitry is configured to synchronize with the second clock.
H03L 7/081 - Automatic control of frequency or phase; Synchronisation; using a reference signal which is applied to a frequency- or phase-locked loop - Details of the phase-locked loop; with an additional controlled phase shifter
22.
WIDE FIELD-OF-VIEW POLARIZATION SWITCHES WITH LIQUID CRYSTAL OPTICAL ELEMENTS WITH PRETILT
A switchable optical assembly comprises a switchable waveplate configured to be electrically activated and deactivated to selectively alter the polarization state of light incident on the switchable waveplate. The switchable waveplate comprises first and second surfaces and a liquid crystal layer disposed between the first and second surfaces. The liquid crystal layer comprises a plurality of liquid crystal molecules. The first surface and/or the second surface may be planar. The first surface and/or the second surface may be curved. The plurality of liquid crystal molecules may vary in tilt with respect to the first and second surfaces with outward radial distance from an axis through the first and second surfaces and the liquid crystal layer in a plurality of radial directions. The switchable waveplate can include a plurality of electrodes to apply an electrical signal across the liquid crystal layer.
G02F 1/137 - Devices or arrangements for the control of the intensity, colour, phase, polarisation or direction of light arriving from an independent light source, e.g. switching, gating or modulating; Non-linear optics; for the control of the intensity, phase, polarisation or colour; based on liquid crystals, e.g. single liquid crystal display cells; characterised by the electro-optical or magneto-optical effect, e.g. field-induced phase transition, orientation effect, guest-host interaction or dynamic scattering
G02B 30/34 - Stereoscopes providing a stereoscopic pair of separated images corresponding to parallelly displaced views of the same object, e.g. 3D slide viewers
G02F 1/01 - Devices or arrangements for the control of the intensity, colour, phase, polarisation or direction of light arriving from an independent light source, e.g. switching, gating or modulating; Non-linear optics; for the control of the intensity, phase, polarisation or colour
G02F 1/13 - Devices or arrangements for the control of the intensity, colour, phase, polarisation or direction of light arriving from an independent light source, e.g. switching, gating or modulating; Non-linear optics; for the control of the intensity, phase, polarisation or colour; based on liquid crystals, e.g. single liquid crystal display cells
Methods, systems, and apparatus, including computer programs encoded on computer storage media, for using depth data to update camera calibration data. In some implementations, a frame of data is captured including (i) depth data from a depth sensor of a device, and (ii) image data from a camera of the device. Selected points from the depth data are transformed, using camera calibration data for the camera, to a three-dimensional space that is based on the image data. The transformed points are projected onto the two-dimensional image data from the camera. Updated camera calibration data is generated based on differences between (i) the locations of the projected points and (ii) locations that features representing the selected points appear in the two-dimensional image data from the camera. The updated camera calibration data can be used in a simultaneous localization and mapping process.
G06T 19/00 - Manipulating 3D models or images for computer graphics
G06T 7/80 - Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
G06T 19/20 - Manipulating 3D models or images for computer graphics; Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
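The calibration-update test described in the abstract above hinges on comparing where depth points project in the image against where the corresponding features actually appear. A minimal pinhole-camera sketch of that reprojection check, in pure Python with assumed intrinsics and no lens distortion, might look like this; the real system additionally applies the camera's extrinsic calibration.

```python
def project(point3d, fx, fy, cx, cy):
    """Project a 3D point (already in camera coordinates) to 2D pixel
    coordinates using a simple pinhole model with focal lengths
    (fx, fy) and principal point (cx, cy)."""
    x, y, z = point3d
    return (fx * x / z + cx, fy * y / z + cy)

def mean_reprojection_error(points3d, observed2d, fx, fy, cx, cy):
    """Average pixel distance between projected depth-sensor points and
    the image features they should coincide with. Large values suggest
    the camera calibration data needs updating."""
    total = 0.0
    for p, (u, v) in zip(points3d, observed2d):
        pu, pv = project(p, fx, fy, cx, cy)
        total += ((pu - u) ** 2 + (pv - v) ** 2) ** 0.5
    return total / len(points3d)
```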
An optical projection system includes a source of collimated light, a first microelectromechanical system mirror positioned to receive collimated light from the source, and an optical relay system positioned to receive collimated light from the first microelectromechanical system mirror. The optical relay system includes a single-pass relay having a first component, a second component, and a third component. The optical projection system also includes a second microelectromechanical system mirror positioned to receive collimated light from the optical relay system and an eyepiece positioned to receive light reflected from the second microelectromechanical system mirror.
An optical scanner includes a base region and a cantilevered silicon beam protruding from the base region. The optical scanner also includes a waveguide disposed on the base region and the cantilevered silicon beam and a transducer assembly comprising one or more piezoelectric actuators coupled to the cantilevered silicon beam and configured to induce motion of the cantilevered silicon beam in a scan pattern.
G02B 6/00 - OPTICS; OPTICAL ELEMENTS, SYSTEMS OR APPARATUS - Structural details of arrangements comprising light guides and other optical elements, e.g. couplings
Methods are disclosed for fabricating molds for forming waveguides with integrated spacers for forming eyepieces. The molds are formed by etching features (e.g., 1 μm to 1000 μm deep) into a substrate comprising single crystalline material using an anisotropic wet etch. The etch masks for defining the large features may comprise a plurality of holes, wherein the size and shape of each hole at least partially determine the depth of the corresponding large feature. The holes may be aligned along a crystal axis of the substrate and the etching may automatically stop due to the crystal structure of the substrate. The patterned substrate may be utilized as a mold onto which a flowable polymer may be introduced and allowed to harden. Hardened polymer in the holes may form a waveguide with integrated spacers. The mold may be also used to fabricate a platform comprising a plurality of vertically extending microstructures of precise heights, to test the curvature or flatness of a sample, e.g., based on the amount of contact between the microstructures and the sample.
B29C 43/02 - Compression moulding, i.e. applying external pressure to flow the moulding material; Apparatus therefor; for making articles of definite length, i.e. discrete articles
B29C 33/42 - Moulds or cores; Details thereof or accessories therefor; characterised by the shape of the moulding surface, e.g. ribs or grooves
B29C 43/40 - Moulds for making articles of definite length, i.e. discrete articles; with means for cutting the article
An example head-mounted display device includes a light projector and an eyepiece. The eyepiece is arranged to receive light from the light projector and direct the light to a user during use of the device. The eyepiece includes a waveguide having an edge positioned to receive light from the light projector and couple the light into the waveguide. The waveguide includes a first surface and a second surface opposite the first surface. The waveguide includes several different regions, each having different grating structures configured to diffract light according to different sets of grating vectors.
Blazed diffraction gratings provide optical elements in head-mounted display systems to, e.g., incouple light into or out-couple light out of a waveguide. These blazed diffraction gratings may be configured to have reduced polarization sensitivity. Such gratings may, for example, incouple or outcouple light of different polarizations with similar level of efficiency. The blazed diffraction gratings and waveguides may be formed in a high refractive index substrate such as lithium niobate. In some implementations, the blazed diffraction gratings may include diffractive features having a feature height of 40 nm to 120 nm, for example, 80 nm. The diffractive features may be etched into the high index substrate, e.g., lithium niobate.
A display system can include a head-mounted display configured to project light to an eye of a user to display virtual image content at different amounts of divergence and collimation. The display system can include an inward-facing imaging system possibly comprising a plurality of cameras that image the user's eye and glints thereon and processing electronics that are in communication with the inward-facing imaging system and that are configured to obtain an estimate of a center of cornea of the user's eye using data derived from the glint images. The display system may use spherical and aspheric cornea models to estimate a location of the corneal center of the user's eye.
A61B 3/113 - Apparatus for testing the eyes; Instruments for examining the eyes; objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions; for determining or recording eye movement
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
30.
CROSS REALITY SYSTEM WITH SIMPLIFIED PROGRAMMING OF VIRTUAL CONTENT
A cross reality system that renders virtual content generated by executing native mode applications may be configured to render web-based content using components that render content from native applications. The system may include a Prism manager that provides Prisms in which content from executing native applications is rendered. For rendering web based content, a browser, accessing the web based content, may be associated with a Prism and may render content into its associated Prism, creating the same immersive experience for the user as when content is generated by a native application. The user may access the web application from the same program launcher menu as native applications. The system may have tools that enable a user to access these capabilities, including by creating for a web location an installable entity that, when processed by the system, results in an icon for the web content in a program launcher menu.
G06T 19/00 - Transformation de modèles ou d'images tridimensionnels [3D] pour infographie
G06F 3/04815 - Interaction s’effectuant dans un environnement basé sur des métaphores ou des objets avec un affichage tridimensionnel, p.ex. modification du point de vue de l’utilisateur par rapport à l’environnement ou l’objet
G06F 3/04817 - Techniques d’interaction fondées sur les interfaces utilisateur graphiques [GUI] fondées sur des propriétés spécifiques de l’objet d’interaction affiché ou sur un environnement basé sur les métaphores, p.ex. interaction avec des éléments du bureau telles les fenêtres ou les icônes, ou avec l’aide d’un curseur changeant de comportement, utilisant des icônes
G06F 3/0482 - Interaction avec des listes d’éléments sélectionnables, p.ex. des menus
G06F 16/954 - Navigation, p.ex. en utilisant la navigation par catégories
G06F 16/955 - Recherche dans le Web utilisant des identifiants d’information, p.ex. des localisateurs uniformisés de ressources [uniform resource locators - URL]
31.
SPATIALLY-RESOLVED DYNAMIC DIMMING FOR AUGMENTED REALITY DEVICE
Techniques are described for operating an optical system. In some embodiments, light associated with a world object is received at the optical system. Virtual image light is projected onto an eyepiece of the optical system. A portion of a system field of view of the optical system to be at least partially dimmed is determined based on information detected by the optical system. A plurality of spatially-resolved dimming values for the portion of the system field of view may be determined based on the detected information. The detected information may include light information, gaze information, and/or image information. A dimmer of the optical system may be adjusted to reduce an intensity of light associated with the world object in the portion of the system field of view according to the plurality of dimming values.
A wearable device can present virtual content to the wearer for many applications in a healthcare setting. The wearer may be a patient or a healthcare provider (HCP). Such applications can include, but are not limited to, access, display, and modification of patient medical records and sharing patient medical records among authorized HCPs.
G16H 10/60 - TIC spécialement adaptées au maniement ou au traitement des données médicales ou de soins de santé relatives aux patients pour des données spécifiques de patients, p.ex. pour des dossiers électroniques de patients
A61B 3/00 - Appareils pour l'examen optique des yeux; Appareils pour l'examen clinique des yeux
A61B 3/10 - Appareils pour l'examen optique des yeux; Appareils pour l'examen clinique des yeux du type à mesure objective, c. à d. instruments pour l'examen des yeux indépendamment des perceptions ou des réactions du patient
A61B 3/113 - Appareils pour l'examen optique des yeux; Appareils pour l'examen clinique des yeux du type à mesure objective, c. à d. instruments pour l'examen des yeux indépendamment des perceptions ou des réactions du patient pour déterminer ou enregistrer le mouvement de l'œil
A61B 5/00 - Mesure servant à établir un diagnostic ; Identification des individus
A61B 5/06 - Dispositifs autres que ceux à radiation, pour détecter ou localiser les corps étrangers
A61B 5/1171 - Identification des personnes basée sur la morphologie ou l’aspect de leur corps ou de parties de celui-ci
A61B 5/339 - Affichages spécialement adaptés à cet effet
A61B 17/00 - Instruments, dispositifs ou procédés chirurgicaux, p.ex. tourniquets
A61B 34/20 - Systèmes de navigation chirurgicale; Dispositifs pour le suivi ou le guidage d'instruments chirurgicaux, p.ex. pour la stéréotaxie sans cadre
A61B 90/00 - Instruments, outillage ou accessoires spécialement adaptés à la chirurgie ou au diagnostic non couverts par l'un des groupes , p.ex. pour le traitement de la luxation ou pour la protection de bords de blessures
A61B 90/50 - Supports pour instruments chirurgicaux, p.ex. bras articulés
G02B 27/00 - Systèmes ou appareils optiques non prévus dans aucun des groupes ,
G06V 20/20 - RECONNAISSANCE OU COMPRÉHENSION D’IMAGES OU DE VIDÉOS Éléments spécifiques à la scène dans les scènes de réalité augmentée
G10L 15/26 - Systèmes de synthèse de texte à partir de la parole
G16H 30/40 - TIC spécialement adaptées au maniement ou au traitement d’images médicales pour le traitement d’images médicales, p.ex. l’édition
G16H 40/67 - TIC spécialement adaptées à la gestion ou à l’administration de ressources ou d’établissements de santé; TIC spécialement adaptées à la gestion ou au fonctionnement d’équipement ou de dispositifs médicaux pour le fonctionnement d’équipement ou de dispositifs médicaux pour le fonctionnement à distance
33.
TWO-DIMENSIONAL MICRO-ELECTRICAL MECHANICAL SYSTEM MIRROR AND ACTUATION METHOD
A two-dimensional scanning micromirror device includes a base, a first platform coupled to the base by first support flexures, and a second platform including a reflector and coupled to the first platform by second support flexures. The first platform is oscillatable about a first axis and the second platform is oscillatable about a second axis orthogonal to the first axis. The first platform, the second platform, and the second support flexures together exhibit a first resonance having a first frequency, and the first resonance corresponds to oscillatory motion of at least the first platform, the second platform, and the second support flexures about the first axis. The first platform, the second platform, and the second support flexures together exhibit a second resonance having a second frequency, and the second resonance corresponds to oscillatory motion of at least the second platform about the second axis.
G02B 26/08 - Dispositifs ou dispositions optiques pour la commande de la lumière utilisant des éléments optiques mobiles ou déformables pour commander la direction de la lumière
G03B 21/00 - Projecteurs ou visionneuses du type par projection; Leurs accessoires
Examples of systems and methods for matching a base mesh to a target mesh for a virtual avatar or object are disclosed. The systems and methods may be configured to automatically match a base mesh of an animation rig to a target mesh, which may represent a particular pose of the virtual avatar or object. Base meshes may be obtained by manipulating an avatar or object into a particular pose, while target meshes may be obtained by scanning, photographing, or otherwise obtaining information about a person or object in the particular pose. The systems and methods may automatically match a base mesh to a target mesh using rigid transformations in regions of higher error and non-rigid deformations in regions of lower error.
B63B 21/26 - Ancres avec système d'assurance de tenue sur le fond
F01N 13/00 - Silencieux ou dispositifs d'échappement caractérisés par les aspects de structure
G06T 13/40 - Animation tridimensionnelle [3D] de personnages, p.ex. d’êtres humains, d’animaux ou d’êtres virtuels
G06T 17/20 - Description filaire, p.ex. polygonalisation ou tessellation
G06T 19/20 - Transformation de modèles ou d'images tridimensionnels [3D] pour infographie Édition d'images tridimensionnelles [3D], p.ex. modification de formes ou de couleurs, alignement d'objets ou positionnements de parties
G06V 40/10 - Corps d’êtres humains ou d’animaux, p.ex. occupants de véhicules automobiles ou piétons; Parties du corps, p.ex. mains
A sensory eyewear system for a mixed reality device can facilitate a user's interactions with other people or with the environment. As one example, the sensory eyewear system can recognize and interpret a sign language, and present the translated information to a user of the mixed reality device. The wearable system can also recognize text in the user's environment, modify the text (e.g., by changing the content or display characteristics of the text), and render the modified text to occlude the original text.
G06F 1/16 - TRAITEMENT ÉLECTRIQUE DE DONNÉES NUMÉRIQUES - Détails non couverts par les groupes et - Détails ou dispositions de structure
G06F 3/01 - Dispositions d'entrée ou dispositions d'entrée et de sortie combinées pour l'interaction entre l'utilisateur et le calculateur
G06F 3/03 - Dispositions pour convertir sous forme codée la position ou le déplacement d'un élément
G06F 40/58 - Utilisation de traduction automatisée, p.ex. pour recherches multilingues, pour fournir aux dispositifs clients une traduction effectuée par le serveur ou pour la traduction en temps réel
G06T 19/00 - Transformation de modèles ou d'images tridimensionnels [3D] pour infographie
G06V 20/20 - RECONNAISSANCE OU COMPRÉHENSION D’IMAGES OU DE VIDÉOS Éléments spécifiques à la scène dans les scènes de réalité augmentée
G06V 40/20 - Mouvements ou comportement, p.ex. reconnaissance des gestes
A wearable computing system that includes a head-mounted display implements a gaze timer feature for enabling the user to temporarily extend the functionality of a handheld controller or other user input device. In one embodiment, when the user gazes at, or in the vicinity of, a handheld controller for a predetermined period of time, the functionality of one or more input elements (e.g., buttons) of the handheld controller is temporarily modified. For example, the function associated with a particular controller button may be modified to enable the user to open a particular menu using the button. The gaze timer feature may, for example, be used to augment the functionality of a handheld controller or other user input device during mixed reality and/or augmented reality sessions.
G06F 3/023 - Dispositions pour convertir sous une forme codée des éléments d'information discrets, p.ex. dispositions pour interpréter des codes générés par le clavier comme codes alphanumériques, comme codes d'opérande ou comme codes d'instruction
G06F 3/048 - Techniques d’interaction fondées sur les interfaces utilisateur graphiques [GUI]
G06F 3/0482 - Interaction avec des listes d’éléments sélectionnables, p.ex. des menus
Examples of the disclosure describe systems and methods for presenting an audio signal to a user of a wearable head device. According to an example method, a source location corresponding to the audio signal is identified. For each of the respective left and right ear of the user, a virtual speaker position, of a virtual speaker array, is determined, the virtual speaker position collinear with the source location and with a position of the respective ear. For each of the respective left and right ear of the user, a head-related transfer function (HRTF) corresponding to the virtual speaker position and to the respective ear is determined; and the output audio signal is presented to the respective ear of the user via one or more speakers associated with the wearable head device. Processing the audio signal includes applying the HRTF to the audio signal.
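As an illustrative sketch of the geometric step described in the abstract above — a virtual speaker position collinear with the source location and the position of an ear — the following Python snippet computes such a point on a spherical virtual speaker array centered at the head. The function name, the origin-centered sphere, and the radius parameter are assumptions for illustration, not the patented method itself:

```python
import numpy as np

def virtual_speaker_position(source, ear, radius=1.0):
    """Point on a sphere of the given radius (centered at the origin)
    that is collinear with the ear position and the source location.

    Casts a ray from the ear toward the source and solves
    |ear + t*d|^2 = radius^2 for the positive root t.
    """
    source, ear = np.asarray(source, float), np.asarray(ear, float)
    d = source - ear
    d /= np.linalg.norm(d)          # unit direction ear -> source
    b = ear @ d                     # quadratic: t^2 + 2bt + c = 0
    c = ear @ ear - radius**2
    t = -b + np.sqrt(b * b - c)     # positive root (ear is inside sphere)
    return ear + t * d
```

An HRTF indexed by this position (per ear) could then be applied to the audio signal, as the abstract describes.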
The disclosure relates to systems and methods for displaying three-dimensional (3D) content in a spatial 3D environment. The systems and methods can include receiving a request from a web domain to display 3D content of certain dimensions at a location within the spatial 3D environment, identifying whether the requested placement is within an authorized portion of the spatial 3D environment, expanding the authorized portion of the 3D spatial environment to display the 3D content based on a user authorization to resize the authorized portion, and displaying the 3D content.
G06F 3/04815 - Interaction s’effectuant dans un environnement basé sur des métaphores ou des objets avec un affichage tridimensionnel, p.ex. modification du point de vue de l’utilisateur par rapport à l’environnement ou l’objet
G06F 16/954 - Navigation, p.ex. en utilisant la navigation par catégories
H04N 13/332 - Affichage pour le visionnement à l’aide de lunettes spéciales ou de visiocasques
40.
METHOD AND SYSTEM FOR PERFORMING SPATIAL FOVEATION BASED ON EYE GAZE
A method includes determining an eye gaze location of a user and generating a spatial foveation map based on the eye gaze location. The method also includes receiving an image, forming a spatially foveated image using the image and the spatial foveation map, and transmitting the spatially foveated image to a wearable device. The method further includes spatially defoveating the spatially foveated image to produce a spatially defoveated image and displaying the spatially defoveated image.
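The spatial foveation map described above can be illustrated with a minimal sketch: a per-tile downsampling factor that stays at full resolution near the gaze point and grows with distance from it. The tile size, the fall-off distance, and the cap of 8 are illustrative assumptions, not values from the disclosure:

```python
import numpy as np

def foveation_map(width, height, gaze_xy, tile=32):
    """Toy spatial foveation map: per-tile downsampling factor
    (1 = full resolution) that grows with distance from the gaze point."""
    gx, gy = gaze_xy
    ty, tx = np.mgrid[0:height:tile, 0:width:tile]
    # distance from each tile center to the gaze point, in pixels
    cx, cy = tx + tile / 2, ty + tile / 2
    dist = np.hypot(cx - gx, cy - gy)
    # factor 1 inside the fovea, doubling every 200 px (illustrative)
    return np.clip(2 ** np.floor(dist / 200), 1, 8).astype(int)
```

A transmitter could downsample each tile of the image by its map factor before sending it to the wearable device, which then upsamples (defoveates) before display.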
An eyepiece for projecting an image to a viewer includes a substrate positioned in a substrate lateral plane and a set of color filters disposed on the substrate. The set of color filters comprise a first color filter disposed at a first lateral position and operable to pass a first wavelength range, a second color filter disposed at a second lateral position and operable to pass a second wavelength range, and a third color filter disposed at a third lateral position and operable to pass a third wavelength range. The eyepiece further includes a first planar waveguide positioned in a first lateral plane adjacent the substrate lateral plane, a second planar waveguide positioned in a second lateral plane adjacent to the first lateral plane, and a third planar waveguide positioned in a third lateral plane adjacent to the second lateral plane.
A method of fabricating a fiber scanning system includes forming a set of piezoelectric elements. The method also includes coating an interior surface and an exterior surface of each of the set of piezoelectric elements with a first conductive material. The method also includes providing a fiber optic element having an actuation region and coating the actuation region of the fiber optic element with a second conductive material. The method also includes joining the interior surfaces of the set of piezoelectric elements to the actuation region of the fiber optic element and poling the set of piezoelectric elements. The method also includes forming electrical connections to the exterior surface of each of the set of piezoelectric elements and the fiber optic element.
H02N 2/00 - Machines électriques en général utilisant l'effet piézo-électrique, l'électrostriction ou la magnétostriction
H02N 2/02 - Machines électriques en général utilisant l'effet piézo-électrique, l'électrostriction ou la magnétostriction produisant un mouvement linéaire, p.ex. actionneurs; Positionneurs linéaires
H02N 2/04 - Machines électriques en général utilisant l'effet piézo-électrique, l'électrostriction ou la magnétostriction produisant un mouvement linéaire, p.ex. actionneurs; Positionneurs linéaires - Détails de structure
H02N 2/06 - Circuits d'entraînement; Dispositions pour la commande
H10N 30/045 - Traitements afin de modifier une propriété piézo-électrique ou électrostrictive, p.ex. les caractéristiques de polarisation, de vibration ou par réglage du mode par polarisation
A thin transparent layer can be integrated in a head mounted display device and disposed in front of the eye of a wearer. The thin transparent layer may be configured to output light such that light is directed onto the eye to create reflections therefrom that can be used, for example, for glint based tracking. The thin transparent layer can be configured to reduce obstructions in the user's field of view.
A handheld controller includes a housing having a frame, one or more external surfaces, and a plurality of vibratory external surfaces. The handheld controller also includes a plurality of vibration sources disposed in the housing, where the one or more external surfaces and the plurality of vibration sources are mechanically coupled to the frame. The handheld controller also includes a plurality of structural members, where each of the plurality of structural members mechanically couple one of the plurality of vibratory external surfaces to one of the plurality of vibration sources.
G06F 3/01 - Dispositions d'entrée ou dispositions d'entrée et de sortie combinées pour l'interaction entre l'utilisateur et le calculateur
A63F 13/245 - Dispositions d'entrée pour les dispositifs de jeu vidéo - Parties constitutives, p.ex. manettes de jeu avec poignées amovibles spécialement adaptées pour un type particulier de jeu, p.ex. les volants
A63F 13/285 - Génération de signaux de retour tactiles via le dispositif d’entrée du jeu, p.ex. retour de force
45.
EYE-IMAGING APPARATUS USING DIFFRACTIVE OPTICAL ELEMENTS
Examples of eye-imaging apparatus using diffractive optical elements are provided. For example, an optical device comprises a substrate having a proximal surface and a distal surface, a first coupling optical element disposed on one of the proximal and distal surfaces of the substrate, and a second coupling optical element disposed on one of the proximal and distal surfaces of the substrate and offset from the first coupling optical element. The first coupling optical element can be configured to deflect light at an angle to totally internally reflect (TIR) the light between the proximal and distal surfaces and toward the second coupling optical element, and the second coupling optical element can be configured to deflect the light at an angle out of the substrate. The eye-imaging apparatus can be used in a head-mounted display such as an augmented or virtual reality display.
G02B 27/00 - Systèmes ou appareils optiques non prévus dans aucun des groupes ,
G06T 19/00 - Transformation de modèles ou d'images tridimensionnels [3D] pour infographie
H04N 13/332 - Affichage pour le visionnement à l’aide de lunettes spéciales ou de visiocasques
H04N 13/344 - Affichage pour le visionnement à l’aide de lunettes spéciales ou de visiocasques avec des visiocasques portant des affichages gauche et droit
H04N 13/383 - Suivi des spectateurs pour le suivi du regard, c. à d. avec détection de l’axe de vision des yeux du spectateur
One embodiment is directed to a system for enabling two or more users to interact within a virtual world comprising virtual world data, comprising a computer network comprising one or more computing devices, the one or more computing devices comprising memory, processing circuitry, and software stored at least in part in the memory and executable by the processing circuitry to process at least a portion of the virtual world data; wherein at least a first portion of the virtual world data originates from a first user virtual world local to a first user, and wherein the computer network is operable to transmit the first portion to a user device for presentation to a second user, such that the second user may experience the first portion from the location of the second user, such that aspects of the first user virtual world are effectively passed to the second user.
G06F 3/01 - Dispositions d'entrée ou dispositions d'entrée et de sortie combinées pour l'interaction entre l'utilisateur et le calculateur
G06T 1/20 - Architectures de processeurs; Configuration de processeurs p.ex. configuration en pipeline
G06T 7/73 - Détermination de la position ou de l'orientation des objets ou des caméras utilisant des procédés basés sur les caractéristiques
G09G 5/00 - Dispositions ou circuits de commande de l'affichage communs à l'affichage utilisant des tubes à rayons cathodiques et à l'affichage utilisant d'autres moyens de visualisation
H04L 67/10 - Protocoles dans lesquels une application est distribuée parmi les nœuds du réseau
H04L 67/131 - Protocoles pour jeux, simulations en réseau ou réalité virtuelle
Disclosed herein are systems and methods for calculating angular acceleration based on inertial data using two or more inertial measurement units (IMUs). The calculated angular acceleration may be used to estimate a position of a wearable head device comprising the IMUs. Virtual content may be presented based on the position of the wearable head device. In some embodiments, a first IMU and a second IMU share a coincident measurement axis.
G06F 3/01 - Dispositions d'entrée ou dispositions d'entrée et de sortie combinées pour l'interaction entre l'utilisateur et le calculateur
G01P 15/08 - Mesure de l'accélération; Mesure de la décélération; Mesure des chocs, c. à d. d'une variation brusque de l'accélération en ayant recours aux forces d'inertie avec conversion en valeurs électriques ou magnétiques
G01P 15/16 - Mesure de l'accélération; Mesure de la décélération; Mesure des chocs, c. à d. d'une variation brusque de l'accélération en calculant la dérivée par rapport au temps d'un signal de vitesse mesuré
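The two-IMU approach in the abstract above can be illustrated with the standard rigid-body relation a2 − a1 = α × r + ω × (ω × r), where r is the lever arm between the two accelerometers and ω comes from a gyro. The sketch below inverts that relation for α; it is a generic rigid-body computation under the stated assumption (no observable component of α along r), not the patented method:

```python
import numpy as np

def angular_acceleration(a1, a2, omega, r):
    """Estimate angular acceleration from two rigidly mounted accelerometers.

    a1, a2 : linear accelerations at the two IMU locations (3-vectors)
    omega  : angular velocity of the body (3-vector, e.g. from a gyro)
    r      : lever arm from IMU 1 to IMU 2 (3-vector)

    Uses a2 - a1 = alpha x r + omega x (omega x r). Solving alpha x r = t
    via alpha = (r x t) / |r|^2 assumes alpha has no component along r
    (that component is unobservable from this pair of measurements).
    """
    a1, a2, omega, r = (np.asarray(v, float) for v in (a1, a2, omega, r))
    tangential = a2 - a1 - np.cross(omega, np.cross(omega, r))
    return np.cross(r, tangential) / (r @ r)
```

With the angular acceleration in hand, the head pose propagation the abstract mentions could integrate it alongside the gyro and accelerometer signals.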
48.
INTERACTIONS WITH 3D VIRTUAL OBJECTS USING POSES AND MULTIPLE-DOF CONTROLLERS
A wearable system can comprise a display system configured to present virtual content in a three-dimensional space, a user input device configured to receive a user input, and one or more sensors configured to detect a user's pose. The wearable system can support various user interactions with objects in the user's environment based on contextual information. As an example, the wearable system can adjust the size of an aperture of a virtual cone during a cone cast (e.g., with the user's poses) based on the contextual information. As another example, the wearable system can adjust the amount of movement of virtual objects associated with an actuation of the user input device based on the contextual information.
G06T 19/00 - Transformation de modèles ou d'images tridimensionnels [3D] pour infographie
G06F 1/16 - TRAITEMENT ÉLECTRIQUE DE DONNÉES NUMÉRIQUES - Détails non couverts par les groupes et - Détails ou dispositions de structure
G06F 3/01 - Dispositions d'entrée ou dispositions d'entrée et de sortie combinées pour l'interaction entre l'utilisateur et le calculateur
G06F 3/0346 - Dispositifs de pointage déplacés ou positionnés par l'utilisateur; Leurs accessoires avec détection de l’orientation ou du mouvement libre du dispositif dans un espace en trois dimensions [3D], p.ex. souris 3D, dispositifs de pointage à six degrés de liberté [6-DOF] utilisant des capteurs gyroscopiques, accéléromètres ou d’inclinaiso
49.
THREAD WEAVE FOR CROSS-INSTRUCTION SET ARCHITECTURE PROCEDURE CALLS
The invention provides a method of initiating code including (i) storing an application having first, second and third functions, the first function being a main function that calls the second and third functions to run the application, (ii) compiling the application to first and second heterogeneous processors to create first and second central processing unit (CPU) instruction set architecture (ISA) objects respectively, (iii) pruning the first and second CPU ISA objects by removing the third function from the first CPU ISA objects and removing the first and second functions from the second CPU ISA objects, (iv) proxy inserting first and second remote procedure calls (RPCs) in the first and second CPU ISA objects respectively, pointing respectively to the third function in the second CPU ISA objects and the second function in the first CPU ISA objects, and (v) section renaming the second CPU ISA objects to a common application library.
Head-mounted virtual and augmented reality display systems include a light projector with one or more emissive micro-displays having a first resolution and a pixel pitch. The projector outputs light forming frames of virtual content having at least a portion associated with a second resolution greater than the first resolution. The projector outputs light forming a first subframe of the rendered frame at the first resolution, and parts of the projector are shifted using actuators, such that physical positions of light output for individual pixels occupy gaps between the old locations of light output for individual pixels. The projector then outputs light forming a second subframe of the rendered frame. The first and second subframes are outputted within the flicker fusion threshold. Advantageously, an emissive micro-display (e.g., micro-LED display) having a low resolution can form a frame having a higher resolution by using the same light emitters to function as multiple pixels of that frame.
G02B 27/09 - Mise en forme du faisceau, p.ex. changement de la section transversale, non prévue ailleurs
G02B 27/10 - Systèmes divisant ou combinant des faisceaux
G02B 27/14 - Systèmes divisant ou combinant des faisceaux fonctionnant uniquement par réflexion
G02B 27/18 - Systèmes ou appareils optiques non prévus dans aucun des groupes , pour projection optique, p.ex. combinaison de miroir, de condensateur et d'objectif
G02B 27/40 - Moyens optiques auxiliaires pour mise au point
G02B 27/62 - Appareils optiques spécialement adaptés pour régler des éléments optiques pendant l'assemblage de systèmes optiques
G09G 3/00 - Dispositions ou circuits de commande présentant un intérêt uniquement pour l'affichage utilisant des moyens de visualisation autres que les tubes à rayons cathodiques
G09G 3/32 - Dispositions ou circuits de commande présentant un intérêt uniquement pour l'affichage utilisant des moyens de visualisation autres que les tubes à rayons cathodiques pour la présentation d'un ensemble de plusieurs caractères, p.ex. d'une page, en composant l'ensemble par combinaison d'éléments individuels disposés en matrice utilisant des sources lumineuses commandées utilisant des panneaux électroluminescents semi-conducteurs, p.ex. utilisant des diodes électroluminescentes [LED]
H02N 2/02 - Machines électriques en général utilisant l'effet piézo-électrique, l'électrostriction ou la magnétostriction produisant un mouvement linéaire, p.ex. actionneurs; Positionneurs linéaires
A fiber scanning projector includes a piezoelectric element and a scanning fiber passing through and mechanically coupled to the piezoelectric element. The scanning fiber emits light propagating along an optical path. The fiber scanning projector also includes a first polarization sensitive reflector disposed along and perpendicular to the optical path. The first polarization sensitive reflector includes an aperture and the scanning fiber passes through the aperture. The fiber scanning projector also includes a second polarization sensitive reflector disposed along and perpendicular to the optical path.
A high-resolution image sensor suitable for use in an augmented reality (AR) system. The AR system may be small enough to be packaged within a wearable device such as a set of goggles or mounted on a frame resembling ordinary eyeglasses. The image sensor may have pixels configured to output events indicating changes in sensed IR light. Those pixels may be sensitive to IR light of the same frequency as an active IR light source, and may be part of an eye tracking camera, oriented toward a user's eye. Changes in IR light may be used to determine the location of the user's pupil, which may be used in rendering virtual objects. The events may be generated and processed at a high rate, enabling the system to render the virtual object based on the user's gaze so that the virtual object will appear more realistic to the user.
A two-dimensional waveguide light multiplexer can efficiently multiplex and distribute a light signal in two dimensions. An example of a two-dimensional waveguide light multiplexer can include a waveguide, a first diffraction grating, and a second diffraction grating arranged such that the grating direction of the first diffraction grating is perpendicular to the grating direction of the second diffraction grating. In some examples, the first and second diffraction gratings are on opposite sides of a waveguide. In some examples, the first and second diffraction gratings are on a same side of a waveguide, with the second grating over the first grating.
G02F 1/00 - Dispositifs ou dispositions pour la commande de l'intensité, de la couleur, de la phase, de la polarisation ou de la direction de la lumière arrivant d'une source lumineuse indépendante, p.ex. commutation, ouverture de porte ou modulation; Optique non linéaire
G02F 1/1335 - Association structurelle de cellules avec des dispositifs optiques, p.ex. des polariseurs ou des réflecteurs
54.
METHOD AND SYSTEM FOR PERFORMING OPTICAL IMAGING IN AUGMENTED REALITY DEVICES
An image projection system includes an illumination source and an eyepiece waveguide including a plurality of diffractive incoupling optical elements. The eyepiece waveguide includes a region operable to transmit light from the illumination source. The image projection system also includes a first optical element including a reflective polarizer, a second optical element including a partial reflector, a first quarter waveplate disposed between the first optical element and the second optical element, a reflective spatial light modulator, and a second quarter waveplate disposed between the second optical element and the reflective spatial light modulator.
Systems and methods for gradient adversarial training of a neural network are disclosed. In one aspect of gradient adversarial training, an auxiliary neural network can be trained to classify a gradient tensor that is evaluated during backpropagation in a main neural network that provides a desired task output. The main neural network can serve as an adversary to the auxiliary network in addition to a standard task-based training procedure. The auxiliary neural network can pass an adversarial gradient signal back to the main neural network, which can use this signal to regularize the weight tensors in the main neural network. Gradient adversarial training of the neural network can provide improved gradient tensors in the main network. Gradient adversarial techniques can be used to train multitask networks, knowledge distillation networks, and adversarial defense networks.
An imprint lithography method of configuring an optical layer includes selecting one or more parameters of a nanolayer to be applied to a substrate for changing an effective refractive index of the substrate and imprinting the nanolayer on the substrate to change the effective refractive index of the substrate such that a relative amount of light transmittable through the substrate is changed by a selected amount.
G03F 7/00 - Production par voie photomécanique, p.ex. photolithographique, de surfaces texturées, p.ex. surfaces imprimées; Matériaux à cet effet, p.ex. comportant des photoréserves; Appareillages spécialement adaptés à cet effet
G02B 1/118 - Revêtements antiréfléchissants ayant des structures de surface de longueur d’onde sous-optique conçues pour améliorer la transmission, p.ex. structures du type œil de mite
57.
REAL-TIME PREVIEW OF CONNECTABLE OBJECTS IN A PHYSICALLY-MODELED VIRTUAL SPACE
Virtual reality (VR), augmented reality (AR), and/or mixed reality (MR) systems may enable one or more users to connect two or more connectable objects together. These connectable objects may be real objects from the user's environment, virtual objects, or a combination thereof. A preview system may be included as a part of the VR, AR, and/or MR systems that provide a preview of the connection between the connectable objects prior to the user(s) connecting the connectable objects. The preview may include a representation of the connectable objects in a connected state along with an indication of whether the connected state is valid or invalid. The preview system may continuously physically model the connectable objects while simultaneously displaying a preview of the connection process to the user of the VR, AR, or MR system.
G06F 3/04815 - Interaction with a metaphor-based environment or interaction object displayed as three-dimensional, e.g. changing the viewpoint of the user with respect to the environment or object
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
G06F 3/04845 - Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range for image manipulation, e.g. dragging, rotation, expansion or change of colour
G06F 3/14 - Digital output to display device
58.
METHOD AND SYSTEM FOR PERFORMING OPTICAL IMAGING IN AUGMENTED REALITY DEVICES
An image projection system includes an illumination source, a linear polarizer, and an eyepiece waveguide including a plurality of diffractive in-coupling optical elements. The eyepiece waveguide includes a region operable to transmit illumination light from the illumination source. The image projection system also includes a polarizing beamsplitter, a reflective structure, a quarter waveplate disposed between the polarizing beamsplitter and the reflective structure, and a reflective spatial light modulator.
A system includes a first chuck operable to support a stencil including a plurality of apertures, a wafer chuck operable to support and move a wafer including a plurality of incoupling gratings, a first light source operable to direct light to impinge on a first surface of the stencil, and one or more second light sources operable to direct light to impinge on the wafer. The system also includes one or more lens and camera assemblies operable to receive light from the first light source passing through the plurality of apertures in the stencil and receive light from the one or more second light sources diffracted from the plurality of incoupling gratings in the wafer. The system also includes an alignment system operable to move the wafer with respect to the stencil to reduce an offset between aperture locations and incoupling grating locations.
B29C 39/02 - Shaping by casting, i.e. introducing the moulding material into a mould or between confining surfaces without significant moulding pressure; Apparatus therefor for making articles of definite length, i.e. discrete articles
An eye tracking system can include eye-tracking camera(s) configured to obtain images of the eye at different exposure times or different frame rates. For example, longer exposure images of the eye taken at a longer exposure time can show iris or pupil features, and shorter exposure, glint images can show peaks of glints reflected from the eye. The shorter exposure glint images may be taken at a higher frame rate than the longer exposure images for more accurate gaze prediction. The shorter exposure glint images can be analyzed to provide glint locations to subpixel accuracy. The longer exposure images can be analyzed for pupil center and/or center of rotation. The eye tracking system can predict gaze direction, which can be used for foveated rendering by a wearable display system. In some instances, the eye-tracking system may estimate the location of a partially or totally occluded glint.
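The subpixel glint localization mentioned above can be illustrated by an intensity-weighted centroid over the bright pixels of a short-exposure image. This is only a minimal sketch of one standard approach; the function name, threshold value, and synthetic glint are illustrative assumptions, not the disclosed implementation.

```python
import numpy as np

def glint_subpixel_location(img, threshold=0.5):
    """Estimate a glint center to subpixel accuracy from a short-exposure image.

    Pixels at or above `threshold` times the image maximum are treated as the
    glint; their intensity-weighted centroid gives a subpixel estimate.
    """
    img = np.asarray(img, dtype=float)
    mask = img >= threshold * img.max()
    ys, xs = np.nonzero(mask)
    weights = img[ys, xs]
    cx = np.sum(xs * weights) / np.sum(weights)
    cy = np.sum(ys * weights) / np.sum(weights)
    return cx, cy

# A synthetic 2D Gaussian "glint" centered between pixel coordinates
yy, xx = np.mgrid[0:32, 0:32]
glint = np.exp(-((xx - 12.3) ** 2 + (yy - 20.7) ** 2) / 4.0)
cx, cy = glint_subpixel_location(glint)
```

On the synthetic glint above, the recovered center lands within a fraction of a pixel of the true (12.3, 20.7) location, which is the sense in which the short-exposure images support subpixel accuracy.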
Systems and methods for interacting with virtual objects in a three-dimensional space using a wearable system are disclosed. The wearable system can be programmed to permit user interaction with interactable objects in a field of regard (FOR) of a user. The FOR includes a portion of the environment around the user that is capable of being perceived by the user via the AR system. The system can determine a group of interactable objects in the FOR of the user and determine a pose of the user. The system can update, based on a change in the pose or a field of view (FOV) of the user, a subgroup of the interactable objects that are located in the FOV of the user and receive a selection of a target interactable object from the subgroup of interactable objects. The system can initiate a selection event on the target interactable object.
G06F 3/0346 - Pointing devices displaced or positioned by the user; Accessories therefor with detection of the device orientation or free movement in a 3D space, e.g. 3D mice, 6-DOF [six degrees of freedom] pointers using gyroscopes, accelerometers or tilt-sensors
G06F 1/16 - ELECTRIC DIGITAL DATA PROCESSING - Details not covered by groups and - Constructional details or arrangements
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
G06F 3/04815 - Interaction with a metaphor-based environment or interaction object displayed as three-dimensional, e.g. changing the viewpoint of the user with respect to the environment or object
G06F 3/0482 - Interaction with lists of selectable items, e.g. menus
G06F 3/04883 - Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. gestures based on the pressure exerted, using a touch-screen or digitiser, e.g. input of commands through traced gestures, for inputting data by handwriting, e.g. gestures or text
62.
SYSTEM AND METHOD FOR RETINA TEMPLATE MATCHING IN TELEOPHTHALMOLOGY
A retina image template matching method is based on the registration and comparison between images captured with portable low-cost fundus cameras (e.g., a consumer-grade camera typically incorporated into a smartphone or tablet computer) and a baseline image. The method addresses the challenges of registering small, low-quality retinal template images captured with such cameras. It combines dimension reduction methods with a mutual information (MI) based image registration technique. In particular, principal component analysis (PCA) and optionally block PCA are used as dimension reduction methods to coarsely localize the template image within the baseline image; the resulting displacement parameters then initialize the MI metric optimization for registering the template image with the closest region of the baseline image.
A61B 3/12 - Apparatus for optical examination of the eyes; Apparatus for clinical examination of the eyes of the objective type, i.e. instruments for examining the eyes independently of the patient's perceptions or reactions, for examining the fundus of the eye, e.g. ophthalmoscopes
A61B 3/00 - Apparatus for optical examination of the eyes; Apparatus for clinical examination of the eyes
A61B 3/14 - Arrangements specially adapted for eye photography
G06T 3/4038 - Image mosaicing, e.g. composing plane images from plane sub-images
G06V 40/18 - Eye characteristics, e.g. of the iris
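The refinement stage of the matching scheme above can be sketched as follows: estimate mutual information between the template and a candidate baseline patch from their joint intensity histogram, then search a small window around the PCA-derived coarse placement for the MI maximum. Function names, bin count, and window size are illustrative assumptions, not values from the disclosure.

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Mutual information between two equally sized grayscale images,
    estimated from their joint intensity histogram."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of a
    py = pxy.sum(axis=0, keepdims=True)   # marginal of b
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

def best_offset(template, baseline, coarse_xy, search=2):
    """Refine a coarse (x, y) placement of `template` inside `baseline` by
    maximizing MI over a small search window around the coarse estimate."""
    h, w = template.shape
    best, best_mi = coarse_xy, -np.inf
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            x, y = coarse_xy[0] + dx, coarse_xy[1] + dy
            patch = baseline[y:y + h, x:x + w]
            if patch.shape != template.shape:
                continue  # candidate window falls off the baseline image
            mi = mutual_information(template, patch)
            if mi > best_mi:
                best, best_mi = (x, y), mi
    return best
```

In practice the MI metric is what makes the comparison robust to the brightness and contrast differences between a low-cost fundus camera and the baseline imaging device.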
A head mounted display system can include a camera, at least one waveguide, at least one coupling optical element that is configured such that light is coupled into the waveguide(s) and guided therein, and at least one out-coupling element. The at least one out-coupling element can be configured to couple light that is guided within the waveguide(s) out of the waveguide(s) and direct the light to the camera. The at least one coupling element may include a diffractive optical element having optical power. Additionally or alternatively, the at least one coupling optical element may have a coupling area, for coupling light into the waveguide(s), the coupling area having an average thickness in a range from 0.1 to 3 millimeters across and/or may have a pinhole sized and/or slit shaped coupling area.
Systems and methods for reducing far-field noise (background noise) from near-field audio signals generated by a microphone array. The systems and methods disclosed herein use an innovative spatial filtering approach which filters far-field ambient noise from the near-field audio and reduces the far-field noise in the near-field audio signal thereby improving and isolating the near-field audio from the far-field interference.
H04R 1/34 - Arrangements for obtaining desired frequency or directional characteristics for obtaining the desired directional characteristic only by using a single transducer with sound reflecting, diffracting, directing or guiding means
G10L 21/0232 - Processing in the frequency domain
G10L 21/038 - Speech enhancement, e.g. noise reduction or echo cancellation, using band spreading techniques
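One standard spatial-filtering building block for this kind of near-field isolation is delay-and-sum beamforming: signals from the microphone array are time-aligned toward the near-field source and averaged, so the coherent near-field speech adds up while far-field noise, which arrives with different inter-microphone delays, averages down. The sketch below is an idealization (integer sample delays, circular `np.roll` alignment) and is not presented as the disclosed filtering method.

```python
import numpy as np

def delay_and_sum(signals, delays_samples):
    """Align per-channel signals by their (integer) steering delays toward the
    near-field source, then average across channels."""
    aligned = [np.roll(sig, -d) for sig, d in zip(signals, delays_samples)]
    return np.mean(aligned, axis=0)

# Illustrative setup: one near-field tone observed by 4 mics with known delays,
# each channel corrupted by independent (far-field-like) noise.
rng = np.random.default_rng(1)
src = np.sin(2 * np.pi * 5 * np.linspace(0, 1, 800))
delays = [0, 3, 5, 8]
mics = [np.roll(src, d) + 0.5 * rng.standard_normal(800) for d in delays]
out = delay_and_sum(mics, delays)
```

Averaging four channels reduces the incoherent noise power by roughly a factor of four while leaving the aligned near-field signal untouched.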
65.
METHOD AND SYSTEM FOR PERFORMING FOVEATED IMAGE COMPRESSION BASED ON EYE GAZE
A method of compressing an image includes determining an eye gaze location of a user and generating a foveation map based on the eye gaze location. The foveation map includes a first region of the image and a second region of the image. The method also includes compressing the first region of the image using a first quality setting and the second region of the image using a second quality setting.
H04N 19/597 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding specially adapted for multi-view video sequence encoding
H04N 13/344 - Displays for viewing with the aid of special glasses or head-mounted displays [HMD] with head-mounted left-right displays
H04N 19/167 - Position within a video image, e.g. region of interest [ROI]
H04N 19/48 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using compressed-domain processing techniques other than decoding, e.g. modification of transform coefficients, variable-length coding data or run-length data
H04N 13/383 - Viewer tracking for gaze tracking, i.e. detecting the axis of sight of the viewer's eyes
G06T 13/40 - Three-dimensional [3D] animation of characters, e.g. humans, animals or virtual beings
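The two-region foveation idea above can be sketched with a circular foveal mask around the gaze point and two quantization step sizes standing in for the two codec quality settings. Function names, the circular mask shape, and the step sizes are illustrative assumptions.

```python
import numpy as np

def foveation_map(shape, gaze_xy, radius):
    """Binary foveation map: True inside the foveal region around the gaze point."""
    h, w = shape
    yy, xx = np.mgrid[0:h, 0:w]
    return (xx - gaze_xy[0]) ** 2 + (yy - gaze_xy[1]) ** 2 <= radius ** 2

def foveated_quantize(img, fovea, fine_step=4, coarse_step=32):
    """Quantize the foveal region finely and the periphery coarsely, as a
    stand-in for compressing the two regions at two quality settings."""
    out = np.where(fovea,
                   (img // fine_step) * fine_step,
                   (img // coarse_step) * coarse_step)
    return out.astype(img.dtype)
```

The peripheral region then carries most of the bit savings while the region at the eye gaze location retains near-lossless quality.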
66.
AIR POCKET STRUCTURES FOR PROMOTING TOTAL INTERNAL REFLECTION IN A WAVEGUIDE
Recesses are formed on a front side and a rear side of a waveguide. A solid porogen material is spun onto the front side and the rear side and fills the recesses. First front and rear cap layers are then formed on raised formations of the waveguide and on the solid porogen material. The entire structure is then heated and the solid porogen material decomposes to a porogen gas. The first front and rear cap layers are porous to allow the porogen gas to escape and air to enter into the recesses. The air maximizes a difference in refractive indices between the high-index transparent material of the waveguide and the air to promote reflection in the waveguide from interfaces between the waveguide and the air.
G02B 6/10 - OPTICS; OPTICAL ELEMENTS, SYSTEMS OR APPARATUS - Structural details of arrangements comprising light guides and other optical elements, e.g. couplings of the optical waveguide type
G02B 6/122 - Basic optical elements, e.g. light-guiding paths
G02B 6/13 - Integrated optical circuits characterised by the manufacturing method
G02B 27/00 - Optical systems or apparatus not provided for by any of the groups ,
A user may interact with and view virtual elements, such as avatars and objects, and/or real-world elements in three-dimensional space in an augmented reality (AR) session. The system may allow one or more spectators to view, from a stationary or dynamic camera, a third-person view of the user's AR session. The third-person view may be synchronized with the user view, and the virtual elements of the user view may be composited onto the third-person view.
Techniques are described for enhanced eye tracking for display systems, such as augmented or virtual reality display systems. The display systems may include a light source configured to output light, a moveable diffractive grating configured to reflect light from the light source, the reflected light forming a scan pattern on the eye of the user, and light detectors to detect light reflected from the eye. The orientation of the diffractive grating can be moved such that the light reflected from the diffractive grating is scanned across the eye according to the scan pattern. Light intensity pattern(s) are obtained via the light detectors, with a light intensity pattern representing a light detector signal obtained by detecting light reflected by the eye as the light is scanned across the eye. One or more physiological characteristics and/or a rotation speed of the eye are determined based on detected light intensity pattern(s).
A method of producing a reprojected image includes receiving motion data and determining, based on the motion data, if a motion threshold is exceeded. The method also includes generating a depth-based reprojection if the motion threshold is exceeded or generating a non-depth-based reprojection if the motion threshold is not exceeded.
G06F 16/783 - Retrieval of data characterised by the use of metadata, e.g. metadata not derived from the content or metadata generated manually, using metadata automatically derived from the content
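The motion-threshold dispatch described in the abstract above can be sketched as a simple branch: above the threshold, pixels are shifted by a depth-dependent parallax (nearer pixels move more); below it, a single uniform shift is applied to the whole frame. The threshold value and the shift model are illustrative assumptions, not the disclosed reprojection math.

```python
import numpy as np

def reproject_row(x_coords, depth, head_motion, motion_threshold=1.0):
    """Reproject a row of pixel x-coordinates, choosing the path from the
    magnitude of the head motion."""
    if abs(head_motion) > motion_threshold:
        shift = head_motion / depth               # depth-based reprojection
    else:
        shift = np.full_like(depth, head_motion)  # non-depth-based reprojection
    return x_coords + shift
```

The point of the branch is cost: the depth-based path is only paid for when motion is large enough for parallax errors to be visible.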
A method includes receiving, at an encoder, virtual content, receiving, at the encoder, a predicted head pose corresponding to the virtual content, and encoding the virtual content based on the predicted head pose. The method also includes producing compressed content, receiving, at a decoder, the compressed content, and receiving, at the decoder, a current head pose. The method also includes decoding the compressed content based on the current head pose and producing reprojected virtual content.
G06F 16/783 - Retrieval of data characterised by the use of metadata, e.g. metadata not derived from the content or metadata generated manually, using metadata automatically derived from the content
A head mounted display system configured to project a first image to an eye of a user, the head mounted display system includes at least one waveguide comprising a first major surface, a second major surface opposite the first major surface, and a first edge and a second edge between the first major surface and second major surface. The at least one waveguide also includes a first reflector disposed between the first major surface and the second major surface. The head mounted display system also includes at least one light source disposed closer to the first major surface than the second major surface and a spatial light modulator configured to form a second image and disposed closer to the first major surface than the second major surface, wherein the first reflector is configured to reflect light toward the spatial light modulator.
An augmented reality system includes a light source to generate a virtual light beam, the virtual light beam carrying information for a virtual object. The system also includes a light guiding optical element, the light guiding optical element allowing a first portion of a first real-world light beam to pass therethrough, where the virtual light beam enters the light guiding optical element, propagates through the light guiding optical element by substantially total internal reflection (TIR), and exits the light guiding optical element. The system further includes a lens disposed adjacent and exterior to a surface of the light guiding optical element, the lens comprising a light modulating mechanism to absorb a second portion of the real-world light beam and to allow the first portion of the real-world light to pass through the lens.
G02F 1/01 - Devices or arrangements for the control of the intensity, colour, phase, polarisation or direction of light arriving from an independent light source, e.g. switching, gating or modulating; Non-linear optics for the control of the intensity, phase, polarisation or colour
G02F 1/1335 - Structural association of cells with optical devices, e.g. polarisers or reflectors
An example system for molding a photocurable material into a planar object includes a first mold structure having a first mold surface, a second mold structure having a second mold surface, and one or more protrusions disposed along at least one of the first mold surface or the second mold surface. During operation, the system is configured to position the first and second mold structures such that the first and second mold surfaces face each other with the one or more protrusions contacting the opposite mold surface, and a volume having a total thickness variation (TTV) of 500 nm or less is defined between the first and second mold surfaces. The system is further configured to receive the photocurable material in the volume, and direct radiation at the one or more wavelengths into the volume.
B29D 11/00 - Producing optical elements, e.g. lenses or prisms
B29C 35/08 - Heating or curing, e.g. crosslinking or vulcanising, using wave energy or particle radiation
B29C 39/02 - Shaping by casting, i.e. introducing the moulding material into a mould or between confining surfaces without significant moulding pressure; Apparatus therefor for making articles of definite length, i.e. discrete articles
A method performed by an augmented reality (AR) system includes receiving a command that is input by the user through the AR system. The augmented reality (AR) system includes a hardware processor and an AR display configured to present virtual content in an environment of a user. The command specifies a type of virtual object to be presented in the environment. In response to the command, virtual objects of the specified type are presented in the environment, and a presentation of at least one of the virtual objects is altered in response to detecting a movement of the user in proximity to the at least one virtual object.
G06T 19/00 - Manipulating 3D models or images for computer graphics
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
G06F 3/0482 - Interaction with lists of selectable items, e.g. menus
G06T 19/20 - Manipulating 3D models or images for computer graphics; Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
75.
USING THREE-DIMENSIONAL SCANS OF A PHYSICAL SUBJECT TO DETERMINE POSITIONS AND/OR ORIENTATIONS OF SKELETAL JOINTS IN THE RIGGING FOR A VIRTUAL CHARACTER
Systems and methods for using three-dimensional scans of a physical subject to determine positions and/or orientations of skeletal joints in the rigging for a virtual character. At least one articulation segment of a polygon mesh for the virtual character may be determined. The articulation segment may include a subset of vertices in the polygon mesh. An indicator of the position or orientation of the articulation segment of the polygon mesh may be determined. Based on the indicator of the position or orientation of the articulation segment, the position or orientation of at least one joint for deforming the polygon mesh may be determined.
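A minimal sketch of the indicator-based joint placement described above: take the centroid of each articulation segment's vertex subset as its position indicator, then place a joint between two adjacent segments. The midpoint rule and function names are illustrative assumptions standing in for whatever fitting the disclosed method uses.

```python
import numpy as np

def segment_indicator(vertices, segment_indices):
    """Position indicator for an articulation segment: the centroid of the
    subset of mesh vertices belonging to that segment."""
    return np.mean(vertices[segment_indices], axis=0)

def joint_from_segments(vertices, parent_indices, child_indices):
    """Place a skeletal joint midway between the indicators of two adjacent
    articulation segments of the polygon mesh."""
    return 0.5 * (segment_indicator(vertices, parent_indices)
                  + segment_indicator(vertices, child_indices))
```

With the joint position derived from the scanned mesh itself, the rig adapts to the proportions of the physical subject rather than to a generic skeleton.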
This description relates to feature matching. The approach establishes pointwise correspondences between challenging image pairs. It takes off-the-shelf local features as input and uses an attentional graph neural network to solve an assignment optimization problem. The deep matcher acts as a middle-end and handles partial point visibility and occlusion elegantly, producing a partial assignment matrix.
G06V 10/75 - Image or video pattern matching; Proximity measures in feature spaces using context analysis; Selection of dictionaries
G06V 10/82 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
G06V 20/00 - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING - Scene-specific elements
G06V 30/196 - Recognition using electronic means using successive comparisons of the image signals with a plurality of references
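Matchers of this family commonly solve the assignment optimization by running Sinkhorn iterations on the pairwise match scores, which yields a (soft) assignment matrix. The sketch below shows only that core iteration, simplified: it omits the extra "dustbin" row and column that such matchers typically add to absorb unmatched points, and the iteration count is an illustrative assumption.

```python
import numpy as np

def sinkhorn(scores, n_iters=50):
    """Normalize a pairwise feature-similarity score matrix into a (soft)
    assignment matrix by alternately normalizing rows and columns in log space."""
    log_p = np.array(scores, dtype=float)
    for _ in range(n_iters):
        log_p -= np.log(np.exp(log_p).sum(axis=1, keepdims=True))  # row pass
        log_p -= np.log(np.exp(log_p).sum(axis=0, keepdims=True))  # column pass
    return np.exp(log_p)
```

Because the result is differentiable, the same iteration can sit at the end of the graph neural network and be trained end to end with the attention layers.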
77.
OPTICAL DEVICE VENTING GAPS FOR EDGE SEALANT AND LAMINATION DAM
An optical device, such as an eyepiece, including multiple layers of waveguides. The optical device can include an edge sealant for reducing light contamination, a lamination dam to restrict the wicking of the edge sealant between layers of the optical device, and venting gap(s) in the sealant and dam to allow air flow between the exterior and interior of the eyepiece. The gap(s) allow outgassing from the interior of the eyepiece of unreacted polymer and/or accumulated moisture, to prevent defect accumulation caused by chemical reaction of outgassed chemicals with the (e.g., ionic, acidic, etc.) surface of the eyepiece layers. The gap(s) also prevent pressure differences which may physically deform the eyepiece over time.
An optical device may include a light turning element. The optical device can include a first surface that is parallel to a horizontal axis and a second surface opposite to the first surface. The optical device may include a light module that includes a plurality of light emitters. The light module can be configured to combine light for the plurality of emitters. The optical device can further include a light input surface that is between the first and the second surfaces and is disposed with respect to the light module to receive light emitted from the plurality of emitters. The optical device may include an end reflector that is disposed on a side opposite the light input surface. The light coupled into the light turning element may be reflected by the end reflector and/or reflected from the second surface towards the first surface.
G02B 5/30 - OPTICS; OPTICAL ELEMENTS, SYSTEMS OR APPARATUS - Optical elements other than lenses - Polarising elements
G02B 27/00 - Optical systems or apparatus not provided for by any of the groups ,
G02B 27/14 - Beam splitting or combining systems operating by reflection only
G02B 30/26 - Optical systems or apparatus for producing three-dimensional [3D] effects, e.g. stereoscopic images, by providing first and second parallax images to an observer's left and right eyes, of the autostereoscopic type
G02B 30/52 - Optical systems or apparatus for producing three-dimensional [3D] effects, e.g. stereoscopic images, the image being built up from image elements distributed over a 3D volume, e.g. voxels, the 3D volume being constructed from a stack or sequence of 2D planes, e.g. depth sampling systems
G02F 1/137 - Devices or arrangements for the control of the intensity, colour, phase, polarisation or direction of light arriving from an independent light source, e.g. switching, gating or modulating; Non-linear optics for the control of the intensity, phase, polarisation or colour based on liquid crystals, e.g. single liquid crystal display cells, characterised by the electro-optical or magneto-optical effect, e.g. field-induced phase transition, orientation effect, guest-host interaction or dynamic scattering
G03B 21/00 - Projectors or projection-type viewers; Accessories therefor
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
G06F 3/0482 - Interaction with lists of selectable items, e.g. menus
G06T 19/00 - Manipulating 3D models or images for computer graphics
G06V 20/20 - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING - Scene-specific elements in augmented reality scenes
G09G 3/02 - Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes, by tracing or scanning a light beam on a screen
G09G 3/24 - Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes, for presentation of an assembly of a number of characters, e.g. a page, by composing the assembly by combination of individual elements arranged in a matrix, using controlled light sources, using incandescent filaments
H04N 13/239 - Image signal generators using stereoscopic image cameras using two 2D image sensors whose relative position is equal to or corresponds to the interocular distance
H04N 13/279 - Image signal generators from 3D object models, e.g. computer-generated stereoscopic image signals, the virtual viewpoint locations being selected by the viewers or determined by tracking
H04N 13/344 - Displays for viewing with the aid of special glasses or head-mounted displays [HMD] with head-mounted left-right displays
An augmented reality display system can include a first eyepiece waveguide with a first input coupling grating (ICG) region. The first ICG region can receive a set of input beams of light corresponding to an input image having a corresponding field of view (FOV), and can in-couple a first subset of the input beams. The first subset of input beams can correspond to a first sub-portion of the FOV. The system can also include a second eyepiece waveguide with a second ICG region. The second ICG region can receive and in-couple at least a second subset of the input beams. The second subset of the input beams can correspond to a second sub-portion of the FOV. The first and second sub-portions of the FOV can be at least partially different but together include the complete FOV of the input image.
Color-selective waveguides, methods for fabricating color-selective waveguides, and augmented reality (AR)/mixed reality (MR) applications including color-selective waveguides are described. The color-selective waveguides can advantageously reduce or block stray light entering a waveguide (e.g., red, green, or blue waveguide), thereby reducing or eliminating back-reflection or back-scattering into the eyepiece.
An eyepiece for a head-mounted display includes one or more first waveguides arranged to receive light from a spatial light modulator at a first edge, guide at least some of the received light to a second edge opposite the first edge, and extract at least some of the light through a face of the one or more first waveguides between the first and second edges. The eyepiece also includes a second waveguide positioned to receive light exiting the one or more first waveguides at the second edge and guide the received light to one or more light absorbers.
G02B 6/12 - OPTICS; OPTICAL ELEMENTS, SYSTEMS OR APPARATUS - Structural details of arrangements comprising light guides and other optical elements, e.g. couplings of the optical waveguide type, of the integrated circuit kind
G02B 6/293 - Optical coupling means having data bus means, i.e. plural waveguides interconnected and providing an inherently bidirectional system by mixing and splitting signals, with wavelength selective means
G02B 6/34 - Optical coupling means utilising prisms or gratings
G02B 6/35 - Optical coupling means having switching means
G02B 30/50 - Optical systems or apparatus for producing three-dimensional [3D] effects, e.g. stereoscopic images, the image being built up from image elements distributed over a 3D volume, e.g. voxels
82.
WAVEGUIDE-BASED ILLUMINATION FOR HEAD MOUNTED DISPLAY SYSTEM
A head-mounted display system is configured to project light to an eye of a user wearing the head-mounted display system to display content in a vision field of the user. The head-mounted display system comprises at least one diffusive optical element, at least one out-coupling optical element, at least one mask comprising at least one mask opening, at least one illumination in-coupling optical element configured to in-couple light from at least one illumination source into a light-guiding component, an image projector configured to in-couple an image, at least one illumination source configured to in-couple light into the at least one illumination in-coupling optical element, an eyepiece, a curved light-guiding component, a light-guiding component comprising a portion of a frame, and/or two light-guiding components disposed on opposite sides of the at least one out-coupling optical element.
Techniques for operating an optical system are disclosed. World light may be linearly polarized along a first axis. When the optical system is operating in accordance with a first state, a polarization of the world light may be rotated by 90 degrees, the world light may be linearly polarized along a second axis perpendicular to the first axis, and zero net optical power may be applied to the world light. When the optical system is operating in accordance with a second state, virtual image light may be projected onto an eyepiece of the optical system, the world light and the virtual image light may be linearly polarized along the second axis, a polarization of the virtual image light may be rotated by 90 degrees, and non-zero net optical power may be applied to the virtual image light.
G06T 19/00 - Manipulating 3D models or images for computer graphics
H04N 13/279 - Image signal generators from 3D object models, e.g. computer-generated stereoscopic image signals, the virtual viewpoint locations being selected by the viewers or determined by tracking
H04N 13/344 - Displays for viewing with the aid of special glasses or head-mounted displays [HMD] with head-mounted left-right displays
A method of viewing a patient, including inserting a catheter, is described for health procedure navigation. A CT scan is carried out on a body part of the patient. Raw data from the CT scan is processed to create three-dimensional image data, which is stored in a data store. Projectors receive generated light in a pattern representative of the image data, and waveguides guide the light to a retina of an eye of a viewer while light from an external surface of the body is transmitted to the retina, so that the viewer sees the external surface of the body augmented with the rendering of the body part.
Apparatus and methods for displaying an image by a rotating structure are provided. The rotating structure can comprise blades of a fan. The fan can be a cooling fan for an electronics device such as an augmented reality display. In some embodiments, the rotating structure comprises light sources that emit light to generate the image. The light sources can comprise light-field emitters. In other embodiments, the rotating structure is illuminated by an external (e.g., non-rotating) light source.
G09G 3/02 - Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes, by tracing or scanning a light beam on a screen
F04D 17/16 - Centrifugal pumps for displacing without appreciable compression
F04D 25/08 - Units comprising pumps and their driving means, the working fluid being air, e.g. for ventilation
F04D 29/00 - NON-POSITIVE-DISPLACEMENT PUMPS - Details, component parts or accessories
F04D 29/42 - Casings; Connections for the working fluid, for radial or helico-centrifugal pumps
G02B 30/56 - Optical systems or apparatus for producing three-dimensional [3D] effects, e.g. stereoscopic images, the image being built up from image elements distributed over a 3D volume, e.g. voxels, by projecting an aerial or floating image
G09G 3/00 - Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes
Exemplary systems and methods for creating spatial contents in a mixed reality environment are disclosed. In an example, a location associated with a first user in a coordinate space is determined. Persistent virtual content is generated and associated with the first user's location. A location of a second user in the coordinate space is determined at a second time. The persistent virtual content is presented to the second user, via a display, at a location in the coordinate space corresponding to the first user's associated location.
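The location-keyed persistence described above can be sketched as a small store that anchors content at a first user's location and later surfaces it to a second user who comes near the same spot. All names here (`SpatialContentStore`, `place`, `visible_to`) and the visibility-radius criterion are illustrative assumptions, not the disclosed system's actual API:

```python
class SpatialContentStore:
    """Maps coordinate-space locations to persistent virtual content."""

    def __init__(self, radius=1.0):
        self.radius = radius      # assumed visibility radius, scene units
        self.entries = []         # list of (location, content) pairs

    def place(self, location, content):
        """Anchor content at the location associated with a user."""
        self.entries.append((location, content))

    def visible_to(self, location):
        """Content whose anchor lies within `radius` of a user's location."""
        def dist(a, b):
            return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
        return [c for loc, c in self.entries if dist(loc, location) <= self.radius]

store = SpatialContentStore(radius=2.0)
store.place((0.0, 0.0, 0.0), "virtual note")    # first user, first time
nearby = store.visible_to((1.0, 0.0, 0.0))      # second user, second time
```

Because entries persist independently of the user who placed them, the same content is retrievable by any later user whose location in the shared coordinate space is close enough to the anchor.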
A wearable display system for a cross reality (XR) system may have a dynamic vision sensor (DVS) camera and a color camera. At least one of the cameras may be a plenoptic camera. The wearable display system may dynamically restrict processing of image data from either or both cameras based on detected conditions and the XR function being performed. For tracking an object, image information may be processed for patches of a field of view of either or both cameras corresponding to the object. The object may be tracked based on asynchronously acquired events indicating changes within the patches. Stereoscopic or other types of image information may be used when event-based object tracking yields an inadequate quality metric. The tracked object may be a user's hand or a stationary object in the physical world, enabling calculation of the pose of the wearable display system and of the wearer's head.
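The patch-restricted event tracking with frame-based fallback described above can be sketched as follows. This is an illustrative toy: the centroid estimate, the quality metric (fraction of events landing in the patch), the threshold, and all names are assumptions, not the disclosed algorithm:

```python
QUALITY_THRESHOLD = 0.5    # assumed cutoff for an adequate quality metric

def track_with_fallback(events, patch, frame_tracker):
    """Return (position, source); events are (x, y, t), patch is (x0, y0, x1, y1)."""
    # Keep only events that fall inside the patch of the field of view.
    inside = [(x, y) for (x, y, t) in events
              if patch[0] <= x < patch[2] and patch[1] <= y < patch[3]]
    # Crude quality metric: fraction of events landing in the patch.
    quality = len(inside) / len(events) if events else 0.0
    if quality >= QUALITY_THRESHOLD:
        # Centroid of in-patch events approximates the object position.
        cx = sum(x for x, _ in inside) / len(inside)
        cy = sum(y for _, y in inside) / len(inside)
        return (cx, cy), "events"
    # Quality inadequate: fall back to frame-based (e.g., stereoscopic) tracking.
    return frame_tracker(), "frames"

pos, source = track_with_fallback(
    [(10, 10, 0.0), (12, 12, 0.001), (90, 90, 0.002)],
    patch=(8, 8, 16, 16),
    frame_tracker=lambda: (50.0, 50.0))
```

The asynchronous event path is cheap because only events inside the patch are touched; the full-frame path runs only when the event-based estimate is judged inadequate.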
An augmented reality device may communicate with a map server via an API interface to provide mapping data that may be implemented into a canonical map, and may also receive map data from the map server. A visualization of map quality, including quality indicators for multiple cells of the environment, may be provided to the user as an overlay to the current real-world environment seen through the AR device. These visualizations may include, for example, a map quality minimap and/or a map quality overlay. The visualizations provide guidance to the user that allows more efficient updates to the map, thereby improving map quality and localization of users into the map.
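A per-cell map-quality visualization of the kind described above can be sketched by scoring each cell of the environment and binning the score into an indicator color for the minimap or overlay. The observation-count score, the thresholds, and the color labels are all assumptions for illustration:

```python
def cell_quality(observations, good=10):
    """Map a cell's mapping-observation count to a quality score in [0, 1]."""
    return min(observations / good, 1.0)

def quality_minimap(obs_grid, good=10):
    """Convert a grid of observation counts into overlay color labels."""
    def color(q):
        return "green" if q >= 0.7 else "yellow" if q >= 0.3 else "red"
    return [[color(cell_quality(n, good)) for n in row] for row in obs_grid]

overlay = quality_minimap([[12, 5], [1, 0]])
```

Red or yellow cells tell the user where more capture is needed, which is the guidance role the visualization plays in the abstract.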
A display system includes a head-mounted display configured to project light, having different amounts of wavefront divergence, to an eye of a user to display virtual image content appearing to be disposed at different depth planes. The wavefront divergence may be changed in discrete steps, with the change in steps being triggered based upon whether the user is fixating on a particular depth plane. The display system may be calibrated for switching depth planes for a main user. Upon determining that a guest user is utilizing the system, rather than undergoing a full calibration, the display system may be configured to switch depth planes based on a rough determination of the virtual content that the user is looking at. The virtual content may have an associated depth plane and the display system may be configured to switch to the depth plane of that virtual content.
A wearable display system includes one or more emissive micro-displays, e.g., micro-LED displays. The micro-displays may be monochrome micro-displays or full-color micro-displays. The micro-displays may include arrays of light emitters. Light collimators may be utilized to narrow the angular emission profile of light emitted by the light emitters. Where a plurality of emissive micro-displays is utilized, the micro-displays may be positioned at different sides of an optical combiner, e.g., an X-cube prism which receives light rays from different micro-displays and outputs the light rays from the same face of the cube. The optical combiner directs the light to projection optics, which outputs the light to an eyepiece that relays the light to a user's eye. The eyepiece may output the light to the user's eye with different amounts of wavefront divergence, to place virtual content on different depth planes.
G02B 27/09 - Beam shaping, e.g. changing the cross-sectional area, not otherwise provided for
G02B 27/10 - Beam splitting or combining systems
G02B 27/14 - Beam splitting or combining systems operating by reflection only
G02B 27/18 - Optical systems or apparatus not provided for by any of the groups, for optical projection, e.g. combination of mirror, condenser and objective
G02B 27/40 - Optical focusing aids
G02B 27/62 - Optical apparatus specially adapted for adjusting optical elements during the assembly of optical systems
G09G 3/00 - Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes
G09G 3/32 - Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes, for the presentation of an assembly of a number of characters, e.g. a page, by composing the assembly by combination of individual elements arranged in a matrix, using controlled light sources, using semiconductor electroluminescent panels, e.g. using light-emitting diodes [LED]
H02N 2/02 - Electric machines in general using piezoelectric effect, electrostriction or magnetostriction producing linear motion, e.g. actuators; Linear positioners
91.
DYNAMIC FIELD OF VIEW VARIABLE FOCUS DISPLAY SYSTEM
An augmented reality (AR) device is described with a display system configured to adjust an apparent distance between a user of the AR device and virtual content presented by the AR device. The AR device includes a first tunable lens that changes shape in order to affect the position of the virtual content. Distortion of real-world content on account of the changes made to the first tunable lens is prevented by a second tunable lens that changes shape to stay substantially complementary to the optical configuration of the first tunable lens. In this way, the virtual content can be positioned at almost any distance relative to the user without degrading the view of the outside world or adding extensive bulk to the AR device. The augmented reality device can also include tunable lenses for expanding a field of view of the augmented reality device.
Images perceived to be substantially full color or multi-colored may be formed using component color images that are distributed in unequal numbers across a plurality of depth planes. The distribution of component color images across depth planes may vary based on color. In some embodiments, a display system includes a stack of waveguides that each output light of a particular color, with some colors having fewer numbers of associated waveguides than other colors. The waveguide stack may include multiple pluralities (e.g., first and second pluralities) of waveguides, each configured to produce an image by outputting light corresponding to a particular color. The total number of waveguides in the second plurality of waveguides may be less than the total number of waveguides in the first plurality of waveguides, and may be more than the total number of waveguides in a third plurality of waveguides, in embodiments that utilize three component colors.
A switchable optical assembly comprises a switchable waveplate configured to be electrically activated and deactivated to selectively alter the polarization state of light incident thereon. The switchable waveplate comprises first and second surfaces and a first liquid crystal layer disposed between the first and second surfaces. The first liquid crystal layer comprises a plurality of liquid crystal molecules that are rotated about respective axes parallel to a central axis, where the rotation varies with an azimuthal angle about the central axis. The switchable waveplate additionally comprises a plurality of electrodes to apply an electrical signal across the first liquid crystal layer.
G02F 1/01 - Devices or arrangements for the control of the intensity, colour, phase, polarisation or direction of light arriving from an independent light source, e.g. switching, gating or modulating; Non-linear optics for the control of the intensity, phase, polarisation or colour
G02F 1/1335 - Structural association of cells with optical devices, e.g. polarisers or reflectors
G02F 1/13363 - Structural association of cells with optical devices, e.g. polarisers or reflectors; Birefringent elements, e.g. for optical compensation
G02F 1/1347 - Arrangement of liquid crystal layers or cells in which a light beam is modified by the addition of the effects of several layers or cells
Disclosed herein are systems and methods for generating and presenting virtual audio for mixed reality systems. A method may include determining a collision between a first object and a second object, wherein the first object comprises a first virtual object. A memory storing one or more audio models can be accessed. It can be determined if the one or more audio models stored in the memory comprises an audio model corresponding to the first object. In accordance with a determination that the one or more audio models comprises an audio model corresponding to the first object, an audio signal can be synthesized, wherein the audio signal is based on the collision and the audio model corresponding to the first object, and the audio signal can be presented to a user via a speaker of a head-wearable device. In accordance with a determination that the one or more audio models does not comprise an audio model corresponding to the first object, an acoustic property of the first object can be determined, a custom audio model based on the acoustic property of the first object can be generated, an audio signal can be synthesized, wherein the audio signal is based on the collision and the custom audio model, and the audio signal can be presented, via a speaker of a head-wearable device, to a user.
H04S 7/00 - Indicating arrangements; Control arrangements, e.g. balance control
A63F 13/285 - Generating tactile feedback signals via the game input device, e.g. force feedback
A63F 13/577 - Simulating properties, behaviour or motion of objects in the game, e.g. computing the load borne by a tyre in a car racing game, using determination of the area of contact between game characters or objects, e.g. to avoid collision between virtual racing cars
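The collision-audio flow in the abstract above (look up a stored audio model for the colliding virtual object; otherwise derive an acoustic property and generate a custom model before synthesis) can be sketched as follows. The function and parameter names, the dictionary-based model store, and the model representation are all hypothetical:

```python
def collision_audio(obj, model_store, acoustic_property_of, synthesize):
    """Return an audio signal for a collision involving `obj`."""
    model = model_store.get(obj)
    if model is None:
        # No stored model: derive an acoustic property (e.g., a material
        # characteristic) and build a custom model from it.
        prop = acoustic_property_of(obj)
        model = {"kind": "custom", "property": prop}
        model_store[obj] = model    # cache the custom model for reuse
    return synthesize(model)

store = {"bell": {"kind": "stored"}}
synth = lambda model: model["kind"]          # stand-in for real synthesis
hit1 = collision_audio("bell", store, lambda o: "metal", synth)
hit2 = collision_audio("vase", store, lambda o: "ceramic", synth)
```

Caching the generated custom model means the fallback branch runs at most once per object, so later collisions with the same object take the fast lookup path.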
The present disclosure relates to display systems and, more particularly, to augmented reality display systems including diffraction grating(s), and methods of fabricating same. A diffraction grating includes a plurality of different diffracting zones having a periodically repeating lateral dimension corresponding to a grating period adapted for light diffraction. The diffraction grating additionally includes a plurality of different liquid crystal layers corresponding to the different diffracting zones. The different liquid crystal layers include liquid crystal molecules that are aligned differently, such that the different diffracting zones have different optical properties associated with light diffraction.
Examples of systems and methods for a wearable system to automatically select or filter available user interface interactions or virtual objects are disclosed. The wearable system can select a group of virtual objects for user interaction based on contextual information associated with the user, the user's environment, physical or virtual objects in the user's environment, or the user's physiological or psychological state.
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
G06F 3/04815 - Interaction taking place in a metaphor- or object-based environment with a three-dimensional display, e.g. changing the user's viewpoint with respect to the environment or object
G06F 3/0482 - Interaction with lists of selectable items, e.g. menus
G06K 7/14 - Methods or arrangements for sensing record carriers by corpuscular radiation using light without selection of wavelength, e.g. sensing reflected white light
G06K 19/06 - Record carriers for use with machines and with at least a part designed to carry digital markings, characterised by the kind of the digital marking, e.g. shape, nature, code
G06V 40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
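The contextual selection of virtual objects described two entries above can be sketched as a filter over candidate objects against the current context. The tag-based matching rule and all names are illustrative assumptions, not the wearable system's actual selection logic:

```python
def select_objects(objects, context_tags):
    """Keep objects whose tags intersect the current context tags."""
    return [o for o in objects if o["tags"] & context_tags]

candidates = [
    {"name": "recipe_panel", "tags": {"kitchen"}},
    {"name": "tv_remote", "tags": {"living_room"}},
]
# Context derived from, e.g., the user's environment or state.
selected = select_objects(candidates, {"kitchen"})
```

In practice the context would combine environment, nearby physical or virtual objects, and user state, but the shape of the operation is the same: narrow the interactable set before presenting it.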
97.
AUDIOVISUAL PRESENCE TRANSITIONS IN A COLLABORATIVE REALITY ENVIRONMENT
Examples of systems and methods to facilitate audiovisual presence transitions of virtual objects such as virtual avatars in a mixed reality collaborative environment are disclosed. The systems and methods may be configured to produce different audiovisual presence transitions such as appearance, disappearance and reappearance of the virtual avatars. The virtual avatar audiovisual transitions may be further indicated by various visual and sound effects of the virtual avatars. The transitions may occur based on various colocation or decolocation scenarios.
A method in a virtual, augmented, or mixed reality system includes a GPU determining/detecting an absence of image data. The method also includes shutting down a portion/component/function of the GPU. The method further includes shutting down a communication link between the GPU and a DB. Moreover, the method includes shutting down a portion/component/function of the DB. In addition, the method includes shutting down a communication link between the DB and a display panel. The method further includes shutting down a portion/component/function of the display panel.
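The staged power-down above can be sketched as an ordered walk along the display pipeline, triggered by the absence of image data. This assumes DB denotes an intermediate display bridge between the GPU and the panel, and the component names are illustrative:

```python
# Assumed pipeline order: GPU, GPU-to-DB link, DB, DB-to-panel link, panel.
PIPELINE = ["gpu", "gpu_db_link", "db", "db_panel_link", "panel"]

def shutdown_sequence(has_image_data):
    """Return the ordered list of components/links to power down."""
    if has_image_data:
        return []                 # image data present: keep everything running
    return list(PIPELINE)         # absence detected: shut down stage by stage
```

Shutting down from the GPU outward means each downstream link or component is idled only after its upstream data source has already stopped producing.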
Systems and methods for providing accurate and independent control of reverberation properties are disclosed. In some embodiments, a system may include a reverberation processing system, a direct processing system, and a combiner. The reverberation processing system can include a reverb initial power (RIP) control system and a reverberator. The RIP control system can include a reverb initial gain (RIG) and a RIP corrector. The RIG can be configured to apply a RIG value to the input signal, and the RIP corrector can be configured to apply a RIP correction factor to the signal from the RIG. The reverberator can be configured to apply reverberation effects to the signal from the RIP control system. In some embodiments, one or more values and/or correction factors can be calculated and applied such that the signal output from a component in the reverberation processing system is normalized to a predetermined value (e.g., unity (1.0)).
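The RIP control chain above (RIG value applied to the input, then a RIP correction factor so the signal handed to the reverberator is normalized to a predetermined value, e.g. unity) can be sketched numerically. The power estimate and normalization rule are illustrative assumptions, not the disclosed system's exact computation:

```python
def rip_control(signal, rig_value):
    """Apply the RIG value, then a RIP correction normalizing power to 1.0."""
    after_rig = [s * rig_value for s in signal]            # RIG stage
    power = sum(s * s for s in after_rig) / len(after_rig) # mean-square power
    # RIP correction factor chosen so the output power equals unity.
    correction = (1.0 / power) ** 0.5 if power > 0 else 1.0
    return [s * correction for s in after_rig]

out = rip_control([0.5, -0.5, 0.5, -0.5], rig_value=2.0)
power_out = sum(s * s for s in out) / len(out)    # normalized to 1.0
```

Because the correction factor is derived from the post-RIG power, the reverberator always sees a normalized input, which is what makes the reverberation properties independently controllable.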
The present disclosure relates to display systems and, more particularly, to augmented reality display systems. In one aspect, a method of fabricating an optical element includes providing a substrate having a first refractive index and transparent in the visible spectrum. The method additionally includes forming on the substrate periodically repeating polymer structures. The method further includes exposing the substrate to a metal precursor followed by an oxidizing precursor. Exposing the substrate is performed under a pressure and at a temperature such that an inorganic material comprising the metal of the metal precursor is incorporated into the periodically repeating polymer structures, thereby forming a pattern of periodically repeating optical structures configured to diffract visible light. The optical structures have a second refractive index greater than the first refractive index.