Disclosed is imaging system (100) comprising: first camera (102) and second camera (104); depth-mapping means (106); gaze-tracking means (108); and processor (110) configured to: generate depth map of real-world scene; determine gaze directions of first eye and second eye; identify line of sight (206) and conical region of interest (200); determine optical depths of first object (202) and second object (204) present in conical region; when first and second objects are placed horizontally opposite, adjust optical focuses of first and second cameras to focus on respective objects on same side as them; when first and second objects are placed vertically opposite, adjust optical focus of one camera corresponding to dominant eye to focus on object having greater optical depth, and adjust optical focus of another camera to focus on another object; and capture first image(s) and second image(s) using adjusted optical focuses of cameras.
H04N 13/239 - Image signal generators using stereoscopic image cameras using two 2D image sensors having a relative position equal or corresponding to the interocular distance
H04N 13/271 - Image signal generators wherein the generated image signals comprise depth maps or disparity maps
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
G06T 19/00 - Manipulating three-dimensional [3D] models or images for computer graphics
H04N 13/254 - Image signal generators using stereoscopic image cameras in combination with electromagnetic radiation sources for illuminating the subject
H04N 13/344 - Displays for viewing with the aid of special glasses or head-mounted displays with head-mounted left-right displays
G02B 27/00 - Optical systems or apparatus not provided for by any of the groups
G06V 20/20 - Image or video recognition or understanding; scene-specific elements in augmented reality scenes
H04N 23/45 - Cameras or camera modules comprising electronic image sensors; control thereof for generating image signals from two or more image sensors of different type or operating in different modes, e.g. with a CMOS sensor for moving images in combination with a charge-coupled device [CCD]
G01S 3/00 - Direction-finders for determining the direction from which infrasonic, sonic, ultrasonic or electromagnetic waves, or particle emission, not having a directional significance, are being received
H04N 23/67 - Focus control based on electronic image sensor signals
H04N 23/90 - Arrangement of cameras or camera modules, e.g. of multiple cameras in TV studios or sports stadiums
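The focus-assignment logic of the abstract above can be sketched as follows. The coordinate convention (signed offsets from the line of sight) and the function name are illustrative assumptions, not the claimed implementation.

```python
def assign_focus(obj1, obj2, dominant_eye="left"):
    """Assign each camera a focus target from two objects in the conical
    region of interest. Each object is a dict with 'x', 'y' (signed
    offsets from the line of sight) and 'depth' (optical depth).
    Returns (left_camera_target, right_camera_target).
    A sketch of the claimed decision logic, not the patented implementation.
    """
    dx = abs(obj1["x"] - obj2["x"])
    dy = abs(obj1["y"] - obj2["y"])
    if dx >= dy:
        # Horizontally opposite: each camera focuses on the object on its
        # own side (negative x = left of the line of sight).
        left, right = sorted((obj1, obj2), key=lambda o: o["x"])
        return left, right
    # Vertically opposite: the camera of the dominant eye takes the object
    # with the greater optical depth; the other camera takes the other one.
    far, near = sorted((obj1, obj2), key=lambda o: -o["depth"])
    if dominant_eye == "left":
        return far, near
    return near, far
```

A usage note: with two horizontally separated objects the assignment depends only on their sides; the dominant eye matters only in the vertical case.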
Disclosed is imaging system (100) comprising: first camera (102) and second camera (104); depth-mapping means (106); gaze-tracking means (108); and processor (110) configured to: generate depth map of real-world scene of real-world environment; determine gaze directions of first eye and second eye; identify line of sight (208) of user and conical region of interest (200) in real-world scene; determine optical depths of objects in conical region of interest, wherein at least first object (202), second object (204) and third object (206) from amongst objects are at different optical depths; adjust optical focus of one of first camera and second camera to focus on first object and second object in alternating manner, whilst adjusting optical focus of another of first camera and second camera to focus on third object; and capture images using adjusted optical focus of cameras.
H04N 13/239 - Image signal generators using stereoscopic image cameras using two 2D image sensors having a relative position equal or corresponding to the interocular distance
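The alternating-focus scheme described in the abstract above, where one camera toggles between the first and second objects while the other holds focus on the third, can be sketched as a per-frame schedule of focus depths; the frame-by-frame alternation period is an assumption.

```python
from itertools import cycle

def focus_schedule(first_depth, second_depth, third_depth, n_frames):
    """Sketch of the alternating-focus scheme: one camera alternates
    between the optical depths of the first and second objects frame by
    frame, while the other camera stays focused on the third object.
    Returns a list of (alternating_camera_depth, other_camera_depth)."""
    alternating = cycle((first_depth, second_depth))
    return [(next(alternating), third_depth) for _ in range(n_frames)]
```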
Disclosed is imaging system (100) comprising: first camera (102) and second camera (104); depth-mapping means (106); gaze-tracking means (108); and processor (110) configured to: generate depth map of real-world scene; determine gaze directions of first eye and second eye; identify line of sight (206) and conical region of interest (200); determine optical depths of first object (202) and second object (204) present in conical region; determine one of first camera and second camera having lesser occlusion in real-world scene; adjust optical focus of one of first camera and second camera to focus on one of first object and second object having greater optical depth, and adjust optical focus of another of first camera and second camera to focus on another of first object and second object; and capture first image(s) and second image(s) using adjusted optical focuses of cameras.
H04N 13/239 - Image signal generators using stereoscopic image cameras using two 2D image sensors having a relative position equal or corresponding to the interocular distance
H04N 13/383 - Tracking of viewers for gaze tracking, i.e. with detection of the axis of sight of the viewer's eyes
H04N 13/344 - Displays for viewing with the aid of special glasses or head-mounted displays with head-mounted left-right displays
Disclosed is display apparatus (100, 204, 206, 208) comprising: light source(s) (102, 104); gaze-tracking means (106); and processor(s) (108) configured to: determine gaze directions of user's eyes; send, to rendering server (110, 202), information indicative of gaze direction determined at first time instant; receive image frame(s) generated according to gaze, and being optionally timestamped with second time instant; display image frame(s) at third time instant; determine time lag between any one of: first time instant and third time instant, or second time instant and third time instant; detect whether or not time lag exceeds first predefined threshold; when time lag exceeds first predefined threshold, switch on gaze-lock mode; select forward line of vision as fixed gaze direction; send, to rendering server, information indicative of fixed gaze; and receive image frames generated according to fixed gaze; and display image frames.
G09G 3/20 - Control arrangements or circuits, of interest only for the display, using visual indicators other than cathode-ray tubes for presentation of an assembly of a number of characters, e.g. a page, by composing the assembly by combination of individual elements arranged in a matrix
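The time-lag test in the abstract above (with the render timestamp preferred when present, falling back to the gaze timestamp) can be sketched as a small decision function; all names are illustrative, and the forward line of vision is modelled as a fixed unit vector.

```python
def choose_gaze(first_t, second_t, third_t, threshold, tracked_gaze,
                forward_gaze=(0.0, 0.0, 1.0)):
    """Sketch of the gaze-lock decision: compute the time lag between the
    display instant (third_t) and either the render timestamp (second_t,
    when available) or the gaze-determination instant (first_t), and
    switch to a fixed forward gaze when the lag exceeds the threshold.
    Returns (gaze_direction, gaze_lock_mode_on)."""
    reference = second_t if second_t is not None else first_t
    lag = third_t - reference
    if lag > threshold:
        return forward_gaze, True   # gaze-lock mode switched on
    return tracked_gaze, False
```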
5.
SYSTEMS AND METHODS FOR FACILITATING SCALABLE SHARED RENDERING
Disclosed is system (100, 300) for facilitating scalable shared rendering, comprising plurality of servers (102a-b, 302a-c) communicably coupled to each other, each server executing executable instance (304a-c) of rendering software (314), being communicably coupled to display apparatus(/es) (104a-c, 200, 306a-k), wherein when executed, rendering software causes each server to receive information indicative of poses of users of display apparatus(/es), utilise three-dimensional model(/s) of extended-reality environment to generate images from poses, send images to respective display apparatus(/es) for display, wherein at least one of plurality of servers is configured to detect when total number of display apparatuses to be served exceeds predefined threshold number, and employ new server and execute new executable instance of rendering software when predefined threshold number is exceeded, wherein new display apparatuses are served by new server, thereby facilitating scalable shared rendering.
G06F 9/50 - Allocation of resources, e.g. of the central processing unit [CPU]
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
G06F 3/0346 - Pointing devices displaced or positioned by the user; accessories therefor with detection of the device orientation or free movement in a three-dimensional [3D] space, e.g. 3D mice, six-degrees-of-freedom [6-DOF] pointers using gyroscopes, accelerometers or tilt sensors
A63F 13/352 - Interconnection arrangements between game servers and game devices; interconnection arrangements between game devices; interconnection arrangements between game servers - details of game servers involving special game server arrangements, e.g. regional servers connected to a national server or a plurality of servers managing partitions of the game
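The scale-out rule in the abstract above can be sketched as follows; modelling the predefined threshold as a per-instance capacity is an illustrative simplification, and the class and method names are assumptions.

```python
class RenderCluster:
    """Sketch of the claimed scale-out rule: when attaching another
    display apparatus would exceed the capacity of the current rendering
    instance, employ a new server executing a new executable instance of
    the rendering software, and route new apparatuses to it."""

    def __init__(self, per_server_limit):
        self.per_server_limit = per_server_limit
        self.servers = [[]]  # each entry lists the apparatuses it serves

    def attach(self, apparatus_id):
        current = self.servers[-1]
        if len(current) >= self.per_server_limit:
            # Threshold exceeded: employ a new server / new instance.
            current = []
            self.servers.append(current)
        current.append(apparatus_id)
        return len(self.servers) - 1  # index of the serving instance
```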
Disclosed is computer-implemented method comprising: capturing visible-light images via visible-light camera(s) (302) from view points in real-world environment, wherein 3D positions of view points are represented in coordinate system; dividing 3D space of real-world environment into 3D grid of convex-polyhedral regions; creating 3D data structure comprising nodes representing convex-polyhedral regions of 3D space; determining 3D positions of pixels of visible-light images based on 3D positions of view points; dividing each visible-light image into portions, wherein 3D positions of pixels of given portion of said visible-light image fall inside corresponding convex-polyhedral region; and storing, in each node, portions of visible-light images whose pixels' 3D positions fall inside corresponding convex-polyhedral region, wherein each portion of visible-light image is stored in corresponding node.
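The grid-and-node scheme of the abstract above can be sketched with a uniform cubic grid (a cube being the simplest convex polyhedron; the claimed regions need not be cubes) and a dictionary standing in for the 3D data structure; both simplifications are assumptions.

```python
def cell_index(position, cell_size):
    """Map a 3D position to the index of the cubic grid cell containing it."""
    return tuple(int(c // cell_size) for c in position)

def store_portions(pixel_positions, cell_size):
    """Group pixel 3D positions by grid cell, mirroring how portions of
    visible-light images are stored in the nodes of the 3D data structure:
    each node holds the pixels whose 3D positions fall inside its region."""
    nodes = {}
    for i, pos in enumerate(pixel_positions):
        nodes.setdefault(cell_index(pos, cell_size), []).append(i)
    return nodes
```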
Disclosed is a display apparatus (100, 200, 300) comprising: light source(s) (102, 104, 202, 204) per eye, first tracking means (106, 206, 304), and processor(s) (108, 208) configured to: process first tracking data, collected by first tracking means, to determine location (404, 416) of display apparatus in real-world environment (400); obtain software application(s) (320, 322) that is available for location of display apparatus along with metainformation indicative of location (412, 420, 422, 424) in real-world environment with which software application(s) is/are associated; determine relative location of display apparatus with respect to location with which software application(s) is/are associated; execute software application(s) to create and overlay virtual content on image(s) (434) representing real-world environment, based on relative location of display apparatus with respect to location with which software application(s) is/are associated; and display image(s) via light source(s).
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
H04M 1/72457 - User interfaces specially adapted for cordless or mobile telephones with means for adapting the functionality of the device in specific circumstances based on geographic location
H04W 4/02 - Services making use of location information
8.
GAZE-BASED NON-REGULAR SUBSAMPLING OF SENSOR PIXELS
Disclosed is an imaging system (100) comprising: image sensor (102, 704) comprising pixels arranged on photo-sensitive surface (300); and processor (104, 702) configured to: obtain information indicative of gaze direction of user's eye; identify gaze position on photo-sensitive surface; determine first region (302) and second region (304) on photo-sensitive surface, wherein first region includes and surrounds gaze position, while second region surrounds first region; read out first pixel data from each pixel of first region; select set of pixels to be read out from second region based on predetermined sub-sampling pattern; read out second pixel data from pixels of selected set; generate, from second pixel data, pixel data of remaining pixels of second region; and process first pixel data, second pixel data, and generated pixel data to generate image frame(s).
H04N 5/345 - Extracting pixel data from an image sensor by controlling scanning circuits, e.g. by modifying the number of pixels sampled or to be sampled, by partially reading an SSIS array
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
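The two-region readout of the abstract above can be sketched as follows. The fixed every-Nth subsampling pattern and nearest-neighbour fill-in for the skipped pixels are assumptions; the abstract only requires a predetermined pattern and some generation step.

```python
def read_out(pixels, first_region, second_region, pattern_step=2):
    """Sketch of the gaze-based readout: every pixel of the first (gaze)
    region is read out, the second region is subsampled with a fixed
    pattern, and the remaining second-region pixels are generated (here:
    copied from the nearest read pixel, an assumed interpolation)."""
    data = {p: pixels[p] for p in first_region}            # full readout
    read = [p for i, p in enumerate(second_region) if i % pattern_step == 0]
    for p in read:
        data[p] = pixels[p]                                # subsampled readout
    for p in second_region:
        if p not in data:
            nearest = min(read, key=lambda q: abs(q[0] - p[0]) + abs(q[1] - p[1]))
            data[p] = pixels[nearest]                      # generated pixel data
    return data
```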
9.
IMPROVED FOVEATION-BASED IMMERSIVE XR VIDEO ENCODING AND DECODING
Disclosed is an encoding method and a decoding method. The encoding method comprises generating curved image (202, 208, 300) by creating projection of visual scene onto inner surface of imaginary 3D geometric shape (200, 206) that is curved in at least one dimension; dividing curved image into input portion (302) and plurality of input rings (304, 306, 308); encoding input portion and input rings into first planar image (400) and second planar image (402), respectively, such that input portion is stored into first planar image, and input rings are packed into corresponding rows (404) of second planar image; and communicating, to display apparatus (704), first and second planar images and information indicative of sizes of input portion and input rings.
H04N 21/4728 - End-user interface for requesting content, additional data or services; end-user interface for interacting with content, e.g. for content reservation or setting reminders, requesting event notification or manipulating displayed content, for selecting a region of interest [ROI], e.g. for requesting a higher-resolution version of a selected region
H04N 21/6587 - Control parameters, e.g. trick play commands or viewpoint selection
H04N 21/2343 - Processing of video elementary streams, e.g. splicing of video streams or manipulating MPEG-4 scene graphs, involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
H04N 21/4402 - Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to MPEG-4 scene graphs, involving reformatting operations of video signals for home redistribution, storage or real-time display
H04N 13/383 - Tracking of viewers for gaze tracking, i.e. with detection of the axis of sight of the viewer's eyes
H04N 5/232 - Devices for controlling television cameras, e.g. remote control
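The ring-packing step of the encoding method above can be sketched in one dimension per ring: each concentric ring is unrolled into one row of the second planar image, padded to the widest ring, with the size metadata the abstract says is communicated to the display apparatus. Zero-padding and the list representation are assumptions.

```python
def pack_rings(inner_portion, rings):
    """Sketch of the two-image packing: the central input portion becomes
    the first planar image; each input ring is packed into one row of the
    second planar image, padded to the widest ring. Returns the two images
    plus the size information needed to undo the packing."""
    width = max(len(r) for r in rings)
    second = [list(r) + [0] * (width - len(r)) for r in rings]
    sizes = {"portion": len(inner_portion), "rings": [len(r) for r in rings]}
    return inner_portion, second, sizes

def unpack_rings(second, sizes):
    """Decoder-side inverse: trim each row back to its original ring size."""
    return [row[:n] for row, n in zip(second, sizes["rings"])]
```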
10.
A TRACKING METHOD FOR IMAGE GENERATION, A COMPUTER PROGRAM PRODUCT AND A COMPUTER SYSTEM
The transmitted information from a gaze tracker camera to a control unit (19) of a VR/AR system (1) can be controlled by an image signal processor (ISP) (15) for use with a camera (14) arranged to provide a stream of images of a moving part of an object in a VR or AR system to a gaze tracking function of the VR or AR system, the image signal processor being arranged to receive a signal from the gaze tracking function indicating at least one desired property of the images and to provide the stream of images to the gaze tracking function according to the signal. The ISP may be arranged to provide the image as either a full view of the image with reduced resolution or a limited part of the image with a second resolution which is high enough to enable detailed tracking of the object.
Disclosed is a system (100, 200) comprising image sensor(s) (102, 202, 204) comprising a plurality of pixels arranged on a photo-sensitive surface (300) thereof; and image signal processor(s) (104, 206) configured to: receive, from image sensor(s), a plurality of image signals captured by corresponding pixels of image sensor(s); and process the plurality of image signals to generate at least one image, wherein, when processing, image signal processor(s) is configured to: determine, for a given image signal to be processed, a position of a given pixel on the photo-sensitive surface that is employed to capture the given image signal; and selectively perform a sequence of image signal processes on the given image signal and control a plurality of parameters employed for performing the sequence of image signal processes, based on the position of the given pixel.
Disclosed is a display apparatus (100) comprising: light source(s) (102); camera(s) (104); and processor(s) (106) configured to: display extended-reality image for presentation to user, whilst capturing eye image(s) of user's eyes; analyse eye image(s) to detect eye features; employ existing calibration model to determine gaze directions of user's eyes; determine gaze location of user; identify three-dimensional bounding box at gaze location within extended-reality environment, based on position and optical depth of gaze location; identify inlying pixels of extended-reality image lying within three-dimensional bounding box, based on optical depths of pixels in extended-reality image; compute probability of user focussing on given inlying pixel and generate probability distribution of probabilities computed for inlying pixels; identify at least one inlying pixel calibration target, based on probability distribution; and map position of calibration target to eye features, to update existing calibration model to generate new calibration model.
Disclosed is a gaze-tracking system (100) for use in head-mounted display apparatus (102, 200, 300). The gaze-tracking system comprises: illuminators (104); camera (106); and processor (108) configured to: illuminate illuminators in sequential manner; control camera to capture eye images of user's eye (308, 310) during illumination of illuminators; identify reflection(s) of illuminator in eye image; determine extent of deformation in shape of reflection(s) with respect to shape of illuminator; determine extent of displacement in position of reflection(s) with respect to position of illuminator; compute user-specific score for illuminator based on extents of deformation and displacement; select illuminator(s) based on user-specific scores; illuminate illuminator(s); control camera to capture eye image (306) of user's eye during illumination of illuminator(s); and detect gaze direction of user based upon relative position of pupil of user's eye with respect to reflections of illuminator(s) in eye image.
Disclosed is a system (100) for producing extended-reality images for a display apparatus (102). The system comprises camera(s) (104) and processor (106) communicably coupled to camera(s), wherein processor is configured to: control camera(s) to capture image(s) (202, 302, 402) representing test object (206, 306, 406) present in real-world environment, wherein test object is physically covered three- dimensionally with coded pattern (210, 310, 410); obtain information pertaining to three-dimensional geometry of coded pattern; analyze image(s) to identify first image segment representing part of coded pattern visible in image(s); determine virtual content (212, 312, 412) to be presented for test object, based on said part of coded pattern; process image(s) to generate extended-reality image(s) (204, 304, 404) in which virtual content is virtually superimposed over said part of the coded pattern, based on information pertaining to three-dimensional geometry of coded pattern.
Disclosed is an encoder (100, 302) for encoding images. The encoder comprises processor (102). The processor is configured to: receive, from display apparatus (200, 304), information indicative of at least one of: head pose of user, gaze direction of user; identify gaze location (X) in input image (400, 500), based on the at least one of: head pose, gaze direction; divide input image into first input portion (402, 502) and second input portion (404, 504), wherein first input portion includes and surrounds gaze location; and encode first input portion and second input portion at first compression ratio and at least one second compression ratio to generate first encoded portion and second encoded portion, respectively, wherein at least one second compression ratio is larger than first compression ratio.
H04N 19/167 - Position within a video image, e.g. region of interest [ROI]
H04N 19/146 - Data rate or amount of coded data at the encoder output
A61B 5/16 - Devices for psychotechnics; testing reaction times
H04N 13/383 - Tracking of viewers for gaze tracking, i.e. with detection of the axis of sight of the viewer's eyes
H04N 19/12 - Selection from among a plurality of transforms or standards, e.g. selection between discrete cosine transform [DCT] and sub-band transform, or selection between H.263 and H.264
H04N 19/17 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural or semantic portion of the video signal being the object or subject of the adaptive coding, the unit being an image region, e.g. an object
H04N 19/20 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using video object coding
H04N 19/29 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using video object coding involving scalability at the object level, e.g. video object layer [VOL]
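The foveated split in the encoder abstract above can be sketched by modelling compression ratio as simple decimation, which is only an illustrative stand-in; the abstract does not fix a codec, only that the peripheral portion gets the larger compression ratio.

```python
def encode_portions(first_portion, second_portion, first_ratio=1, second_ratio=4):
    """Sketch of the two-ratio encoding: the gaze-surrounding first
    portion is encoded at a low compression ratio, the peripheral second
    portion at a higher one. Compression is modelled as keeping every
    n-th sample; the claimed codec is unspecified."""
    assert second_ratio > first_ratio  # the claimed ordering of ratios
    return first_portion[::first_ratio], second_portion[::second_ratio]
```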
16.
DISPLAY APPARATUSES AND RENDERING SERVERS INCORPORATING PRIORITIZED RE-RENDERING
Disclosed is a display apparatus (100, 200) comprising means (102, 202) for tracking pose of user's head, light source(s) (104, 106, 204, 206) and processor (108, 208) configured to: process pose-tracking data to determine position, orientation, velocity and acceleration of head; predict viewpoint and view direction of user in extended-reality environment; determine region of extended-reality environment to be presented, based on viewpoint and view direction; determine sub-region(s) of region whose rendering information is to be derived from previous rendering information of corresponding sub-region(s) of previously-presented region of extended-reality environment; generate rendering information of sub-region(s) based on previous rendering information; send, to rendering server (110, 212), information indicating remaining sub-regions required to be re-rendered and pose information indicating viewpoint and view direction; receive, from rendering server, rendering information of remaining sub-regions; merge rendering information of sub-region(s) and rendering information of remaining sub-regions to generate image(s); and display image(s) via light source(s).
G09G 5/00 - Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
G09G 3/00 - Control arrangements or circuits, of interest only for the display, using visual indicators other than cathode-ray tubes
17.
DISPLAY APPARATUS AND METHOD OF ENHANCING APPARENT RESOLUTION USING LIQUID-CRYSTAL DEVICE
A display apparatus (100, 200) includes an image renderer (102, 104, 202, 204, 304) per eye; a liquid-crystal device (106, 108, 206, 208) including a liquid-crystal structure (112, 114, 212, 214, 302, 400) and a control circuit (116, 118, 216, 218), the liquid-crystal structure being arranged in front of image-rendering surface (306) of image renderer, wherein liquid-crystal structure is to be electrically controlled, via control circuit, to shift light emanating from a given pixel of the image renderer to a plurality of positions in a sequential and repeated manner; and at least one processor (110, 210) configured to render a sequence of output image frames via the image renderer, wherein a shift in the light emanating from the given pixel of the image renderer to the plurality of positions causes a resolution of the output image frames to appear higher than a display resolution of the image renderer.
G09G 3/36 - Control arrangements or circuits, of interest only for the display, using visual indicators other than cathode-ray tubes for presentation of an assembly of a number of characters, e.g. a page, by composing the assembly by combination of individual elements arranged in a matrix by control of light from an independent source using liquid crystals
G09G 3/20 - Control arrangements or circuits, of interest only for the display, using visual indicators other than cathode-ray tubes for presentation of an assembly of a number of characters, e.g. a page, by composing the assembly by combination of individual elements arranged in a matrix
G09G 3/00 - Control arrangements or circuits, of interest only for the display, using visual indicators other than cathode-ray tubes
G02B 27/00 - Optical systems or apparatus not provided for by any of the groups
18.
LED-BASED DISPLAY APPARATUS AND METHOD INCORPORATING SUB-PIXEL SHIFTING
Disclosed is a display apparatus (100, 200) comprising: image renderer (102, 202, 302) comprising light-emitting diodes (108, 110, 112, 208, 210, 212, 314, 316, 318) that are to be employed as sub-pixels of image renderer; liquid-crystal device (104, 204) comprising liquid-crystal structure (114, 214) and control circuit (116, 216), wherein liquid-crystal structure is arranged in front of light-emitting diodes of image renderer, wherein liquid-crystal structure is to be electrically controlled, via control circuit, to shift light emanating from light-emitting diode to target positions on image plane according to shifting sequence in repeated manner; and processor(s) (106, 206) configured to render output sequence of output image frames via image renderer, wherein shift in light emanating from light-emitting diode to target positions causes resolution of output image frames to appear higher than display resolution of image renderer.
G09G 3/00 - Control arrangements or circuits, of interest only for the display, using visual indicators other than cathode-ray tubes
G02B 27/00 - Optical systems or apparatus not provided for by any of the groups
19.
DISPLAY APPARATUS AND METHOD INCORPORATING ADAPTIVE POSE LOCKING
Disclosed is display apparatus (100, 200) comprising pose-tracking means (102, 202); image renderer (104, 106, 204, 206) per eye; liquid-crystal device (108, 110, 208, 210) comprising liquid-crystal structure (114, 116, 214, 216) and control circuit (118, 120, 218, 220); and processor (112, 212). Processor is configured to: process pose-tracking data to determine user's head pose; detect if rate at which head pose is changing is below predefined threshold rate; if yes, switch on lock mode, select head pose for session of lock mode, and generate output image frames according to head pose during session; if no, generate output image frames according to corresponding head poses of user using pose-tracking data; and display output image frames, whilst shifting light emanating from pixels of image renderer to multiple positions (P1-P9) in sequential and repeated manner, said shifting causes resolution of output image frames to appear higher than display resolution of image renderer.
G09G 3/00 - Control arrangements or circuits, of interest only for the display, using visual indicators other than cathode-ray tubes
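The adaptive pose-locking rule of the abstract above reduces to a simple threshold test on the rate of pose change; the function below is a sketch with assumed names, returning which pose to render from and whether lock mode is on.

```python
def select_pose(pose_rate, current_pose, locked_pose, threshold):
    """Sketch of the lock-mode rule: while the head pose is changing
    slower than the predefined threshold rate, render from the fixed
    pose selected for the lock-mode session; otherwise follow the
    tracked pose. Returns (pose_to_use, lock_mode_on)."""
    if pose_rate < threshold:
        return locked_pose, True    # lock mode switched on
    return current_pose, False
```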
20.
DISPLAY APPARATUS AND METHOD OF CORRECTING IMAGE DISTORTION THEREFOR
Disclosed is display apparatus (100, 200, 300) comprising first (102, 202) and second (104, 204) displays or projectors that display first and second images for first (302) and second (304) eyes, respectively; first (306) and second (308) portions facing first and second eyes, respectively, first and second portions having first (F-F') and second (S-S') optical axes, respectively; means (106, 206) for tracking positions and orientations of first and second eyes relative to corresponding optical axes, respectively; and processor (108, 208). The processor or external processor (110, 226) obtains current relative positions and orientations of both eyes; determines first and second transformations for first and second input image frames, respectively, given transformation being applied to correct apparent per-pixel distortions produced when given input image frame is displayed; and applies first and second transformations to generate first and second distortion-corrected image frames, respectively, wherein processor renders first and second distortion-corrected image frames.
Disclosed is system (100, 200) for presenting notifications on display device (202) and external computing device (204). The display device comprises image renderer (206), external visual indicator (208) and first processor (210), the external computing device comprises display (212) and second processor (214). The system comprises first (102A, 216) and second clients (102B, 218) executing on first processor, third client (102C, 220) executing on second processor, first, second and third clients are configured to generate and render first, second and third user interfaces on image renderer, external visual indicator and display, respectively; and control server (222). The control server is configured to obtain information; detect whether or not notification is to be presented; determine notification type and content; and select clients from amongst plurality of clients and send content on selected clients, wherein selected clients are configured to generate and render their respective user interfaces to present notification substantially simultaneously.
Disclosed is a display apparatus (100, 200) comprising image renderer (102, 202), camera (104, 204) and processor (106, 206). The processor or external processor (108, 208) communicably coupled to said processor is configured to: render at least one extended-reality image (302) during first mode of operation of display apparatus; determine second mode of operation to which display apparatus is to be switched; control camera to capture at least one real-world image (304) of real-world environment; generate at least one composite image (306, 500) from at least one next extended-reality image and at least one real-world image, wherein first portion (306A, 500A) of at least one composite image is derived from at least one next extended-reality image, and second portion (306B, 500B) of at least one composite image is derived from at least one real-world image; and render at least one composite image during second mode of operation of display apparatus.
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
G06F 3/03 - Arrangements for converting the position or the displacement of a member into a coded form
G06F 3/0481 - Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
23.
DISPLAY APPARATUS AND METHOD USING PROJECTION MATRICES TO GENERATE IMAGE FRAMES
Disclosed is display apparatus (100, 200) comprising display or projector (102, 202), means (104, 204) for tracking position and orientation of user's head, and processor (106, 206) coupled to display or projector and said means. The processor or external processor (108, 208) communicably coupled to processor is configured to obtain head-tracking data indicative of position and orientation; process head-tracking data to determine current position and orientation of user's head and velocity and/or acceleration with which position and orientation is changing; predict first and second position and orientation of user's head at time t1 and t2, respectively; determine first and second projection matrices to be applied to three-dimensional image data of given frame, respectively; and apply first and second projection matrices to said image data to generate first image frame and second image frame, respectively. Processor is configured to render first and second image frame at time t1 and t2, respectively.
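The pose-prediction step of the abstract above (determining velocity and/or acceleration and predicting head position at times t1 and t2) can be sketched with a constant-acceleration motion model; the abstract does not fix the motion model, so this is an assumption.

```python
def predict_position(p, v, a, dt):
    """Constant-acceleration prediction of head position per axis:
    p + v*dt + 0.5*a*dt**2. Sketch of the prediction used to build the
    projection matrices for times t1 and t2; the claimed model is not
    specified beyond using position, velocity and acceleration."""
    return tuple(pi + vi * dt + 0.5 * ai * dt * dt
                 for pi, vi, ai in zip(p, v, a))
```

Calling it twice, with dt equal to t1 and t2 minus the current time, yields the two predicted poses from which the first and second projection matrices are determined.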
An input device (100, 200, 300, 400, 504) including first sensor (102, 202, 204, 206, 302, 402, 506) that measures first sensor data indicative of at least one of: pressure applied to input device, presence or absence of given object in proximity of input device, distance of given object from input device, position and orientation of input device; and processor (104, 208, 310, 508) coupled to first sensor. Processor processes first sensor data to determine state of input device, said state indicating whether or not input device is lying on given object; obtains, from user device, context information pertaining to visual scene being presented to user; and controls input device to operate in first mode of operation or second mode of operation based on said state and context information. Input device acts as computer mouse during first mode of operation and as six-degrees-of-freedom controller during second mode of operation.
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
G06F 3/0346 - Pointing devices displaced or positioned by the user; Accessories therefor with detection of the orientation or free movement of the device in a three-dimensional [3D] space, e.g. 3D mice, six-degrees-of-freedom [6-DOF] pointing devices using gyroscopes, accelerometers or tilt sensors
G06F 3/0354 - Pointing devices displaced or positioned by the user; Accessories therefor with detection of 2D relative movements between the pointing device, or an operating part thereof, and a plane or surface, e.g. 2D mice, trackballs, pens or pucks
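The mode-selection rule of the input device abstract above can be sketched as a small decision function. The mode names and the context key are illustrative assumptions; the abstract only specifies that the state (lying on a given object or not) and the scene context together select between mouse and 6-DOF operation.

```python
def select_mode(lying_on_surface, context):
    """Pick the operating mode from device state plus visual-scene context.

    lying_on_surface: True when the first-sensor data indicates the input
    device is resting on a given object (e.g. a desk).
    context: information from the user device about the visual scene,
    here reduced to whether the scene calls for 3D interaction.
    """
    if lying_on_surface and not context.get("scene_is_3d", False):
        return "MOUSE"    # first mode of operation: acts as a computer mouse
    return "SIX_DOF"      # second mode: acts as a six-degrees-of-freedom controller
```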
Disclosed is a direct retina projection apparatus comprising means (102, 802) for detecting a gaze direction of a user, a projector (104, 806), an optical element (106, 808), a reflective element (108, 810), an actuator (110) and a processor (112). The processor is configured to determine, based upon the gaze direction of the user, a portion of the optical element at or through which the user is gazing. The processor is configured to render an image via the projector, whilst adjusting an orientation of the reflective element, via the actuator, to reflect a projection of the image from the reflective element towards the optical element according to the detected gaze direction. A projection of at least a portion of the rendered image is to be reflected from the reflective element towards the portion of the optical element from where the projection of the at least a portion of the rendered image is directed towards a fovea of the user's eye.
Disclosed is a display apparatus (100, 200, 300) comprising means (102, 202) for detecting gaze direction of user; processor (104, 204) configured to process input image based upon gaze direction to generate first and second images; first image renderer (106, 206, 304) and second image renderer (108, 208, 306, 602, 704, 804, 904, 1004) to render first and second images; optical combiner (110, 210, 308); first array (112, 212, 310) of micro-prisms (500) to split light emanating from second image renderer into multiple directions to produce multiple projections of second image; optical element (114, 214, 312) to direct said multiple projections towards optical combiner; and optical shutter (116, 216, 314) arranged between optical element and optical combiner, wherein optical shutter allows given portion of said multiple projections to pass, whilst blocking remaining portion of said multiple projections. The optical combiner optically combines projection of first image with given portion of said multiple projections, to produce on image plane (302) output image (400) having spatially-variable angular resolution. The processor controls optical shutter based upon detected gaze direction, whilst first and second images are being rendered.
Disclosed is system (100, 200) for processing images for display apparatus, the system being communicably coupled to display apparatus (102, 202), display apparatus comprising first image renderer (104, 204, 406) and second image renderer (106, 206), system comprises image source (108, 208, 408) and processor (110, 210), wherein image source produces input image (300), processor of system being configured to: process input image to generate first image (302, 402) such that first region of first image is blurred and its intensity is reduced with respect to intensity of corresponding region of input image; and process input image to generate second image, second image corresponding to cropped region of input image, intensity of second image being adjusted according to intensity of aforesaid first region; wherein processor of system or processor (112, 212) of display apparatus renders first and second images at first and second image renderers, respectively, projections of rendered first and second images being optically combined such that projection of rendered second image overlaps with projection of first region of first image.
Disclosed is system (100, 200) for producing images for display apparatus (102, 202). The system comprises image source (104, 204) to obtain input image and processor (106, 206). The processor is configured to obtain information of gaze direction of user, determine region of interest of input image based on gaze direction, and process input image to generate first image (302) and second image (304). First image comprises first region (302A) that is blurred with respect to region of interest. Second image corresponds to region of interest. Processor adjusts intensity of pixels within first region of first image and intensity of pixels within second image. When intensity of given pixel within region of interest is lower than or equal to predefined intensity threshold, intensity of a corresponding pixel (306A) within first region of first image is lower than intensity of corresponding pixel (308A) within second image.
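The pixel-intensity rule in the abstract above can be sketched per pixel as below. This is a hypothetical illustration: the split ratios are assumptions, chosen only so that, for a region-of-interest pixel at or below the threshold, the first image's pixel ends up dimmer than the second image's pixel while their optically combined sum still reproduces the original intensity.

```python
def adjust_intensities(roi_intensity, threshold, dim_ratio=0.25):
    """Return (first_image_pixel, second_image_pixel) for one ROI pixel.

    For dark pixels (<= threshold) the first (blurred, low-detail) image
    carries less of the intensity than the second (region-of-interest) image.
    """
    if roi_intensity <= threshold:
        first = roi_intensity * dim_ratio   # dimmed pixel in the first image
        second = roi_intensity - first      # remainder carried by the second image
    else:
        first = roi_intensity * 0.5         # brighter pixels split evenly (assumed)
        second = roi_intensity - first
    return first, second
```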
Disclosed is display apparatus (100) comprising configuration of gaze sensors (102, 204); gaze predictor module (104, 206) configured to process sensor data collected by aforesaid configuration to determine current gaze location and gaze velocity and/or acceleration, and to predict gaze location and gaze velocity and/or acceleration of user; image processing module (106, 208) configured to process input image for generating first image having first resolution and second image having second resolution, second resolution being higher than first resolution; first image renderer (108) and second image renderer (110) that render first and second image, respectively; optical combiner (112) for optically combining projections of first and second images; and image steering unit (114) configured to determine region of optical combiner onto which projection of second image is to be focused, and to make adjustment to focus projection of second image on said region; wherein second image renderer is switched off or dimmed during adjusting phase of image steering unit when said unit is making adjustment.
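The blanking behaviour at the end of the abstract above can be sketched as a per-frame drive rule. The function and parameter names are illustrative assumptions; the abstract only specifies that the second (higher-resolution) image renderer is switched off or dimmed while the image steering unit is still adjusting.

```python
def second_renderer_drive(steering_adjusting, nominal_brightness, dim_factor=0.0):
    """Brightness to drive the second image renderer with for the current frame.

    dim_factor = 0.0 switches the renderer off during adjustment;
    a value in (0, 1) merely dims it.
    """
    if steering_adjusting:
        return nominal_brightness * dim_factor
    return nominal_brightness
```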
Disclosed is a display apparatus (100, 200, 300) comprising at least one first display (102, 202, 302) having a first display resolution; at least one second display (104, 204, 304) having a second display resolution; at least one exit optical element (106, 206, 306); and at least one optical combiner (108, 208, 308) to be employed to optically combine a projection of the first image with a projection of the second image. The at least one optical combiner comprises a first optical element (108A, 208A, 308A) having a reflective surface obliquely facing the at least one exit optical element, the reflective surface having an outwardly-curved shape.
Disclosed is a display apparatus (100, 200) comprising at least one image renderer (102, 202); light sources (104, 106; 204); controllable scanning mirrors (108, 110; 206); at least two actuators (112A, 112B; 114A, 114B; 208, 210) associated with the controllable scanning mirrors; means (116) for detecting gaze direction of user; and a processor (118) communicably coupled to the aforementioned components. The processor is configured to: (a) obtain an input image and determine region of visual accuracy thereof; (b) process the input image to generate a context image (214, 302) and a focus image (216, 304); (c) determine a focus area (218, 308) within a projection surface (212, 306) over which the focus image is to be drawn; (d) render the context image; (e) draw the focus image; and (f) control the actuators to align the controllable scanning mirrors. The processor is configured to perform (d), (e) and (f) substantially simultaneously, and optically combine a projection of the drawn focus image with a projection of the rendered context image to create a visual scene.
Disclosed is a display apparatus (100, 200, 500) comprising at least one light source (102, 104; 202; 506) per eye; at least one controllable scanning mirror (106, 108; 204; 508; 602) per eye; means (110) for detecting gaze direction of user; and a processor (112) communicably coupled to the aforementioned components. The processor is configured to (a) obtain an input image (304, 406) and determine region of visual accuracy thereof; (b) generate pixel data corresponding to at least a first region (208, 406A) and a second region (210, 406B) of the input image, wherein the second region substantially corresponds to the region of visual accuracy of the input image, while the first region substantially corresponds to a remaining region of the input image, wherein the first region has a first resolution, while the second region has a second resolution, the second resolution being higher than the first resolution; and (c) control the at least one light source and the at least one controllable scanning mirror to draw the aforementioned regions.
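Step (b) of the abstract above can be sketched under assumed data layouts: the input image is a 2-D list, the second region is a full-resolution crop around the region of visual accuracy, and the first region is a 2x-downsampled copy of the whole image. The crop coordinates and downsampling factor are illustrative, not from the source.

```python
def split_regions(image, roi):
    """Split an image into a low-resolution first region and a
    full-resolution second region.

    roi = (top, left, height, width) of the region of visual accuracy.
    """
    top, left, h, w = roi
    # Second region: full-resolution crop of the region of visual accuracy.
    second = [row[left:left + w] for row in image[top:top + h]]
    # First region: keep every second pixel in both axes (lower resolution).
    first = [row[::2] for row in image[::2]]
    return first, second
```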
Disclosed is an imaging system (102, 200) for producing images to be displayed to user via a head-mounted display apparatus (104, 210) that comprises means for detecting gaze direction of user. The imaging system comprises a first outer camera (202) and a second outer camera (204), at least one inner camera (110, 206), and processor (112) coupled to aforesaid cameras and means for detecting gaze direction. The processor is configured to (i) obtain inter-pupillary distance of user with respect to user's gaze at infinity; (ii) receive detected gaze direction; (iii) control first outer camera, second outer camera and at least one inner camera to capture first outer image, second outer image and at least one inner image (406A, 406B) of a scene; and (iv) process first outer image and inner image to generate first view of the scene, and process second outer image and inner image to generate second view of the scene, based upon inter-pupillary distance and detected gaze direction.
H04N 13/344 - Displays for viewing with the aid of special glasses or head-mounted displays [HMD] with head-mounted left-right displays
H04N 13/117 - Transformation of image signals corresponding to virtual viewpoints, e.g. spatial image interpolation, the virtual viewpoint locations being selected by the viewers or determined by viewer tracking
H04N 13/383 - Viewer tracking for gaze tracking, i.e. detecting the lines of sight of the viewer's eyes
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
34.
SYSTEM AND METHOD OF ENHANCING USER'S IMMERSION IN MIXED REALITY MODE OF DISPLAY APPARATUS
Disclosed is a system (102) and a method for enhancing a user's immersion in a mixed reality mode of a head-mounted display apparatus (104), the system being at least communicably coupled to the aforesaid display apparatus. The system comprises at least one camera (106, 300) communicably coupled to a processor (108). The processor is configured to: control said camera to capture a sequence of images of a real-world environment; analyse the sequence of images to identify spatial geometry of real objects in the real-world environment and material categories to which the real objects belong; process the sequence of images to generate a sequence of mixed-reality images, based upon the spatial geometry and material category of at least one real object that is represented by at least one virtual object in the sequence of mixed-reality images, wherein visual behaviour of the at least one virtual object emulates at least one material property associated with the material category of the at least one real object; and render the sequence of mixed-reality images.
G06F 3/0481 - Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from the processing unit to the output unit, e.g. interface arrangements
G06T 19/00 - Manipulating 3D models or images for computer graphics
Disclosed is a gaze-tracking system for use in a head-mounted display apparatus, and a method of tracking a user's gaze, via such a gaze-tracking system. The gaze-tracking system comprises illuminators for emitting light pulses to illuminate a user's eye; a camera for capturing an image of reflections of the light pulses from the user's eye, the camera comprising photo-sensitive elements arranged into a chip, wherein a first surface of the chip bulges inwards in a substantially-curved shape, such that a focal plane of photo-sensitive elements positioned proximally to edges of the chip is farther away than a focal plane of photo-sensitive elements positioned substantially at a center portion of the chip, the first surface facing the user's eye; and a processor being configured to control operations of the illuminators and the camera, and to process the captured image to detect a gaze direction of the user.
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
A61B 3/113 - Apparatus for the optical examination of the eyes; Apparatus for the clinical examination of the eyes of the objective-measurement type, i.e. instruments for examining the eyes independently of the patient's perceptions or reactions, for determining or recording eye movement
A61B 3/14 - Arrangements specially adapted for eye photography
H04N 13/383 - Viewer tracking for gaze tracking, i.e. detecting the lines of sight of the viewer's eyes
Disclosed is a gaze-tracking system for use in a head-mounted display apparatus and an aperture device. The gaze-tracking system comprises a first light source and a second light source operable to emit light of first and second type respectively; an image sensor operable to capture an image of the user's eye and reflections of the light of first type from the user's eye; a primary lens; an aperture device positioned between the image sensor and the primary lens, and a processor configured to control the first and second light sources and image sensor, and to process the captured image to detect a gaze direction of the user. The aperture device provides a first aperture to the light of first type and a second aperture to the light of second type, the first aperture and the second aperture being substantially concentric, the first aperture being smaller than the second aperture.
Disclosed is a gaze-tracking system for use in a head-mounted display apparatus, and a method of tracking a user's gaze, via such a gaze-tracking system. The gaze-tracking system comprises a plurality of illuminators for emitting light pulses to illuminate a user's eye when the head-mounted display apparatus is worn by the user, the illuminators comprising at least a first illuminator and a second illuminator; at least one lens positioned on an optical path of reflections of the light pulses from the user's eye, the at least one lens having no chromatic-aberration correction; at least one camera for capturing an image of the reflections of the light pulses; and a processor coupled with the illuminators and the at least one camera, the processor being configured to control operations of the illuminators and the at least one camera, and to process the captured image to detect a gaze direction of the user.
Disclosed is a display apparatus and method of displaying, via the display apparatus. The display apparatus comprises at least one image renderer for rendering an image; an exit optical element through which a projection of the rendered image exits the display apparatus to be incident upon a user's eye, when the display apparatus is head-mounted by the user; means for providing visual cues, the visual cues being provided in a peripheral region, the peripheral region substantially surrounding a viewport of the exit optical element; and a processor coupled to the at least one image renderer and the means for providing the visual cues, wherein the processor is configured to generate a drive signal based at least partially upon a region of the rendered image that is not visible in the viewport of the exit optical element, and to control, via the drive signal, the means for providing the visual cues.
Disclosed is a display apparatus and a method of displaying, via the display apparatus. The display apparatus comprises an image source (102), a processor (104) configured to render an image at the image source, at least one optical combiner (112) for combining a projection of the rendered image with a projection of a real-world image, a first polarizing element (108) for polarizing the projection of the real-world image at a first polarization orientation, wherein the first polarizing element is positioned on a first side of the at least one optical combiner upon which the projection of the real-world image is incident, and a second polarizing element (110) facing a second side of the at least one optical combiner, polarization properties of the second polarizing element being adjustable, wherein the polarization properties of the second polarizing element are to be adjusted with respect to the first polarization orientation of the first polarizing element.
Disclosed is a display apparatus and a method of displaying, via the display apparatus. The display apparatus comprises an image source for rendering an image, a projection screen facing a direction at a predefined angle to a direction in which the rendered image is projected from the image source, an exit optical element facing the projection screen, a first polarizing element facing the image source and arranged to polarize a projection of the rendered image at a first polarization orientation, a first optical element arranged to reflect the polarized projection towards the projection screen, wherein the projection screen is arranged to unpolarize the polarized projection whilst reflecting the unpolarized projection towards the exit optical element, and a second polarizing element positioned between the projection screen and the exit optical element, arranged to polarize the unpolarized projection at a second polarization orientation. The second polarization orientation is different from the first polarization orientation.
Disclosed is a display apparatus, and a method of displaying via the display apparatus. The display apparatus comprises at least one context image renderer for rendering a context image, at least one focus image renderer for rendering a focus image, at least one first optical combiner for combining the projection of the rendered context image with the projection of the rendered focus image to form a combined projection, and at least one second optical combiner for combining the combined projection with a projection of a real-world image. An angular width of a projection of the rendered context image ranges from 40 degrees to 220 degrees. An angular width of a projection of the rendered focus image ranges from 5 degrees to 60 degrees.
Disclosed is a gaze-tracking system for a head-mounted display apparatus. The gaze-tracking system comprises a first set of illuminators for emitting light to illuminate a user's eye; at least one photo sensor for sensing reflections of the light from the user's eye; at least one actuator for moving at least one of the first set of illuminators and the at least one photo sensor; and a processor coupled with the first set of illuminators, the at least one photo sensor and the at least one actuator. The processor is configured to collect and process sensor data from the at least one photo sensor to detect a gaze direction of the user, and to control the at least one actuator to adjust, based upon the detected gaze direction, a position of the at least one of the first set of illuminators and the at least one photo sensor.
Disclosed are a display apparatus and a method of displaying, via the display apparatus and a portable electronic device. The display apparatus comprises at least one focus display, means for detecting a gaze direction, and a processor coupled thereto. The display apparatus is arranged to be detachably attached to and be communicably coupled with the portable device, the portable device comprising a display and a processor coupled thereto. The processor of the display apparatus or the processor of the portable device is configured to obtain and process an input image to generate context and focus images. The processor of the display apparatus renders the focus image at the focus display, whilst the processor of the portable device renders the context image at the display thereof. The projections of the rendered context and focus images are optically combined to create a visual scene.
Disclosed is an imaging system and a method of producing a context image and a focus image for a display apparatus, via the imaging system. The imaging system comprises at least one imaging sensor per eye of a user, and a processor coupled thereto. The processor is configured to control the imaging sensors to capture at least one image of a real world environment, and is arranged to be communicably coupled with the display apparatus. Furthermore, the processor is configured to: receive, from the display apparatus, information indicative of a gaze direction of the user; determine a region of visual accuracy of the at least one image, based upon the gaze direction of the user; process the at least one image to generate the context image and the focus image; and communicate the generated context image and the generated focus image to the display apparatus.
Disclosed is a display apparatus and method of displaying, via the display apparatus. The display apparatus comprises context image renderer for rendering context image; focus image renderer for rendering focus image; exit optical element; and optical combiner for optically combining projection of the rendered context image with projection of the rendered focus image to create visual scene. The optical combiner comprises first semi-transparent reflective element; and a second semi-transparent reflective element. The context image renderer is arranged in a manner that projection of rendered context image is incident upon first semi-transparent reflective element and reflected towards exit optical element therefrom. The focus image renderer is arranged in a manner that projection of rendered focus image is incident upon first semi-transparent reflective element and reflected towards second semi-transparent reflective element, and then reflected towards first semi-transparent reflective element, from where projection thereof is allowed to pass through towards exit optical element.
Disclosed is a gaze-tracking system for use in a head-mounted display apparatus. The gaze-tracking system comprises means for producing structured light comprising a plurality of illuminators for emitting light pulses. Furthermore, the gaze-tracking system comprises at least one camera for capturing an image of reflections of the structured light from the user's eye, wherein the image is representative of a form of the reflections and a position of the reflections on an image plane of the at least one camera. Moreover, the gaze-tracking system comprises a processor configured to control the means for producing the structured light to illuminate the user's eye with the structured light and to control the at least one camera to capture the image of the reflections of the structured light, and to process the captured image to detect a gaze direction of the user.
Disclosed is a display apparatus. The display apparatus comprises at least one context image renderer for rendering a context image, wherein an angular width of a projection of the rendered context image ranges from 40 degrees to 220 degrees; at least one focus image renderer for rendering a focus image, wherein an angular width of a projection of the rendered focus image ranges from 5 degrees to 60 degrees; and at least one optical combiner for combining the projection of the rendered context image with the projection of the rendered focus image to create a visual scene, wherein the visual scene is to be created in a manner that at least two different optical distances are provided therein.
Disclosed is an imaging system and a method of producing images for a display apparatus, via the imaging system. The imaging system comprises at least one focusable camera for capturing at least one image of a given real-world scene; means for generating a depth map or a voxel map of the given real-world scene; and a processor coupled to the focusable camera and the aforesaid means. The processor is communicably coupled with the display apparatus. The processor is configured to: receive information of the gaze direction of the user; map the gaze direction to the depth map or the voxel map to determine an optical depth of a region of interest in the given real-world scene; and control the focusable camera to employ a focus length that is substantially similar to the determined optical depth of the region of interest when capturing the at least one image.
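The focusing logic in the abstract above (map the gaze direction to the depth map, then set the focus length to the optical depth found there) can be sketched as below. The pixel-coordinate mapping and the small averaging window are assumptions for illustration; the abstract does not specify how the gaze direction is resolved against the depth map.

```python
def focus_depth_from_gaze(depth_map, gaze_px, window=1):
    """Estimate the optical depth of the region of interest.

    depth_map: 2-D list of optical depths of the real-world scene.
    gaze_px:   (x, y) pixel the gaze direction maps to on the depth map.
    window:    half-width of the averaging neighbourhood (0 = single pixel).
    """
    gx, gy = gaze_px
    samples = []
    for y in range(max(0, gy - window), min(len(depth_map), gy + window + 1)):
        row = depth_map[y]
        for x in range(max(0, gx - window), min(len(row), gx + window + 1)):
            samples.append(row[x])
    return sum(samples) / len(samples)
```

The focusable camera would then be driven to a focus length substantially similar to the returned depth before capturing the image.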
Disclosed is a display apparatus and method of displaying, via the display apparatus. The display apparatus comprises context image renderer for rendering context image; focus image renderer for rendering focus image; exit optical element; and optical combiner for optically combining projection of the rendered context image with projection of the rendered focus image to create visual scene. The optical combiner comprises first semi-transparent reflective element; and a second semi-transparent reflective element. The context image renderer is arranged in a manner that the projection of rendered context image passes through first semi-transparent reflective element towards exit optical element. The focus image renderer is arranged in a manner that projection of the rendered focus image passes through the first semi-transparent reflective element towards the second semi-transparent reflective element, and reflected therefrom towards the first semi-transparent reflective element, and is then reflected from the first semi-transparent reflective element towards the exit optical element.