A method including: receiving colour images, depth images, and viewpoint information; dividing 3D space occupied by real-world environment into 3D grid(s) of voxels (204); creating 3D data structure(s) comprising nodes, each node representing corresponding voxel; dividing colour image and depth image into colour tiles and depth tiles, respectively; mapping colour tile to voxel(s) whose colour information is captured in colour tile, based on depth information captured in corresponding depth tile and viewpoint from which colour image and depth image are captured; and storing, in node representing voxel(s), reference information indicative of unique identification of colour tile that captures colour information of voxel(s) and corresponding depth tile that captures depth information, along with viewpoint information indicative of viewpoint from which colour image and depth image are captured.
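The reference-information storage described above can be sketched as follows. This is an illustrative sketch only, not part of the original disclosure: the `Node` class, tile identifiers, and viewpoint layout are assumed names, and the mapping from tiles to voxels is taken as given.

```python
# Illustrative sketch of storing tile reference information in voxel nodes.
# All names (Node, store_reference, tile identifiers) are assumptions.

class Node:
    """One node of the 3D data structure, representing one voxel."""
    def __init__(self):
        # Each entry pairs a colour tile with its corresponding depth tile
        # and the viewpoint from which both images were captured.
        self.tile_refs = []

def store_reference(nodes, voxel_ids, colour_tile_id, depth_tile_id, viewpoint):
    """Store, in every node whose voxel is captured by the given
    colour/depth tile pair, the unique tile identifiers and the viewpoint."""
    for voxel_id in voxel_ids:
        nodes.setdefault(voxel_id, Node()).tile_refs.append(
            {"colour_tile": colour_tile_id,
             "depth_tile": depth_tile_id,
             "viewpoint": viewpoint}
        )

nodes = {}
store_reference(nodes, [(2, 0, 5), (2, 0, 6)], "C_17_03", "D_17_03",
                {"position": (0.0, 1.6, 0.0), "yaw_deg": 90.0})
print(len(nodes))                                    # 2
print(nodes[(2, 0, 5)].tile_refs[0]["colour_tile"])  # C_17_03
```

A real implementation would derive `voxel_ids` from the depth tile and the capture viewpoint, as the method specifies.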
A method including: receiving visible-light images captured using camera(s) and depth data corresponding to said images; identifying image segments of visible-light image that represent objects or their parts belonging to different material categories; detecting whether at least two adjacent image segments in visible-light image pertain to at least two different material categories related to same object category; and when it is detected that at least two adjacent image segments pertain to at least two different material categories related to same object category, identifying at least two adjacent depth segments of depth data corresponding to at least two adjacent image segments; and correcting errors in optical depths represented in at least one of at least two adjacent depth segments, based on optical depths represented in remaining of at least two adjacent depth segments.
G06T 7/593 - Depth or shape recovery from multiple images from stereo images
G06V 10/22 - Image preprocessing by selection of a specific region containing or referencing a pattern; Locating or processing of specific regions to guide the detection or recognition
G06V 10/764 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
3.
Tracking system and method incorporating selective control of light sources of controller
A tracking system for use in head-mounted device (HMD) includes light sources arranged spatially around user-interaction controller(s); controller-pose-tracking means arranged in user-interaction controller(s); HMD-pose-tracking means; camera(s) arranged on portion of HMD that faces real-world environment in which HMD is in use; and processor(s) configured to estimate relative pose, based on controller-pose-tracking data and HMD-pose-tracking data; determine sub-set of light sources, based on estimated relative pose and arrangement of light sources; selectively control light sources such that light sources of sub-set are activated, whereas remaining light sources are deactivated; process at least one image, captured by camera(s), to identify operational state of light source(s) of sub-set that is visible in image(s), wherein image(s) is indicative of actual relative pose; and correct estimated relative pose to determine actual relative pose, based on operational state.
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
H05B 47/125 - Controlling the light source in response to determined parameters by determining the presence or movement of objects or living beings by using cameras
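The light-source sub-set determination in the tracking-system entry above can be sketched as a visibility test. Modelling each light source by an outward-facing normal, and treating a source as potentially visible when its normal faces the HMD, is an illustrative assumption; the entry only requires that the sub-set be chosen from the estimated relative pose and the arrangement of the sources.

```python
# Sketch (assumption): activate only sources whose outward normal has a
# positive component along the controller-to-HMD direction; deactivate the rest.

def select_visible_sources(source_normals, controller_to_hmd_dir):
    """Return indices of the light sources to activate."""
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))
    return [i for i, n in enumerate(source_normals)
            if dot(n, controller_to_hmd_dir) > 0.0]

# Four sources facing +x, -x, +y, -y; the HMD lies along +x from the controller.
normals = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0)]
active = select_visible_sources(normals, (1.0, 0.0, 0.0))
print(active)  # [0]
```

Deactivating the non-visible remainder saves power and reduces stray detections, which is the apparent motivation for selective control.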
4.
Utilising focussing of light sources in image for controller tracking
A controller-tracking system includes: camera(s) arranged on head-mounted display (HMD); one or more light sources arranged on controller(s) to be tracked, the controller(s) being associated with the HMD; and processor(s). The light sources provide light having wavelength(s). The processor(s) are configured to: receive image(s) representing controller(s); identify image segment(s) in image(s) that represents light source(s); determine level of focussing of light source(s) in image(s), based on characteristics associated with pixels of image segment(s); determine distance between camera(s) and light source(s), based on level of focussing, intrinsic parameters of camera(s), and reference focussing distance corresponding to wavelength of light provided by light source(s); and determine pose of controller(s) in global coordinate space of real-world environment, using distance between camera(s) and light source(s).
Disclosed is a headphone adapter (100, 204) having: first element (102) having first part (104) and second part (106) attached to first part, wherein recess (108) is defined between first part and second part; second element (110) having first end (112), second end (114) opposite to first end, and at least one third part (116) extending between first end and second end, wherein second element is rotatably attached to first element at first end; and third element (118) having fourth part (120) and attachment parts (122, 124) extending from fourth part, wherein fourth part is rotatably attached to second element at second end, and wherein headphone adapter, in use, enables attachment of a headphone device (216, 302) to a head-mounted device (208, 306) such that headband (206, 304) of head-mounted device passes through the recess defined in first element, and headband (214) of headphone device is attached to third element.
An imaging system including: a first camera and a second camera corresponding to a first eye and a second eye of a user, respectively; and processor(s) configured to: control the first camera and the second camera to capture a sequence of first images and a sequence of second images, respectively; and apply motion blur correction to one of a given first image and a given second image, whilst applying at least one of: defocus blur correction, image sharpening, contrast enhancement, edge enhancement to another of the given first image and the given second image.
An imaging system including a first camera and a second camera corresponding to a first eye and a second eye of a user, respectively; and at least one processor. The at least one processor is configured to control the first camera and the second camera to capture a sequence of first images and a sequence of second images of a real-world environment, respectively; and apply a first extended depth-of-field correction to one of a given first image and a given second image, whilst applying at least one of: defocus blur correction, image sharpening, contrast enhancement, edge enhancement to another of the given first image and the given second image.
An imaging system including processor(s) and data repository. Processor(s) are configured to: receive images of region of real-world environment that are captured by cameras using at least one of: different exposure times, different sensitivities, different apertures; receive depth maps of region that are generated by depth-mapping means; identify different portions of each image that represent objects located at different optical depths; create set of depth planes corresponding to each image; warp depth planes of each set to match perspective of new viewpoint corresponding to which output image is to be generated; fuse sets of warped depth planes corresponding to two or more images to form output set of warped depth planes; and generate output image from output set of warped depth planes.
A computer-implemented method including: receiving visible-light images captured from viewpoints using visible-light camera(s); creating 3D model of real-world environment, wherein 3D model stores colour information pertaining to 3D points on surfaces of real objects (204); dividing 3D points into groups of 3D points, based on at least one of: whether surface normals of 3D points in group lie within predefined threshold angle from each other, differences in materials of real objects, differences in textures of surfaces of real objects; for group of 3D points, determining at least two of visible-light images in which group of 3D points is captured from different viewpoints, wherein said images are representative of different surface irradiances of group of 3D points; and storing, in 3D model, information indicative of different surface irradiances.
An imaging system includes first camera having negative distortion; second camera, second field of view of second camera being wider than first field of view of first camera, wherein first field of view fully overlaps with portion of second field of view, second camera having negative distortion at said portion and positive distortion at remaining portion; and processor(s) configured to: capture first image and second image; determine overlapping image segment and non-overlapping image segment of second image; and generate output image from first image and second image, wherein: inner image segment of output image is generated from at least one of: first image, overlapping image segment, and peripheral image segment of output image is generated from non-overlapping image segment.
H04N 23/951 - Computational photography systems, e.g. light-field imaging systems, by using several images to influence resolution, frame rate or aspect ratio
G06T 3/00 - Geometric image transformations in the plane of the image
An imaging system includes first camera; second camera, second field of view of second camera being wider than first field of view of first camera, wherein first field of view overlaps with portion of second field of view; and processor(s) configured to: capture first images and second images, wherein overlapping image segment and non-overlapping image segment of second image correspond to said portion and remaining portion of second field of view; determine blurred region(s) (B1, B2) of first image; and generate output image in manner that: inner image segment of output image is generated from: region(s) of overlapping image segment that corresponds to blurred region(s) of first image, and remaining region of first image that is not blurred, and peripheral image segment of output image is generated from non-overlapping image segment.
Disclosed is a system (100) comprising devices (102 a, 102 b, 200, 300, 304), each device comprising active illuminator, active sensor and processor, wherein processor (108 a) of device (102 a, 300) is configured to: project pattern of light onto surroundings (302), whilst detecting reflections of pattern of light off surroundings; determine shapes of surfaces present in surroundings and distances of surfaces from pose of device; obtain pattern information indicative of other pattern(s) of light projected by other active illuminator(s) (104 b) of other device(s) (102 b, 304); detect reflections of other pattern(s) from pose of device; determine relative pose of other device(s) with respect to pose of device; and send, to server (110, 208), surface information indicative of shapes and distances of surfaces from pose of device, along with pose information indicative of relative pose of other device(s) with respect to pose of device.
G06T 7/73 - Determining position or orientation of objects or cameras using feature-based methods
G06T 7/521 - Depth or shape recovery from the projection of structured light
G06V 10/60 - Extraction of image or video features relating to illumination properties, e.g. using a reflectance or lighting model
G01B 11/25 - Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures by projecting a pattern, e.g. moiré fringes, on the object
Disclosed is a computer-implemented method comprising: tracking positions and orientations of devices (104, 106, 204a-204f, A-F, 402, 404) within real-world environment (300), each device comprising active sensor(s) (108, 110, 206a-206f); classifying devices into groups, based on positions and orientations of devices within real-world environment, wherein a group has devices whose active sensors are likely to interfere with each other; and controlling active sensors of devices in the group to operate by employing multiplexing.
Disclosed is a system (100, 200) comprising server (102, 202, 308) and data repository (104, 204, 310) storing three-dimensional (3D) environment model, wherein server is configured to: receive, from client device (106, 206, 300), first image(s) of real-world environment captured by camera(s) (108, 208, 302) of client device, along with information indicative of first measured pose of client device measured by pose-tracking means (110, 210, 304) of client device; utilise 3D environment model to generate first reconstructed image(s) from perspective of first measured pose; determine first spatial transformation indicative of difference in first measured pose and first actual pose of client device; calculate first actual pose, based on first measured pose and first spatial transformation; and send information indicative of at least one of: first actual pose, first spatial transformation, to client device for enabling client device to calculate subsequent actual poses.
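The pose-correction step above can be sketched as follows, reducing poses to 3D translations for simplicity (an assumption; real poses also carry orientation). The spatial transformation is the offset that maps the measured pose onto the actual pose, and the client can reuse it for subsequent measured poses.

```python
# Sketch (assumption): translation-only poses; the spatial transformation is
# an additive offset found by comparing captured and reconstructed image(s).

def apply_spatial_transformation(measured_pose, spatial_transformation):
    """actual pose = measured pose corrected by the spatial transformation."""
    return tuple(m + t for m, t in zip(measured_pose, spatial_transformation))

first_measured = (1.00, 1.50, 2.00)
transformation = (0.05, -0.02, 0.00)
first_actual = apply_spatial_transformation(first_measured, transformation)
print(first_actual)
```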
An encoder for encoding images includes at least one processor configured to: transform a given input pixel of an input image having (x, y) coordinates in a Cartesian coordinate system into a given transformed pixel of a transformed image having (ρ, θ) coordinates in a log-polar coordinate system, using a log-polar transformation in which a radial distance (ρ) of the given transformed pixel is a logarithm of a distance of the given input pixel from an origin in the Cartesian coordinate system, and an angular distance (θ) of the given transformed pixel is a sum of an arctangent of a slope of a line connecting the given input pixel to the origin and a function of the radial distance; encode the transformed image, by employing a compression algorithm, into an encoded image; and send the encoded image to a display apparatus for subsequent decoding thereat.
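The log-polar transformation described above can be sketched per pixel as follows. Using `atan2` for the arctangent of the slope, and a zero offset function of the radial distance by default, are illustrative assumptions not fixed by the entry.

```python
import math

# Sketch of the per-pixel log-polar transformation. The offset function of
# rho is left as a caller-supplied assumption (zero by default).

def log_polar(x, y, origin=(0.0, 0.0), offset=lambda rho: 0.0):
    """Map Cartesian (x, y) to log-polar (rho, theta): rho is the logarithm
    of the distance from the origin; theta is the arctangent of the slope of
    the line connecting the pixel to the origin, plus a function of rho."""
    dx, dy = x - origin[0], y - origin[1]
    rho = math.log(math.hypot(dx, dy))
    theta = math.atan2(dy, dx) + offset(rho)
    return rho, theta

rho, theta = log_polar(math.e, 0.0)
print(rho, theta)  # 1.0 0.0
```

A non-zero offset function makes θ depend on ρ, which spirals the angular coordinate with radius; the compression step that follows is independent of this mapping.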
H04N 19/60 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
H04N 19/167 - Position within a video image, e.g. region of interest [ROI]
A method implemented by a server communicably coupled to at least two devices, each device including camera(s), the devices being present within same real-world environment. The method includes: receiving, from the devices, images captured by respective cameras of the devices; identifying one of the devices whose camera has camera parameter(s) better than camera parameter(s) of camera of another of the devices; training neural network using images captured by camera of the one of the devices as ground truth material and using images captured by camera of the another of the devices as training material; generating correction information to correct images captured by camera of the another of the devices using trained neural network; and correcting the images captured by the camera of the another of the devices by utilising the correction information at the server, or sending the correction information to the another of the devices for correcting the images.
H04N 5/12 - Devices in which the synchronising signals are only operative if a phase difference occurs between synchronising and synchronised scanning devices, e.g. flywheel synchronising
G06T 7/521 - Depth or shape recovery from the projection of structured light
G06T 7/80 - Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
18.
SYSTEMS AND METHODS FOR VISUALLY INDICATING STALE CONTENT IN ENVIRONMENT MODEL
A system includes server(s) configured to: receive plurality of images of real-world environment captured by camera(s); process the plurality of images to detect plurality of objects present in the real-world environment and generate a three-dimensional environment model of the real-world environment; classify each of the objects as either a static object or a dynamic object; receive current image(s) of the real-world environment; process the current image(s) to detect object(s); determine whether or not the object(s) is/are from amongst the plurality of objects; determine whether the object(s) is a static object or a dynamic object, when it is determined that the object(s) is/are from amongst the plurality of objects; and for each dynamic object that is represented in the three-dimensional environment model but not in the current image(s), apply a first visual effect to a representation of the dynamic object in the three-dimensional environment model for indicating staleness of the representation.
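The staleness determination above reduces to finding dynamic objects present in the model but absent from the current image(s). A minimal sketch, with all names (object ids, category strings, the function itself) as illustrative assumptions:

```python
# Sketch: dynamic objects in the environment model that are missing from the
# current detections are the ones to which a first visual effect would be
# applied to indicate staleness.

def find_stale_dynamic_objects(model_objects, current_detections):
    """model_objects: {object_id: "static" | "dynamic"};
    current_detections: set of object ids detected in the current image(s).
    Returns the ids of stale dynamic objects."""
    return sorted(
        obj_id
        for obj_id, category in model_objects.items()
        if category == "dynamic" and obj_id not in current_detections
    )

model = {"wall": "static", "chair": "dynamic", "person": "dynamic"}
print(find_stale_dynamic_objects(model, {"wall", "person"}))  # ['chair']
```

Static objects are deliberately excluded: their absence from one view does not imply the model representation is out of date.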
G06T 19/20 - Manipulating 3D models or images for computer graphics - Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
G06V 10/764 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
G06T 7/70 - Determining position or orientation of objects or cameras
G06T 17/00 - Three-dimensional [3D] modelling for computer graphics
Disclosed is a system (100) for implementing object-based camera calibration, the system comprising first camera (102) and processor(s) (104) configured to: detect occurrence of calibration event(s); obtain object information indicative of actual features of calibration objects;
capture image(s); detect features of at least portion of real-world object(s) represented in image(s) (200); identify calibration object(s) that match real-world object(s), based on comparison of detected features of at least portion of real-world object(s) with actual features of calibration objects; determine camera calibration parameters, based on differences between actual features of calibration object(s) and detected features of at least portion of real-world object(s); and correct distortion(s) in the image(s) and subsequent images captured by first camera using the camera calibration parameters.
An imaging system including: first camera and second camera; depth-mapping means; gaze-tracking means; and processor configured to: generate depth map of real-world scene; determine gaze directions of first eye and second eye; identify line of sight and conical region of interest; determine optical depths of first object and second object present in conical region; determine one of first camera and second camera having lesser occlusion in real-world scene; adjust optical focus of one of first camera and second camera to focus on one of first object and second object having greater optical depth, and adjust optical focus of another of first camera and second camera to focus on another of first object and second object; and capture first image(s) and second image(s) using adjusted optical focuses of cameras.
Disclosed is imaging system (100) comprising: first camera (102) and second camera (104); depth-mapping means (106); gaze-tracking means (108); and processor (110) configured to: generate depth map of real-world scene; determine gaze directions of first eye and second eye; identify line of sight (206) and conical region of interest (200); determine optical depths of first object (202) and second object (204) present in conical region; when first and second objects are placed horizontally opposite, adjust optical focuses of first and second cameras to focus on respective objects on same side as them; when first and second objects are placed vertically opposite, adjust optical focus of one camera corresponding to dominant eye to focus on object having greater optical depth, and adjust optical focus of another camera to focus on another object; and capture first image(s) and second image(s) using adjusted optical focuses of cameras.
H04N 13/239 - Image signal generators using stereoscopic image cameras using two 2D image sensors whose relative position is equal to or corresponds to the interocular distance
H04N 13/271 - Image signal generators wherein the generated image signals comprise depth maps or disparity maps
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
G06T 19/00 - Manipulating 3D models or images for computer graphics
H04N 13/254 - Image signal generators using stereoscopic image cameras in combination with electromagnetic radiation sources for illuminating the subject
H04N 13/344 - Displays for viewing with the aid of special glasses or head-mounted displays [HMD] with head-mounted left-right displays
G02B 27/00 - Optical systems or apparatus not provided for by any of the groups
G06V 20/20 - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING - Scene-specific elements in augmented reality scenes
H04N 23/45 - Cameras or camera modules comprising electronic image sensors; Control thereof for generating image signals from several image sensors of different type or operating in different modes, e.g. with a CMOS sensor for moving images in combination with a charge-coupled device [CCD]
G01S 3/00 - Direction-finders for determining the direction from which infrasonic, sonic, ultrasonic or electromagnetic waves, or particle emissions without directional characteristics, are being received
H04N 23/67 - Focus control based on electronic image sensor signals
H04N 23/90 - Arrangement of cameras or camera modules, e.g. of several cameras in television studios or sports stadiums
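The camera-focus assignment in the imaging-system entry above (objects placed horizontally opposite versus vertically opposite, with a dominant eye) can be sketched as a small decision function. The string encodings and function names are illustrative assumptions.

```python
# Sketch of the focus-assignment rules: horizontally opposite objects are
# focused by the camera on their own side; vertically opposite objects send
# the dominant-eye camera to the farther object.

def assign_focus(first_obj, second_obj, arrangement, dominant_eye="left"):
    """Return {"left": depth, "right": depth}: the optical depth at which
    each camera should focus. Objects are dicts with "depth" (metres) and,
    for the horizontal case, "side" ("left" or "right")."""
    if arrangement == "horizontal":
        # Each camera focuses on the object on the same side as it.
        by_side = {obj["side"]: obj["depth"] for obj in (first_obj, second_obj)}
        return {"left": by_side["left"], "right": by_side["right"]}
    if arrangement == "vertical":
        # Camera corresponding to the dominant eye takes the greater depth.
        far, near = sorted((first_obj["depth"], second_obj["depth"]),
                           reverse=True)
        other = "right" if dominant_eye == "left" else "left"
        return {dominant_eye: far, other: near}
    raise ValueError(arrangement)

print(assign_focus({"depth": 2.0, "side": "left"},
                   {"depth": 5.0, "side": "right"}, "horizontal"))
print(assign_focus({"depth": 2.0}, {"depth": 5.0}, "vertical", "right"))
```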
Disclosed is imaging system (100) comprising: first camera (102) and second camera (104); depth-mapping means (106); gaze-tracking means (108); and processor (110) configured to: generate depth map of real-world scene of real-world environment; determine gaze directions of first eye and second eye; identify line of sight (208) of user and conical region of interest (200) in real-world scene; determine optical depths of objects in conical region of interest, wherein at least first object (202), second object (204) and third object (206) from amongst objects are at different optical depths; adjust optical focus of one of first camera and second camera to focus on first object and second object in alternating manner, whilst adjusting optical focus of another of first camera and second camera to focus on third object; and capture images using adjusted optical focus of cameras.
H04N 13/239 - Image signal generators using stereoscopic image cameras using two 2D image sensors whose relative position is equal to or corresponds to the interocular distance
Disclosed is imaging system (100) comprising: first camera (102) and second camera (104); depth-mapping means (106); gaze-tracking means (108); and processor (110) configured to: generate depth map of real-world scene; determine gaze directions of first eye and second eye; identify line of sight (206) and conical region of interest (200); determine optical depths of first object (202) and second object (204) present in conical region; determine one of first camera and second camera having lesser occlusion in real-world scene; adjust optical focus of one of first camera and second camera to focus on one of first object and second object having greater optical depth, and adjust optical focus of another of first camera and second camera to focus on another of first object and second object; and capture first image(s) and second image(s) using adjusted optical focuses of cameras.
H04N 13/239 - Image signal generators using stereoscopic image cameras using two 2D image sensors whose relative position is equal to or corresponds to the interocular distance
H04N 13/383 - Viewer tracking for gaze tracking, i.e. detecting the lines of sight of the viewer's eyes
H04N 13/344 - Displays for viewing with the aid of special glasses or head-mounted displays [HMD] with head-mounted left-right displays
Disclosed is display apparatus (100, 204, 206, 208) comprising: light source(s) (102, 104); gaze-tracking means (106); and processor(s) (108) configured to: determine gaze directions of user's eyes; send, to rendering server (110, 202), information indicative of gaze direction determined at first time instant; receive image frame(s) generated according to gaze, and being optionally timestamped with second time instant; display image frame(s) at third time instant; determine time lag between any one of: first time instant and third time instant, or second time instant and third time instant; detect whether or not time lag exceeds first predefined threshold; when time lag exceeds first predefined threshold, switch on gaze-lock mode; select forward line of vision as fixed gaze direction; send, to rendering server, information indicative of fixed gaze; and receive image frames generated according to fixed gaze; and display image frames.
G09G 3/20 - Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes, for presentation of an assembly of a number of characters, e.g. a page, by composing the assembly by combination of individual elements arranged in a matrix
A computer-implemented method including: capturing visible-light images via visible-light camera(s) from view points in real-world environment, wherein 3D positions of view points are represented in coordinate system; dividing 3D space of real-world environment into 3D grid of convex-polyhedral regions; creating 3D data structure including nodes representing convex-polyhedral regions of 3D space; determining 3D positions of pixels of visible-light images based on 3D positions of view points; dividing each visible-light image into portions, wherein 3D positions of pixels of given portion of said visible-light image fall inside corresponding convex-polyhedral region; and storing, in each node, portions of visible-light images whose pixels' 3D positions fall inside corresponding convex-polyhedral region, wherein each portion of visible-light image is stored in corresponding node.
Disclosed is system (100, 300) for facilitating scalable shared rendering, comprising plurality of servers (102a-b, 302a-c) communicably coupled to each other, each server executing executable instance (304a-c) of rendering software (314), being communicably coupled to display apparatus(/es) (104a-c, 200, 306a-k), wherein when executed, rendering software causes each server to receive information indicative of poses of users of display apparatus(/es), utilise three-dimensional model(/s) of extended-reality environment to generate images from poses, send images to respective display apparatus(/es) for display, wherein at least one of plurality of servers is configured to detect when total number of display apparatuses to be served exceeds predefined threshold number, and employ new server and execute new executable instance of rendering software when predefined threshold number is exceeded, wherein new display apparatuses are served by new server, thereby facilitating scalable shared rendering.
G06F 9/50 - Allocation of resources, e.g. of the central processing unit [CPU]
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
G06F 3/0346 - Pointing devices displaced or positioned by the user; Accessories therefor with detection of the device orientation or free movement in a three-dimensional [3D] space, e.g. 3D mice, six-degrees-of-freedom [6-DOF] pointing devices using gyroscopes, accelerometers or tilt sensors
A63F 13/352 - Interconnection arrangements between game servers and game devices; Interconnection arrangements between game devices; Interconnection arrangements between game servers - Details of game servers involving special game server arrangements, e.g. regional servers connected to a national server or a plurality of servers managing partitions of the game
Disclosed is computer-implemented method comprising: capturing visible-light images via visible-light camera(s) (302) from view points in real-world environment, wherein 3D positions of view points are represented in coordinate system; dividing 3D space of real-world environment into 3D grid of convex-polyhedral regions; creating 3D data structure comprising nodes representing convex-polyhedral regions of 3D space; determining 3D positions of pixels of visible-light images based on 3D positions of view points; dividing each visible-light image into portions, wherein 3D positions of pixels of given portion of said visible-light image fall inside corresponding convex-polyhedral region; and storing, in each node, portions of visible-light images whose pixels' 3D positions fall inside corresponding convex-polyhedral region, wherein each portion of visible-light image is stored in corresponding node.
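The method above can be sketched using axis-aligned cubic cells as the simplest convex-polyhedral regions (an assumption; any convex polyhedra tiling 3D space would do). Each node stores the image portions whose pixels' 3D positions fall inside its region; all names are illustrative.

```python
import math

# Sketch: divide 3D space into a grid of cubic regions and store, in each
# node, per-image portions of pixels whose 3D positions fall inside it.

def region_of(position, cell_size=1.0):
    """Index of the grid region containing a 3D position."""
    return tuple(math.floor(c / cell_size) for c in position)

def build_data_structure(images):
    """images: list of (pixel_3d_positions, pixel_values), one pair per
    visible-light image. Returns {region_index: [(image_id, portion)]},
    one portion per image per region."""
    nodes = {}
    for image_id, (positions, values) in enumerate(images):
        portions = {}
        for pos, val in zip(positions, values):
            portions.setdefault(region_of(pos), []).append(val)
        for region, portion in portions.items():
            nodes.setdefault(region, []).append((image_id, portion))
    return nodes

image = ([(0.2, 0.2, 0.2), (0.8, 0.1, 0.3), (1.5, 0.2, 0.1)],
         ["px0", "px1", "px2"])
nodes = build_data_structure([image])
print(sorted(nodes))      # [(0, 0, 0), (1, 0, 0)]
print(nodes[(0, 0, 0)])   # [(0, ['px0', 'px1'])]
```

Storing portions per region lets later queries fetch only the image data relevant to one part of the scene.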
A display apparatus including: light source(s); gaze-tracking means; and processor(s) configured to: determine gaze directions of user's eyes; send, to rendering server, information indicative of gaze direction determined at first time instant; receive image frame(s) generated according to gaze, and being optionally timestamped with second time instant; display image frame(s) at third time instant; determine time lag between any one of: first time instant and third time instant, or second time instant and third time instant; detect whether or not time lag exceeds first predefined threshold; when time lag exceeds first predefined threshold, switch on gaze-lock mode; select forward line of vision as fixed gaze direction; send, to rendering server, information indicative of fixed gaze; and receive image frames generated according to fixed gaze; and display image frames.
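The gaze-lock decision above can be sketched as a time-lag check. The threshold value, millisecond units, and function names are illustrative assumptions; the entry also allows measuring the lag from the frame's generation timestamp instead of the send time.

```python
# Sketch: switch on gaze-lock mode when the lag between sending the gaze
# direction (first time instant) and displaying the resulting frame (third
# time instant) exceeds the first predefined threshold.

FIRST_PREDEFINED_THRESHOLD_MS = 50.0  # assumed value

def should_gaze_lock(gaze_sent_at_ms, displayed_at_ms,
                     threshold_ms=FIRST_PREDEFINED_THRESHOLD_MS):
    """True when the round-trip lag exceeds the threshold, in which case a
    fixed forward line of vision replaces the tracked gaze direction."""
    return (displayed_at_ms - gaze_sent_at_ms) > threshold_ms

print(should_gaze_lock(1000.0, 1030.0))  # False  (30 ms lag)
print(should_gaze_lock(1000.0, 1080.0))  # True   (80 ms lag)
```

Locking to the forward line of vision avoids rendering frames for a gaze direction that is stale by the time they are displayed.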
Disclosed is a display apparatus (100, 200, 300) comprising: light source(s) (102, 104, 202, 204) per eye, first tracking means (106, 206, 304), and processor(s) (108, 208) configured to: process first tracking data, collected by first tracking means, to determine location (404, 416) of display apparatus in real-world environment (400); obtain software application(s) (320, 322) that is available for location of display apparatus along with metainformation indicative of location (412, 420, 422, 424) in real-world environment with which software application(s) is/are associated; determine relative location of display apparatus with respect to location with which software application(s) is/are associated; execute software application(s) to create and overlay virtual content on image(s) (434) representing real-world environment, based on relative location of display apparatus with respect to location with which software application(s) is/are associated; and display image(s) via light source(s).
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
H04M 1/72457 - User interfaces specially adapted for cordless or mobile telephones with means for adapting the functionality of the device in specific circumstances according to geographic location
H04W 4/02 - Services making use of location information
30.
Systems and methods for facilitating scalable shared rendering
A system for facilitating scalable shared rendering, including plurality of servers communicably coupled to each other, each server executing executable instance of rendering software, being communicably coupled to display apparatus(/es), wherein when executed, rendering software causes each server to receive information indicative of poses of users of display apparatus(/es), utilise three-dimensional model(/s) of extended-reality environment to generate images from poses, send images to respective display apparatus(/es) for display, wherein at least one of plurality of servers is configured to detect when total number of display apparatuses to be served exceeds predefined threshold number, and employ new server and execute new executable instance of rendering software when predefined threshold number is exceeded, wherein new display apparatuses are served by new server, thereby facilitating scalable shared rendering.
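The scale-out rule (employ a new server with a new executable instance of the rendering software whenever the threshold number of display apparatuses is exceeded) can be sketched as below. The `RenderingCluster` class and its method names are assumptions for illustration only.

```python
class RenderingCluster:
    """Each element of `servers` stands in for one executable instance of the
    rendering software and holds the display apparatuses it currently serves."""

    def __init__(self, threshold):
        self.threshold = threshold  # predefined threshold number per server
        self.servers = [[]]

    def attach(self, apparatus):
        """Assign a newly connecting display apparatus; employ a new server
        (new rendering-software instance) when the current one is full.
        Returns the index of the serving server."""
        if len(self.servers[-1]) >= self.threshold:
            self.servers.append([])  # employ new server, new instance
        self.servers[-1].append(apparatus)
        return len(self.servers) - 1
```

A real system would balance by total load across all servers rather than filling them in order; this sketch only shows the threshold-triggered scale-out.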
An imaging system including: first camera and second camera; depth-mapping means; gaze-tracking means; and processor configured to: generate depth map of real-world scene of real-world environment; determine gaze directions of first eye and second eye; identify line of sight of user and conical region of interest in real-world scene; determine optical depths of objects in conical region of interest, wherein at least first object, second object and third object from amongst objects are at different optical depths; adjust optical focus of one of first camera and second camera to focus on first object and second object in alternating manner, whilst adjusting optical focus of another of first camera and second camera to focus on third object; and capture images using adjusted optical focus of cameras.
H04N 13/239 - Image signal generators using stereoscopic image cameras using two 2D image sensors whose relative position is equal or corresponds to the interocular distance
H04N 13/383 - Tracking of viewers for gaze tracking, i.e. with detection of the axis of vision of the viewer's eyes
H04N 13/246 - Image signal generators using stereoscopic image cameras; Calibration of the cameras
H04N 23/959 - Digital photographic systems, e.g. light-field imaging systems, for large-depth-of-field imaging by adjusting the depth of field during image capture, e.g. maximising or setting the range according to scene characteristics
A system including server(s) configured to: receive, from host device, visible-light images of real-world environment captured by visible-light camera(s); process visible-light images to generate three-dimensional (3D) environment model; receive, from client device, information indicative of pose of client device; utilise 3D environment model to generate reconstructed image(s) and reconstructed depth map(s); determine position of each pixel of reconstructed image(s); receive, from host device, current visible-light image(s); receive, from host device, information indicative of current pose of host device, or determine said current pose; determine, for pixel of reconstructed image(s), whether or not corresponding pixel exists in current visible-light image(s); replace initial pixel values of pixel in reconstructed image(s) with pixel values of corresponding pixel in current visible-light image(s), when corresponding pixel exists in current visible-light image(s); and send reconstructed image(s) to client device.
G06T 19/20 - Manipulating 3D models or images for computer graphics; Editing of three-dimensional [3D] images, e.g. changing shapes or colours, aligning objects or positioning parts
G06T 17/00 - Three-dimensional [3D] modelling for computer graphics
G06T 7/50 - Depth or shape recovery
G06T 7/70 - Determining position or orientation of objects or cameras
G06F 3/0346 - Pointing devices displaced or positioned by the user; Accessories therefor with detection of the orientation or free movement of the device in a three-dimensional [3D] space, e.g. 3D mice, six-degrees-of-freedom [6-DOF] pointing devices using gyroscopes, accelerometers or tilt sensors
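The pixel-replacement step of the reconstruction system above (replacing initial pixel values of the reconstructed image wherever a corresponding pixel exists in the current visible-light image) can be sketched as follows. The `refresh` function and the `mapping` structure, which records for each reconstructed pixel its corresponding current-image pixel or `None`, are illustrative assumptions.

```python
def refresh(reconstructed, current, mapping):
    """reconstructed, current: 2D lists of pixel values.
    mapping: dict (y, x) in reconstructed -> (y, x) in current, or None when
    no corresponding pixel exists in the current visible-light image.
    Returns a copy of `reconstructed` with corresponding pixels replaced."""
    out = [row[:] for row in reconstructed]
    for (ry, rx), src in mapping.items():
        if src is not None:  # corresponding pixel exists: take its values
            sy, sx = src
            out[ry][rx] = current[sy][sx]
    return out
```

In practice the correspondence would be computed by reprojecting the reconstructed depth map into the current camera pose rather than supplied as a precomputed dictionary.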
33.
GAZE-BASED NON-REGULAR SUBSAMPLING OF SENSOR PIXELS
Disclosed is an imaging system (100) comprising: image sensor (102, 704) comprising pixels arranged on photo-sensitive surface (300); and processor (104, 702) configured to: obtain information indicative of gaze direction of user's eye; identify gaze position on photo-sensitive surface; determine first region (302) and second region (304) on photo-sensitive surface, wherein first region includes and surrounds gaze position, while second region surrounds first region; read out first pixel data from each pixel of first region; select set of pixels to be read out from second region based on predetermined sub-sampling pattern; read out second pixel data from pixels of selected set; generate, from second pixel data, pixel data of remaining pixels of second region; and process first pixel data, second pixel data, and generated pixel data to generate image frame(s).
H04N 5/345 - Extracting pixel data from an image sensor by acting on the scanning circuits, e.g. by modifying the number of pixels sampled or to be sampled, by partially reading out an SSIS sensor array
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
34.
Gaze-based non-regular subsampling of sensor pixels
An imaging system including: image sensor including pixels arranged on photo-sensitive surface; and processor configured to: obtain information indicative of gaze direction of user's eye; identify gaze position on photo-sensitive surface; determine first region and second region on photo-sensitive surface, wherein first region includes and surrounds gaze position, while second region surrounds first region; read out first pixel data from each pixel of first region; select set of pixels to be read out from second region based on predetermined sub-sampling pattern; read out second pixel data from pixels of selected set; generate, from second pixel data, pixel data of remaining pixels of second region; and process first pixel data, second pixel data, and generated pixel data to generate image frame(s).
H04N 25/75 - Circuitry for providing, modifying or processing image signals from the pixel array
H04N 25/40 - Extracting pixel data from an image sensor by acting on the scanning circuits, e.g. by modifying the number of pixels sampled or to be sampled
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
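The gaze-based read-out of items 33 and 34 can be sketched on a toy frame: full read-out inside a first region around the gaze position, sub-sampled read-out elsewhere, and generation of the skipped pixels from sampled neighbours. The square first region, the checkerboard sub-sampling pattern, and the nearest-neighbour fill are assumptions chosen for brevity, not the patented pattern.

```python
def read_out(frame, gaze, r):
    """frame: 2D list of pixel values; gaze: (row, col) gaze position on the
    photo-sensitive surface; r: half-width of the square first region.
    Pixels in the first region are all read; in the second region only a
    checkerboard subset is read, and the rest are generated from a sampled
    horizontal neighbour."""
    h, w = len(frame), len(frame[0])
    gy, gx = gaze
    out = [[None] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            in_first = abs(y - gy) <= r and abs(x - gx) <= r
            if in_first or (x + y) % 2 == 0:  # read this pixel out
                out[y][x] = frame[y][x]
    # generate pixel data for the remaining (unread) second-region pixels
    for y in range(h):
        for x in range(w):
            if out[y][x] is None:
                out[y][x] = out[y][x - 1] if x > 0 else out[y][x + 1]
    return out
```

The point of the scheme is bandwidth: only the first region and the selected set in the second region cross the sensor interface; everything else is synthesised afterwards.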
35.
IMPROVED FOVEATION-BASED IMMERSIVE XR VIDEO ENCODING AND DECODING
Disclosed is an encoding method and a decoding method. The encoding method comprises generating curved image (202, 208, 300) by creating projection of visual scene onto inner surface of imaginary 3D geometric shape (200, 206) that is curved in at least one dimension; dividing curved image into input portion (302) and plurality of input rings (304, 306, 308); encoding input portion and input rings into first planar image (400) and second planar image (402), respectively, such that input portion is stored into first planar image, and input rings are packed into corresponding rows (404) of second planar image; and communicating, to display apparatus (704), first and second planar images and information indicative of sizes of input portion and input rings.
H04N 21/4728 - End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification or for manipulating displayed content, for selecting a region of interest [ROI], e.g. for requesting a higher-resolution version of a selected region
H04N 21/6587 - Control parameters, e.g. trick-play command or viewpoint selection
H04N 21/2343 - Processing of video elementary streams, e.g. splicing of video streams or manipulating MPEG-4 scene graphs, involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
H04N 21/4402 - Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to MPEG-4 scene graphs, involving reformatting operations of video signals for household redistribution, storage or real-time display
H04N 13/383 - Tracking of viewers for gaze tracking, i.e. with detection of the axis of vision of the viewer's eyes
H04N 5/232 - Devices for controlling television cameras, e.g. remote control
An encoding method and a decoding method. The encoding method includes generating curved image by creating projection of visual scene onto inner surface of imaginary 3D geometric shape that is curved in at least one dimension; dividing curved image into input portion and plurality of input rings; encoding input portion and input rings into first planar image and second planar image, respectively, such that input portion is stored into first planar image, and input rings are packed into corresponding rows of second planar image; and communicating, to display apparatus, first and second planar images and information indicative of sizes of input portion and input rings.
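The division of the curved image into a central input portion and concentric input rings, with each ring packed into its own row of the second planar image, can be sketched as below. The radial-distance test, the uniform ring width `dr`, and all names are illustrative assumptions; the actual projection geometry and packing order are defined by the patent, not reproduced here.

```python
import math


def ring_index(x, y, cx, cy, r0, dr):
    """None -> pixel belongs to the central input portion (radius <= r0);
    otherwise the index of the concentric input ring of width dr."""
    d = math.hypot(x - cx, y - cy)
    if d <= r0:
        return None
    return int((d - r0) // dr)


def pack(image, cx, cy, r0, dr, n_rings):
    """Pack the central portion into one buffer and each input ring into its
    own list, mirroring the row-per-ring layout of the second planar image."""
    centre, rows = [], [[] for _ in range(n_rings)]
    for y, row in enumerate(image):
        for x, px in enumerate(row):
            i = ring_index(x, y, cx, cy, r0, dr)
            if i is None:
                centre.append(px)
            else:
                rows[min(i, n_rings - 1)].append(px)
    return centre, rows
```

The decoder needs the sizes of the input portion and of each ring, which is why the method communicates that information alongside the two planar images.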
A display apparatus including display, display driver, and processor configured to send input signal to display driver, wherein first part and second part of input signal comprise first pixel data pertaining to portion of first image frame and second pixel data pertaining to second image frame, respectively, and first part of input signal further comprises extra pixel data pertaining to second image frame. Display driver is configured to: re-scale pixels of first pixel data based on display resolution; update pixels of second pixel data based on extra pixel data; generate control signal based on re-scaled pixels and updated pixels; and drive display using control signal to present visual scene, wherein re-scaled pixels surround updated pixels when displayed on display area.
G09G 3/20 - Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes, for presentation of an assembly of a number of characters, e.g. a page, by composing the assembly by combination of individual elements arranged in a matrix
38.
Display apparatuses and methods for facilitating location-based virtual content
A display apparatus including: light source(s) per eye, first tracking means, and processor(s) configured to: process first tracking data, collected by first tracking means, to determine location of display apparatus in real-world environment; obtain software application(s) that is available for location of display apparatus along with metainformation indicative of location in real-world environment with which software application(s) is/are associated; determine relative location of display apparatus with respect to location with which software application(s) is/are associated; execute software application(s) to create and overlay virtual content on image(s) representing real-world environment, based on relative location of display apparatus with respect to location with which software application(s) is/are associated; and display image(s) via light source(s).
An imaging system including visible-light camera(s), pose-tracking means, and processor(s). The processor(s) is/are configured to: control visible-light camera(s) to capture visible-light image, whilst processing pose-tracking data to determine pose of camera(s); obtain three-dimensional model of real-world environment; create occlusion mask, using three-dimensional model; cull part of virtual object(s) to generate culled virtual object(s), wherein virtual object(s) is to be embedded at given position in visible-light image; detect whether width of culled part or remaining part of virtual object(s) is less than predefined percentage of total width of virtual object(s); if width of culled part is less than predefined percentage, determine new position and embed entirety of virtual object(s) at new position to generate extended-reality image; and if width of remaining part is less than predefined percentage, cull entirety of virtual object(s).
G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
G06T 7/536 - Depth or shape recovery from perspective effects, e.g. by using vanishing points
G06T 7/586 - Depth or shape recovery from multiple images from multiple light sources, e.g. photometric stereo
An imaging apparatus including: an image sensor having a photo-sensitive surface; an optical device arranged on an optical path of light incident on the photo-sensitive surface, the optical device being electrically controllable to have a spatially variable focal length; and a processor configured to: generate and send a drive signal to the optical device to compensate for field curvature of the optical device by adjusting focal lengths of different portions of the optical device to different extents, wherein a focal length of a first portion of the optical device is greater than a focal length of a second portion of the optical device surrounding the first portion; and control the image sensor to capture a distorted image of a real-world environment.
H04N 5/262 - Studio circuits, e.g. for mixing, switching-over, change of character of image, for other special effects
G02B 7/04 - Mountings, adjusting means, or light-tight connections, for optical elements for lenses with mechanism for focusing or varying magnification
G02B 26/08 - Optical devices or arrangements for the control of light using movable or deformable optical elements for controlling the direction of light
G02B 27/00 - Optical systems or apparatus not provided for by any of the groups
G02F 1/29 - Devices or arrangements for the control of the intensity, colour, phase, polarisation or direction of light arriving from an independent light source, e.g. switching, gating or modulating; Non-linear optics for the control of the position or the direction of light beams, i.e. deflection
H04N 5/232 - Devices for controlling television cameras, e.g. remote control
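One simplistic way to model the spatially variable focal-length profile described in the imaging-apparatus abstract above, with a higher focal length in the central first portion than in the surrounding second portion, is a radially decreasing function. The specific profile, the constants `f_centre` and `k`, and the function name are all assumptions for illustration; the patent does not specify this form.

```python
def focal_length(x, y, cx, cy, f_centre, k):
    """Assumed radial focal-length profile over the optical device: highest at
    the centre (cx, cy) and falling off with squared distance, one possible
    way to counteract field curvature. f_centre, k are tuning constants."""
    r2 = (x - cx) ** 2 + (y - cy) ** 2
    return f_centre / (1.0 + k * r2)
```

A drive signal would sample such a profile per controllable portion of the optical device and set each portion's focal length accordingly.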
41.
TRACKING METHOD FOR IMAGE GENERATION, A COMPUTER PROGRAM PRODUCT AND A COMPUTER SYSTEM
The transmitted information from a gaze tracker camera to a control unit of a VR/AR system can be controlled by an image signal processor (ISP) for use with a camera arranged to provide a stream of images of a moving part of an object in a VR or AR system to a gaze tracking function of the VR or AR system, the image signal processor being arranged to receive a signal from the gaze tracking function indicating at least one desired property of the images and to provide the stream of images to the gaze tracking function according to the signal. The ISP may be arranged to provide the image as either a full view of the image with reduced resolution or a limited part of the image with a second resolution which is high enough to enable detailed tracking of the object.
The transmitted information from a gaze tracker camera to a control unit (19) of a VR/AR system (1) can be controlled by an image signal processor (ISP) (15) for use with a camera (14) arranged to provide a stream of images of a moving part of an object in a VR or AR system to a gaze tracking function of the VR or AR system, the image signal processor being arranged to receive a signal from the gaze tracking function indicating at least one desired property of the images and to provide the stream of images to the gaze tracking function according to the signal. The ISP may be arranged to provide the image as either a full view of the image with reduced resolution or a limited part of the image with a second resolution which is high enough to enable detailed tracking of the object.
A method of transmitting image data in an image display system, includes dividing the image data into framebuffers, and for each framebuffer: dividing the framebuffer into a number of vertical stripes, each stripe including one or more scanlines, dividing each vertical stripe into at least a first and a second block, each of the first and the second block comprising pixel data to be displayed in an area of the image, and storing first pixel data in the first block with a first resolution and second pixel data in the second block having a second resolution which is lower than the first resolution, transmitting the framebuffer over the digital display interface to a decoder unit, and unpacking the framebuffer, including upscaling the pixel data in the second block to compensate for the lower second resolution and optionally upscaling the pixel data in the first block.
H04N 19/59 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving spatial sub-sampling or interpolation, e.g. alteration of picture size or resolution
H04N 19/172 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural or semantic part of the video signal being the object or subject of the adaptive coding, the unit being an area of the picture, e.g. an object, the area being a picture, frame or field
G06T 3/40 - Scaling of a whole image or of part of an image
H04N 21/44 - Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to MPEG-4 scene graphs
H04N 21/4402 - Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to MPEG-4 scene graphs, involving reformatting operations of video signals for household redistribution, storage or real-time display
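The two-resolution framebuffer transmission of the method above (first block at full resolution, second block at a lower resolution, upscaled on unpacking) can be sketched for a single scanline. The factor-of-two downsampling, nearest-neighbour upscaling, and function names are illustrative assumptions.

```python
def pack_stripe(scanline, split):
    """Split a scanline into a first block kept at full resolution and a
    second block stored at half resolution (every other pixel dropped)."""
    first = scanline[:split]
    second = scanline[split::2]
    return first, second


def unpack_stripe(first, second):
    """Reassemble the scanline, upscaling the second block by pixel
    duplication to compensate for its lower resolution."""
    upscaled = [p for p in second for _ in (0, 1)]  # nearest-neighbour 2x
    return first + upscaled
```

Packing all stripes this way shrinks the framebuffer crossing the digital display interface, at the cost of detail in the second (typically peripheral) blocks.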
44.
Imaging systems and methods for facilitating local lighting
An imaging system including visible-light camera(s), depth sensor(s), pose-tracking means, and server(s) configured to: control visible-light camera(s) and depth sensor(s) to capture visible-light images and depth images of real-world environment, respectively, whilst processing pose-tracking data to determine poses of visible-light camera(s) and depth sensor(s); reconstruct three-dimensional lighting model of real-world environment representative of lighting in different regions of real-world environment; receive, from client application, request message comprising information indicative of location in real-world environment where virtual object(s) is to be placed; utilise three-dimensional lighting model to create sample lighting data for said location, wherein sample lighting data is representative of lighting at given location in real-world environment; and provide client application with sample lighting data.
H04N 13/254 - Image signal generators using stereoscopic image cameras in combination with electromagnetic radiation sources for illuminating the subject
An imaging system including: first camera and second camera; depth-mapping means; gaze-tracking means; and processor configured to: generate depth map of real-world scene; determine gaze directions of first eye and second eye; identify line of sight and conical region of interest; determine optical depths of first object and second object present in conical region; when first and second objects are placed horizontally opposite, adjust optical focuses of first and second cameras to focus on respective objects on same side as them; when first and second objects are placed vertically opposite, adjust optical focus of one camera corresponding to dominant eye to focus on object having greater optical depth, and adjust optical focus of another camera to focus on another object; and capture first image(s) and second image(s) using adjusted optical focuses of cameras.
A tracking method for tracking a target in a VR/AR system having a tracker function for determining the position of the target, includes obtaining a stream of images of the target, placing two or more markers in determined positions on the target in the image said markers being arranged to follow the movement of the determined positions, detecting the movement of the markers between two images in the stream of images, if the detected movement is within a set of consistency criteria, determining the position of the target based on the detected movement, and if the detected movement is outside the set of consistency criteria, activating the tracker function. This reduces the computation power required for tracking.
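The consistency check in the tracking method above, which falls back to the full tracker function only when marker movements disagree, can be sketched as follows. Treating "consistency criteria" as all markers moving by nearly the same translation vector is an assumption for illustration; the patent's criteria may differ.

```python
def marker_motion(prev, curr, tol):
    """prev, curr: lists of (x, y) marker positions in two consecutive images.
    If every marker moved by nearly the same vector (within tol of the first
    marker's move), return that common vector as the target's movement;
    otherwise return None to signal that the (more expensive) tracker
    function must be activated."""
    moves = [(cx - px, cy - py) for (px, py), (cx, cy) in zip(prev, curr)]
    ref = moves[0]
    consistent = all(abs(dx - ref[0]) <= tol and abs(dy - ref[1]) <= tol
                     for dx, dy in moves)
    return ref if consistent else None
```

The saving comes from the common case: while the markers agree, the target pose is updated from cheap marker deltas and the heavyweight tracker stays idle.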
A display apparatus comprising: first light source(s) per eye, scanning mirror(s) per eye, pattern converting element per eye, and processor(s) configured to control first light source(s) to emit a light beam, whilst controlling scanning mirror(s) to draw subframe(s) of first image frame over pattern converting element, wherein subframe(s), when drawn, comprises plurality of light spots arranged in first pattern, wherein pattern converting element is employed to direct light beam incident thereon towards target surface, whilst converting first pattern of plurality of light spots into second pattern, thereby producing on target surface output image having spatially-variable resolution.
G09G 5/38 - Control arrangements and circuits for visual indicators common to cathode-ray tube indicators and other visual indicators, characterised by display of individual graphic patterns using a bit-mapped memory, with means for controlling the display position
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
G02B 26/08 - Optical devices or arrangements for the control of light using movable or deformable optical elements for controlling the direction of light
G09G 3/00 - Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes
48.
Systems and methods employing multiple graphics processing units for producing images
A system for producing image frames for display at display device. The system includes graphics processing units including first graphics processing unit and second graphics processing unit that are communicably coupled to each other and pose-tracking means. Second graphics processing unit is configured to: process pose-tracking data, to determine device pose and velocity and/or acceleration with which device pose is changing; execute rendering application(s) to generate framebuffer data corresponding to image frame; and send, to first graphics processing unit, framebuffer data and information indicative of device pose and velocity and/or acceleration. First graphics processing unit is configured to: execute first compositing application to post-process framebuffer data, based at least on said information; and drive light source(s) using post-processed framebuffer data to display image frame.
An AR system is arranged to display an image stream of an environment with one or more virtual objects, each virtual object being associated with a marker in the image stream. The AR system includes a tracking subsystem arranged to track a first pose of the marker in the image, and inform a frame rendering subsystem, which generates a rendering of the VR object and provides the rendering to the reprojecting subsystem together with information about the first pose of the marker and information identifying a set of pixels included in the VR image. The tracking subsystem further determines a second pose of the marker based on detected movement and informs the reprojecting subsystem about the second pose. The reprojecting subsystem renders an image frame including the image stream of the environment with the rendering of the VR object reprojected in dependence of the second pose.
A system including: image sensor including pixels arranged on photo-sensitive surface thereof; and image signal processor configured to: receive, from image sensor, image signals captured by corresponding pixels; and process image signals to generate at least one image. When processing, image signal processor is configured to: determine at least one region within photo-sensitive surface that corresponds to image segment of at least one image over which blend object is to be superimposed; and selectively perform sequence of image signal processes on given image signal and control plurality of parameters employed therefor, based on whether a given pixel that is employed to capture given image signal lies in at least one region or remaining region within photo-sensitive surface.
G06T 7/70 - Determining position or orientation of objects or cameras
G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
G06T 5/50 - Image enhancement or restoration using two or more images, e.g. averaging or subtraction
Disclosed is a system (100, 200) comprising image sensor(s) (102, 202, 204) comprising a plurality of pixels arranged on a photo-sensitive surface (300) thereof; and image signal processor(s) (104, 206) configured to: receive, from image sensor(s), a plurality of image signals captured by corresponding pixels of image sensor(s); and process the plurality of image signals to generate at least one image, wherein, when processing, image signal processor(s) is configured to: determine, for a given image signal to be processed, a position of a given pixel on the photo-sensitive surface that is employed to capture the given image signal; and selectively perform a sequence of image signal processes on the given image signal and control a plurality of parameters employed for performing the sequence of image signal processes, based on the position of the given pixel.
A system including image sensor(s) including a plurality of pixels arranged on a photo-sensitive surface thereof; and image signal processor(s) configured to: receive, from image sensor(s), a plurality of image signals captured by corresponding pixels of image sensor(s); and process the plurality of image signals to generate at least one image, wherein, when processing, image signal processor(s) is configured to: determine, for a given image signal to be processed, a position of a given pixel on the photo-sensitive surface that is employed to capture the given image signal; and selectively perform a sequence of image signal processes on the given image signal and control a plurality of parameters employed for performing the sequence of image signal processes, based on the position of the given pixel.
Disclosed is a display apparatus (100) comprising: light source(s) (102); camera(s) (104); and processor(s) (106) configured to: display extended-reality image for presentation to user, whilst capturing eye image(s) of user's eyes; analyse eye image(s) to detect eye features; employ existing calibration model to determine gaze directions of user's eyes; determine gaze location of user; identify three-dimensional bounding box at gaze location within extended-reality environment, based on position and optical depth of gaze location; identify inlying pixels of extended-reality image lying within three-dimensional bounding box, based on optical depths of pixels in extended-reality image; compute probability of user focussing on given inlying pixel and generate probability distribution of probabilities computed for inlying pixels; identify at least one inlying pixel calibration target, based on probability distribution; and map position of calibration target to eye features, to update existing calibration model to generate new calibration model.
A display apparatus including: light source(s); camera(s); and processor(s) configured to: display extended-reality image for presentation to user, whilst capturing eye image(s) of user's eyes; analyse eye image(s) to detect eye features; employ existing calibration model to determine gaze directions of user's eyes; determine gaze location of user; identify three-dimensional bounding box at gaze location within extended-reality environment, based on position and optical depth of gaze location; identify inlying pixels of extended-reality image lying within three-dimensional bounding box, based on optical depths of pixels in extended-reality image; compute probability of user focussing on given inlying pixel and generate probability distribution of probabilities computed for inlying pixels; identify at least one inlying pixel calibration target, based on probability distribution; and map position of calibration target to eye features, to update existing calibration model to generate new calibration model.
Disocclusion in a VR/AR system may be handled by obtaining depth and color data for the disoccluded area from a 3D model of the imaged environment. The data may be obtained by raytracing and included in the image stream by the reprojecting subsystem.
Disclosed is a gaze-tracking system (100) for use in head-mounted display apparatus (102, 200, 300). The gaze-tracking system comprises: illuminators (104); camera (106); and processor (108) configured to: illuminate illuminators in sequential manner; control camera to capture eye images of user's eye (308, 310) during illumination of illuminators; identify reflection(s) of illuminator in eye image; determine extent of deformation in shape of reflection(s) with respect to shape of illuminator; determine extent of displacement in position of reflection(s) with respect to position of illuminator; compute user-specific score for illuminator based on extents of deformation and displacement; select illuminator(s) based on user-specific scores; illuminate illuminator(s); control camera to capture eye image (306) of user's eye during illumination of illuminator(s); and detect gaze direction of user based upon relative position of pupil of user's eye with respect to reflections of illuminator(s) in eye image.
An imaging system for correcting visual artifacts during production of extended-reality images for display apparatus. The imaging system includes at least first camera and second camera for capturing first image and second image of real-world environment, respectively; and processor(s) configured to: analyse first and second images to identify visual artifact(s) and determine image segment of one of first image and second image that corresponds to visual artifact(s); generate image data for image segment, based on at least one of: information pertaining to virtual object, other image segment(s) neighbouring image segment, corresponding image segment in other of first image and second image, previous extended-reality image(s), photogrammetric model of real-world environment; and process one of first image and second image, based on image data, to produce extended-reality image for display apparatus.
An imaging system including: first camera; N second cameras, optical axes of first and second cameras being arranged at an angle; and processor(s) configured to: control first camera and N second cameras to capture first image and N second images, wherein second field of view (FOV) is narrower than first FOV and overlaps with portion of first FOV; determine first overlapping portions (P, P′, A, A′, A″), second overlapping portion(s) (Q, B, B′, B″), and third overlapping portion (C) of first image; for given overlapping portion of first image, determine corresponding overlapping portion of at least one of N second images; and process corresponding overlapping portions of first and second images to generate corresponding portion of output image.
Disclosed is a system (100) for producing extended-reality images for a display apparatus (102). The system comprises camera(s) (104) and processor (106) communicably coupled to camera(s), wherein processor is configured to: control camera(s) to capture image(s) (202, 302, 402) representing test object (206, 306, 406) present in real-world environment, wherein test object is physically covered three- dimensionally with coded pattern (210, 310, 410); obtain information pertaining to three-dimensional geometry of coded pattern; analyze image(s) to identify first image segment representing part of coded pattern visible in image(s); determine virtual content (212, 312, 412) to be presented for test object, based on said part of coded pattern; process image(s) to generate extended-reality image(s) (204, 304, 404) in which virtual content is virtually superimposed over said part of the coded pattern, based on information pertaining to three-dimensional geometry of coded pattern.
A display apparatus including first and second light sources, gaze-tracking means, and processor(s) configured to: process gaze-tracking data to determine gaze direction; identify gaze region; determine first and second portions of gaze region; send, to rendering server, resolution information indicative of at least one of: gaze direction, gaze region, first and second portions of gaze region, different required resolutions of at least two input images; receive said input images comprising first input image(s) and second input image(s), from rendering server; process first input image(s) to generate first region of first output image and second region of second output image; process second input image(s) to generate remaining regions of first and second output images; and display first and second output images via first and second light sources.
A gaze-tracking system for use in head-mounted display apparatus. The gaze-tracking system includes: illuminators; camera; and processor configured to: illuminate illuminators in sequential manner; control camera to capture eye images of user's eye during illumination of illuminators; identify reflection(s) of illuminator in eye image; determine extent of deformation in shape of reflection(s) with respect to shape of illuminator; determine extent of displacement in position of reflection(s) with respect to position of illuminator; compute user-specific score for illuminator based on extents of deformation and displacement; select illuminator(s) based on user-specific scores; illuminate illuminator(s); control camera to capture eye image of user's eye during illumination of illuminator(s); and detect gaze direction of user based upon relative position of pupil of user's eye with respect to reflections of illuminator(s) in eye image.
An optical device. The optical device includes a fibre optic plate having an input surface and an output surface, the fibre optic plate including a plurality of optical fibres; and a colour filter array including a plurality of colour filters formed on at least one of: the input surface, the output surface of the fibre optic plate.
Disclosed is an encoder (100, 302) for encoding images. The encoder comprises processor (102). The processor is configured to: receive, from display apparatus (200, 304), information indicative of at least one of: head pose of user, gaze direction of user; identify gaze location (X) in input image (400, 500), based on the at least one of: head pose, gaze direction; divide input image into first input portion (402, 502) and second input portion (404, 504), wherein first input portion includes and surrounds gaze location; and encode first input portion and second input portion at first compression ratio and at least one second compression ratio to generate first encoded portion and second encoded portion, respectively, wherein at least one second compression ratio is larger than first compression ratio.
H04N 19/167 - Position within a video image, e.g. region of interest [ROI]
H04N 19/146 - Data rate or code amount at the encoder output
A61B 5/16 - Devices for psychotechnics; Testing reaction times
H04N 13/383 - Viewer tracking for gaze tracking, i.e. with detection of the axis of sight of the viewer's eyes
H04N 19/12 - Selection from a plurality of transforms or standards, e.g. selection between discrete cosine transform [DCT] and sub-band transform or selection between H.263 and H.264
H04N 19/17 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural or semantic portion of the video signal being the subject of the adaptive coding, the unit being an area of the picture, e.g. an object
H04N 19/20 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using video object coding
H04N 19/29 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using video object coding involving scalability at the object level, e.g. video object layer [VOL]
64.
Encoders, methods and display apparatuses incorporating gaze-directed compression ratios
An encoder for encoding images. The encoder includes processor. The processor is configured to: receive, from display apparatus, information indicative of at least one of: head pose of user, gaze direction of user; identify gaze location in input image, based on the at least one of: head pose, gaze direction; divide input image into first input portion and second input portion, wherein first input portion includes and surrounds gaze location; and encode first input portion and second input portion at first compression ratio and at least one second compression ratio to generate first encoded portion and second encoded portion, respectively, wherein at least one second compression ratio is larger than first compression ratio.
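A minimal sketch of the gaze-based division described above, assuming a square first portion of fixed size and illustrative compression ratios (the disclosed encoder does not specify either):

```python
# Split an input image into a first portion that includes and surrounds the
# gaze location, with the remainder forming the second portion, and assign a
# larger compression ratio to the second (peripheral) portion.

def divide_at_gaze(width, height, gaze_x, gaze_y, box=256):
    """Return the first input portion as a box clamped inside the image."""
    half = box // 2
    x0 = min(max(gaze_x - half, 0), max(width - box, 0))
    y0 = min(max(gaze_y - half, 0), max(height - box, 0))
    return (x0, y0, min(box, width), min(box, height))

def choose_ratios(first_ratio=2.0, second_ratio=10.0):
    """The second compression ratio must be larger than the first."""
    assert second_ratio > first_ratio
    return first_ratio, second_ratio

print(divide_at_gaze(1920, 1080, gaze_x=960, gaze_y=540))  # → (832, 412, 256, 256)
print(choose_ratios())
```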
A display apparatus including means for tracking pose of user's head, light source(s) and processor configured to: process pose-tracking data to determine position, orientation, velocity and acceleration of head; predict viewpoint and view direction of user in extended-reality environment; determine region of extended-reality environment to be presented, based on viewpoint and view direction; determine sub-region(s) of region whose rendering information is to be derived from previous rendering information of corresponding sub-region(s) of previously-presented region of extended-reality environment; generate rendering information of sub-region(s) based on previous rendering information; send, to rendering server, information indicating remaining sub-regions required to be re-rendered and pose information indicating viewpoint and view direction; receive, from rendering server, rendering information of remaining sub-regions; merge rendering information of sub-region(s) and rendering information of remaining sub-regions to generate image(s); and display image(s) via light source(s).
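The split between sub-regions whose rendering information can be derived locally and those requested from the rendering server can be sketched as follows; sub-region identifiers and data shapes are assumptions:

```python
# Partition a region's sub-regions into those reusable from the previous
# frame and those that must be re-rendered remotely, then merge the results.

def partition_subregions(subregions, reusable):
    cached = [s for s in subregions if s in reusable]
    to_rerender = [s for s in subregions if s not in reusable]
    return cached, to_rerender

def merge(cached_info, rerendered_info):
    merged = dict(cached_info)
    merged.update(rerendered_info)  # freshly re-rendered data takes precedence
    return merged

cached, pending = partition_subregions(["A", "B", "C"], reusable={"B"})
print(cached, pending)  # → ['B'] ['A', 'C']
```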
H04N 13/344 - Displays for viewing with the aid of special glasses or head-mounted displays, with head-mounted left and right displays
H04N 13/111 - Transformation of image signals corresponding to virtual viewpoints, e.g. spatial image interpolation
H04L 67/131 - Protocols for games, networked simulations or virtual reality
H04L 67/12 - Protocols specially adapted for proprietary or special-purpose networking environments, e.g. medical networks, sensor networks, networks in vehicles or remote metering networks
66.
DISPLAY APPARATUSES AND RENDERING SERVERS INCORPORATING PRIORITIZED RE-RENDERING
Disclosed is a display apparatus (100, 200) comprising means (102, 202) for tracking pose of user's head, light source(s) (104, 106, 204, 206) and processor (108, 208) configured to: process pose-tracking data to determine position, orientation, velocity and acceleration of head; predict viewpoint and view direction of user in extended-reality environment; determine region of extended-reality environment to be presented, based on viewpoint and view direction; determine sub-region(s) of region whose rendering information is to be derived from previous rendering information of corresponding sub-region(s) of previously-presented region of extended-reality environment; generate rendering information of sub-region(s) based on previous rendering information; send, to rendering server (110, 212), information indicating remaining sub-regions required to be re-rendered and pose information indicating viewpoint and view direction; receive, from rendering server, rendering information of remaining sub-regions; merge rendering information of sub-region(s) and rendering information of remaining sub-regions to generate image(s); and display image(s) via light source(s).
G09G 5/00 - Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
G09G 3/00 - Control arrangements or circuits, of interest only to the display using visual indicators other than cathode-ray tubes
67.
Display apparatuses and methods incorporating image masking
A display apparatus including light source(s), camera(s), head-tracking means, and processor configured to: obtain three-dimensional model of real-world environment; control camera(s) to capture given image of real-world environment, whilst processing head-tracking data obtained from head-tracking means to determine pose of user's head with respect to which given image is captured; determine region of three-dimensional model that corresponds to said pose of user's head; compare plurality of features extracted from region of three-dimensional model with plurality of features extracted from given image, to detect object(s) present in real-world environment; employ environment map of extended-reality environment to generate intermediate extended-reality image based on pose of user's head; embed object(s) in intermediate extended-reality image to generate extended-reality image; and display extended-reality image via light source(s).
A peripheral display apparatus that is wearable by a user. The peripheral display apparatus includes at least two light sources and at least one processor coupled to at least two light sources. The at least two light sources include a first light source and a second light source arranged at a first peripheral portion and a second peripheral portion of a field of view of user, respectively, first peripheral portion and second peripheral portion being positioned at opposite horizontal ends of field of view. The at least one processor or at least one external processor communicably coupled to at least one processor is configured to generate at least two images including a first image and a second image, wherein at least one processor is configured to display first image and second image simultaneously at first light source and second light source, respectively.
G06F 3/14 - Digital output to display device
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
G09G 3/00 - Control arrangements or circuits, of interest only to the display using visual indicators other than cathode-ray tubes
G06F 1/16 - ELECTRIC DIGITAL DATA PROCESSING - Details not covered by groups and - Constructional details or arrangements
H04N 5/247 - Arrangement of television cameras
69.
Display apparatus and method incorporating per-pixel shifting
A display apparatus including: image renderer having array of pixels; liquid-crystal device comprising: liquid-crystal structure, wherein portions of liquid-crystal structure are arranged in front of corresponding pixels of said array; and control circuit including circuit elements employed to electrically control corresponding portions of liquid-crystal structure to shift light emanating from corresponding pixels to corresponding target positions; and processor(s) configured to: generate individual drive signals for circuit elements, based on corresponding target positions to which light emanating from corresponding pixels are to be shifted upon passing through corresponding portions of liquid-crystal structure; and send individual drive signals to control circuit to drive circuit elements to address corresponding portions of liquid-crystal structure separately, whilst displaying output image frame via image renderer.
G09G 3/36 - Control arrangements or circuits, of interest only to the display using visual indicators other than cathode-ray tubes, for presentation of an assembly of a number of characters, e.g. a page, by composing the assembly by combination of individual elements arranged in a matrix, by controlling light from an independent source using liquid crystals
G09G 3/00 - Control arrangements or circuits, of interest only to the display using visual indicators other than cathode-ray tubes
09 - Scientific and electric apparatus and instruments
28 - Games, toys, sporting articles
42 - Scientific, technological and industrial services, research and design
Goods and services
Software; software programs for video games; virtual reality glasses; virtual reality hardware; augmented reality devices, namely headphone/microphone combinations and augmented reality glasses; virtual reality software; notebook computers; wireless headsets for mobile phones, smart phones and tablet computers; computer hardware; operating systems; computer software; peripherals adapted for use with computers; wearable computers; wearable computer peripherals; virtual reality game software; virtual reality headset; handheld virtual reality controllers; wearable digital electronic devices comprised of software and display screen; motion tracking sensors; displays and video cameras; computer peripherals for mobile devices for remotely accessing and transmitting data; computer peripherals for displaying data and video; carrying cases and holders for electronic equipment, namely, for computers, tablet computers, notebook computers, mobile phones, headsets, head mounted displays, mixed reality glasses, virtual reality glasses, augmented reality glasses, 3D spectacles; apparatus for recording, transmission or reproduction of sound or images or data; eyewear; 3D eyeglasses; spectacles [optics]; eyepieces; eyewear; optical goods; optical appliances; optical devices, namely eye pieces for helmet mounted displays; optical glasses; optical lenses; 3D spectacles; hologram apparatus; holographic apparatus; virtual reality motion simulators; simulation software; simulation apparatus; virtual reality and augmented reality software for simulation.

Hand-held units for playing electronic games; free-standing video games apparatus; handheld computer games; hand-held electronic games; portable games with liquid crystal displays; video game apparatus; arcade video game machines; home video game machines; video game joysticks.

Software development; computer software design; maintenance of software; development and testing of software; development of virtual reality software; design of virtual reality software; cloud hosting provider services; cloud storage services for electronic data; private cloud hosting provider service; software as a service; software as a service [SaaS] featuring software for virtual reality and augmented reality; design and development of engineering products; design and development of computer hardware and computer peripherals; design and development of wireless data transmission apparatus, instruments and equipment; product research and development; research and development services in the field of engineering.
71.
Display apparatus and method incorporating gaze-dependent display control
A display system including eye-tracking means, first image renderer, second image renderer, optical combiner and processor is described. Eye-tracking data is processed to determine gaze directions of user's eyes. Gaze location and gaze velocity and/or acceleration of user is determined based on gaze directions of user's eyes. It is detected whether or not gaze has been fixated for predefined time period based on gaze location and gaze velocity and/or acceleration of user. If gaze has been fixated, input image is processed to generate and render first image and second image substantially simultaneously. Projection of rendered first image and projection of rendered second image are combined optically by optical combiner to create extended-reality scene. If gaze has not been fixated, input image is processed to generate and render first image via first image renderer, whilst switching off or dimming second image renderer.
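The fixation test that drives the dual-renderer switching above can be sketched as follows; the sampling interval, velocity threshold, and time period are illustrative assumptions:

```python
# Gaze counts as fixated once its velocity has stayed below a threshold
# for a predefined time period; a saccade resets the run.

def is_fixated(velocities, dt=0.01, vel_threshold=30.0, min_duration=0.2):
    """velocities: per-sample gaze speeds (deg/s), oldest first."""
    run = 0
    for v in velocities:
        run = run + 1 if v < vel_threshold else 0
    return run * dt >= min_duration

print(is_fixated([5.0] * 25))           # → True: 0.25 s of slow gaze
print(is_fixated([5.0] * 10 + [80.0]))  # → False: saccade resets the run
```

Under this test, a True result would trigger rendering via both image renderers, and a False result would switch off or dim the second renderer.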
A display system including: display apparatus; display-apparatus-tracking means; input device; processor. The processor is configured to: detect input event and identify actionable area of input device; process display-apparatus-tracking data to determine pose of display apparatus in global coordinate space; process first image to identify input device and determine relative pose thereof with respect to display apparatus; determine pose of input device and actionable area in global coordinate space; process second image to identify user's hand and determine relative pose thereof with respect to display apparatus; determine pose of hand in global coordinate space; adjust poses of input device and actionable area and pose of hand such that adjusted poses align with each other; process first image, to generate extended-reality image in which virtual representation of hand is superimposed over virtual representation of actionable area; and render extended-reality image.
A system for producing extended-reality images for a display apparatus. The system includes camera(s) and processor communicably coupled to camera(s), wherein processor is configured to: control camera(s) to capture image(s) representing test object present in real-world environment, wherein test object is physically covered three-dimensionally with coded pattern; obtain information pertaining to three-dimensional geometry of coded pattern; analyze image(s) to identify first image segment representing part of coded pattern visible in image(s); determine virtual content to be presented for test object, based on said part of coded pattern; process image(s) to generate extended-reality image(s) in which virtual content is virtually superimposed over said part of the coded pattern, based on information pertaining to three-dimensional geometry of coded pattern.
G06T 19/00 - Manipulating 3D models or images for computer graphics
G06T 19/20 - Manipulating 3D models or images for computer graphics Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
An imaging system and a method for producing extended-reality images for a display apparatus. The imaging system includes camera and processor. The processor is configured to: control camera to capture image of real-world environment; analyse captured image to identify first image segment representing input device and to determine location of at least one actionable area of input device in first image segment; determine at least one functional element to be presented for the actionable area, the functional element being indicative of at least one of: functionality of the at least one actionable area, status of the at least one actionable area; and process captured image to generate extended-reality image in which the functional element is virtually superimposed over the actionable area of input device or a virtual representation of the actionable area of input device.
G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
G06T 7/73 - Determining position or orientation of objects or cameras using feature-based methods
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
An imaging system for producing extended-reality images for a display apparatus. The imaging system includes camera that is employed to capture input image representing captured region of real-world environment; and processor configured to: generate intermediate image by correcting spatial distortion of input image; determine capturing region of intermediate image representing captured region of real-world environment and non-capturing regions of intermediate image corresponding to non-captured regions of real-world environment; generate image data for non-capturing region of intermediate image, based on at least one of: information pertaining to virtual object that is to be virtually superimposed, capturing region neighbouring non-capturing region, previous extended-reality image, photogrammetric model of real-world environment; and process intermediate image, based on generated image data, to produce extended-reality image to be presented at display apparatus.
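As a toy stand-in for the image-data generation step (not the disclosed method), a non-capturing region can be filled by extending the nearest captured values from a neighbouring capturing region:

```python
# Fill the non-captured tail of a pixel row by repeating the last captured
# value; a deliberately simple proxy for neighbour-based image-data generation.

def fill_noncaptured(row, captured_upto):
    """row: list of pixel values; indices >= captured_upto were not captured."""
    filled = list(row)
    for i in range(captured_upto, len(filled)):
        filled[i] = filled[captured_upto - 1]  # extend the edge value
    return filled

print(fill_noncaptured([10, 20, 30, None, None], captured_upto=3))  # → [10, 20, 30, 30, 30]
```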
An adjustment mechanism for allowing a user to adjust a length of a headband includes a stationary case, a clutch, and a knob. The stationary case defines an opening and a circular recess. The clutch includes a cam arranged within the circular recess and a pinion extending axially from the cam such that the pinion is positioned outside the case via the opening to engage with the headband. The clutch also includes a roller located adjacent to a first cam portion of the cam. The knob has an actuating element arranged between the roller and a tab of the cam. The knob is rotatable in a first direction for allowing the user to shorten the length of the headband. The knob is rotatable in a second direction allowing the user to increase the length of the headband.
communicating, to a display apparatus, the first image, the second image and information indicative of a size of the first input portion and sizes of the plurality of input rings.
G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
42 - Scientific, technological and industrial services, research and design
Goods and services
Cloud hosting provider services; cloud storage services for electronic data; private cloud hosting provider service; software as a service [SaaS]; software as a service [SaaS] featuring software for virtual reality and augmented reality; design and development of engineering products; design and development of computer hardware and computer peripherals; design and development of wireless data transmission apparatus, instruments and equipment; product research and development; research and development services in the field of engineering.
79.
Display apparatus and method incorporating adaptive pose locking
A display apparatus including pose-tracking means; image renderer per eye; liquid-crystal device including liquid-crystal structure and control circuit; and processor. Processor is configured to: process pose-tracking data to determine user's head pose; detect if rate at which head pose is changing is below predefined threshold rate; if yes, switch on lock mode, select head pose for session of lock mode, and generate output image frames according to head pose during session; if no, generate output image frames according to corresponding head poses of user using pose-tracking data; and display output image frames, whilst shifting light emanating from pixels of image renderer to multiple positions (P1-P9) in sequential and repeated manner, said shifting causing resolution of output image frames to appear higher than display resolution of image renderer.
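The lock-mode decision can be sketched as follows; the threshold value and the pose representation are assumptions for illustration:

```python
# Switch to a locked head pose when the pose-change rate falls below a
# predefined threshold; otherwise render from the live pose-tracking data.

def select_pose(pose_rate, live_pose, locked_pose, threshold=2.0):
    """pose_rate: rate of head-pose change (e.g. deg/s)."""
    lock_mode = pose_rate < threshold
    return locked_pose if lock_mode else live_pose

print(select_pose(0.5, live_pose="live", locked_pose="locked"))   # → locked
print(select_pose(10.0, live_pose="live", locked_pose="locked"))  # → live
```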
G09G 5/00 - Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
G09G 5/02 - Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators, characterised by the way in which colour is displayed
G02B 27/00 - Optical systems or apparatus not provided for by any of the groups ,
G06T 7/70 - Determining position or orientation of objects or cameras
In a display apparatus, a liquid-crystal structure, arranged in front of an image renderer, is controlled to shift light of a given sub-pixel to target positions according to a shifting sequence in a repeated manner, while output image frames are displayed. To generate a given output image frame, a given target position to which the light is to be shifted is determined based on the shifting sequence. An input colour value of the given sub-pixel provided in a given input image frame is then adjusted to generate an output colour value of the given sub-pixel for the given output image frame, based on an output colour value of at least one other sub-pixel whose light overlaps with the given target position during display of a previous output image frame, and a retention coefficient between a colour of the at least one other sub-pixel and a colour of the given sub-pixel.
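One plausible reading of the adjustment (the abstract does not give the exact formula, so this is hedged) subtracts the retained contribution of the overlapping sub-pixel, scaled by the retention coefficient between the two colours:

```python
# Compensate the given sub-pixel's input value for colour retained from
# another sub-pixel whose light overlapped the same target position in the
# previous output frame. The subtractive form is an assumption.

def adjusted_output(input_value, other_prev_output, retention_coeff):
    """Subtract the retained contribution and clamp to the valid 0-255 range."""
    value = input_value - retention_coeff * other_prev_output
    return max(0.0, min(255.0, value))

print(adjusted_output(200.0, other_prev_output=100.0, retention_coeff=0.25))  # → 175.0
```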
G09G 5/02 - Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators, characterised by the way in which colour is displayed
G09G 5/00 - Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
G09G 3/36 - Control arrangements or circuits, of interest only to the display using visual indicators other than cathode-ray tubes, for presentation of an assembly of a number of characters, e.g. a page, by composing the assembly by combination of individual elements arranged in a matrix, by controlling light from an independent source using liquid crystals
81.
DISPLAY APPARATUS AND METHOD OF ENHANCING APPARENT RESOLUTION USING LIQUID-CRYSTAL DEVICE
A display apparatus (100, 200) includes an image renderer (102, 104, 202, 204, 304) per eye; a liquid-crystal device (106, 108, 206, 208) including a liquid-crystal structure (112, 114, 212, 214, 302, 400) and a control circuit (116, 118, 216, 218), the liquid-crystal structure being arranged in front of image-rendering surface (306) of image renderer, wherein liquid-crystal structure is to be electrically controlled, via control circuit, to shift light emanating from a given pixel of the image renderer to a plurality of positions in a sequential and repeated manner; and at least one processor (110, 210) configured to render a sequence of output image frames via the image renderer, wherein a shift in the light emanating from the given pixel of the image renderer to the plurality of positions causes a resolution of the output image frames to appear higher than a display resolution of the image renderer.
G09G 3/36 - Control arrangements or circuits, of interest only to the display using visual indicators other than cathode-ray tubes, for presentation of an assembly of a number of characters, e.g. a page, by composing the assembly by combination of individual elements arranged in a matrix, by controlling light from an independent source using liquid crystals
G09G 3/20 - Control arrangements or circuits, of interest only to the display using visual indicators other than cathode-ray tubes, for presentation of an assembly of a number of characters, e.g. a page, by composing the assembly by combination of individual elements arranged in a matrix
G09G 3/00 - Control arrangements or circuits, of interest only to the display using visual indicators other than cathode-ray tubes
G02B 27/00 - Optical systems or apparatus not provided for by any of the groups ,
82.
Display apparatus and method incorporating gaze-based modulation of pixel values
A display apparatus including gaze-tracking means, image renderers, liquid-crystal devices including liquid-crystal structure and control circuit, to shift light emanating from given pixel of image renderer to multiple positions, given pixel including colour component; and processor configured to: process gaze-tracking data to determine gaze direction of user's eye; determine gaze point; display first output image frame; detect if magnitude of difference between first output value and initial second output value of colour component of given pixel in first and second output image frames exceeds first threshold difference; when detected that magnitude of difference exceeds first threshold difference, update initial second output value to sum of first output value and product of distance factor and difference between initial second output and first output values; and display second output image frame.
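The update rule stated in the abstract can be written out directly; the threshold and distance-factor values below are illustrative:

```python
# When the change in a colour component between consecutive output frames
# exceeds the first threshold, pull the second frame's value towards the
# first frame's value by the distance factor.

def modulate(first_out, second_init, distance_factor, first_threshold):
    if abs(second_init - first_out) > first_threshold:
        # sum of first output value and (distance factor x difference)
        return first_out + distance_factor * (second_init - first_out)
    return second_init

print(modulate(100.0, 200.0, distance_factor=0.25, first_threshold=50.0))  # → 125.0
print(modulate(100.0, 120.0, distance_factor=0.25, first_threshold=50.0))  # → 120.0
```

In the apparatus, the distance factor would plausibly depend on how far the given pixel lies from the determined gaze point, which is why the gaze-tracking means feeds this step.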
G09G 5/02 - Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators, characterised by the way in which colour is displayed
G09G 5/00 - Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
G09G 3/00 - Control arrangements or circuits, of interest only to the display using visual indicators other than cathode-ray tubes
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
G09G 3/36 - Control arrangements or circuits, of interest only to the display using visual indicators other than cathode-ray tubes, for presentation of an assembly of a number of characters, e.g. a page, by composing the assembly by combination of individual elements arranged in a matrix, by controlling light from an independent source using liquid crystals
83.
LED-based display apparatus and method incorporating sub-pixel shifting
A display apparatus including: image renderer including light-emitting diodes that are to be employed as sub-pixels of image renderer; liquid-crystal device including liquid-crystal structure and control circuit, wherein liquid-crystal structure is arranged in front of light-emitting diodes of image renderer, wherein liquid-crystal structure is to be electrically controlled, via control circuit, to shift light emanating from light-emitting diode to target positions on image plane according to shifting sequence in repeated manner; and processor(s) configured to render output sequence of output image frames via image renderer, wherein shift in light emanating from light-emitting diode to target positions causes resolution of output image frames to appear higher than display resolution of image renderer.
G09G 5/00 - Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
G09G 5/02 - Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators characterised by the way in which colour is displayed
G09G 3/32 - Control arrangements or circuits, of interest only for the display using visual indicators other than cathode-ray tubes, for presentation of an assembly of a number of characters, e.g. a page, by composing the assembly by combination of individual elements arranged in a matrix, using controlled light sources, using semiconductor electroluminescent panels, e.g. using light-emitting diodes [LED]
G09G 3/36 - Control arrangements or circuits, of interest only for the display using visual indicators other than cathode-ray tubes, for presentation of an assembly of a number of characters, e.g. a page, by composing the assembly by combination of individual elements arranged in a matrix, by control of light from an independent source, using liquid crystals
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
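The sub-pixel shifting principle claimed above (light from each LED is steered through a repeated sequence of target positions so that low-resolution sub-frames compose a higher apparent resolution) can be sketched as follows. This is a minimal illustrative model, not the claimed apparatus; the function and variable names are assumptions:

```python
import numpy as np

# Offsets of the four target positions in the repeated shifting sequence.
SHIFT_SEQUENCE = ((0, 0), (0, 1), (1, 0), (1, 1))

def subframes_for_shifting(target):
    """Decompose a 2x-resolution target image into the low-resolution
    sub-frames rendered at each sub-pixel shift position."""
    return [target[dy::2, dx::2] for dy, dx in SHIFT_SEQUENCE]

# Usage: a 4x4 target image is rendered as four 2x2 sub-frames;
# displayed in rapid succession at the shifted positions, they
# appear as the full 4x4 image.
target = np.arange(16).reshape(4, 4)
frames = subframes_for_shifting(target)
```

Interleaving the sub-frames back onto the shifted grid reconstructs the target exactly, which is the sense in which the apparent resolution exceeds the native display resolution.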
84.
LED-BASED DISPLAY APPARATUS AND METHOD INCORPORATING SUB-PIXEL SHIFTING
Disclosed is a display apparatus (100, 200) comprising: image renderer (102, 202, 302) comprising light-emitting diodes (108, 110, 112, 208, 210, 212, 314, 316, 318) that are to be employed as sub-pixels of image renderer; liquid-crystal device (104, 204) comprising liquid-crystal structure (114, 214) and control circuit (116, 216), wherein liquid-crystal structure is arranged in front of light-emitting diodes of image renderer, wherein liquid-crystal structure is to be electrically controlled, via control circuit, to shift light emanating from light-emitting diode to target positions on image plane according to shifting sequence in repeated manner; and processor(s) (106, 206) configured to render output sequence of output image frames via image renderer, wherein shift in light emanating from light-emitting diode to target positions causes resolution of output image frames to appear higher than display resolution of image renderer.
G09G 3/00 - Control arrangements or circuits, of interest only for the display using visual indicators other than cathode-ray tubes
G02B 27/00 - Optical systems or apparatus not provided for by any of the groups
85.
DISPLAY APPARATUS AND METHOD INCORPORATING ADAPTIVE POSE LOCKING
Disclosed is display apparatus (100, 200) comprising pose-tracking means (102, 202); image renderer (104, 106, 204, 206) per eye; liquid-crystal device (108, 110, 208, 210) comprising liquid-crystal structure (114, 116, 214, 216) and control circuit (118, 120, 218, 220); and processor (112, 212). Processor is configured to: process pose-tracking data to determine user's head pose; detect if rate at which head pose is changing is below predefined threshold rate; if yes, switch on lock mode, select head pose for session of lock mode, and generate output image frames according to head pose during session; if no, generate output image frames according to corresponding head poses of user using pose-tracking data; and display output image frames, whilst shifting light emanating from pixels of image renderer to multiple positions (P1-P9) in sequential and repeated manner, said shifting causes resolution of output image frames to appear higher than display resolution of image renderer.
G09G 3/00 - Control arrangements or circuits, of interest only for the display using visual indicators other than cathode-ray tubes
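The adaptive pose-locking logic of entry 85 can be sketched as a small state machine; this is an illustrative sketch under assumed names, not the claimed processor logic:

```python
class PoseLock:
    """Lock the rendering pose while the head-pose change rate stays
    below a predefined threshold; track the head pose otherwise."""

    def __init__(self, threshold_rate):
        self.threshold_rate = threshold_rate
        self.locked_pose = None            # None => lock mode is off

    def render_pose(self, tracked_pose, change_rate):
        if change_rate < self.threshold_rate:
            if self.locked_pose is None:   # entering a lock session
                self.locked_pose = tracked_pose
            return self.locked_pose        # pose held for the session
        self.locked_pose = None            # leaving lock mode
        return tracked_pose
```

Output image frames would be generated against `render_pose(...)` each frame, while the sub-pixel shifting to positions P1-P9 runs independently of the lock state.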
86.
Imaging system and method for producing images using means for adjusting optical focus
An imaging system for producing images for display apparatus. Imaging system includes at least one imaging unit arranged to face real-world scene including camera, optical element including first optical portion and second optical portion having different focal lengths, first focal length of first optical portion is smaller than second focal length of second optical portion, and means for adjusting optical focus; and processor. Processor is configured to obtain gaze direction of user; determine region of interest within real-world scene; and control means for adjusting optical focus of imaging unit, based on focal lengths of first and second optical portions, to capture warped image of real-world scene, the warped image having spatially-uniform angular resolution.
H04N 5/262 - Studio circuits, e.g. for mixing, switching-over, change of character of image, for other special effects
H04N 5/232 - Devices for controlling television cameras, e.g. remote control
G02B 27/00 - Optical systems or apparatus not provided for by any of the groups
G02B 7/09 - Mountings, adjusting means, or light-tight connections, for optical elements for lenses with mechanism for focusing or varying magnification adapted for automatic focusing or for varying the magnification automatically
G03B 13/36 - Autofocus systems
G03B 30/00 - Camera modules comprising integrated lens units and imaging units, specially adapted for being embedded in other devices, e.g. mobile phones or vehicles
87.
Display apparatus and method of compensating for visual artifacts
A display apparatus including first display or projector for displaying first images for first eye; second display or projector for displaying second images for second eye; first portion and second portion arranged to face first and second eyes; means for tracking poses of first and second eyes relative to first and second optical axes, respectively; and processor. Processor or external processor is configured to: obtain given pose of given eye relative to given optical axis; generate information pertaining to given visual artifact that is formed over given image at image plane when given image is displayed; determine artifact-superposing portion of given image; and process given image based on generated information and artifact-superposing portion, to generate given artifact-compensated image. Processor displays given artifact-compensated image via given display or projector.
88.
09 - Scientific and electrical apparatus and instruments
28 - Games, toys, sporting articles
42 - Scientific, technological and industrial services, research and design
Goods and services
Downloadable augmented reality and mixed reality software for use with headsets and computer hardware and peripherals for medical, training and simulation, teaching, design and engineering, research, development, science and industrial, industry operations, recreation, entertainment, construction, architecture, consulting, marketing and sales, education, collaboration purposes; virtual reality glasses; virtual reality hardware; augmented reality devices, namely, headphone/microphone combinations and augmented reality glasses; downloadable software programs for playing video games; notebook computers; wireless headsets for mobile phones, smart phones and tablet computers; computer hardware; Downloadable virtual reality software for use with headsets and computer hardware and peripherals for medical, training and simulation, teaching, design and engineering, research, development, science and industrial, industry operations, recreation, entertainment, construction, architecture, consulting, marketing and sales, education, collaboration purposes; Downloadable operating system programs; peripherals adapted for use with computers; Wearable computers in the nature of smart glasses, virtual reality, mixed reality and augmented reality glasses; wearable computers in the nature of virtual reality, mixed reality and augmented reality headsets; computers embedded in clothing, headwear, and footwear; Wearable computer peripherals in the nature of virtual reality, augmented reality and mixed reality headsets, hand motion controllers, body, hand and eye tracking peripherals, and touch trackpads; downloadable virtual reality game software; virtual reality headset; Handheld virtual reality controllers in the nature of hand-motion controllers, touch trackpads; wearable digital electronic devices comprised of virtual, mixed and augmented reality software and a display screen incorporated into virtual, mixed and augmented reality displays; motion tracking sensors; Virtual, mixed and 
augmented reality headmounted displays and video cameras; computer peripherals for mobile devices for remotely accessing and transmitting data; computer peripherals for displaying data and video; carrying cases and holders for electronic equipment, namely, for computers, tablet computers, notebook computers, mobile phones, headsets, head mounted displays, mixed reality glasses, virtual reality glasses, augmented reality glasses, 3D spectacles; apparatus for recording, transmission or reproduction of sound or images or data; eyewear; 3D eyeglasses; optical spectacles; eyepieces in the nature of eyeglasses and lenses for virtual, mixed and augmented reality devices; optical appliances in the nature of optical glasses, optical lenses, optical transmitters, optical receivers, optical readers, optical reflectors, optical character recognition apparatus; eyewear; optical devices, namely, eye pieces for helmet mounted displays; optical glasses; optical lenses; 3D spectacles; hologram apparatus; holographic apparatus; Downloadable virtual, mixed and augmented reality simulation software for medical, training and simulation, teaching, design and engineering, research, development, science and industrial, industry operations, recreation, entertainment, construction, architecture, consulting, marketing and sales, education, collaboration purposes; Downloadable virtual, mixed and augmented reality simulators for medical, training and simulation, teaching, design and engineering, research, development, science and industrial, industry operations, recreation, entertainment, construction, architecture, consulting, marketing and sales, education, and collaboration purposes; Downloadable virtual, mixed and augmented reality software for medical, training and simulation, teaching, design and engineering, research, development, science and industrial, industry operations, recreation, entertainment, construction, architecture, consulting, marketing and sales, education, and 
collaboration purposes
Hand-held units for playing electronic games; free-standing video games apparatus in the nature of stand-alone video game machines, home video game machines; Hand-held units for playing computer games; Hand-held units for playing electronic games; portable games with liquid crystal displays; Video game consoles; arcade video game machines; home video game machines; video game joysticks
Software development; computer software design; maintenance of software; development and testing of software; development of virtual reality software; design of virtual reality software; cloud hosting provider services; cloud storage services for electronic data; private cloud hosting provider service; software as a service [SaaS] featuring software for virtual reality, mixed reality and augmented reality for medical, training and simulation, teaching, design and engineering, research, development, science and industrial, industry operations, recreation, entertainment, construction, architecture, consulting, marketing and sales, education, collaboration purposes; design and development of engineering products; design and development of computer hardware and computer peripherals; design and development of wireless data transmission apparatus, instruments and equipment; product research and development; research and development services in the field of engineering
89.
Systems and methods for facilitating shared rendering
A system for facilitating shared rendering between display devices including first display device and second display device that are communicably coupled with first computing device and second computing device, respectively. The system includes means for tracking pose of first display device, means for tracking pose of second display device, image server, first client, second client. First client is configured to: send, to image server, first information indicative of pose of first display device; receive first image frame; render first image frame at first display device; receive second information indicative of pose of second display device; send, to image server, second information; receive second image frame; and send second image frame, wherein second client renders second image frame at second display device.
G09G 5/00 - Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
G06F 3/14 - Digital output to display device
G06T 7/73 - Determining position or orientation of objects or cameras using feature-based methods
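The message flow performed by the first client in entry 89 (send its own pose, render the returned frame locally, forward the second device's pose, relay the resulting frame onward) can be sketched as below. The `ImageServer` stub and all names are hypothetical stand-ins, not the claimed system:

```python
class ImageServer:
    """Hypothetical stand-in for the image server: returns a frame
    rendered for whatever pose information it receives."""
    def render(self, pose_info):
        return {"frame_for": pose_info}

def first_client_round(server, pose_first, pose_second):
    """One round of the claimed flow: the first client sends the first
    pose and renders the returned frame at the first display device,
    then forwards the second device's pose and relays the resulting
    frame to the second client for rendering at the second device."""
    first_frame = server.render(pose_first)     # shown on first device
    second_frame = server.render(pose_second)   # relayed to second client
    return first_frame, second_frame
```

The point of the arrangement is that only the first client talks to the image server; the second client receives its frames indirectly.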
90.
Encoder and encoding method for mitigating discrepancies in reconstructed images
An encoder including data processing arrangement configured to analyze depth information pertaining to input image to identify depth edge(s) and to determine location of depth edge(s); analyze input image to identify edges of objects and to determine locations of said edges; select, amongst edges in input image, matching edge(s) whose location matches with location of depth edge(s); obtain intermediate decoded image by decoding intermediate encoded image obtained from intermediate encoding of input image; analyze intermediate decoded image to identify edges of objects and to determine locations of said edges; determine edge(s) in intermediate decoded image that corresponds to matching edge(s); modify pixel values in input image and/or optical depths in depth information, based on difference in location of determined edge(s) in intermediate decoded image and location of matching edge(s); and encode input image and depth information, upon said modification, to generate encoded data.
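The edge-selection step of entry 90 (keep only those image edges whose location matches the location of a depth edge) can be sketched in one dimension; the tolerance parameter is an assumption, as is every name:

```python
def matching_edges(image_edge_locations, depth_edge_locations, tol=1.0):
    """Select image edges whose location matches some depth edge,
    i.e. lies within `tol` of it (1-D sketch of the selection step)."""
    return [e for e in image_edge_locations
            if any(abs(e - d) <= tol for d in depth_edge_locations)]
```

The same comparison, applied between these matching edges and their counterparts found in the intermediate decoded image, yields the location differences that drive the modification of pixel values and/or optical depths before the final encoding.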
91.
A display apparatus including an image renderer per eye; a liquid-crystal device including a liquid-crystal structure and a control circuit, the liquid-crystal structure being arranged in front of image-rendering surface of image renderer, wherein liquid-crystal structure is to be electrically controlled, via control circuit, to shift light emanating from a given pixel of the image renderer to a plurality of positions in a sequential and repeated manner; and at least one processor configured to render a sequence of output image frames via the image renderer, wherein a shift in the light emanating from the given pixel of the image renderer to the plurality of positions causes a resolution of the output image frames to appear higher than a display resolution of the image renderer.
G09G 5/02 - Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators characterised by the way in which colour is displayed
G09G 5/00 - Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
G09G 3/00 - Control arrangements or circuits, of interest only for the display using visual indicators other than cathode-ray tubes
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
G09G 3/36 - Control arrangements or circuits, of interest only for the display using visual indicators other than cathode-ray tubes, for presentation of an assembly of a number of characters, e.g. a page, by composing the assembly by combination of individual elements arranged in a matrix, by control of light from an independent source, using liquid crystals
G02F 1/1333 - Constructional arrangements
G02F 1/1347 - Arrangement of liquid crystal layers or cells in which a light beam is modified by the addition of the effects of several layers or cells
92.
Imaging system and method using projection apparatus
A projection apparatus including an array of infrared emitters, the infrared emitters of said array being arranged in a first pattern; a plurality of liquid-crystal cells and corresponding control circuits, a given liquid-crystal cell being arranged in front of a corresponding infrared emitter of said array; and a processor configured to generate drive signals for driving the control circuits in a random or pseudorandom manner; and control the plurality of liquid-crystal cells individually, via the corresponding control circuits, to project a second pattern of light spots onto objects present in a real-world environment, wherein, when driven by a given drive signal, a given control circuit electrically controls a corresponding liquid-crystal cell to any of: block light emanating from a corresponding infrared emitter, transmit the light in an unbent manner, bend the light.
G02F 1/1333 - Constructional arrangements
G02F 1/135 - Liquid crystal cells structurally associated with a photoconducting or a ferro-electric layer, the properties of which can be optically or electrically varied
G02F 1/1335 - Structural association of cells with optical devices, e.g. polarisers or reflectors
93.
42 - Scientific, technological and industrial services, research and design
Goods and services
Cloud hosting provider services; cloud storage services for electronic data; private cloud hosting provider service; software as a service [SaaS] featuring software for virtual reality, mixed reality and augmented reality for medical, training and simulation, teaching, design and engineering, research, development, science and industrial, industry operations, recreation, entertainment, construction, architecture, consulting, marketing and sales, education, and collaboration purposes; design and development of engineering products; design and development of computer hardware and computer peripherals; design and development of wireless data transmission apparatus, instruments and equipment; product research and development; research and development services in the field of engineering
94.
DISPLAY APPARATUS AND METHOD OF CORRECTING IMAGE DISTORTION THEREFOR
Disclosed is display apparatus (100, 200, 300) comprising first (102, 202) and second (104, 204) displays or projectors that display first and second images for first (302) and second (304) eyes, respectively; first (306) and second (308) portions facing first and second eyes, respectively, first and second portions having first (F-F') and second (S-S') optical axes, respectively; means (106, 206) for tracking positions and orientations of first and second eyes relative to corresponding optical axes, respectively; and processor (108, 208). The processor or external processor (110, 226) obtains current relative positions and orientations of both eyes; determines first and second transformations for first and second input image frames, respectively, given transformation being applied to correct apparent per-pixel distortions produced when given input image frame is displayed; and applies first and second transformations to generate first and second distortion-corrected image frames, respectively, wherein processor renders first and second distortion-corrected image frames.
95.
A display apparatus including first and second displays or projectors that display first and second images for first and second eyes, respectively; first and second portions facing first and second eyes, respectively, first and second portions having first (F-F′) and second (S-S′) optical axes, respectively; means for tracking positions and orientations of first and second eyes relative to corresponding optical axes, respectively; and processor. The processor or external processor obtains current relative positions and orientations of both eyes; determines first and second transformations for first and second input image frames, respectively, given transformation being applied to correct apparent per-pixel distortions produced when given input image frame is displayed; and applies first and second transformations to generate first and second distortion-corrected image frames, respectively, wherein processor renders first and second distortion-corrected image frames.
96.
Disclosed is system (100, 200) for presenting notifications on display device (202) and external computing device (204). The display device comprises image renderer (206), external visual indicator (208) and first processor (210), the external computing device comprises display (212) and second processor (214). The system comprises first (102A, 216) and second clients (102B, 218) executing on first processor, third client (102C, 220) executing on second processor, first, second and third clients are configured to generate and render first, second and third user interfaces on image renderer, external visual indicator and display, respectively; and control server (222). The control server is configured to obtain information; detect whether or not notification is to be presented; determine notification type and content; and select clients from amongst plurality of clients and send content on selected clients, wherein selected clients are configured to generate and render their respective user interfaces to present notification substantially simultaneously.
97.
Disclosed is a display apparatus (100, 200) comprising image renderer (102, 202), camera (104, 204) and processor (106, 206). The processor or external processor (108, 208) communicably coupled to said processor is configured to: render at least one extended-reality image (302) during first mode of operation of display apparatus; determine second mode of operation to which display apparatus is to be switched; control camera to capture at least one real-world image (304) of real-world environment; generate at least one composite image (306, 500) from at least one next extended-reality image and at least one real-world image, wherein first portion (306A, 500A) of at least one composite image is derived from at least one next extended-reality image, and second portion (306B, 500B) of at least one composite image is derived from at least one real-world image; and render at least one composite image during second mode of operation of display apparatus.
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
G06F 3/03 - Arrangements for converting the position or the displacement of a member into a coded form
G06F 3/0481 - Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour
98.
System and method for producing images based on gaze direction and field of view
A system for producing images for a display apparatus. The system includes image source(s) and processor. The processor is configured to obtain information indicative of angular size of field of view providable by image renderer of display apparatus; obtain information indicative of gaze direction of user; receive sequence of images from image source(s); and process sequence of images to generate sequence of processed images. When processing sequence of images, processor is configured to crop a given image, based on gaze direction of user and angular size of field of view, to generate processed image. Angular size of field of view represented by processed image is larger than angular size of field of view providable by the image renderer.
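The gaze-driven cropping step of entry 98 can be sketched with a fixed-size clamped crop window; in the claim the crop is sized so its field of view exceeds the renderer's, but here a fixed crop size stands in, and the gaze point is assumed to be given in pixel coordinates (all names are illustrative):

```python
import numpy as np

def crop_for_gaze(image, gaze_xy, crop_hw):
    """Crop a window centred on the gaze point, clamped so the crop
    stays inside the image bounds."""
    h, w = image.shape[:2]
    ch, cw = crop_hw
    top = int(np.clip(gaze_xy[1] - ch // 2, 0, h - ch))
    left = int(np.clip(gaze_xy[0] - cw // 2, 0, w - cw))
    return image[top:top + ch, left:left + cw]
```

Each image of the received sequence would be cropped this way to produce the corresponding processed image.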
A display system including display or projector , camera, means for tracking position and orientation of user's head, and processor. The processor is configured to control camera to capture images of real-world environment using default exposure setting, whilst processing head-tracking data to determine corresponding positions and orientations of user's head with respect to which images are captured; process images to create environment map of real-world environment; generate extended-reality image from images using environment map; render extended-reality image; adjust exposure of camera to capture underexposed image of real-world environment; process images to generate derived image; generate next extended-reality image from derived image using environment map; render next extended-reality image; and identify and modify intensities of oversaturated pixels in environment map, based on underexposed image and position and orientation with respect to which underexposed image is captured.
A display system and method for correcting drifts in camera poses. Images are captured via camera, and camera poses are determined in global coordinate system. First features are extracted from first image. Relative pose of first feature with respect to camera is determined. Pose of first feature in global coordinate system is determined, based on its relative pose and first camera pose. Second features are extracted from second image. Relative pose of second feature with respect to camera is determined. Pose of second feature in global coordinate system is determined, based on its relative pose and second camera pose. Matching features are identified between first features and second features. Difference is determined between pose of feature based on first camera pose and pose of feature based on second camera pose. Matching features that satisfy first predefined criterion based on difference are selected. Correction transform that, when applied to second camera pose, yields corrected second camera pose is generated, such that corrected differences between poses of matching features based on corrected second camera pose and corresponding poses of matching features based on first camera pose satisfy second predefined criterion. Correction transform is applied to second camera pose. Second image is processed, based on corrected second camera pose, to generate extended-reality image.
G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
G06T 7/80 - Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
G06T 7/70 - Determining position or orientation of objects or cameras
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
H04N 13/246 - Image signal generators using stereoscopic image cameras; calibration of cameras
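The correction-transform step of the drift-correction method above can be sketched for the translation-only case. This is a simplification: the claimed transform may well include rotation, and the residual threshold standing in for the second predefined criterion, like every name here, is an assumption:

```python
import numpy as np

def correction_translation(features_first, features_second, max_residual=0.5):
    """Translation-only sketch of the drift-correction transform: the
    mean offset that maps the matched features' second-pose positions
    onto their first-pose positions (the least-squares translation).
    Returns None when any corrected residual exceeds the criterion."""
    a = np.asarray(features_first, dtype=float)
    b = np.asarray(features_second, dtype=float)
    t = (a - b).mean(axis=0)                    # least-squares translation
    if np.abs((b + t) - a).max() > max_residual:  # second predefined criterion
        return None
    return t
```

Applying the returned offset to the second camera pose would yield the corrected second camera pose used to generate the extended-reality image.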