A method, system and product comprising: capturing a noisy audio signal from an environment of a user, wherein a plurality of people is located in the environment, the user having a mobile device used for obtaining user input and a hearable device used for providing audio output to the user; processing the noisy audio signal to generate first and second separate audio signals that represent first and second voices, said processing being performed based on first and second acoustic fingerprints that correspond to the first and second voices, respectively; combining the first and second separate audio signals to obtain an enhanced audio signal; and outputting to the user, via the hearable device, the enhanced audio signal.
G10L 21/02 - Speech enhancement, e.g. noise reduction or echo cancellation
G10L 17/02 - Preprocessing operations, e.g. segment selection; Pattern representation or modelling, e.g. based on linear discriminant analysis [LDA] or principal components; Feature selection or extraction
G10L 15/20 - Speech recognition techniques specially adapted for robustness in adverse environments, e.g. in noise or of stress induced speech
G10L 25/84 - Detection of presence or absence of voice signals for discriminating voice from noise
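The fingerprint-guided separation and recombination described above can be sketched as follows. This is a toy model that assumes each acoustic fingerprint can be reduced to per-band spectral weights; a real system would use learned speaker representations and a separation network.

```python
# Toy sketch of fingerprint-guided voice separation and recombination.
# "Fingerprints" are modeled as per-band spectral weights (an assumption
# made for illustration only).

def separate(noisy_bands, fingerprint):
    """Extract one voice by weighting each spectral band of the noisy signal."""
    return [s * w for s, w in zip(noisy_bands, fingerprint)]

def enhance(noisy_bands, fingerprints):
    """Separate one signal per fingerprint, then combine them band by band."""
    separated = [separate(noisy_bands, fp) for fp in fingerprints]
    return [sum(bands) for bands in zip(*separated)]

noisy = [1.0, 2.0, 3.0, 4.0]      # toy 4-band spectrum of the noisy capture
fp_first = [0.9, 0.1, 0.0, 0.0]   # first voice concentrated in low bands
fp_second = [0.0, 0.0, 0.2, 0.8]  # second voice concentrated in high bands
enhanced = enhance(noisy, [fp_first, fp_second])
```

Each separated signal suppresses bands where its voice's fingerprint is weak, and the combination keeps both target voices while discarding energy matched by neither fingerprint.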
A reading device may include a light source configured to illuminate an object; a trigger button configured to activate the light source, the trigger being operable by a finger of a hand of a user; a camera configured to capture images from an environment of the user; an audio output device configured to output audio signals; and at least one processor. The at least one processor may be programmed to: in response to operation of the trigger, project light from the light source to illuminate an area of the object; capture at least one image of the illuminated area of the object, wherein the at least one image includes a representation of written material; analyze the at least one image to recognize text; transform the recognized text into at least one audio signal; and output the at least one audio signal using the audio output device.
G10L 13/00 - Speech synthesis; Text to speech systems
G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
G06V 30/142 - Image acquisition - Structural details of the instruments
A hearing interface device for generating processed audio signals is disclosed. In one implementation, the hearing interface device may include a housing configured to be at least partially inserted into an ear of a user, a microphone, a camera, and a processor included in the housing. The processor may be configured to receive a captured audio signal representative of sounds captured by the microphone; receive an image captured by the camera; generate a processed audio signal based on analysis of at least one of the captured audio signal or the image; and cause at least a portion of the processed audio signal to be presented to the ear of the user.
A wearable apparatus and methods for operating a wearable apparatus. In one implementation, a system for automatically tracking and guiding one or more individuals in an environment includes at least one tracking subsystem including one or more cameras. The tracking subsystem includes a camera unit configured to be worn by a user, and the at least one tracking subsystem includes at least one processor programmed to: receive a plurality of images from the one or more cameras; identify at least one individual represented by the plurality of images; determine at least one characteristic of the at least one individual; and generate and send an alert based on the at least one characteristic.
A hearing aid and related systems and methods. In one implementation, a hearing aid system may comprise a wearable camera configured to capture images from an environment of a user, a microphone configured to capture sounds from the environment of the user, and a processor. The processor may be programmed to receive images captured by the camera; receive audio signals representative of sounds captured by the microphone; operate in a first mode to cause a first selective conditioning of a first audio signal; determine, based on analysis of at least one of the images or the audio signals, to switch to a second mode to cause a second selective conditioning of the first audio signal; and cause transmission of the first audio signal selectively conditioned in the second mode to a hearing interface device configured to provide sound to an ear of the user.
G06F 3/0484 - Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range of values
G10L 21/0216 - Noise filtering characterised by the method used for estimating noise
6.
WEARABLE SYSTEMS AND METHODS FOR SELECTIVELY READING TEXT
Systems and methods are disclosed for selectively reading text. A system may comprise an image capture device, an audio capture device, and a processor. The processor may be configured to receive images captured by the image capture device and audio signals captured by the audio capture device. The processor may analyze the images to identify text represented therein; identify, based on the images, a structural element of the text; identify a request to read a first portion of the text associated with the structural element, the request being identified by at least one of analyzing the audio signals to detect a spoken request or detecting a gesture in the images; and present the first portion of the text to the user.
G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
G09B 21/00 - Teaching, or communicating with, the blind, deaf or mute
7.
WEARABLE SYSTEMS AND METHODS FOR LOCATING AN OBJECT
Systems and methods are disclosed for locating an object for a user. A system may comprise an image capture device, an audio capture device, and a processor. The processor may be configured to receive images captured by the image capture device and audio signals received by the audio capture device. The processor may analyze the audio signals to identify a descriptor word describing the object and retrieve a visual characteristic of the object based on the descriptor word. The processor may then determine a location of the object in the images based on the visual characteristic, determine a location of a hand of the user in the images, and determine a direction between the hand and the object. The processor may then determine feedback indicative of the direction and provide the feedback to the user.
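The hand-to-object direction step could be computed as below. This is an illustrative sketch that assumes image coordinates with y increasing upward and reports the direction as a clock-face position; the actual feedback format is not specified in the abstract.

```python
import math

def direction_feedback(hand_xy, object_xy):
    """Map the hand-to-object vector to a clock-face direction (12 = straight up)."""
    dx = object_xy[0] - hand_xy[0]
    dy = object_xy[1] - hand_xy[1]
    # atan2(dx, dy) measures the angle from straight up, clockwise positive.
    angle = math.degrees(math.atan2(dx, dy)) % 360
    hour = round(angle / 30) % 12 or 12   # 30 degrees per clock hour
    return f"move toward {hour} o'clock"
```

For example, an object directly to the right of the detected hand yields "move toward 3 o'clock", which could then be spoken or rendered as haptic feedback.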
A hearing aid and related systems and methods are disclosed. In one implementation, a hearing aid system (2300) may include a wearable camera (2301); a microphone (2302); and a processor (2303). The processor (2303) may be programmed to receive images captured by the camera (2301); receive audio signals representative of sounds received by the at least one microphone (2302); determine a look direction (2030) of the user based on analysis of the images; determine an amplitude of a first audio signal associated with an individual or object in a region associated with the look direction of the user; determine an amplitude of a second audio signal from a region other than the look direction of the user; adjust the second amplitude in accordance with the first amplitude; and cause transmission of the second audio signal at the adjusted amplitude to a hearing interface device configured to provide sound to an ear of the user (100).
G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
G10L 15/24 - Speech recognition using non-acoustical features
G10L 15/26 - Speech to text systems
G10L 17/00 - Speaker identification or verification
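One simple way to adjust the second amplitude "in accordance with" the first is to cap it at a fraction of the look-direction amplitude, so off-look-direction sound stays subordinate. The `max_ratio` policy below is a hypothetical illustration, not taken from the disclosure.

```python
def adjust_amplitude(first_amp, second_amp, max_ratio=0.5):
    """Cap the off-look-direction amplitude at max_ratio times the
    amplitude of the look-direction signal."""
    return min(second_amp, first_amp * max_ratio)
```

A quieter background signal passes through unchanged; a louder one is attenuated relative to the signal in the user's look direction before transmission to the hearing interface device.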
Systems and methods are disclosed for using a wearable apparatus in social events. In one implementation, a system may comprise an image sensor, an audio sensor, and a processor. The processor may be configured to receive images captured by the image sensor and receive an audio signal representative of sound captured by the audio sensor. The processor may determine, based on the images or the audio signal, whether an individual is a recognized individual of the user. When the individual is not recognized, the processor may identify the individual based on an external resource. The processor may further identify a content source associated with the individual, identify a content item associated with the individual, and provide the content item to a computing device associated with the user.
G16H 80/00 - ICT specially adapted for facilitating communication between healthcare practitioners or patients, e.g. for collaborative diagnosis, therapy or health monitoring
G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
A hearing aid and related systems and methods. In one implementation, a hearing aid system may selectively amplify sounds emanating from a detected look direction of a user of the hearing aid system. The system may include a wearable camera configured to capture a plurality of images from an environment of the user; at least one microphone configured to capture sounds from an environment of the user; and at least one processor programmed to receive the plurality of images captured by the camera, receive audio signals representative of sounds received by the at least one microphone from the environment of the user, determine a look direction for the user based on analysis of at least one of the plurality of images, cause selective conditioning of at least one audio signal received by the at least one microphone from a region associated with the look direction of the user, and cause transmission of the at least one conditioned audio signal to a hearing interface device configured to provide sound to an ear of the user.
G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
G10L 15/24 - Speech recognition using non-acoustical features
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
G06F 3/03 - Arrangements for converting the position or the displacement of a member into a coded form
G10L 17/26 - Recognition of special voice characteristics, e.g. for use in lie detectors; Recognition of animal voices
G10L 25/51 - Speech or voice analysis techniques not restricted to a single one of the groups, specially adapted for particular use for comparison or discrimination
G10L 25/78 - Detection of presence or absence of voice signals
A wearable apparatus may automatically monitor consumption by a user of the wearable apparatus by analyzing images captured from an environment of the user. The wearable apparatus may include at least one image capture device configured to capture a plurality of images from an environment of the user of the wearable apparatus. The wearable apparatus may also include at least one processing device configured to: analyze the plurality of images to detect a consumable product represented in at least one of the plurality of images; based on the detection of the consumable product represented in at least one of the plurality of images, analyze one or more of the plurality of images to determine a type indicator associated with the detected consumable product; analyze the one or more of the plurality of images to estimate an amount of the consumable product consumed by the user; determine a feedback based on the type indicator of the detected consumable product and the estimated amount of the consumable product consumed by the user; and cause the feedback to be outputted to the user.
G16H 20/60 - ICT specially adapted for therapies or health-improving plans, e.g. for handling prescriptions, steering therapy or monitoring patient compliance, relating to nutrition control, e.g. diets
G16H 30/40 - ICT specially adapted for the handling or processing of medical images for processing medical images, e.g. editing
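The final feedback step, combining the type indicator with the estimated amount consumed, might look like the sketch below. The `CALORIES_PER_UNIT` table and message format are illustrative assumptions; real values would come from the image analysis and a product database.

```python
# Hypothetical nutrition table keyed by detected type indicator.
CALORIES_PER_UNIT = {"apple": 95, "soda_can": 140}

def consumption_feedback(type_indicator, estimated_amount):
    """Combine the detected product type and estimated amount into feedback."""
    kcal = CALORIES_PER_UNIT.get(type_indicator, 0) * estimated_amount
    return f"You consumed {estimated_amount} x {type_indicator} (~{kcal} kcal)."
```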
12.
WEARABLE CAMERA SYSTEMS AND METHODS FOR AUTHENTICATING IDENTITY
Systems and methods may authenticate an identity of a wearer of a wearable device and manage the wearer's radiation exposure. In one implementation, a wearable device may include a housing configured to be worn by the wearer, at least one sensor in the housing, and at least one processor. The at least one sensor may be configured to generate an output indicative of at least one aspect of an environment of the wearer. The at least one processor may be programmed to: alternatively operate in an unrestricted operation mode and a restricted operation mode; detect, based on the output generated by the at least one sensor, whether the wearer of the housing is authenticated with the wearable device; and operate in the unrestricted operation mode after the at least one processor detects that the wearer of the housing is authenticated with the wearable device.
G06F 21/34 - User authentication involving the use of external additional devices, e.g. dongles or smart cards
G06F 1/16 - ELECTRIC DIGITAL DATA PROCESSING - Details not covered by groups and - Constructional details or arrangements
G06F 21/32 - User authentication using biometric data, e.g. fingerprints, iris scans or voiceprints
The present disclosure relates to a user-augmented wearable camera system with variable image processing based on content. In one implementation, a wearable apparatus includes a wearable image sensor configured to capture a plurality of images from an environment of a user of the wearable apparatus. The wearable apparatus may include at least one processing device. The at least one processing device may be programmed to analyze at least one image to identify a visual context; determine, based on at least the visual context, feedback information for a user; provide the feedback information to the user; receive an input from the user, wherein the input reflects a determination by the user that the feedback information was insufficient or incorrect; and transmit, based on the input from the user, information related to the at least one image to an external device for additional processing.
G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
G06K 9/03 - Detection or correction of errors, e.g. by rescanning
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
A wearable apparatus and methods may analyze images. In one implementation, a wearable apparatus for capturing and processing images may comprise a wearable image sensor configured to capture a plurality of images from an environment of a user of the wearable apparatus and at least one processing device. The at least one processing device may be programmed to: analyze the plurality of images to identify a plurality of people; analyze the plurality of images to determine an affinity level between the user and each of the plurality of people; obtain an image representation of each of the plurality of people; and generate, based on the affinity levels, a visualization comprising the image representations.
The present disclosure relates to systems and methods for selecting an action based on a detected person. In one implementation, a wearable apparatus may include a wearable image sensor configured to capture a plurality of images from the environment of the user of the wearable apparatus and at least one processing device. The at least one processing device may be programmed to analyze at least one of the plurality of images to detect the person; analyze at least one of the plurality of images to identify an attribute of the detected person; select at least one category for the detected person based on the identified attribute; select at least one action based on the at least one category; and cause the at least one selected action to be executed.
G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
16.
SYSTEMS AND METHODS FOR DIRECTING AUDIO OUTPUT OF A WEARABLE APPARATUS
The present disclosure relates to systems and methods for directing the audio output of a wearable device having a plurality of speakers. In one implementation, the system may include an image sensor configured to capture one or more images from an environment of the user of the wearable apparatus, a plurality of speakers, and at least one processing device. The at least one processing device may be configured to analyze the one or more images to determine at least one indicator of head orientation of the user of the wearable apparatus, select at least one of the plurality of speakers based on the at least one indicator of head orientation, and output the audio to the user of the wearable apparatus via the selected at least one of the plurality of speakers.
A wearable apparatus and method are provided for processing images including product descriptors. In one implementation, a wearable apparatus for processing images including a product descriptor is provided. The wearable apparatus includes a wearable image sensor configured to capture a plurality of images from an environment of a user of the wearable apparatus. The wearable apparatus also includes at least one processing device programmed to analyze the plurality of images to identify one or more of the plurality of images that include an occurrence of the product descriptor. Based on analysis of the one or more identified images, the at least one processing device is also programmed to determine information related to the occurrence of the product descriptor. The at least one processing device is further configured to cause the information and an identifier of the product descriptor to be stored in a memory.
G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
G06K 9/62 - Methods or arrangements for recognition using electronic means
18.
APPARATUS AND METHOD FOR EXECUTING SYSTEM COMMANDS BASED ON CAPTURED IMAGE DATA
An apparatus and method are provided for identifying and executing system commands based on captured image data. In one implementation, a method is provided for executing at least one command retrieved from a captured image. According to the method, image data is received from an image sensor, and the image data may include printed information associated with a specific system command. The method further includes accessing a database including a plurality of predefined system commands associated with printed information, and identifying in the image data an existence of the printed information associated with the specific system command stored in the database. The specific system command is executed after the printed information associated with the specific system command is identified.
G06F 1/16 - ELECTRIC DIGITAL DATA PROCESSING - Details not covered by groups and - Constructional details or arrangements
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
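The database lookup and execution flow can be sketched minimally as below; the `COMMAND_DB` contents and command names are hypothetical placeholders for the predefined system commands of the disclosure.

```python
# Hypothetical database mapping printed trigger text to system commands.
COMMAND_DB = {"POWER OFF": "shutdown", "MUTE": "mute_audio"}

def execute_from_image(recognized_strings, execute):
    """Execute the predefined command whose printed trigger appears in the
    recognized image text; return the command name, or None if no match."""
    for text in recognized_strings:
        command = COMMAND_DB.get(text)
        if command is not None:
            execute(command)
            return command
    return None
```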
An apparatus and method are provided for performing one or more actions based on triggers detected within captured image data. In one implementation, a method is provided for audibly reading text retrieved from a captured image. According to the method, real-time image data is captured from an environment of a user, and an existence of a trigger is determined within the captured image data. In one aspect, the trigger may be associated with a desire of the user to hear text read aloud, and the trigger identifies an intermediate portion of the text located a distance from a level break in the text. The method includes performing a layout analysis on the text to identify the level break associated with the trigger, and reading aloud text beginning from the level break associated with the trigger.
G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
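The level-break search could be sketched as below, assuming for illustration that a "level break" is a blank line between paragraphs; real layout analysis would also consider headings, columns, and visual structure.

```python
def find_level_break(text, trigger_pos):
    """Return the index just after the last paragraph break (blank line)
    preceding the trigger position, so reading starts at a natural boundary."""
    cut = text.rfind("\n\n", 0, trigger_pos)
    return 0 if cut == -1 else cut + 2
```

Reading then begins at `text[find_level_break(text, trigger_pos):]` rather than mid-paragraph at the trigger itself.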
An apparatus connectable to a pair of glasses provides information to a user by processing images captured from the environment of the user. In one embodiment, the apparatus includes a support configured for mounting on the pair of glasses and a sensory unit. The sensory unit includes a housing configured for selective attachment to the support and an image sensor contained within the housing. The support and the housing are configured to enable selective aiming of the image sensor to establish a set aiming direction relative to the pair of glasses. The support and the housing are configured to cooperate with each other such that each time the sensory unit is attached to the support, the image sensor assumes the set aiming direction without need for directional calibration.
A system and method are provided for accelerating machine reading of text. In one embodiment, the system comprises at least one processor device. The processor device is configured to receive at least one image of text to be audibly read. The text includes a first portion and a second portion. The processor device is further configured to initiate optical character recognition (OCR) to recognize the first portion. The processor device is further configured to initiate an audible presentation of the first portion prior to initiating OCR of the second portion, and simultaneously perform OCR to recognize the second portion of the text to be audibly read during presentation of at least part of the first portion. The processor device is further configured to automatically cause the second portion of the text to be audibly presented immediately upon completion of the presentation of the first portion.
G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
G09B 21/00 - Teaching, or communicating with, the blind, deaf or mute
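The overlap of OCR and audible presentation can be sketched with a thread pool. Here `ocr` and `speak` are stand-ins for the real engines, and the log exists only to record ordering: the second portion's OCR runs while the first portion is being presented.

```python
from concurrent.futures import ThreadPoolExecutor

def ocr(portion, log):
    log.append(f"ocr:{portion}")
    return portion.upper()          # stand-in for a real OCR engine

def speak(text, log):
    log.append(f"speak:{text}")     # stand-in for audible presentation

def read_accelerated(first_portion, second_portion):
    """Present the first portion audibly while OCR of the second portion
    runs in parallel, then present the second immediately afterwards."""
    log = []
    first_text = ocr(first_portion, log)
    with ThreadPoolExecutor(max_workers=1) as pool:
        pending = pool.submit(ocr, second_portion, log)  # overlaps speech
        speak(first_text, log)
        speak(pending.result(), log)  # ready (or nearly so) when speech ends
    return log
```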
An apparatus and method are provided for processing images. In one embodiment, the apparatus includes an image sensor configured to capture real time images from an environment of a user. The apparatus also includes a mobile power source, and at least one processor device configured to process, at an initial resolution, images to determine existence of a trigger, and access rules associating image context with image capture resolution to enable images of a first context to be processed at a lower capture resolution than images of a second context. The processor device analyzes at least one first image, selects a first image capture resolution rule, and applies the first image capture resolution rule to a subsequent captured image. The processor device analyzes at least one second image, selects a second image capture resolution rule, and applies the second image capture resolution rule to a second subsequent captured image.
G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
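A sketch of such context-to-resolution rules is below; the context labels and resolutions in `RESOLUTION_RULES` are illustrative assumptions, not values from the disclosure.

```python
# Hypothetical rule table associating image context with capture resolution.
RESOLUTION_RULES = {
    "text_page": (3264, 2448),    # reading fine print needs high resolution
    "face": (1280, 720),
    "outdoor_scene": (640, 480),  # coarse context detection needs less
}

def select_capture_resolution(context, default=(1920, 1080)):
    """Apply the capture-resolution rule matching the identified context."""
    return RESOLUTION_RULES.get(context, default)
```

Capturing low-detail contexts at lower resolution conserves the mobile power source, which is the motivation stated in the abstract.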
23.
APPARATUS AND METHOD FOR PROVIDING FAILED-ATTEMPT FEEDBACK USING A CAMERA ON GLASSES
Apparatuses and a method are provided for providing feedback to a user, who may be visually impaired. In one implementation, a method is provided for providing feedback to a visually impaired user. The method comprises receiving from a mobile image sensor real time image data that includes a representation of an object in an environment of the visually impaired user. The mobile image sensor is configured to be connected to glasses worn by the visually impaired user. Further, the method comprises receiving a signal indicating a desire of the visually impaired user to obtain information about the object. The method also includes accessing a database holding information about a plurality of objects, and comparing information derived from the received real time image data with information in the database. The method comprises providing the visually impaired user with nonvisual feedback that the object is not locatable in the database.
G09B 21/00 - Teaching, or communicating with, the blind, deaf or mute
G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
24.
APPARATUS AND METHOD FOR AUTOMATIC ACTION SELECTION BASED ON IMAGE CONTEXT
Devices and a method are provided for providing context-related feedback to a user. In one implementation, the method comprises capturing real time image data from an environment of the user. The method further comprises identifying in the image data a hand-related trigger. Multiple context-based alternative actions are associated with the hand-related trigger. Further, the method comprises identifying in the image data an object associated with the hand-related trigger. The object is further associated with a particular context. Also, the method comprises selecting one of the multiple alternative actions based on the particular context and executing the selected action. The method further comprises outputting the context-related feedback based on a result of the executed alternative action.
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
G09B 21/00 - Teaching, or communicating with, the blind, deaf or mute
25.
SYSTEMS AND METHODS FOR PROVIDING FEEDBACK BASED ON THE STATE OF AN OBJECT
A device and method are provided for providing feedback based on the state of an object. In one implementation, an apparatus for processing images is provided. The apparatus may include an image sensor configured to capture real time images from an environment of a user and at least one processor device configured to initially process at least one image to determine whether an object is likely to change its state. If a determination is made that the object is unlikely to change its state, the at least one processor device may additionally process the at least one image and provide a first feedback. If a determination is made that the object is likely to change its state, the at least one processor device may continue to capture images of the object and alert the user with a second feedback after a change in the state of the object occurs.
G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
A device and method are provided for automatic control of a continuous action. In one implementation, an apparatus for providing feedback to a user is provided. The apparatus may include an image sensor configured to be positioned for movement with a head of a user. The image sensor may also be configured to capture real time images from an environment of the user as the user's head moves. The apparatus may also include at least one processor device configured to process at least one image to identify an existence of an object within a field of view of the image sensor and to initiate a continuous action associated with the object. The at least one processor device may also be configured to suspend the continuous action associated with the object when the object moves outside the field of view of the image sensor.
G09B 21/00 - Teaching, or communicating with, the blind, deaf or mute
G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
Apparatuses and a method are provided for providing environmental information to a user, who may be visually impaired. In one implementation, a method is provided for receiving real time image data including an object within the user environment. The method includes identifying a hand-related trigger in the image data. Upon identifying the hand-related trigger, the method includes executing a first search in the image data in an attempt to identify at least one object from a first category of objects. After initiating the first search, the method includes executing a second search in the image data in an attempt to identify at least one object from a second category of objects that differs from the first. The method also includes outputting audible feedback to the user associated with information related to an object or objects identified in the image data.
G09B 21/00 - Teaching, or communicating with, the blind, deaf or mute
G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
28.
SYSTEMS AND METHODS FOR AUDIBLE FACIAL RECOGNITION
A device and method are provided for audible facial recognition. In one implementation, an apparatus for aiding a visually impaired user to identify individuals is provided. The apparatus may include a portable image sensor configured to be worn by the visually impaired user and to capture real-time image data from an environment of the user. The apparatus may also include at least one portable processor device configured to determine an existence of face-identifying information in the real-time image data, and access stored facial information and audible indicators. The at least one portable processor device may also be configured to compare the face-identifying information with the stored facial information, and identify a match. Based on the match, the at least one portable processor may be configured to cause an audible indicator to be announced to the visually impaired user.
G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
G09B 21/00 - Teaching, or communicating with, the blind, deaf or mute
29.
SYSTEMS AND METHODS FOR AUDIBLY PRESENTING TEXTUAL INFORMATION INCLUDED IN IMAGE DATA
An apparatus and method are provided for identifying and audibly presenting textual information within captured image data. In one implementation, a method is provided for audibly presenting text retrieved from a captured image. According to the method, at least one image of text is received from an image sensor, and the text may include a first portion and a second portion. The method includes identifying contextual information associated with the text, and accessing at least one rule associating the contextual information with at least one portion of text to be excluded from an audible presentation associated with the text. The method further includes performing an analysis on the at least one image to identify the first portion and the second portion, and causing the audible presentation of the first portion.
G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
G09B 21/00 - Teaching, or communicating with, the blind, deaf or mute
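The rule that associates contextual information with portions of text excluded from the audible presentation can be sketched as a simple lookup; the portion labels and rule format below are hypothetical:

```python
def audible_portions(portions, context, rules):
    """Return only the text portions not excluded for this context (sketch).

    portions: list of (label, text) pairs identified in the image
    rules: maps contextual information to a set of excluded portion labels
    """
    excluded = rules.get(context, set())
    return [text for label, text in portions if label not in excluded]
```

For example, with a rule excluding fine print in an advertisement context, only the first portion (the headline) would be presented audibly.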
A device and method are provided for recognizing text on a curved surface. In one implementation, the device comprises an image sensor configured to capture from an environment of a user multiple images of text on a curved surface. The device also comprises at least one processor device. The at least one processor device is configured to receive a first image of a first perspective of text on the curved surface, receive a second image of a second perspective of the text on the curved surface, perform optical character recognition on at least parts of each of the first image and the second image, combine results of the optical character recognition on the first image and on the second image, and provide the user with a recognized representation of the text, including a recognized representation of a first portion of the text.
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
G06T 3/00 - Geometric image transformation in the plane of the image
G09B 21/00 - Teaching, or communicating with, the blind, deaf or mute
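Combining the OCR results from the two perspectives might, in the simplest case, keep the higher-confidence reading of each word. The per-word confidence scores are an assumption, and a real system would also need to align the two passes before merging:

```python
def combine_ocr(first_pass, second_pass):
    """Merge two aligned per-word OCR passes taken from different
    perspectives, keeping the higher-confidence reading (sketch)."""
    words = []
    for (w1, c1), (w2, c2) in zip(first_pass, second_pass):
        # A word distorted by curvature in one perspective is often
        # recognized cleanly, with higher confidence, in the other.
        words.append(w1 if c1 >= c2 else w2)
    return " ".join(words)
```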
31.
SYSTEMS AND METHODS FOR PERFORMING A TRIGGERED ACTION
A device and method are provided for performing a triggered action. In one implementation, an apparatus for processing real time images of an environment of a user is provided. The apparatus may include an image sensor configured to capture image data for providing a plurality of sequential images of the environment of the user. The apparatus may also include at least one processor device configured to identify a trigger associated with a desire of the user to cause at least one pre-defined action associated with an object. The trigger may include an erratic movement of the object. In response to identification of the trigger, the at least one processor device may also be configured to identify a captured representation of the object. Based on at least the captured representation of the object, the at least one processor device may be configured to execute the at least one pre-defined action.
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
G06F 17/30 - Information retrieval; Database structures therefor
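One simple heuristic for the "erratic movement" trigger is to compare the object's total path length against its net displacement across tracked positions: a waved object travels far while ending up near where it started. The 2x ratio threshold and the use of 2-D points encoded as complex numbers are illustrative assumptions:

```python
def is_erratic(positions, threshold=2.0):
    """Flag erratic motion across sequential images (hypothetical heuristic).

    positions: tracked 2-D object centers, encoded as complex numbers so
    that abs(b - a) gives the Euclidean distance between frames.
    """
    if len(positions) < 3:
        return False
    path = sum(abs(b - a) for a, b in zip(positions, positions[1:]))
    net = abs(positions[-1] - positions[0])
    # Erratic if the path is much longer than the net displacement
    return path > threshold * max(net, 1e-9)
```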
Devices and a method are provided for providing feedback to a user. In one implementation, the method comprises obtaining a plurality of images from an image sensor. The image sensor is configured to be positioned for movement with the user's head. The method further comprises monitoring the images, and determining whether relative motion occurs between a first portion of a scene captured in the plurality of images and other portions of the scene captured in the plurality of images. If the first portion of the scene moves less than at least one other portion of the scene, the method comprises obtaining contextual information from the first portion of the scene. The method further comprises providing the feedback to the user based on at least part of the contextual information.
G06F 1/16 - ELECTRIC DIGITAL DATA PROCESSING - Details not covered by groups and - Constructional details or arrangements
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
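Selecting the first portion of the scene, i.e. the one that moves less than every other portion, can be sketched as follows, assuming per-region motion magnitudes have already been estimated (e.g. by optical flow); the region naming is hypothetical:

```python
def stable_region(motion_by_region):
    """Return the scene region with the least inter-frame motion (sketch).

    With a head-mounted sensor, the region the user keeps steady in view
    is the likely target for extracting contextual information.
    """
    region, motion = min(motion_by_region.items(), key=lambda kv: kv[1])
    # Only report a region that strictly moves less than all other regions
    if all(motion < m for r, m in motion_by_region.items() if r != region):
        return region
    return None
```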
A device and method are provided for adjusting image capture settings. In one implementation, a wearable apparatus may include at least one wearable image sensor configured to capture a plurality of images from an environment of a user. The wearable apparatus may also include at least one processing device configured to identify, in at least one representation of at least one of the plurality of images, an existence of at least one visual trigger in the environment of the user, determine, based on a type of the at least one visual trigger, a value for at least one capturing parameter, cause the image sensor to capture at least one subsequent image according to at least the value of the at least one capturing parameter, determine a data size for storing at least one representation of the at least one subsequent image, and store, in a memory, the at least one representation of the at least one subsequent image.
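Determining a capturing-parameter value from the type of visual trigger reduces to a mapping; the trigger types and parameter values below are illustrative assumptions, not disclosed in the application:

```python
# Hypothetical mapping from visual-trigger type to capturing parameters
TRIGGER_PARAMS = {
    "text": {"resolution": "high", "color": False},  # favor OCR legibility
    "face": {"resolution": "medium", "color": True},
}
DEFAULT_PARAMS = {"resolution": "low", "color": True}

def capture_params(trigger_type):
    """Select capturing-parameter values for the subsequent image (sketch)."""
    return TRIGGER_PARAMS.get(trigger_type, DEFAULT_PARAMS)
```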
A device and method are provided for processing images to prolong battery life. In one implementation, a wearable apparatus may include a wearable image sensor configured to capture a plurality of images from an environment of a user. The wearable apparatus may also include at least one processing device configured to, in a first processing-mode, process representations of the plurality of images to determine a value of at least one capturing parameter for use in capturing at least one subsequent image, and in a second processing-mode, process the representations of the plurality of images to extract information. In addition, the at least one processing device may operate in the first processing-mode when the wearable apparatus is powered by a mobile power source included in the wearable apparatus and may operate in the second processing-mode when the wearable apparatus is powered by an external power source.
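The power-dependent mode selection described above reduces to a simple branch; the mode labels are hypothetical names for the two processing-modes:

```python
def processing_mode(powered_by_mobile_source):
    """Select the processing-mode from the power source (hypothetical labels).

    On the internal mobile power source, run only the cheap first mode that
    tunes capturing parameters; on external power, run the expensive second
    mode that extracts information from the image representations.
    """
    if powered_by_mobile_source:
        return "tune_capture_parameters"
    return "extract_information"
```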