The image capture device includes a receptacle defined within a housing and an optic system that is removably connected with the receptacle and configured to generate thermal energy. The image capture device includes a heatsink positioned at a base of the receptacle. The heatsink includes an inner support positioned within the housing and an outer support in thermal communication with the optic system and the inner support. The heatsink further includes a gasket integrated with the receptacle to form a watertight seal, sandwiched between portions of the inner and outer supports, and configured to flexibly retain a physical connection between the optic system and the outer support.
G03B 17/14 - Bodies with means for supporting objectives, supplementary lenses, filters, masks or turrets interchangeably
H04N 23/57 - Cameras or camera modules comprising electronic image sensors; Control thereof - Mechanical or electrical details of cameras or camera modules specially adapted for being embedded in other devices
H05K 1/18 - Printed circuits structurally associated with non-printed electric components
2.
COUPLERS FOR IMAGE CAPTURE DEVICES AND ACCESSORIES
A coupler (mount) is disclosed that is configured to releasably connect an image capture device and an accessory. The coupler includes a shaft and a lever that is operatively connected to the shaft and is repositionable between unlocked and locked positions as well as between disengaged and engaged positions. When the coupler is in the unlocked position, the image capture device and the accessory are movable in relation to each other, and when the coupler is in the locked position, the image capture device and the accessory are fixed in relation to each other. When the coupler is in the disengaged position, the shaft is removable from the image capture device and the accessory, and when the coupler is in the engaged position, the shaft is non-removable from the image capture device and the accessory.
F16B 21/16 - Means without screw-thread for preventing relative axial movement of a pin, spigot, shaft or the like and a member surrounding it; Stud-and-socket releasable fastenings without screw-thread with grooves or notches in the pin or shaft
A computer accesses a feed of image frames from a camera. The computer determines a future image height based on at least a first image frame from the feed, electronic image stabilization (EIS) data, and inertial measurement unit (IMU) data. The computer enables or disables, for at least a second image frame from the feed, image lines on a sensor of the camera to obtain the determined future image height.
H04N 23/68 - Control of cameras or camera modules for stable pick-up of the scene, e.g. compensating for camera body vibrations
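The abstract above describes sizing a future frame from EIS and IMU data, then enabling only the sensor lines needed. A minimal sketch of that idea is below; the margin model, the `rate_to_margin` scale, and both function names are illustrative assumptions, not the patented method.

```python
def future_image_height(base_height, eis_margin_frac, angular_rate, rate_to_margin=0.02):
    """Estimate the sensor-line height a future frame will need.

    Hypothetical model: the EIS crop margin plus a motion-dependent
    allowance (angular_rate, deg/s, scaled by rate_to_margin) decides
    how many extra lines the stabilizer gets to crop into.
    """
    extra = eis_margin_frac + angular_rate * rate_to_margin
    return min(base_height * 2, int(base_height * (1.0 + extra)))


def line_enable_mask(sensor_lines, target_height):
    """Enable a centered band of sensor lines matching the target height;
    lines outside the band stay disabled to save readout time and power."""
    start = max(0, (sensor_lines - target_height) // 2)
    end = min(sensor_lines, start + target_height)
    return [start <= i < end for i in range(sensor_lines)]
```

For example, a steady camera with a 50% EIS margin on a 1000-line sensor would request 1500 lines, centered on the array.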
4.
METHODS AND APPARATUS FOR REAL-TIME GUIDED ENCODING
Systems, apparatus, and methods for real-time guided encoding. In one exemplary embodiment, an image processing pipeline (IPP) is implemented within a system-on-a-chip (SoC) that includes multiple stages, ending with a codec. The codec compresses video obtained from the previous stages into a bitstream for storage within removable media (e.g., an SD card) or transport (e.g., over Wi-Fi, Ethernet, or a similar network). While most hardware implementations of real-time encoding allocate bit rate based on a limited look-forward (or look-backward) of the data in the current pipeline stage, the exemplary IPP leverages real-time guidance that was collected during the previous stages of the pipeline.
H04N 19/85 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using pre-processing or post-processing specially adapted for video compression
H04N 19/117 - Filters, e.g. for pre-processing or post-processing
H04N 19/139 - Analysis of motion vectors, e.g. their magnitude, direction, variance or precision
H04N 19/527 - Global motion vector estimation
H04N 23/68 - Control of cameras or camera modules for stable pick-up of the scene, e.g. compensating for camera body vibrations
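The guided-encoding idea above amounts to weighting the bit budget by statistics gathered upstream rather than by a short look-ahead at the codec. A toy sketch of such an allocator follows; the `complexity` guidance field and the proportional split are illustrative assumptions.

```python
def allocate_bits(frame_budget, guidance):
    """Split a per-GOP bit budget across frames in proportion to
    hypothetical per-frame 'complexity' scores collected by earlier
    IPP stages (e.g., from motion or noise statistics)."""
    total = sum(g["complexity"] for g in guidance)
    if total == 0:
        # No guidance signal: fall back to an even split.
        share = frame_budget // len(guidance)
        return [share] * len(guidance)
    return [int(frame_budget * g["complexity"] / total) for g in guidance]
```

A frame flagged three times as complex as its neighbor would receive three times the bits, which a look-ahead-only rate controller could not know in advance.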
5.
MULTI-SUPPORT ACCESSORY WITH INTEGRATED POWER SUPPLY FOR IMAGE CAPTURE DEVICES
An accessory for an image capture device is disclosed that includes: a body; a rotatable first support that is configured for connection to the image capture device; a second support that is pivotably reconfigurable between stowed and deployed configurations; and a third support including first and second legs that are pivotably connected to the body such that the third support is reconfigurable between collapsed and expanded configurations. In the stowed configuration, the second support is concealed within the body, and in the deployed configuration, the second support extends outwardly from the body to facilitate connection of the accessory to an ancillary product. In the collapsed configuration, the third support defines a grip for the image capture device, and in the expanded configuration, the body defines a third leg that cooperates with the first and second legs to provide a freestanding base for the image capture device.
Methods and systems for auto-detecting and auto-connecting communication protocols with respect to an image capture device connected to an accessory device via an interface cable. A method for seamless connectivity includes automatically detecting, by one of an image capture device and an accessory device, a wired connection between the image capture device and the accessory device via an interface cable; automatically initiating, by the one of the image capture device and the accessory device, processing associated with a communication protocol supported by the image capture device and the accessory device; and automatically connecting the image capture device to the accessory device via the communication protocol when the processing is complete.
Devices for active athermalization and lens position indexing include an image capture device that includes a lens assembly, a lens mount, a memory, a printed circuit board (PCB), an image sensor, a temperature sensor, an actuator, and a processor. The memory stores a calibration look up table (LUT) that includes focus positions across a temperature range. The image sensor may be disposed on the PCB. The temperature sensor measures a temperature of the lens assembly. The processor determines a position of the lens assembly relative to the image sensor to maintain a focus point over the temperature range based on the calibration LUT, the measured temperature, or both. The processor may be configured to transmit a control signal to the actuator to modify the position of the lens assembly relative to the image sensor to maintain the focus point based on the measured temperature.
G02B 7/04 - Mountings, adjusting means or light-tight connections for optical elements for lenses with mechanism for focusing or varying magnification
G02B 7/28 - Systems for automatic generation of focusing signals
Modular lens assemblies and mounting systems are disclosed. A modular lens assembly includes a removable portion and a fixed portion. The removable portion includes a first lens stack configured to produce a near-collimated ray path or a collimated ray path. The fixed portion includes a second lens stack configured to receive the near-collimated ray path or the collimated ray path from the removable portion. The modular lens assembly may be implemented in an image capture device. An image sensor of the image capture device is positioned at an end of the modular lens assembly. The image sensor is configured to capture images based on light incident on the image sensor through the first lens stack and the second lens stack such that the light incident on an outer lens of the first lens stack is refracted through the second lens stack to the image sensor.
G02B 15/12 - Optical objectives with means for varying magnification by changing, adding or subtracting a part of the objective, e.g. convertible objective by adding a part, e.g. close-up attachment, by adding telescopic attachments
G03B 17/12 - Bodies with means for supporting objectives, supplementary lenses, filters, masks or turrets
9.
ACTUATOR LOCKING MECHANISM FOR IMAGE CAPTURE DEVICE
A system includes a barrel mount disposed within a body, and the barrel mount includes a central axis. The system includes a lens barrel secured within the barrel mount and aligned with the central axis. The system includes an actuator that adjusts a position of the lens barrel by moving the barrel mount along the central axis. The system includes an actuator locking mechanism securable over the barrel mount and/or the lens barrel that prevents movement of the actuator and/or barrel mount when applying a force along the central axis towards the actuator.
G02B 7/10 - Mountings, adjusting means or light-tight connections for optical elements for lenses with mechanism for focusing or varying magnification by relative axial movement of several lenses, e.g. variable focal-length objective lenses
Accessory lens structures for cameras are described. For example, an image capture device may include a mother lens assembly including a first stack of lenses; an image sensor positioned at a first end of the mother lens assembly and configured to detect images based on light incident on the image sensor through the first stack of lenses; a conversion lens assembly including a second stack of lenses, wherein the second stack of lenses is afocal; and a conversion lens mounting apparatus configured to removably attach the conversion lens assembly to the image capture device in a position over a second end of the mother lens assembly, opposite from the image sensor, such that light incident on an outer lens of the second stack of lenses will be refracted through the second stack of lenses and the first stack of lenses to the image sensor.
G02B 15/12 - Optical objectives with means for varying magnification by changing, adding or subtracting a part of the objective, e.g. convertible objective by adding a part, e.g. close-up attachment, by adding telescopic attachments
G03B 17/14 - Bodies with means for supporting objectives, supplementary lenses, filters, masks or turrets interchangeably
11.
WIDE ANGLE ADAPTER LENS FOR ENHANCED VIDEO STABILIZATION
An image capture system for enhanced electronic image stabilization (EIS) includes an image capture device and an adapter lens. The image capture device includes an image sensor, a lens housing, a processor, and a lens assembly that includes a first group of optical elements disposed within the lens housing. The first group of optical elements is used to project an image onto the image sensor. The processor performs EIS. The adapter lens is used to enhance EIS of the image capture device. The adapter lens has an adapter lens housing that interfaces with the lens housing. The adapter lens has a second group of optical elements disposed within the adapter lens housing. The second group of optical elements is used to project the image as an image circle on the image sensor.
An image capture device (100) is disclosed that includes a body (102) and a door assembly (400) that is configured for removable connection to the body (102). The door assembly (400) includes a door body (500); a locking mechanism (600) that is slidable in relation to the door body (500) between a locked position and an unlocked position; and at least one biasing member (700) that is configured for engagement (contact) with the door body (500) and the locking mechanism (600) to automatically move the locking mechanism (600) into the locked position upon closure of the door assembly (400).
G03B 17/00 - APPARATUS OR ARRANGEMENTS FOR TAKING PHOTOGRAPHS OR FOR PROJECTING OR VIEWING THEM; APPARATUS OR ARRANGEMENTS EMPLOYING ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ACCESSORIES THEREFOR - Details of cameras or camera bodies; Accessories therefor
13.
IMPROVED MICROPHONE FUNCTIONALITY IN WET CONDITIONS
An image capture device is equipped with a microphone assembly that eliminates the need for a complex drain channel that facilitates water flow away from a microphone after submersion. The image capture device includes a housing, a microphone, a membrane, a microphone port, and a mesh. The microphone is disposed in an internal portion of the housing. The membrane includes an active portion, a non-active portion, or both. A first surface of the non-active portion of the membrane is coupled to an outer surface of the housing. The microphone port is disposed between the active portion of the membrane and the microphone. The mesh is coupled to the outer surface of the housing, the non-active portion of the membrane, or both.
An image capture device is described that includes a body; a mounting structure that is connected to the body; an integrated sensor-lens assembly (ISLA) that defines an optical axis and extends through the body and the mounting structure; and an accessory that is releasably connectable to the mounting structure via rotation through a range of motion less than approximately 90 degrees. The mounting structure and the accessory include corresponding angled bearing surfaces that are configured for engagement such that rotation of the accessory relative to the mounting structure creates a bearing effect that displaces the accessory along the optical axis to thereby reduce any axial force required during connection and disconnection of the accessory.
G02B 15/10 - Optical objectives with means for varying magnification by changing, adding or subtracting a part of the objective, e.g. convertible objective by adding a part, e.g. close-up attachment
G03B 17/14 - Bodies with means for supporting objectives, supplementary lenses, filters, masks or turrets interchangeably
An image capture device includes a first component that is configured to provide thermal energy. A heatsink is spaced a distance apart from the first component, and a second component is positioned between the first component and the heatsink. A conductor contacts the first component and the heatsink, and the conductor extends from the first component along sides of the second component to the heatsink.
G03B 17/55 - APPARATUS OR ARRANGEMENTS FOR TAKING PHOTOGRAPHS OR FOR PROJECTING OR VIEWING THEM; APPARATUS OR ARRANGEMENTS EMPLOYING ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ACCESSORIES THEREFOR - Details of cameras or camera bodies; Accessories therefor with provision for heating or cooling, e.g. in aircraft
A housing assembly for an image capture device is disclosed. The housing assembly includes: a front housing portion defining an opening; a rear housing portion that is connected (secured) to the front housing portion so as to form a watertight seal therebetween; a mounting structure configured as a discrete component that is separate from the front housing portion and the rear housing portion and connected (secured) to the front housing portion adjacent to the opening; a first sealing member that is positioned between the mounting structure and the front housing portion and configured to form a watertight seal therebetween; an integrated sensor-lens assembly (ISLA) that is connected (secured) to the mounting structure such that the ISLA extends through the opening in the front housing portion; and a second sealing member that is positioned between the ISLA and the mounting structure and configured to form a watertight seal therebetween.
An image capture device includes a heatsink having a cutout within the heatsink. The image capture device also includes a housing, a mounting structure located on an external side of the housing, and an integrated sensor and lens assembly (ISLA) extending through the cutout in the heatsink and connecting to the mounting structure. The ISLA is free of contact with the heatsink. The heatsink can include mounting flanges to support components including printed circuit boards and battery cages.
G03B 17/55 - APPARATUS OR ARRANGEMENTS FOR TAKING PHOTOGRAPHS OR FOR PROJECTING OR VIEWING THEM; APPARATUS OR ARRANGEMENTS EMPLOYING ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ACCESSORIES THEREFOR - Details of cameras or camera bodies; Accessories therefor with provision for heating or cooling, e.g. in aircraft
Systems and methods are disclosed for sensor prioritization for composite image capture. For example, methods may include selecting an image sensor as a prioritized sensor from among an array of two or more image sensors; determining one or more image processing parameters based on one or more images captured using the prioritized sensor; applying image processing using the one or more image processing parameters to images captured with each image sensor in the array of two or more image sensors to obtain respective processed images for the array of two or more image sensors; and stitching the respective processed images for the array of two or more image sensors to obtain a composite image.
H04N 5/369 - Transformation of light or analogous information into electric information using solid-state image sensors [SSIS sensors]; Circuits associated therewith
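The sensor-prioritization flow above (derive parameters from one sensor, apply them to all, then stitch) can be sketched as follows. The grey-world-style gain, the one-dimensional "images", and the concatenation stitch are deliberately simplified stand-ins, not the claimed processing.

```python
def derive_params(image):
    """Derive a hypothetical global gain from the prioritized sensor so
    its mean level lands at 0.5 (grey-world style normalization)."""
    mean = sum(image) / len(image)
    return {"gain": 0.5 / mean if mean else 1.0}


def process(image, params):
    """Apply the shared parameters to one sensor's image, clipping at 1.0."""
    return [min(1.0, p * params["gain"]) for p in image]


def composite(images):
    """Process every sensor with the prioritized sensor's parameters,
    then stitch (here: naively concatenate) into one composite."""
    prioritized = images[0]              # e.g., a designated front sensor
    params = derive_params(prioritized)  # one parameter set for the array
    processed = [process(im, params) for im in images]
    return [p for im in processed for p in im]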
19.
IMAGE SENSOR WITH VARIABLE EXPOSURE TIME AND GAIN FACTOR
An image capture system includes an image sensor which has pixel lines, a reset circuit, a readout circuit, and a processor. The processor is configured to determine a first exposure time for a first portion of the image information, determine a second exposure time for a second portion of the image information, and variably control a pixel line exposure time for a selected line of pixels by signaling the reset circuit and the readout circuit at different times which are based on the first exposure time and the second exposure time, where the first exposure time, the second exposure time, and the pixel line exposure time are elapsed times between the reset time and the read time. The image sensor further includes a gain element which is pixel line variably controlled using a first gain factor for the first portion and a second gain factor for the second portion.
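The timing relationship above (per-line exposure equals the elapsed time between a line's reset and its readout, with a per-portion gain) is simple enough to show directly. This sketch assumes arbitrary time units and toy pixel values; both function names are illustrative.

```python
def line_exposures(reset_times, read_times):
    """Per-line exposure is the elapsed time between that line's reset
    and its readout, so staggering reset times yields exposure zones."""
    return [read - reset for reset, read in zip(reset_times, read_times)]


def apply_gain(lines, split, gain_a, gain_b):
    """Pixel-line-variable gain: gain_a for the first portion of lines,
    gain_b for the second portion."""
    return [v * (gain_a if i < split else gain_b) for i, v in enumerate(lines)]
```

Resetting the bottom half of the array later than the top half, with all lines read at the same time, gives the bottom half a shorter exposure which the second gain factor can compensate.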
An image capture device is disclosed that includes a body defining a peripheral cavity and a removable door assembly that is configured to close and seal the peripheral cavity. The door assembly includes a door body; a locking mechanism; and a biasing member that is configured for engagement with the locking mechanism to resist unlocking of the locking mechanism until a threshold force is applied, at which time, the biasing member is moved from a normal position, in which the biasing member extends at a first angle in relation to the locking mechanism, to a deflected position, in which the biasing member extends at a second angle in relation to the locking mechanism. When locked, the door assembly is rotationally fixed in relation to the body of the image capture device, and when unlocked, the door assembly is rotatable in relation to the body of the image capture device.
An image capture device is disclosed that includes a body defining a peripheral cavity and a door assembly that is movable between an open position and a closed position to close and seal the peripheral cavity. The door assembly includes a door body; a slider that is supported by the door body for axial movement between a first position and a second position; a biasing member that is configured for engagement (contact) with the slider; a door lock including a stop that is configured for engagement (contact) with the biasing member; and a sealing member that is fixedly connected to the door lock.
Systems and processes for cameras with a reconfigurable lens assembly are described. For example, some methods include automatically detecting that an accessory lens structure has been mounted to an image capture device including a mother lens and an image sensor configured to detect light incident through the mother lens, such that an accessory lens of the accessory lens structure is positioned covering the mother lens; responsive to detecting that the accessory lens structure has been mounted, automatically identifying the accessory lens from among a set of multiple supported accessory lenses; accessing an image captured using the image sensor when the accessory lens structure is positioned covering the mother lens; determining a warp mapping based on identification of the accessory lens; applying the warp mapping to the image to obtain a warped image; and transmitting, storing, or displaying an output image based on the warped image.
G02B 15/02 - Optical objectives with means for varying magnification by changing, adding or subtracting a part of the objective, e.g. convertible objective
G03B 17/14 - Bodies with means for supporting objectives, supplementary lenses, filters, masks or turrets interchangeably
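The reconfigurable-lens flow above (identify the mounted accessory lens, look up its warp mapping, apply it) can be reduced to a lookup-and-remap sketch. The `WARPS` registry, its lens identifiers, and the toy coordinate mappings are all hypothetical.

```python
# Hypothetical registry: one warp mapping per supported accessory lens.
WARPS = {
    "macro":   lambda x, y: (x, y),              # identity, for illustration
    "fisheye": lambda x, y: (x * 0.5, y * 0.5),  # toy de-fisheye shrink
}


def apply_warp(image, lens_id):
    """Warp an image (here a sparse dict of (x, y) -> value) using the
    mapping selected by the identified accessory lens."""
    warp = WARPS[lens_id]
    return {warp(x, y): v for (x, y), v in image.items()}
```

In a real pipeline the warp would be a dense remap table chosen by the detected lens ID; the dictionary form just makes the selection step explicit.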
23.
APPARATUS AND METHODS FOR PRE-PROCESSING AND STABILIZATION OF CAPTURED IMAGE DATA
Apparatus and methods for the pre-processing of image data so as to enhance quality of subsequent encoding and rendering. In one embodiment, a capture device is disclosed that includes a processing apparatus and a non-transitory computer readable apparatus comprising a storage medium having one or more instructions stored thereon. The one or more instructions, when executed by the processing apparatus, are configured to: receive captured image data (such as that sourced from two or more separate image sensors) and pre-process the data to enable stabilization of the corresponding images prior to encoding. In some implementations, the pre-processing includes combination (e.g., stitching) of the captured image data associated with the two or more sensors to facilitate the stabilization. Advantageously, undesirable artifacts such as object "jitter" can be reduced or eliminated. Methods and non-transitory computer readable apparatus are also disclosed.
Apparatus and methods for enabling indexing and playback of media content before the end of a content capture. In one aspect, a method for enabling indexing of media data obtained as part of a content capture is disclosed. In one embodiment, the indexing enables playback of the media data during the capture and before cessation thereof. In one variant, the method includes generating an "SOS track" for one or more images. The SOS track does not contain the same information as a full index, but provides sufficient information to allow an index to be subsequently constructed. In one implementation, the provided information includes identifiable markers relating to video data, audio data, or white space, but it does not provide an enumerated or complete "table of contents" as in a traditional index.
G11B 27/00 - Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
G11B 27/10 - Indexing; Addressing; Timing or synchronising; Measuring tape travel
G11B 27/28 - Indexing; Addressing; Timing or synchronising; Measuring tape travel by using information detectable on the record carrier by using information signals recorded by the same method as the main recording
G11B 27/30 - Indexing; Addressing; Timing or synchronising; Measuring tape travel by using information detectable on the record carrier by using information signals recorded by the same method as the main recording on the same track as the main recording
G11B 27/32 - Indexing; Addressing; Timing or synchronising; Measuring tape travel by using information detectable on the record carrier by using information signals recorded by the same method as the main recording on separate auxiliary tracks of the same or an auxiliary record carrier
G11B 27/36 - Monitoring, i.e. supervising the progress of recording or reproducing
H04N 21/20 - Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
H04N 21/40 - Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
H04N 5/232 - Devices for controlling television cameras, e.g. remote control
A video may be captured by an image capture device in motion. A stabilized view of the video may be generated by providing a punchout of the video. The punchout of the video may compensate for rotation of the image capture device during capture of the video. Different field of view punchouts, such as wide field of view punchout and linear field of view punchout, may be used to stabilize the video. Different field of view punchouts may provide different stabilization margins to stabilize the video. The video may be stabilized by switching between different field of view punchouts based on the amount of stabilization margin needed to stabilize the video.
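The switching logic described above (use the narrower linear punchout when its stabilization margin suffices, otherwise fall back to the wide punchout with its larger margin) is a one-line decision. The margin fractions below are illustrative defaults, not values from the source.

```python
def choose_punchout(required_margin, linear_margin=0.1, wide_margin=0.25):
    """Pick a field-of-view punchout for stabilization.

    required_margin is the fraction of the frame the stabilizer needs to
    crop into to compensate measured rotation; the linear punchout offers
    less margin than the wide punchout (both values are illustrative).
    """
    if required_margin <= linear_margin:
        return "linear"   # enough headroom for the narrower punchout
    return "wide"         # more motion: switch to the larger margin
```

Gentle motion keeps the linear framing; a hard shake for a few frames temporarily switches the output to the wide punchout.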
An image capture system includes an image capture device and an integrated sensor-optical component accessory. The integrated sensor-optical component accessory is releasably connectable to the image capture device and includes identification data. A processor in the image capture device configures itself and the image capture device based on the identification data. Image data from the integrated sensor-optical component accessory is processed, and image data from the image capture device is either processed or ignored depending on the configuration. Attachment information may also be used for configuration. Multiple integrated sensor-optical component accessories may be connected to the image capture device. The center axis of the fields of view of the image capture device and the integrated sensor-optical component accessory may be in different directions or the same direction, and the fields of view may be overlapping or non-overlapping.
H04N 5/232 - Devices for controlling television cameras, e.g. remote control
G03B 17/14 - Bodies with means for supporting objectives, supplementary lenses, filters, masks or turrets interchangeably
27.
METHODS AND APPARATUS FOR MULTI-ENCODER PROCESSING OF HIGH RESOLUTION CONTENT
Methods and apparatus for multi-encoder processing of high resolution content. In one embodiment, the method includes capturing high resolution imaging content; splitting up the captured high resolution imaging content into respective portions; feeding the split up portions to respective imaging encoders; packing encoded content from the respective imaging encoders into an A/V container; and storing and/or transmitting the A/V container. In another embodiment, the method includes retrieving and/or receiving an A/V container; splitting up the retrieved and/or received A/V container into respective portions; feeding the split up portions to respective imaging decoders; stitching the decoded imaging portions into a common imaging portion; and storing and/or displaying at least a portion of the common imaging portion.
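The encode path above (split high-resolution content into portions, feed each portion to its own encoder, pack the results into one A/V container) can be sketched as below. The horizontal-slice split, the stub encoder, and the container dict are illustrative assumptions standing in for real codec and container APIs.

```python
def split_frame(frame, n):
    """Split a frame (a list of pixel rows) into n horizontal slices,
    one per hardware encoder."""
    rows_per = len(frame) // n
    return [frame[i * rows_per:(i + 1) * rows_per] for i in range(n)]


def encode(portion):
    """Stand-in for an imaging encoder: tag the slice with its row count."""
    return {"rows": len(portion), "payload": portion}


def pack_container(frame, n_encoders):
    """Feed the slices to the encoders and pack the encoded portions
    into a single A/V container (here, one track per encoder)."""
    return {"tracks": [encode(p) for p in split_frame(frame, n_encoders)]}
```

The decode path in the abstract mirrors this: unpack the tracks, decode each, and stitch the portions back into a common image.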
Apparatus and methods for optimized stitching of overcapture content. In one embodiment, the optimized stitching of the overcapture content includes capturing the overcapture content; producing overlap bands associated with the captured overcapture content; downsampling the produced overlap bands; generating derivative images from the downsampled overlap bands; generating a cost map associated with the generated derivative images; determining shortest path information for the generated cost map; generating a warp file based on the determined shortest path information, the generated warp file being utilized for the optimized stitching of the overcapture content. Camera apparatus and a non-transitory computer-readable apparatus are also disclosed.
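One step in the stitching pipeline above, finding shortest-path information through the cost map, is a classic dynamic program over the overlap band. The sketch below returns only the minimum path cost (a real implementation would also backtrack the seam for the warp file); column-adjacent moves are an assumption.

```python
def shortest_path(cost):
    """Minimum-cost top-to-bottom path through a cost map (rows x cols),
    moving to the same or an adjacent column each row, as in seam finding."""
    rows, cols = len(cost), len(cost[0])
    acc = list(cost[0])  # accumulated cost of reaching row 0
    for r in range(1, rows):
        # Each cell can be reached from the cell above it or a diagonal neighbor.
        acc = [cost[r][c] + min(acc[max(0, c - 1):min(cols, c + 2)])
               for c in range(cols)]
    return min(acc)
```

On a map whose cheap cells form a diagonal, the path follows the diagonal rather than any single column.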
Systems and methods are disclosed for high dynamic range processing based on angular rate measurements. For example, methods may include receiving a short exposure image that was captured using an image sensor; receiving a long exposure image that was captured using the image sensor; receiving an angular rate measurement captured using an angular rate sensor attached to the image sensor during exposure of the long exposure image; determining, based on the angular rate measurement, whether to apply high dynamic range processing to an image portion of the short exposure image and the long exposure image; and, responsive to a determination not to apply high dynamic range processing to the image portion, selecting the image portion of the short exposure image for use as the image portion of an output image and discarding the image portion of the long exposure image.
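The decision described above reduces to a per-portion test: if the camera rotated too fast during the long exposure, keep the short exposure alone; otherwise fuse the two. The threshold, blend weight, and scalar "portions" below are illustrative assumptions.

```python
def fuse_portion(short_px, long_px, angular_rate, max_rate=30.0, w=0.5):
    """Blend short and long exposures only when the camera was steady
    during the long exposure; otherwise keep the short exposure alone
    to avoid motion-blur ghosting. max_rate (deg/s) is illustrative."""
    if abs(angular_rate) > max_rate:
        return short_px  # skip HDR: discard the long-exposure portion
    return w * short_px + (1 - w) * long_px
```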
A face located along a stitch line in a spherical image is detected by rendering views of regions of the spherical image along the stitch line. The spherical image may be produced by combining first and second images. A first view of a projection of the spherical image is rendered. A scaling factor for rendering a second view of the projection is determined based on characteristics of the first portion of the face. The second view is then rendered according to the scaling factor. The use of the scaling factor to render the second view causes a change in the depiction of the second portion of the face. For example, the scaling factor can indicate to change the resolution or expected size of the second portion of the face when rendering the second view. A face is then detected within the spherical image based on the rendered first and second views.
G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
G03B 37/04 - Panoramic or wide-screen photography; Photographing extended surfaces, e.g. for surveying; Photographing internal surfaces, e.g. of pipes, with cameras or projectors providing touching or overlapping fields of view
G06T 3/00 - Geometric image transformations in the plane of the image
G06T 3/40 - Scaling of a whole image or part of an image
H04N 5/232 - Devices for controlling television cameras, e.g. remote control
31.
DIGITAL IMAGE CAPTURING DEVICES INCLUDING LENS MODULES WITH DUAL ACTIVELY-ALIGNED INTEGRATED SENSOR-LENS ASSEMBLIES
In one aspect of the present disclosure, a digital image capturing device (DICD) is described that includes a first integrated sensor-lens assembly (ISLA) defining a first optical axis and facing in a first direction; a second ISLA defining a second optical axis offset from the first optical axis and facing in a second direction generally opposite the first direction (i.e., such that the second ISLA is rotated approximately 180 degrees from the first ISLA); and a bridge member that is positioned between the first and second ISLAs to fixedly secure together the first and second ISLAs. The bridge member is configured as a discrete structure (i.e., as being separate from both the first ISLA and the second ISLA), and defines a longitudinal axis that is generally parallel in relation to the first and second optical axes.
G02B 7/00 - Montures, moyens de réglage ou raccords étanches à la lumière pour éléments optiques
G03B 37/04 - Photographie panoramique ou à grand écran; Photographie de surfaces étendues, p.ex. pour la géodésie; Photographie de surfaces internes, p.ex. de tuyaux avec appareils ou projecteurs qui permettent la juxtaposition ou le recouvrement partiel des champs de vision
An expansion module includes an expansion accessory and expansion fastening structures that removably couple the expansion module to retention features of a housing of an image capture device when an access door of the image capture device is in a removed position. The expansion module also includes an expansion communication interface that couples to an imaging communication interface of the image capture device when the access door of the image capture device is in the removed position. The retention features include hinge structures and cavities defined in the housing of the image capture device. The expansion fastening structures comprise hinge structures and latch structures that engage the retention features of the housing of the image capture device.
An image capture device may detect and repair banding artifacts in a video. The image capture device may include an image sensor and an image processor. The image sensor may capture a frame that includes a sinusoidal light waveform banding artifact. The image processor may detect a sinusoidal light waveform in the frame. The image processor may perform a sinusoidal regression. The image processor may obtain an inverted gain map. The image processor may apply the inverted gain map to the frame. The image processor may output the frame.
H04N 5/335 - Transformation d'informations lumineuses ou analogues en informations électriques utilisant des capteurs d'images à l'état solide [capteurs SSIS]
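The repair loop in the banding-artifact abstract above — fit a sinusoid to the banding, then cancel it with an inverted gain map — can be sketched on per-row means. This is a minimal illustration assuming the flicker period is known (e.g., from the mains frequency and rolling-shutter timing); the function names and the projection-based regression are assumptions, not the disclosed method.

```python
import math

def fit_row_flicker(row_means, period):
    """Least-squares fit of a known-period sinusoid to per-row means by
    projecting onto sine/cosine basis functions (illustrative sketch)."""
    n = len(row_means)
    mean = sum(row_means) / n
    w = 2.0 * math.pi / period
    a = 2.0 * sum((v - mean) * math.sin(w * r) for r, v in enumerate(row_means)) / n
    b = 2.0 * sum((v - mean) * math.cos(w * r) for r, v in enumerate(row_means)) / n
    return mean, math.hypot(a, b), math.atan2(b, a)  # mean, amplitude, phase

def inverted_gain(row, mean, amp, phase, period):
    """Per-row gain that cancels the fitted sinusoidal banding."""
    w = 2.0 * math.pi / period
    return 1.0 / (1.0 + (amp / mean) * math.sin(w * row + phase))

# Synthetic frame: 128 row means with 10% flicker over a 32-row period.
period = 32
rows = [100.0 * (1.0 + 0.1 * math.sin(2.0 * math.pi * r / period)) for r in range(128)]
mean, amp, phase = fit_row_flicker(rows, period)
repaired = [v * inverted_gain(r, mean, amp, phase, period) for r, v in enumerate(rows)]
```

Applying the inverted gain returns the per-row means to the flat level, which is the repair the abstract describes.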
34.
COLOR FRINGING PROCESSING INDEPENDENT OF TONE MAPPING
Systems and methods are disclosed for image signal processing. For example, methods may include receiving an image from an image sensor, detecting, in a linear domain, color fringing areas in the image, correcting detected color fringing areas to obtain a corrected image, performing tone mapping on the corrected image to obtain a tone mapped image, and storing, displaying, or transmitting an output image based on at least the tone mapped image.
H04N 1/62 - Retouches, c.à d. modification de couleurs isolées uniquement ou dans des zones d'image isolées uniquement
H04N 1/387 - Composition, repositionnement ou autre modification des originaux
H04N 5/341 - Extraction de données de pixels provenant d'un capteur d'images en agissant sur les circuits de balayage, p.ex. en modifiant le nombre de pixels ayant été échantillonnés ou à échantillonner
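The ordering in the color-fringing abstract matters: fringing is corrected on linear-domain values, and tone mapping runs afterward on the corrected pixel. The toy purple-fringe detector and gamma curve below are stand-ins chosen for illustration, not the disclosed method.

```python
def correct_fringe_pixel(r, g, b, strength=1.0):
    """Desaturate purple fringing (both blue and red exceeding green) toward
    the green channel, in the linear domain. The detection test is a toy."""
    if b > g and r > g:
        r = g + (r - g) * (1.0 - strength)
        b = g + (b - g) * (1.0 - strength)
    return r, g, b

def tone_map(v):
    """Toy gamma tone mapping, applied only after fringing correction."""
    return v ** (1.0 / 2.2)

# A purplish edge pixel: corrected in linear, then tone mapped.
linear = correct_fringe_pixel(0.6, 0.2, 0.7)
mapped = tuple(tone_map(c) for c in linear)
```

Because correction happens before tone mapping, the corrector sees the sensor's linear response rather than gamma-compressed values, which keeps the fringing detection independent of the tone curve.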
Systems and methods are disclosed for high dynamic range anti-ghosting and fusion. For example, methods may include receiving images from image sensors in a linear domain, each image having different exposures or gains; blending luminance values at each pixel from each of the images to generate a blended image; selecting a useful image based on a degree of useful information for a pixel; calculating a distance value from the images for the pixel; locating, from a look-up table (LUT), an anti-ghosting weight using the useful image and the distance value for the pixel; proportionally applying the located anti-ghosting weight to the pixel for each of the input images to generate an output image, all performed in the linear domain; and storing, displaying, or transmitting the output image based on at least the anti-ghosting weight.
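A per-pixel sketch of the anti-ghosting step above might look as follows for two exposures, with a quantized-distance LUT supplying the weight. The exposure-matching gain, the 0.9 saturation cutoff, and the LUT contents are assumptions for illustration, not the disclosed design.

```python
def antighost_pixel(short, long_, gain, lut):
    """Blend one pixel from a short and a long exposure in the linear domain.
    `gain` maps the short exposure onto the long exposure's scale; `lut` maps a
    quantized ghosting distance to a weight favoring the useful image."""
    matched = short * gain                        # exposure-normalized short pixel
    # Useful image: prefer the long exposure unless it is near saturation.
    useful, other = (long_, matched) if long_ < 0.9 else (matched, long_)
    dist = abs(matched - long_)                   # per-pixel ghosting distance
    w = lut[min(int(dist * (len(lut) - 1)), len(lut) - 1)]
    return w * useful + (1.0 - w) * other

# Larger distances (likely motion) push the weight toward the useful image only.
lut = [0.5, 0.7, 0.9, 1.0, 1.0]
static_px = antighost_pixel(short=0.2, long_=0.4, gain=2.0, lut=lut)
ghosted_px = antighost_pixel(short=0.2, long_=0.8, gain=2.0, lut=lut)
```

For the static pixel the exposures agree after normalization and blend evenly; for the ghosted pixel the distance is large, so the LUT weight leans on the useful image and suppresses the ghost.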
Systems and methods are disclosed for replaceable outer lenses for use in water. For example, an image capture device may include a lens barrel in a body of the image capture device; a replaceable lens structure mountable on the body of the image capture device, the replaceable lens structure including a first set of two or more stacked lenses, including a first outer lens, and a first retaining ring configured to fasten the first set of two or more stacked lenses against a first end of the lens barrel in a first arrangement, wherein the first set of two or more stacked lenses is configured to collimate light incident on the first outer lens when the first outer lens is underwater at an interface between the lens barrel and the replaceable lens structure; and an image sensor mounted within the body at a second end of the lens barrel.
An image is processed using a combination of techniques, particularly, in which low frequency information is processed using multiple tone control and high frequency information is processed using local tone mapping. An image is divided into a plurality of blocks including a given block. Low frequency information and high frequency information of the given block are separated. The low frequency information is processed using multiple tone control. The high frequency information is processed using local tone mapping. A processed image is then produced based on the processed low frequency information and based on the processed high frequency information, the processed image corresponding to the image captured using the image sensor. The processed image is then output for storage or display. Processing the low frequency information can include using a gain curve and bilinear interpolation. Processing the high frequency information can include using an edge preservation filter.
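In one dimension the split-and-recombine flow just described reads roughly as follows. The box filter, the three-segment gain curve, and the flat detail gain are simplifications (the abstract calls for bilinear interpolation of per-block gains and an edge-preserving filter); all names here are illustrative.

```python
def box_blur(sig, radius):
    """Box filter standing in for low-frequency extraction (illustrative)."""
    out = []
    for i in range(len(sig)):
        lo, hi = max(0, i - radius), min(len(sig), i + radius + 1)
        out.append(sum(sig[lo:hi]) / (hi - lo))
    return out

def tone_gain(v):
    """Toy tone-control gain curve: lift shadows, compress highlights."""
    return 1.5 if v < 0.25 else (1.0 if v < 0.75 else 0.8)

def process_block(block, radius=1, detail_gain=1.2):
    """Separate low/high frequency, tone-control the low band, scale the high
    band (a stand-in for local tone mapping), and recombine."""
    low = box_blur(block, radius)
    high = [v - l for v, l in zip(block, low)]
    return [l * tone_gain(l) + h * detail_gain for l, h in zip(low, high)]

lifted = process_block([0.1, 0.1, 0.1, 0.1, 0.1])  # flat shadow block
```

The shadow block is lifted by the tone-control gain while detail (the high band) would be scaled independently, mirroring the two parallel paths in the abstract.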
A camera includes a lens, an image sensor, and a display. The lens and the image sensor are configured to capture images. The display may include an array of lights that are selectively illuminable to display the graphics. The lights may be arranged in a grid with each of the lights forming a pixel of the display. The display is hidden from view when not illuminated and displays graphics when illuminated. The camera may include a body that hides the display from view when not illuminated and permits light from the display to pass therethrough when illuminated. The body may include an elastomeric outer layer through which light from the display passes. The body may have a first side having the lens and on which the display displays the graphics. The camera may further include a second display on a second side of the body facing opposite the first side.
Visual content is captured by an image capture device during a capture duration. The image capture device experiences changes in position during the capture duration. The trajectory of the image capture device is smoothed based on a look ahead of the trajectory. A punchout of the visual content is determined based on the smoothed trajectory. The punchout of the visual content is used to generate stabilized visual content.
Visual content is captured by an image capture device during a capture duration. The image capture device experiences motion during the capture duration. The intentionality of the motion of the image capture device is determined based on angular acceleration of the image capture device during the capture duration. A punchout of the visual content is determined based on the intentionality of the motion of the image capture device. The punchout of the visual content is used to generate stabilized visual content.
A camera mode to use for capturing an image or video is selected by estimating high dynamic range (HDR), motion, and light intensity with respect to a scene of the image or video to capture. An image capture device includes an HDR estimation unit to detect whether HDR is present in a scene of an image to capture, a motion estimation unit to determine whether motion is detected within the scene, and a light intensity estimation unit to determine whether a scene luminance for the scene meets a threshold. A mode selection unit selects a camera mode to use for capturing the image based on output of the HDR estimation unit, the motion estimation unit, and the light intensity estimation unit. An image sensor captures the image according to the selected camera mode.
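The three estimates feed a selection step that could be as simple as the decision table below. The mode names, thresholds, and precedence are illustrative assumptions, not the disclosed logic.

```python
def select_mode(hdr_detected, motion_detected, scene_luminance, low_light=0.1):
    """Pick a capture mode from HDR, motion, and light-intensity estimates."""
    if scene_luminance < low_light:
        return "night"            # too dark: favor long-exposure processing
    if hdr_detected and not motion_detected:
        return "hdr"              # bracketing is safe only for static scenes
    if motion_detected:
        return "high_speed"       # short exposures to limit motion blur
    return "standard"

mode = select_mode(hdr_detected=True, motion_detected=False, scene_luminance=0.5)
```

Ordering the tests this way encodes one plausible precedence (light first, then HDR, then motion); the abstract leaves the combination rule open.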
Systems and methods are disclosed for image signal processing. For example, methods may include accessing an image from an image sensor; detecting an object area on the image; classifying the object area on the image; applying a filter to an object area of the image to obtain a low-frequency component image and a high-frequency component image; determining a first enhanced image based on a weighted sum of the low-frequency component image and the high-frequency component image, where the high-frequency component image is weighted more than the low-frequency component image; determining a second enhanced image based on the first enhanced image and a tone mapping; and storing, displaying, or transmitting an output image based on the second enhanced image.
G06K 9/00 - Méthodes ou dispositions pour la lecture ou la reconnaissance de caractères imprimés ou écrits ou pour la reconnaissance de formes, p.ex. d'empreintes digitales
In one aspect of the present disclosure, a digital image capturing device (DICD) is disclosed that includes a device body with a printed circuit board (PCB), and an integrated sensor-lens assembly (ISLA) that is configured for releasable connection to the device body. The PCB defines a plurality of openings that extend therethrough and includes a plurality of connector pins that are fixedly positioned within the openings. The ISLA includes at least one connective surface that is configured for contact with the connector pins to establish electrical communication between the device body and the ISLA.
A handheld image stabilization device may include a housing, a motor assembly, a microphone, a dampener, or any combination thereof. The housing may include a first port. The motor assembly may be attached to the housing. The motor assembly may include a plurality of motors. Each motor may include a coil assembly, a ring surrounding the coil assembly, magnets adhered to the ring, and a shaft configured to generate a non-overlapping natural resonance with an operating frequency of the motor. The microphone may be configured to detect audio waves via the first port, vibration noise of the motors via the housing, or both. The audio waves may include acoustic noise from the motors. The dampener may be coupled to the housing. The dampener may be configured to reduce the acoustic noise from the motors, the vibration noise from the motors, or both.
An electronic device includes a body, electronic components contained in the body, and two finger members. The two finger members are movable relative to the body between an extended state and a collapsed state. In the extended state, the two finger members extend outward from the body for receipt by a mount of an external support. In the collapsed state, the two finger members are collapsed toward the body. In the extended state, the two finger members may extend parallel with each other for receipt in parallel slots of the mount of the external support.
F16M 13/04 - Autres supports ou appuis pour positionner les appareils ou les objets; Moyens pour maintenir en position les appareils ou objets tenus à la main pour être portés par une personne ou pour maintenir fixe par rapport à une personne, p.ex. par des chaînes
In one aspect of the present disclosure, a module is disclosed for use with a digital image capturing device (DICD) including an integrated sensor-lens assembly (ISLA). The module includes a cradle configured for connection to a housing of the DICD, and at least one dampener that is configured for positioning between the module and the housing of the DICD to reduce vibrations transmitted to the ISLA.
G03B 17/00 - APPAREILS OU DISPOSITIONS POUR PRENDRE DES PHOTOGRAPHIES, POUR LES PROJETER OU LES VISIONNER; APPAREILS OU DISPOSITIONS UTILISANT DES TECHNIQUES ANALOGUES UTILISANT D'AUTRES ONDES QUE DES ONDES OPTIQUES; LEURS ACCESSOIRES - Parties constitutives des appareils ou corps d'appareils; Leurs accessoires
Disclosed herein are implementations of a cooling apparatus for use with a battery or battery pack. The battery may be a lithium-ion battery, and the battery pack may be two or more lithium-ion batteries. A cooling apparatus may include a heatsink. The heatsink may have a clearance portion configured to allow expansion of the lithium-ion battery. The cooling apparatus may include a metal plate coupled to the heatsink. The heatsink may be configured to contact the battery on at least three sides.
A button is disclosed for use with an image capture device in an underwater environment. The button includes a movable plunger configured to cause actuation of the image capture device, and upper and lower components collectively defining an internal cavity that is configured to receive the plunger. The upper and lower components are configured and connected such that actuation of the image capture device is prevented until a threshold pressure is applied to the upper component that is greater than external water pressure applied in the underwater environment. The upper component includes at least one opening that is configured to allow water to enter the internal cavity to modify an internal pressure within the internal cavity so as to reduce the threshold pressure required to actuate the image capture device in the underwater environment.
G03B 17/42 - Blocage réciproque du fonctionnement de l'obturateur et de l'avancement du film ou du changement de plaque ou de film semi-rigide
H01H 13/02 - Interrupteurs ayant un organe moteur à mouvement rectiligne ou des organes adaptés pour pousser ou tirer dans une seule direction, p.ex. interrupteur à bouton-poussoir - Détails
An image capture apparatus may include an image sensor, a motion sensor, and an auto-exposure unit. The auto-exposure unit may obtain an input image captured during an exposure interval and corresponding motion data indicating motion of the image capture apparatus during the exposure interval. The auto-exposure unit may obtain image information-amount data for the input image. The auto-exposure unit may obtain derivative information-amount data based on the information-amount data and a candidate exposure adjustment. The auto-exposure unit may obtain an information-amount maximizing exposure interval based on the information-amount data and the derivative information-amount data. The image capture apparatus may control the image sensor to obtain a subsequent input image signal representing a subsequent input image captured during the information-amount maximizing exposure interval, and output or store information representing the subsequent input image.
Systems and methods are disclosed for replaceable outer lenses. For example, an image capture device may include a lens barrel in a body of the image capture device, the lens barrel including multiple inner lenses; a replaceable lens structure that is mountable on the body of the image capture device, the replaceable lens structure including an outer lens and a retaining ring configured to fasten the outer lens in a position covering a first end of the lens barrel in a first arrangement and configured to disconnect the outer lens from the body of the image capture device in a second arrangement; and an image sensor mounted within the body at a second end of the lens barrel, the image sensor configured to capture images based on light incident on the image sensor through the outer lens and the multiple inner lenses when the retaining ring is in the first arrangement.
Image signal processing may include flare reduction, which may include obtaining a first input frame captured by a first image capture device of an image capture apparatus, the first image capture device having a first field-of-view, and the first input frame including lens flare corresponding to a primary light source; obtaining a second input frame captured by a second image capture device of the image capture apparatus, the second image capture device having a second field-of-view partially overlapping the first field-of-view; obtaining primary light source information corresponding to the primary light source based on the first input frame and the second input frame; obtaining a processed frame by modifying the first input frame based on the primary light source information to minimize the lens flare; and outputting the processed frame.
Video content may be captured by an image capture device during a capture duration. The video content may include video frames that define visual content viewable as a function of progress through a progress length of the video content. Rotational position information may characterize rotational positions of the image capture device during the capture duration. Time-lapse video frames may be determined from the video frames of the video content based on a spatiotemporal metric. The spatiotemporal metric may characterize spatial smoothness and temporal regularity of the time-lapse video frames. The spatial smoothness may be determined based on the rotational positions of the image capture device corresponding to the time-lapse video frames, and the temporal regularity may be determined based on moments corresponding to the time-lapse video frames. Time-lapse video content may be generated based on the time-lapse video frames.
Images with an optical field of view are captured by an image capture device. An observed trajectory of the image capture device, reflecting the positions of the image capture device at different moments, may be determined. A capture trajectory of the image capture device reflects virtual positions of the image capture device from which video content may be generated. The capture trajectory is determined such that a portion of the capture trajectory corresponding to a portion of the observed trajectory is based on a subsequent portion of the observed trajectory. Orientations of punch-outs for the images are determined based on the capture trajectory. Video content is generated based on visual content of the images within the punch-outs.
Video information and rotational position information may be obtained. The video information may define spherical video content having a progress length and captured by image capture device(s) during a capture duration. The rotational position information may characterize rotational positions of the image capture device(s) during the capture duration. The rotational positions of the image capture device(s) during the capture duration may be determined based on the rotational position information. The spherical video content may be rotated based on the rotational positions of the image capture device(s) during the capture duration. The rotation of the spherical video content may include rotation of one or more spherical video frames of the spherical video to compensate for the rotational positions of the image capture device(s) during the capture duration and to stabilize the spherical video content.
Systems and methods are disclosed for lens water dispersion. For example, an image capture device may include a lens mounted on a body of the image capture device; an image sensor mounted within the body, behind the lens and configured to capture images based on light incident on the image sensor through the lens; and a dispersion structure around a perimeter of the lens on an external surface of the body, wherein the dispersion structure includes gaps sized to cause capillary action to move water away from the lens, from a first edge of the dispersion structure to a second edge of the dispersion structure.
G02B 1/18 - Revêtements pour garder des surfaces optiques propres, p.ex. films hydrophobes ou photocatalytiques
G03B 17/00 - APPAREILS OU DISPOSITIONS POUR PRENDRE DES PHOTOGRAPHIES, POUR LES PROJETER OU LES VISIONNER; APPAREILS OU DISPOSITIONS UTILISANT DES TECHNIQUES ANALOGUES UTILISANT D'AUTRES ONDES QUE DES ONDES OPTIQUES; LEURS ACCESSOIRES - Parties constitutives des appareils ou corps d'appareils; Leurs accessoires
56.
SYSTEMS AND METHODS FOR MINIMIZING VIBRATION SENSITIVITY FOR PROTECTED MICROPHONES
Protected microphone systems may include one or more dampeners, one or more protective layers, or a combination thereof to minimize the vibration sensitivity of a microphone of the protected microphone systems. The dampeners, when present, may be constructed of a foam material or a thin metal material. The protective layer may be a membrane, a mesh, or any suitable material. The protective layer may be air permeable or non-air permeable.
Image analysis and processing may include using an image processor to receive image data corresponding to an input image, determine an initial gain value for the image data based on at least one of a two-dimensional gain map or a parameterized radial gain model, determine whether the initial gain value is below a threshold, determine a maximum RGB triplet value for the image data where the initial gain value is below the threshold, determine a pixel intensity as output of a function for saturation management, determine a final gain value for the image data based on the maximum RGB triplet value and the pixel intensity, apply the final gain value against the image data to produce processed image data, and output the processed image data for further processing using the image processor.
H04N 5/341 - Extraction de données de pixels provenant d'un capteur d'images en agissant sur les circuits de balayage, p.ex. en modifiant le nombre de pixels ayant été échantillonnés ou à échantillonner
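The saturation-management step in the gain-processing abstract above can be pictured as a cap on the gain driven by the maximum of the RGB triplet. This minimal sketch substitutes a hard clip toward a white level for the unspecified saturation function and omits the radial gain model; names and thresholds are assumptions.

```python
def final_gain(rgb, initial_gain, white=1.0):
    """Limit a per-pixel gain so the brightest RGB channel does not clip."""
    m = max(rgb)                  # maximum RGB triplet value for the pixel
    if m * initial_gain <= white:
        return initial_gain       # no channel would exceed the white level
    return white / m              # scale back so the max channel just reaches white

capped = final_gain((0.5, 0.6, 0.8), 2.0)  # 0.8 * 2.0 would clip, so the gain is reduced
```

Tying the limit to the triplet maximum preserves hue: all three channels are scaled by the same final gain, so none clips individually.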
Image analysis and processing may include a multiple-tone-control (MTC) unit configured to obtain a tone-control gain lookup table, a plurality of MTC gain lookup tables, wherein the input image is divided into a plurality of blocks and wherein the plurality of MTC gain lookup tables includes a respective MTC gain lookup table corresponding to each respective block from the plurality of blocks, MTC grid parameters, MTC weighting parameters, a tone-control gain based on the tone-control gain lookup table, an MTC gain based on at least one MTC gain lookup table from the plurality of MTC gain lookup tables, the MTC grid parameters, and the MTC weighting parameters, and an output value based on the tone-control gain and the MTC gain.
H04N 19/86 - Procédés ou dispositions pour le codage, le décodage, la compression ou la décompression de signaux vidéo numériques utilisant le pré-traitement ou le post-traitement spécialement adaptés pour la compression vidéo mettant en œuvre la diminution des artéfacts de codage, p.ex. d'artéfacts de blocs
H04N 1/407 - Commande ou modification de la gradation des tons ou des niveaux extrêmes, p.ex. du niveau de fond
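Per-block MTC gains are typically interpolated at each pixel so block boundaries do not show. A bilinear interpolation over block centers might look like this; the grid layout and edge clamping are assumptions, since the abstract does not specify its grid or weighting parameters.

```python
def mtc_gain(x, y, block_size, block_gains):
    """Bilinearly interpolate per-block gains at pixel (x, y), where
    `block_gains[by][bx]` is the gain at the center of block (bx, by)."""
    fx = x / block_size - 0.5          # position in block-center coordinates
    fy = y / block_size - 0.5
    bx, by = int(max(fx, 0.0)), int(max(fy, 0.0))
    bx2 = min(bx + 1, len(block_gains[0]) - 1)
    by2 = min(by + 1, len(block_gains) - 1)
    tx = min(max(fx - bx, 0.0), 1.0)   # clamped interpolation weights
    ty = min(max(fy - by, 0.0), 1.0)
    top = block_gains[by][bx] * (1.0 - tx) + block_gains[by][bx2] * tx
    bot = block_gains[by2][bx] * (1.0 - tx) + block_gains[by2][bx2] * tx
    return top * (1.0 - ty) + bot * ty

gains = [[1.0, 2.0], [3.0, 4.0]]       # 2x2 grid of per-block gains
halfway = mtc_gain(10, 5, 10, gains)   # midway between the top two block centers
```

At a block center the interpolation returns that block's gain exactly; halfway between centers it returns the average, which is what removes visible block seams.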
Image analysis and processing may include a first sensor readout unit configured to receive first Bayer format image data, a second sensor readout unit configured to receive second Bayer format image data, a first Bayer-to-RGB unit configured to obtain first RGB format image data based on the first Bayer format image data, a second Bayer-to-RGB unit configured to obtain second RGB format image data based on the second Bayer format image data, a high dynamic range unit configured to obtain high dynamic range image data based on a combination of the first RGB format image data and the second RGB format image data, an RGB-to-YUV unit configured to obtain YUV format image data based on the high dynamic range image data, and a three-dimensional noise reduction unit configured to obtain noise reduced image data based on the YUV format image data.
Controlling an unmanned aerial vehicle may include obtaining a first image from a fixed orientation image capture device of the unmanned aerial vehicle, obtaining a second image from an adjustable orientation image capture device of the unmanned aerial vehicle, obtaining feature correlation data based on the first image and the second image, obtaining relative image capture device orientation calibration data based on the feature correlation data, the relative image capture device orientation calibration data indicating an orientation of the adjustable orientation image capture device relative to the fixed orientation image capture device, obtaining relative object orientation data based on the relative image capture device orientation calibration data, the relative object orientation data representing a three-dimensional orientation of an external object relative to the adjustable orientation image capture device, and controlling a trajectory of the unmanned aerial vehicle in response to the relative object orientation data.
Systems and methods are disclosed for image capture. For example, methods may include accessing a sequence of images from an image sensor; determining a sequence of parameters for respective images in the sequence of images based on the respective images; storing the sequence of images in a buffer; determining a temporally smoothed parameter for a current image in the sequence of images based on the sequence of parameters, wherein the sequence of parameters includes parameters for images in the sequence of images that were captured after the current image; applying image processing to the current image based on the temporally smoothed parameter to obtain a processed image; and storing, displaying, or transmitting an output image based on the processed image.
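The buffer in the abstract above makes non-causal smoothing possible: the parameter for the current frame is averaged with parameters from frames captured after it. A minimal centered-window version (the window shape and radius are illustrative):

```python
def smoothed_parameter(params, index, radius=2):
    """Average a per-frame parameter over a window extending `radius` frames
    before and after `index`; the look-ahead relies on buffered frames."""
    lo, hi = max(0, index - radius), min(len(params), index + radius + 1)
    window = params[lo:hi]
    return sum(window) / len(window)

exposures = [1.0, 2.0, 3.0, 4.0, 5.0]        # hypothetical per-frame parameter
smoothed = smoothed_parameter(exposures, 2)  # uses frames captured after index 2
```

Because the window reaches past the current frame, the output only becomes available once later frames are in the buffer, which is why the abstract stores the sequence before processing.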
A dynamic propulsion system may be implemented to recover some or all of the wasted energy during a flight. The dynamic propulsion system may include one or more propellers that are configured to act as a propulsion system when altitude is rising and act as a windmill to generate energy to charge a battery during descent. The one or more propellers may include blades that are configured to adjust their angle of attack or pitch on command to switch from propulsion mode to regenerative mode and vice versa.
G05D 13/62 - Commande de la vitesse linéaire; Commande de la vitesse angulaire; Commande de l'accélération ou de la décélération, p.ex. d'une machine motrice caractérisée par l'utilisation de moyens électriques, p.ex. l'emploi de dynamos-tachymétriques, l'emploi de transducteurs convertissant des valeurs électriques en un déplacement
B64C 27/58 - Transmissions, p.ex. en liaison avec les moyens déclenchant ou agissant sur les pales
Controlling an unmanned aerial vehicle to traverse a portion of an operational environment of the unmanned aerial vehicle may include obtaining an object detection type, obtaining object detection input data, obtaining relative object orientation data based on the object detection type and the object detection input data, and performing a collision avoidance operation based on the relative object orientation data. The object detection type may be monocular object detection, which may include obtaining the relative object orientation data by obtaining motion data indicating a change of spatial location for the unmanned aerial vehicle between obtaining the first image and obtaining the second image based on searching along epipolar lines to obtain optical flow data.
G05D 1/00 - Commande de la position, du cap, de l'altitude ou de l'attitude des véhicules terrestres, aquatiques, aériens ou spatiaux, p.ex. pilote automatique
B64C 39/02 - Aéronefs non prévus ailleurs caractérisés par un emploi spécial
64.
ROUTING OF TRANSMISSION MEDIA THROUGH ROTATABLE COMPONENTS
In one aspect of the present disclosure, a gimbal assembly is described for use with an image capturing device. The gimbal assembly includes a motor assembly, a first housing defining an internal compartment that is configured and dimensioned to receive the motor assembly, and a second housing that is mechanically connected to the motor assembly such that actuation of the motor assembly causes relative rotation between the first and second housings. The first housing includes a first guide that is configured and dimensioned to support transmission media adapted to communicate electrical and/or digital signals. The second housing defines a channel that is configured and dimensioned to receive the first guide such that the first guide extends into the second housing through the channel. The transmission media is supported on the first guide such that the first guide routes the transmission media from the first housing into the second housing.
F16M 11/12 - Moyens pour la fixation des appareils; Moyens permettant le réglage des appareils par rapport au banc permettant la rotation dans plus d'une direction
65.
METHOD AND SYSTEM FOR USER FEEDBACK IN A MOTION CONSTRAINED IMAGE STABILIZATION SYSTEM
The disclosure describes systems and methods for a stabilization mechanism. The stabilization mechanism may be used in conjunction with an imaging device. The method may be performed by a control system of the stabilization mechanism and includes obtaining a device setting from an imaging device. The method may also include obtaining a configuration of the stabilization mechanism. The method includes determining a soft stop based on the device setting, the configuration, or both. The soft stop may be a virtual hard stop that indicates to the stabilization mechanism to reduce speed as a field of view of the imaging device approaches the soft stop. The method may also include setting an image stabilization mechanism parameter based on the determined soft stop.
Systems and methods are disclosed for image capture. For example, systems may include an image capture module including an image sensor configured to capture images, a base that includes a processing apparatus and a connector, and an integrated mechanical stabilization system configured to control an orientation of the image sensor relative to the base, wherein the processing apparatus is configured to send commands to motor controllers of the mechanical stabilization system and includes an image signal processor that is configured to receive image data from the image sensor; and a handheld module configured to be removably attached to the image capture module by the connector, wherein the handheld module includes a display configured to display images received from the image sensor via conductors of the connector.
B64C 39/02 - Aéronefs non prévus ailleurs caractérisés par un emploi spécial
G03B 15/00 - Procédés particuliers pour prendre des photographies; Appareillage à cet effet
G05D 1/00 - Commande de la position, du cap, de l'altitude ou de l'attitude des véhicules terrestres, aquatiques, aériens ou spatiaux, p.ex. pilote automatique
Image signal processing includes obtaining two or more image signals from a first hyper-hemispherical image sensor, where each of the two or more image signals has a different exposure and obtaining two or more image signals from a second hyper-hemispherical image sensor, where each of the two or more image signals has a different exposure. Image signal processing includes generating an exposure compensated image based on a gain value applied to an exposure level of a first image and a gain value applied to an exposure level of a second image. Image signal processing further includes performing high dynamic range (HDR) processing on the exposure compensated image. The HDR processing may be performed on a high-frequency portion of the exposure compensated image.
A condition of an unmanned aerial vehicle (UAV) is detected using one or more sensors of the UAV and signaled according to an alert definition associated with the condition. For example, an alert definition can indicate to signal the condition by using a motor of the UAV to produce an audible tone. A tonal signal having a frequency within an audible spectrum can be generated according to the alert definition. The tonal signal and a drive signal used for supplying current to the motor can be combined to produce a combined signal. The combined signal can then be transmitted to the motor to cause the motor to produce the audible tone. In some cases, an amplitude of the tonal signal can be modulated, such as where the amplitude of the combined signal exceeds a threshold associated with an operating margin of the UAV.
G08C 13/02 - Dispositions pour influencer la relation entre les signaux d'entrée et ceux de sortie, p.ex. différenciation, retardement pour donner un signal qui soit une fonction de plusieurs signaux, p.ex. la somme ou le produit
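The abstract above describes superimposing an audible tonal signal onto a motor drive signal and modulating the tone's amplitude when the combined signal would exceed an operating margin. A minimal sketch of that combining step follows; the function name, signal shapes, and the 1 kHz/limit values are illustrative assumptions, not taken from the patent:

```python
import numpy as np

def combine_drive_and_tone(drive, tone, limit):
    """Superimpose an audible tone on a motor drive signal, attenuating
    the tone whenever the combined peak would exceed the operating margin."""
    combined = drive + tone
    if np.max(np.abs(combined)) > limit:
        # modulate (scale down) only the tonal component, preserving the drive
        headroom = limit - np.max(np.abs(drive))
        scale = max(headroom, 0.0) / np.max(np.abs(tone))
        combined = drive + tone * scale
    return combined

# 1 kHz alert tone riding on a constant drive level (arbitrary units)
t = np.linspace(0.0, 0.01, 480, endpoint=False)
drive = np.full_like(t, 0.5)
tone = 0.8 * np.sin(2 * np.pi * 1000.0 * t)
combined = combine_drive_and_tone(drive, tone, limit=1.0)
```

Here the raw sum would peak at 1.3, so the tone is scaled into the 0.5 units of headroom left by the drive signal.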
An image capture system and methods for auto-recording media data are disclosed. A method includes receiving an activity type selection, selecting an activity-specific monitor based on the activity type selection, and capturing media data. The activity-specific monitor defines an auto-recording condition that, when satisfied, causes the image capture system to begin recording media data. The method includes executing the activity-specific monitor, the activity-specific monitor: receiving current sensor data from a sensor; determining whether the auto-recording condition defined by the activity-specific monitor is met by the current sensor data; and outputting a notification indicating that the auto-recording condition is met. The method further includes writing portions of the media data captured after the auto-recording condition is met to persistent storage of the image capture system based on receipt of the notification.
A system accesses an image with each pixel of the image having luminance values each representative of a color component of the pixel. The system generates a first histogram for aggregate luminance values of the image, and accesses a target histogram for the image representative of a desired global image contrast. The system computes a transfer function based on the first histogram and the target histogram such that when the transfer function is applied, a histogram of the modified aggregate luminance values is within a threshold similarity of the target histogram. The system modifies the image by applying the transfer function to the luminance values of the image to produce a tone mapped image, and outputs the modified image.
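The transfer function in the abstract above is essentially histogram specification: a monotonic mapping built from the source and target cumulative histograms. A compact sketch, assuming 8-bit luminance and a 256-bin target histogram (the function name and the uniform target are illustrative, not from the patent):

```python
import numpy as np

def match_histogram(lum, target_hist):
    """Build a monotonic transfer function mapping the image's luminance
    histogram toward a 256-bin target histogram, then apply it."""
    hist, _ = np.histogram(lum, bins=256, range=(0, 256))
    cdf = np.cumsum(hist).astype(np.float64)
    cdf /= cdf[-1]
    tcdf = np.cumsum(target_hist).astype(np.float64)
    tcdf /= tcdf[-1]
    # transfer[v] = smallest target level whose CDF reaches the source CDF at v
    transfer = np.searchsorted(tcdf, cdf).clip(0, 255).astype(np.uint8)
    return transfer[lum]

rng = np.random.default_rng(0)
lum = rng.integers(0, 128, size=(64, 64), dtype=np.uint8)  # dark image
flat = np.ones(256)                                        # uniform target contrast
mapped = match_histogram(lum, flat)
```

Matching to a flat target stretches the dark image's values across the full 0-255 range, which is the kind of global-contrast adjustment the target histogram encodes.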
Systems and methods are disclosed for denoising chrominance channels of images. For example, methods may include receiving an image from one or more image sensors; determining a set of weights for the image based on a luminance channel of the image, wherein a weight in the set of weights corresponds to a subject pixel and a candidate pixel and is determined based on luminance values of one or more pixels of the image centered at the subject pixel and one or more pixels of the image centered at the candidate pixel; applying the set of weights to chrominance channels of the image to obtain a denoised image, wherein the subject pixel of the denoised image is determined based on the weight multiplied by the candidate pixel of the image; and storing, displaying, or transmitting an output image based on the denoised image.
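The weighting scheme above resembles non-local-means denoising in which patch similarity is measured on the luminance channel but applied to chrominance. A single-pixel sketch under that assumption (function name, window sizes, and the Gaussian weight kernel are illustrative, not from the patent):

```python
import numpy as np

def denoise_chroma_pixel(luma, chroma, y, x, radius=3, patch=1, h=0.1):
    """Denoise one chroma pixel: each candidate in the search window gets a
    weight from the similarity of luma patches centered at the subject pixel
    and at the candidate pixel."""
    H, W = luma.shape
    num = 0.0
    den = 0.0
    for cy in range(max(y - radius, patch), min(y + radius + 1, H - patch)):
        for cx in range(max(x - radius, patch), min(x + radius + 1, W - patch)):
            d = (luma[y - patch:y + patch + 1, x - patch:x + patch + 1]
                 - luma[cy - patch:cy + patch + 1, cx - patch:cx + patch + 1])
            w = np.exp(-np.sum(d * d) / (h * h))   # luma-driven weight
            num += w * chroma[cy, cx]              # weight applied to chroma
            den += w
    return num / den

# flat luma => all candidates equally weighted => chroma is box-averaged
luma = np.ones((9, 9))
chroma = np.zeros((9, 9))
chroma[4, 4] = 1.0
val = denoise_chroma_pixel(luma, chroma, 4, 4, radius=2)
```

With constant luminance every weight is 1, so the isolated chroma spike is averaged over the 5×5 window (25 candidates).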
Image signal processing includes generating an exposure compensated image based on a gain value applied to an exposure level of a first image and a gain value applied to an exposure level of a second image. The gain value may be progressively increased from an approximate center of the first image to an edge of the first image to a common exposure level. The gain value may be progressively decreased from an approximate center of the second image to an edge of the second image to the common exposure level. Gain values may be scaled on each color channel for a pixel based on a saturation level of the pixel.
G02B 13/06 - Objectifs panoramiques; Lentilles dites "de ciel"
G03B 37/00 - Photographie panoramique ou à grand écran; Photographie de surfaces étendues, p.ex. pour la géodésie; Photographie de surfaces internes, p.ex. de tuyaux
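The progressive center-to-edge gain adjustment described two abstracts above can be sketched as a radial gain map that interpolates between a center gain and a common edge-level gain. All names and values here are illustrative assumptions, not taken from the patent:

```python
import numpy as np

def radial_gain_map(shape, g_center, g_edge):
    """Gain varying linearly with normalized distance from the image center,
    used to bring two lenses to a common exposure level at the seam."""
    h, w = shape
    yy, xx = np.mgrid[0:h, 0:w]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    r = np.hypot(yy - cy, xx - cx)
    r = r / r.max()                       # ~0 at center, 1 at far corner
    return g_center + (g_edge - g_center) * r

# first image brightened toward its edge, second darkened, meeting at 1.2
gain_a = radial_gain_map((64, 64), g_center=1.0, g_edge=1.2)
gain_b = radial_gain_map((64, 64), g_center=1.4, g_edge=1.2)
```

Multiplying each image by its map makes the exposure levels agree at the edges, where the two hemispherical images overlap.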
Systems and methods are disclosed for image signal processing. For example, methods may include receiving an image from an image sensor; applying a filter to the image to obtain a low-frequency component image and a high-frequency component image; determining a first enhanced image based on a weighted sum of the low-frequency component image and the high-frequency component image, where the high-frequency component image is weighted more than the low-frequency component image; determining a second enhanced image based on the first enhanced image and a tone mapping; and storing, displaying, or transmitting an output image based on the second enhanced image.
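A minimal sketch of the pipeline in the abstract above: a low-pass filter splits the image into low- and high-frequency components, the recombination weights the high band more, and a tone mapping is applied to the result. The box filter, weights, and gamma tone map are illustrative assumptions, not from the patent:

```python
import numpy as np

def enhance(image, w_low=1.0, w_high=2.0):
    """Split image into low/high-frequency components with a 3x3 box blur,
    recombine with the high band weighted more, then gamma tone-map."""
    k = np.ones((3, 3)) / 9.0
    pad = np.pad(image, 1, mode='edge')
    # 'same' 2-D box convolution via shifted, weighted views
    low = sum(pad[i:i + image.shape[0], j:j + image.shape[1]] * k[i, j]
              for i in range(3) for j in range(3))
    high = image - low
    first = w_low * low + w_high * high        # first enhanced image
    second = np.clip(first, 0, 1) ** (1 / 2.2)  # second: tone mapping (gamma)
    return second

img = np.zeros((8, 8))
img[:, 4:] = 1.0                               # vertical step edge
out = enhance(img)
```

Flat regions pass through unchanged, while the step edge is sharpened by the over-weighted high-frequency band before tone mapping.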
Systems and methods are disclosed for image signal processing. For example, methods may include receiving a first image from a first image sensor; receiving a second image from a second image sensor; determining an electronic rolling shutter correction mapping for the first image and the second image; determining a parallax correction mapping based on the first image and the second image for stitching the first image and the second image; determining a warp mapping based on the parallax correction mapping and the electronic rolling shutter correction mapping, wherein the warp mapping applies the electronic rolling shutter correction mapping after the parallax correction mapping; applying the warp mapping to image data based on the first image and the second image to obtain a composite image; and storing, displaying, or transmitting an output image that is based on the composite image.
Systems and methods are disclosed for testing image capture devices. For example, methods may include obtaining a test image from an image sensor; applying a low-pass filter to the test image to obtain a blurred image; determining an enhanced image based on a difference between the blurred image and the test image; comparing image portions of the enhanced image to a threshold to determine whether there is a blemish of the image sensor; and storing, transmitting, or displaying an indication of whether there is a blemish of the image sensor.
H04N 17/00 - Diagnostic, test ou mesure, ou leurs détails, pour les systèmes de télévision
H04N 5/367 - Traitement du bruit, p.ex. détection, correction, réduction ou élimination du bruit appliqué au bruit à motif fixe, p.ex. non-uniformité de la réponse appliqué aux défauts, p.ex. pixels non réactifs
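The blemish test described above (blur the test image, take the difference, threshold the result) can be sketched as follows; the flat-field input, kernel size, and threshold are illustrative assumptions, not from the patent:

```python
import numpy as np

def detect_blemishes(test_image, threshold=0.1, ksize=5):
    """Flag suspected sensor blemishes: box-blur a flat-field test image,
    subtract, and threshold the absolute difference."""
    pad = ksize // 2
    padded = np.pad(test_image, pad, mode='edge')
    h, w = test_image.shape
    blurred = np.zeros_like(test_image, dtype=np.float64)
    for i in range(ksize):
        for j in range(ksize):
            blurred += padded[i:i + h, j:j + w]
    blurred /= ksize * ksize
    enhanced = np.abs(test_image - blurred)   # high-pass residual
    return enhanced > threshold               # True where a blemish is suspected

flat = np.full((32, 32), 0.5)
flat[10, 10] = 0.1                            # simulated dust spot
mask = detect_blemishes(flat)
```

Only the dust spot deviates enough from its blurred neighborhood to cross the threshold; its neighbors' residuals stay small.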
76.
SYSTEMS AND METHODS FOR GENERATING CUSTOM VIEWS OF VIDEOS
Spherical video content may be presented on a display. Interaction information may be received during presentation of the spherical content on the display. Interaction information may indicate a user's viewing selections of the spherical video content, including viewing directions for the spherical video content. Display fields of view may be determined based on the viewing directions. The display fields of view may define extents of the visual content viewable as a function of progress through the spherical video content. User input to record a custom view of the spherical video content may be received and a playback sequence for the spherical video content may be generated. The playback sequence may mirror at least a portion of the presentation of the spherical video content on the display.
H04N 21/2343 - Traitement de flux vidéo élémentaires, p.ex. raccordement de flux vidéo ou transformation de graphes de scènes MPEG-4 impliquant des opérations de reformatage de signaux vidéo pour la distribution ou la mise en conformité avec les requêtes des utilisateurs finaux ou les exigences des dispositifs des utilisateurs finaux
G06F 3/0488 - Techniques d’interaction fondées sur les interfaces utilisateur graphiques [GUI] utilisant des caractéristiques spécifiques fournies par le périphérique d’entrée, p.ex. des fonctions commandées par la rotation d’une souris à deux capteurs, ou par la nature du périphérique d’entrée, p.ex. des gestes en fonction de la pression exercée, utilisant un écran tactile ou une tablette numérique, p.ex. entrée de commandes par des tracés gestuels
H04N 21/422 - Périphériques d'entrée uniquement, p.ex. système de positionnement global [GPS]
H04N 21/431 - Génération d'interfaces visuelles; Rendu de contenu ou données additionnelles
H04N 21/4402 - Traitement de flux élémentaires vidéo, p.ex. raccordement d'un clip vidéo récupéré d'un stockage local avec un flux vidéo en entrée ou rendu de scènes selon des graphes de scène MPEG-4 impliquant des opérations de reformatage de signaux vidéo pour la redistribution domestique, le stockage ou l'affichage en temps réel
H04N 21/4728 - Interface pour utilisateurs finaux pour la requête de contenu, de données additionnelles ou de services; Interface pour utilisateurs finaux pour l'interaction avec le contenu, p.ex. pour la réservation de contenu ou la mise en place de rappels, pour la requête de notification d'événement ou pour la transformation de contenus affichés pour la sélection d'une région d'intérêt [ROI], p.ex. pour la requête d'une version de plus haute résolution d'une région sélectionnée
H04N 21/8549 - Création de résumés vidéo, p.ex. bande annonce
G06F 3/0346 - Dispositifs de pointage déplacés ou positionnés par l'utilisateur; Leurs accessoires avec détection de l’orientation ou du mouvement libre du dispositif dans un espace en trois dimensions [3D], p.ex. souris 3D, dispositifs de pointage à six degrés de liberté [6-DOF] utilisant des capteurs gyroscopiques, accéléromètres ou d’inclinaison
77.
METHODS AND APPARATUS FOR PROVIDING IN-LOOP PADDING TECHNIQUES FOR ROTATED SPHERE PROJECTIONS
Apparatus and methods for providing in-loop padding techniques for projection formats such as Rotated Sphere Projections (RSP). In one embodiment, in methods and apparatus for the encoding of video data that includes a projection format with redundant data, the apparatus and methods include obtaining a frame of video data, the frame of video data including reduced quality areas within the frame of video data; transmitting the obtained frame of the video data to a reconstruction engine; reconstructing the reduced quality areas to nearly original quality within the frame by using other portions of the frame of video data in order to construct a high fidelity frame of video data; storing the high fidelity frame of video data; and using the stored high fidelity frame of video data for encoding of subsequent frames of the video data. Methods and apparatus for the decoding of encoded video data are also disclosed.
Systems and methods are disclosed for image signal processing. For example, methods may include receiving a first image from a first image sensor; receiving a second image from a second image sensor; stitching the first image and the second image to obtain a stitched image; identifying an image portion of the stitched image that is positioned on a stitching boundary of the stitched image; and inputting the image portion to a machine learning module to obtain a score, wherein the machine learning module has been trained using training data that included image portions labeled to reflect an absence of stitching and image portions labeled to reflect a presence of stitching, wherein the image portions labeled to reflect a presence of stitching included stitching.
Video information defining video content may be accessed. One or more highlight moments in the video content may be identified. One or more video segments in the video content may be identified based on one or more highlight moments. Derivative video information defining one or more derivative video segments may be generated based on one or more video segments. The derivative video information may be transmitted over a network to a computing device. One or more selections of the derivative video segments may be received from the computing device. Video information defining one or more video segments corresponding to one or more selected derivative video segments may be transmitted to the computing device. The computing device may generate video composition information defining a video composition based on the video information defining one or more video segments corresponding to one or more selected derivative video segments.
G11B 27/00 - Montage; Indexation; Adressage; Minutage ou synchronisation; Contrôle; Mesure de l'avancement d'une bande
G06K 9/00 - Méthodes ou dispositions pour la lecture ou la reconnaissance de caractères imprimés ou écrits ou pour la reconnaissance de formes, p.ex. d'empreintes digitales
H04N 5/93 - Régénération du signal de télévision ou de parties sélectionnées de celui-ci
G06F 17/30 - Recherche documentaire; Structures de bases de données à cet effet
G11B 27/022 - Montage électronique de signaux d'information analogiques, p.ex. de signaux audio, vidéo
80.
APPARATUS AND METHODS FOR THE STORAGE OF OVERLAPPING REGIONS OF IMAGING DATA FOR THE GENERATION OF OPTIMIZED STITCHED IMAGES
Apparatus and methods for stitching images, or re-stitching previously stitched images. Specifically, the disclosed systems in one implementation save stitching information and/or original overlap source data during an original stitching process. During subsequent retrieval, rendering, and/or display of the stitched images, the originally stitched image can be flexibly augmented, and/or re-stitched to improve the original stitch quality. Practical applications of the disclosed solutions enable, among other things, a user to create and stitch a wide field of view (FOV) panorama from multiple source images on a device with limited processing capability (such as a mobile phone or other capture device). Moreover, post-processing stitching allows for the user to convert from one image projection to another without fidelity loss (or with an acceptable level of loss).
Apparatus and methods for providing a frame packing arrangement for the encoding/decoding of, for example, panoramic content. In one embodiment, an encoder apparatus is disclosed. In a variant, the encoder apparatus is configured to encode Segmented Sphere Projections (SSP) imaging data and/or Rotated Sphere Projections (RSP) imaging data into an extant imaging format. In another variant, a decoder apparatus is disclosed. In one embodiment, the decoder apparatus is configured to decode SSP imaging data and/or RSP imaging data from an extant imaging format. Computing devices, computer-readable storage apparatus, integrated circuits and methods for using the aforementioned encoder and decoder are also disclosed.
H04N 19/597 - Procédés ou dispositions pour le codage, le décodage, la compression ou la décompression de signaux vidéo numériques utilisant le codage prédictif spécialement adapté pour l’encodage de séquences vidéo multi-vues
G06T 3/40 - Changement d'échelle d'une image entière ou d'une partie d'image
H04N 5/232 - Dispositifs pour la commande des caméras de télévision, p.ex. commande à distance
This disclosure describes a system including an imaging device. The imaging device includes an imaging unit with a lens that focuses light onto an image sensor that produces image data, wherein the image data is only stored by the imaging device for transfer to and processing by an external device. The imaging device includes a control interface that provides commands to the imaging unit. The system also includes a mount with a wall that releasably secures the imaging device and provides access to the control interface and a securing structure that secures the mount to a variety of locations.
A wearable imaging device is disclosed. The wearable imaging device comprises a frame defining openings aligned to a user's field of view when in a wearable position and a pair of temple arms each extending from one end of the frame and moveable between the wearable position and a folded position. The wearable imaging device includes lenses within the openings of the frame, an imaging unit coupled to the frame and comprising an imaging lens that captures light through at least one of the lenses, an electronics unit coupled to the frame and in communication with the imaging unit, a power source coupled to the frame and providing power to the imaging unit and to the electronics unit, and an input/output module coupled to at least one of the imaging unit, the electronics unit, or the power source that includes a communications interface for communicating with external devices.
This disclosure describes systems and methods for vision-based navigation of an aerial vehicle. A method includes operations of acquiring a first image, and identifying features in a first image pyramid of the first image. The method includes acquiring a second image after the first image, and identifying the features in a second image pyramid of the second image. The method includes determining current navigation information of the aerial vehicle according to changes in position of the respective features between the first image pyramid and the second image pyramid. The method also includes predicting whether a sufficient number of the features will be identifiable at a third time that is in the future and after the second image was taken, and limiting flight, as compared to flight according to user flight instructions, if an insufficient number of the features are predicted to be identifiable.
G01C 21/00 - Navigation; Instruments de navigation non prévus dans les groupes
G05D 1/00 - Commande de la position, du cap, de l'altitude ou de l'attitude des véhicules terrestres, aquatiques, aériens ou spatiaux, p.ex. pilote automatique
An unmanned aerial vehicle ("UAV") receives location information describing geographic boundaries of a polygonal no-fly zone ("NFZ"), the NFZ having a plurality of virtual walls each associated with a geographic line segment. The UAV identifies a closest and a second closest virtual wall of the plurality of virtual walls of the NFZ to a geographic location of the UAV. The UAV determines a first distance from the location of the UAV to a portion of the closest virtual wall nearest to the location of the UAV and a second distance from the location of the UAV to a portion of the second closest virtual wall nearest to the location of the UAV. In response to the first and/or second determined distances being less than a threshold distance, the UAV modifies a velocity and/or a trajectory of the UAV such that the UAV does not cross the virtual walls.
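The distance checks above reduce to point-to-segment distances in a planar approximation. A small sketch of identifying the two closest virtual walls of a polygonal NFZ (function names and the square geometry are illustrative, not from the patent):

```python
import math

def dist_point_to_segment(p, a, b):
    """Distance from point p to segment ab (2-D planar approximation)."""
    ax, ay = a
    bx, by = b
    px, py = p
    dx, dy = bx - ax, by - ay
    if dx == 0 and dy == 0:
        return math.hypot(px - ax, py - ay)
    # clamp the projection parameter so the nearest point stays on the segment
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)))
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

def nearest_walls(uav, walls, k=2):
    """Return the k closest virtual walls as (distance, wall_index), nearest first."""
    d = sorted((dist_point_to_segment(uav, a, b), i)
               for i, (a, b) in enumerate(walls))
    return d[:k]

# square NFZ, UAV near the bottom wall
square = [((0, 0), (10, 0)), ((10, 0), (10, 10)),
          ((10, 10), (0, 10)), ((0, 10), (0, 0))]
closest = nearest_walls((5.0, 1.0), square)
```

The UAV would then compare `closest[0]` and `closest[1]` distances against a threshold before modifying velocity or trajectory.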
Apparatus and methods for the stitch zone calculation of a generated projection of a spherical image. In one embodiment, a computing device is disclosed which includes logic configured to: obtain a plurality of images; map the plurality of images onto a spherical image; re-orient the spherical image in accordance with a desired stitch line and a desired projection for the desired stitch line; and map the spherical image to the desired projection having the desired stitch line. In a variant, the desired stitch line is mapped onto an optimal stitch zone, the optimal stitch zone characterized as a set of points that defines a single line on the desired projection in which the set of points along the desired projection lie closest to the spherical image in a mean square sense.
This disclosure relates to systems and methods for determining predicted risk for a flight path of an unmanned aerial vehicle. A previously stored three-dimensional representation of a user-selected location may be obtained. The three-dimensional representation may be derived from depth maps of the user-selected location generated during previous unmanned aerial vehicle flights. The three-dimensional representation may reflect a presence of objects and object existence accuracies for the individual objects. The object existence accuracies for the individual objects may provide information about accuracy of existence of the individual objects within the user-selected location. A user-created flight path may be obtained for a future unmanned aerial flight within the three-dimensional representation of the user-selected location. Predicted risk may be determined for individual portions of the user-created flight path based upon the three-dimensional representation of the user-selected location.
An audio capture device selects between multiple microphones to generate an output audio signal depending on detected conditions. When the presence of wind noise or other uncorrelated noise is detected, the audio capture device selects, for each of a plurality of different frequency sub-bands, an audio signal having the lowest noise and combines the selected frequency sub-band signals to generate an output audio signal. When wind noise or other uncorrelated noise is not detected, the audio capture device determines whether each of a plurality of microphones is wet or dry and selects one or more audio signals from the microphones depending on their respective conditions.
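The per-sub-band selection step above can be sketched as a per-bin argmin over noise estimates; the array layout and the toy two-microphone data are illustrative assumptions, not from the patent:

```python
import numpy as np

def select_subbands(mic_spectra, noise_estimates):
    """For each frequency sub-band, keep the spectrum value from the
    microphone with the lowest estimated noise in that band."""
    best = np.argmin(noise_estimates, axis=0)   # winning mic per band
    return np.take_along_axis(mic_spectra, best[np.newaxis, :], axis=0)[0]

# two microphones, four sub-bands; mic 0 is clean in bands 0 and 2,
# mic 1 is clean in bands 1 and 3
mic_spectra = np.array([[1.0, 2.0, 3.0, 4.0],
                        [10.0, 20.0, 30.0, 40.0]])
noise = np.array([[0.1, 5.0, 0.1, 5.0],
                  [5.0, 0.1, 5.0, 0.1]])
combined_spectrum = select_subbands(mic_spectra, noise)
```

The combined spectrum interleaves bands from whichever microphone is least affected by wind noise, after which an inverse transform would yield the output signal.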
A continuous slanted edge focus measurement system characterizes focus of a camera lens. The measurement system may be used to measure effects on focus caused by factors such as thermal focal shift, humidity focal shift, focal shift caused by changing parts in the camera, and focal shift caused by changing the camera design. An accurate measurement system enables camera designers to optimize focus under a variety of different conditions and ensure consistency in the products.
Systems and methods are disclosed for image signal processing. For example, methods may include receiving a current image of a sequence of images from an image sensor; combining the current image with a recirculated image to obtain a noise reduced image, where the recirculated image is based on one or more previous images of the sequence of images from the image sensor; determining a noise map for the noise reduced image, where the noise map is determined based on estimates of noise levels for pixels in the current image, a noise map for the recirculated image, and a set of mixing weights; recirculating the noise map with the noise reduced image to combine the noise reduced image with a next image of the sequence of images from the image sensor; and storing, displaying, or transmitting an output image that is based on the noise reduced image.
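One recursive step of the recirculation scheme above can be sketched as a simple exponential blend of the current frame with the recirculated frame, with a noise map propagated alongside it. The mixing weight and the signal-dependent noise model here are illustrative assumptions, not from the patent:

```python
import numpy as np

def temporal_denoise(current, recirculated, noise_map_prev, mix=0.8):
    """One step of recursive temporal noise reduction: blend the current frame
    with the recirculated frame and propagate a per-pixel noise map."""
    denoised = mix * recirculated + (1 - mix) * current
    # crude signal-dependent noise estimate for the current frame
    noise_current = 0.01 + 0.05 * current
    noise_map = mix * noise_map_prev + (1 - mix) * noise_current
    return denoised, noise_map

rng = np.random.default_rng(1)
clean = np.full((16, 16), 0.5)
recirc = clean.copy()
nmap = np.zeros_like(clean)
for _ in range(10):
    frame = clean + rng.normal(0.0, 0.1, clean.shape)  # noisy capture
    recirc, nmap = temporal_denoise(frame, recirc, nmap)
```

After a few frames the recirculated image's residual noise is well below the per-frame noise level, which is the point of recirculating both the image and its noise map.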
Image signal processing including generating image signal processing based encoding hints for motion estimation may include an image signal processor obtaining an input image portion of an input image from the input image signal, generating motion information for the input image portion, processing the input image portion based on the motion information, outputting processed image data, and outputting the motion information as encoding hints, such that the motion information is accessible by an encoder for generating an encoded output bitstream by obtaining the processed image data as source image data, obtaining the motion information, generating prediction data for encoding the source image data based on the motion information, generating encoded image data based on the prediction data, and including the encoded image data in an encoded output bitstream.
Disclosed are a system and a method for determining whether to enable or disable electronic image stabilization (EIS) for a video frame. An image sensor of a camera system captures a video stream that comprises a plurality of video frames. An image processor determines availability of a computational resource that may process application of EIS on each video frame. Simultaneously, the image processor receives motion data of the camera system from a gyroscope. Based on the computational resource availability, a motion frequency threshold is determined. Based on the gyroscope motion data, a motion frequency of each video frame is estimated. The estimated motion frequency is compared to the determined motion frequency threshold. If the estimated motion frequency is greater than the determined motion frequency threshold, application of EIS is disabled. If the estimated motion frequency is less than or equal to the determined motion frequency threshold, application of EIS is enabled.
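The compare-against-threshold decision above can be sketched by estimating a frame's dominant motion frequency from gyroscope samples via an FFT peak. The estimator, sample rate, and threshold value are illustrative assumptions, not from the patent:

```python
import numpy as np

def eis_decision(gyro_samples, sample_rate, freq_threshold):
    """Estimate the dominant motion frequency of a frame's gyro window
    (FFT magnitude peak) and enable EIS only at or below the threshold."""
    spectrum = np.abs(np.fft.rfft(gyro_samples - np.mean(gyro_samples)))
    freqs = np.fft.rfftfreq(len(gyro_samples), d=1.0 / sample_rate)
    motion_freq = freqs[np.argmax(spectrum)]
    return motion_freq <= freq_threshold, motion_freq

t = np.arange(0.0, 1.0, 1.0 / 200)                 # 200 Hz gyro, 1 s window
slow = np.sin(2 * np.pi * 3.0 * t)                 # 3 Hz handheld sway
fast = np.sin(2 * np.pi * 40.0 * t)                # 40 Hz vibration
enable_slow, f_slow = eis_decision(slow, 200.0, freq_threshold=10.0)
enable_fast, f_fast = eis_decision(fast, 200.0, freq_threshold=10.0)
```

Slow sway stays below the threshold, so EIS is enabled; high-frequency vibration exceeds it, so EIS is disabled for that frame.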
Systems and methods for presenting and viewing a spherical video segment are provided. The spherical video segment including tag information associated with an event of interest may be obtained. The tag information may identify a point in time and a viewing angle at which the event of interest is viewable in the spherical video segment. An orientation of a two-dimensional display may be determined based upon output signals of a sensor. A display field of view within the spherical video segment may be determined and presented on the display based upon the orientation of the display. The display field of view may be captured as a two-dimensional video segment. If the viewing angle of the event of interest is outside the display field of view proximate the point in time, a notification may be presented within the display field of view.
H04N 5/232 - Dispositifs pour la commande des caméras de télévision, p.ex. commande à distance
H04N 9/87 - Régénération des signaux de télévision en couleurs
G06K 9/00 - Méthodes ou dispositions pour la lecture ou la reconnaissance de caractères imprimés ou écrits ou pour la reconnaissance de formes, p.ex. d'empreintes digitales
Systems and methods for providing video content using spatially adaptive video encoding. Panoramic and/or virtual reality content may be viewed by a client device using a viewport with viewing dimension(s) configured smaller than available dimension(s) of the content. Client device may include a portable media device characterized by given energy and/or computational resources. Video content may be encoded using spatially varying encoding. For image playback, portions of panoramic image may be pre-encoded using multiple quality bands. Pre-encoded image portions, matching the viewport, may be provided and reduce computational and/or energy load on the client device during consumption of panoramic content. Quality distribution may include gradual quality transition area allowing for small movements of the viewport without triggering image re-encoding. Larger movements of the viewport may automatically trigger transition to another spatial encoding distribution.
H04N 19/167 - Position dans une image vidéo, p.ex. région d'intérêt [ROI]
H04N 19/597 - Procédés ou dispositions pour le codage, le décodage, la compression ou la décompression de signaux vidéo numériques utilisant le codage prédictif spécialement adapté pour l’encodage de séquences vidéo multi-vues
H04N 19/70 - Procédés ou dispositions pour le codage, le décodage, la compression ou la décompression de signaux vidéo numériques caractérisés par des aspects de syntaxe liés au codage vidéo, p.ex. liés aux standards de compression
H04N 19/61 - Procédés ou dispositions pour le codage, le décodage, la compression ou la décompression de signaux vidéo numériques utilisant un codage par transformée combiné avec un codage prédictif
H04N 21/2343 - Traitement de flux vidéo élémentaires, p.ex. raccordement de flux vidéo ou transformation de graphes de scènes MPEG-4 impliquant des opérations de reformatage de signaux vidéo pour la distribution ou la mise en conformité avec les requêtes des utilisateurs finaux ou les exigences des dispositifs des utilisateurs finaux
H04N 19/17 - Procédés ou dispositions pour le codage, le décodage, la compression ou la décompression de signaux vidéo numériques utilisant le codage adaptatif caractérisés par l’unité de codage, c. à d. la partie structurelle ou sémantique du signal vidéo étant l’objet ou le sujet du codage adaptatif l’unité étant une zone de l'image, p.ex. un objet
A video may be presented on a touchscreen display. Reception of annotation input may be determined based on user's engagement with the touchscreen display. Annotation input may define an in-frame visual annotation for the video. In-frame visual annotation may be associated with a visual portion of the video and one or more points within a duration of the video such that a subsequent presentation of the video includes the in-frame visual annotation positioned at the visual portion of the video at the one or more points. A graphical user interface may be presented on the touchscreen display. The graphical user interface may include one or more animation fields that provide options for selection by the user. The options may define different properties of a moving visual element added to the video. The options may define visual characteristics, presentation periods, and motions of the moving visual element.
H04N 21/84 - Génération ou traitement de données de description, p.ex. descripteurs de contenu
G06F 3/0488 - Techniques d’interaction fondées sur les interfaces utilisateur graphiques [GUI] utilisant des caractéristiques spécifiques fournies par le périphérique d’entrée, p.ex. des fonctions commandées par la rotation d’une souris à deux capteurs, ou par la nature du périphérique d’entrée, p.ex. des gestes en fonction de la pression exercée, utilisant un écran tactile ou une tablette numérique, p.ex. entrée de commandes par des tracés gestuels
96.
APPARATUS AND METHOD FOR AUDIO BASED VIDEO SYNCHRONIZATION
Multiple video recordings may be synchronized using audio features of the recordings. A synchronization process may compare energy tracks of each recording within a multi-resolution framework to correlate audio features of one recording to another.
G11B 27/00 - Montage; Indexation; Adressage; Minutage ou synchronisation; Contrôle; Mesure de l'avancement d'une bande
H04N 5/93 - Régénération du signal de télévision ou de parties sélectionnées de celui-ci
H04N 9/80 - Transformation du signal de télévision pour l'enregistrement, p.ex. modulation, changement de fréquence; Transformation inverse pour la reproduction
H04N 5/92 - Transformation du signal de télévision pour l'enregistrement, p.ex. modulation, changement de fréquence; Transformation inverse pour le surjeu
H04N 9/82 - Transformation du signal de télévision pour l'enregistrement, p.ex. modulation, changement de fréquence; Transformation inverse pour la reproduction les composantes individuelles des signaux d'image en couleurs n'étant enregistrées que simultanément
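The synchronization described in the abstract above correlates short-time energy tracks of two recordings. A single-resolution sketch (the patent's framework is multi-resolution; the window size, test signal, and lag convention here are illustrative assumptions):

```python
import numpy as np

def energy_track(audio, win=256):
    """Short-time energy envelope: summed squared samples per window."""
    n = len(audio) // win
    return (audio[:n * win].reshape(n, win) ** 2).sum(axis=1)

def align_offset(track_a, track_b):
    """Lag (in energy-track frames) that best aligns b to a, found by
    cross-correlating the mean-removed energy envelopes."""
    a = track_a - track_a.mean()
    b = track_b - track_b.mean()
    corr = np.correlate(a, b, mode='full')
    return int(np.argmax(corr)) - (len(b) - 1)

rng = np.random.default_rng(2)
# amplitude-modulated noise gives the envelope distinctive structure
base = rng.normal(0.0, 1.0, 50_000) * np.repeat(rng.uniform(0.1, 1.0, 50), 1000)
shift = 4096                               # recording B starts 4096 samples later
rec_a, rec_b = base, base[shift:]
offset = align_offset(energy_track(rec_a), energy_track(rec_b))
```

The recovered offset in frames, multiplied by the window size, gives the sample offset between the two recordings (16 frames × 256 = 4096 here).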
97.
APPARATUS AND METHODS FOR VIDEO COMPRESSION USING MULTI-RESOLUTION SCALABLE CODING
Apparatus and methods for digital video data compression via a scalable, multi-resolution approach. In one embodiment, the video content may be encoded using a multi-resolution and/or multi-quality scalable coding approach that reduces computational and/or energy load on a client device. In one implementation, a low fidelity image is obtained based on a first full resolution image. The low fidelity image may be encoded to obtain a low fidelity bitstream. A second full resolution image may be obtained based on the low fidelity bitstream. A portion of a difference image obtained based on the second full resolution image and the first full resolution image may be encoded to obtain a high fidelity bitstream. The low fidelity bitstream and the high fidelity bitstream may be provided to, e.g., a receiving device.
H04N 19/30 - Procédés ou dispositions pour le codage, le décodage, la compression ou la décompression de signaux vidéo numériques utilisant des techniques hiérarchiques, p.ex. l'échelonnage
H04N 19/187 - Procédés ou dispositions pour le codage, le décodage, la compression ou la décompression de signaux vidéo numériques utilisant le codage adaptatif caractérisés par l’unité de codage, c. à d. la partie structurelle ou sémantique du signal vidéo étant l’objet ou le sujet du codage adaptatif l’unité étant une couche de vidéo échelonnable
H04N 19/167 - Position dans une image vidéo, p.ex. région d'intérêt [ROI]
A gimbal mount system is configured to couple to a gimbal that is coupled to and secures a camera. The gimbal mount system includes a handle, a power source, a user interface, a mounting interface, a communication interface, and a communication bus. The mounting interface is located within an end of the gimbal mount system and includes an opening configured to receive a reciprocal mounting protrusion of the gimbal. A locking mechanism removably couples the gimbal to the gimbal mount system. The communication interface is located within the mounting interface and is configured to couple to a reciprocal communication interface of the gimbal. The communication bus is coupled to the power source, user interface, and communication interface and is configured to provide power from the power source to the gimbal. The communication bus may provide instructions to the gimbal based on user input received via the user interface.
Systems and methods for controlling an unmanned aerial vehicle recognize and interpret gestures by a user. The gestures are interpreted to adjust the operation of the unmanned aerial vehicle, a sensor carried by the unmanned aerial vehicle, or both.
B64C 39/02 - Aéronefs non prévus ailleurs caractérisés par un emploi spécial
G05D 1/00 - Commande de la position, du cap, de l'altitude ou de l'attitude des véhicules terrestres, aquatiques, aériens ou spatiaux, p.ex. pilote automatique
100.
DYNAMIC SYNCHRONIZATION OF FRAME RATE TO A DETECTED CADENCE IN A TIME LAPSE IMAGE SEQUENCE
A frame rate is synchronized to a detected cadence in order to generate an output image sequence that is substantially stabilized. In an in-camera process, a camera receives motion data of the camera while the camera captures a sequence of image frames. A dominant frequency of motion is determined, and the capture frame rate is dynamically adjusted to match the frequency of detected motion so that each image frame is captured when the camera is at approximately the same position along the axis of motion. Alternatively, in a post-processing process, frames of a captured image sequence are selectively sampled at a sampling rate corresponding to the dominant frequency of motion so that each sampled frame corresponds to an image capture that occurred when the camera was at approximately the same position along the axis of motion.
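The post-processing path above can be sketched as detecting the dominant motion frequency from motion data and then sampling one frame per motion cycle. The FFT-peak detector, sample rates, and the 2 Hz cadence are illustrative assumptions, not from the patent:

```python
import numpy as np

def dominant_motion_frequency(motion, sample_rate):
    """Dominant frequency (Hz) of camera motion data via the FFT magnitude peak."""
    spectrum = np.abs(np.fft.rfft(motion - np.mean(motion)))
    freqs = np.fft.rfftfreq(len(motion), d=1.0 / sample_rate)
    return freqs[np.argmax(spectrum)]

def sample_indices(num_frames, capture_fps, cadence_hz):
    """Pick one frame per motion cycle so every sampled frame sees the camera
    near the same point in its cycle."""
    step = capture_fps / cadence_hz            # frames per motion cycle
    return np.round(np.arange(0.0, num_frames, step)).astype(int)

rate = 200.0                                   # motion-sensor rate (Hz)
t = np.arange(0.0, 5.0, 1.0 / rate)
motion = np.sin(2 * np.pi * 2.0 * t)           # 2 Hz running cadence
cadence = dominant_motion_frequency(motion, rate)
idx = sample_indices(num_frames=300, capture_fps=30.0, cadence_hz=cadence)
```

At 30 fps and a 2 Hz cadence, every 15th frame is selected, so each sampled frame corresponds to roughly the same phase of the motion cycle.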