G06F 3/147 - Digital output to display device using display panels
H04N 21/434 - Disassembling of a multiplex stream, e.g. demultiplexing audio and video streams or extraction of additional data from a video stream; Remultiplexing of multiplex streams; Extraction or processing of SI; Disassembling of packetised elementary stream
H04N 21/435 - Processing of additional data, e.g. decrypting of additional data or reconstructing software from modules extracted from the transport stream
H04N 21/84 - Generation or processing of descriptive data, e.g. content descriptors
G09G 5/00 - Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
G09G 5/02 - Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators characterised by the way in which colour is displayed
An acoustic image source model for early reflections in a room is generated by iteratively mirroring (305) rooms around boundaries (e.g. walls) of the rooms of the previous iteration. Mirror positions in the image rooms for an audio source in the original room are determined by determining (605, 607) matching reference positions in the two rooms and a relative mapping of directions between the two rooms (611). A mirror position in a mirror room for an audio source in the original room is determined (701, 703, 705) by mapping relative offsets between the positions of the audio source and the reference positions. The approach may be computationally very efficient.
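The core geometric step of any image-source model, mirroring a source position across a wall plane, can be sketched as follows. This is a generic illustration of the technique, not the patented reference-position mapping; the room geometry and function names are illustrative.

```python
# Sketch of the image-source idea: a source is reflected across a wall plane
# to obtain a first-order image source. Geometry and names are illustrative.

def mirror_across_wall(source, wall_point, wall_normal):
    """Reflect a 3D source position across the plane of a wall.

    wall_normal is assumed to be a unit vector.
    """
    # Signed distance from the source to the wall plane
    d = sum((s - w) * n for s, w, n in zip(source, wall_point, wall_normal))
    # The image source lies at the same distance on the far side of the wall
    return tuple(s - 2 * d * n for s, n in zip(source, wall_normal))

# Source 1 m in front of a wall at x = 0 (normal pointing into the room):
image = mirror_across_wall((1.0, 2.0, 1.5), (0.0, 0.0, 0.0), (1.0, 0.0, 0.0))
# -> (-1.0, 2.0, 1.5): the image source sits 1 m behind the wall
```

Higher-order reflections follow by applying the same reflection repeatedly to the image rooms of the previous iteration, as the abstract describes.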
A method for transmitting multi-view image frame data. The method comprises obtaining multi-view components representative of a scene generated from a plurality of sensors, wherein each multi-view component corresponds to a sensor and wherein at least one of the multi-view components includes a depth component and at least one of the multi-view components does not include a depth component. A virtual sensor pose is obtained for each sensor in a virtual scene, wherein the virtual scene is a virtual representation of the scene and wherein the virtual sensor pose is a virtual representation of the pose of the sensor in the scene when generating the corresponding multi-view component. Sensor parameter metadata is generated for the multi-view components, wherein the sensor parameter metadata contains extrinsic parameters for the multi-view components and the extrinsic parameters contain at least the virtual sensor pose of a sensor for each of the corresponding multi-view components. The extrinsic parameters enable the generation of additional depth components by warping the depth components based on their corresponding virtual sensor pose and a target position in the virtual scene. The multi-view components and the sensor parameter metadata are then transmitted.
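The depth warping that the extrinsic parameters enable can be sketched for a single depth sample. This is a generic pinhole-camera illustration with a translation-only relative pose (rotation omitted for brevity); the focal length and names are assumptions, not taken from the abstract.

```python
# Illustrative sketch of warping a depth sample from a source sensor pose to
# a target position. Pinhole model, translation-only relative pose assumed.

def warp_depth_sample(u, v, depth, f, translation):
    """Unproject pixel (u, v) with its depth, shift by the relative
    translation between source pose and target position, and reproject."""
    # Unproject to a 3D point in the source camera frame
    x, y, z = u / f * depth, v / f * depth, depth
    # Apply the relative translation (rotation omitted for simplicity)
    tx, ty, tz = translation
    x, y, z = x - tx, y - ty, z - tz
    # Reproject into the target view; the new z is the warped depth value
    return (f * x / z, f * y / z, z)

# A pixel at (100, 100) with depth 4 m, focal length 400 px, target shifted
# 0.5 m along x relative to the source sensor pose:
u2, v2, d2 = warp_depth_sample(100.0, 100.0, 4.0, 400.0, (0.5, 0.0, 0.0))
# -> (50.0, 100.0, 4.0)
```

Applying this per pixel to a full depth component yields the additional depth component for the target position.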
H04N 19/597 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding specially adapted for multi-view video sequence encoding
A method for processing multi-view image data. The method comprises obtaining source view data from a plurality of sensors, the source view data containing source texture data and source depth data of a scene with one or more objects. The positions of one or more of the objects in the scene are obtained and a stack of layers is generated in a virtual scene for at least one of the objects, wherein the position of a stack of layers in the virtual scene is based on the position of the corresponding object in the scene. Generating a stack of layers comprises generating a plurality of layers, wherein each layer comprises texture data and transparency data for the corresponding object.
An apparatus comprises a receiver (601) receiving captured video data for a real world scene, the captured video data being linked with a capture pose region. A store (615) stores a 3D mesh model of the real world scene. A renderer (605) generates an output image for a viewport for a viewing pose. The renderer (605) comprises a first circuit (607) arranged to generate first image data for the output image by projection of captured video data to the viewing pose and a second circuit (609) arranged to determine second image data for a first region of the output image in response to the 3D mesh model. A third circuit (611) generates the output image to include at least some of the first image data and to include the second image data for the first region. A fourth circuit (613) determines the first region based on a deviation of the viewing pose relative to the capture pose region.
H04N 21/218 - Source of audio or video content, e.g. local disk arrays
H04N 21/6587 - Control parameters, e.g. trick play commands or viewpoint selection
G06F 3/0481 - Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
H04N 13/111 - Transformation of image signals corresponding to virtual viewpoints, e.g. spatial image interpolation
H04N 13/279 - Image signal generators from 3D object models, e.g. computer-generated stereoscopic image signals the virtual viewpoint locations being selected by the viewers or determined by tracking
G06T 15/00 - 3D [Three Dimensional] image rendering
A method of depth segmentation for the generation of a multi-view video data. The method comprises obtaining a plurality of source view images and source view depth maps representative of a 3D scene from a plurality of sensors. Foreground objects in the 3D scene are segmented from the source view images and/or the source view depth maps. One or more patches are then generated for each source view image and source view depth map containing at least one foreground object, wherein each patch corresponds to a foreground object and wherein generating a patch comprises generating a patch texture image, a patch depth map and a patch transparency map based on the source view images and the source view depth maps.
The invention provides a light output system for delivering light to a region of interest, for providing at least a minimum light intensity to all of the region of interest. The system has more light sources of a particular kind than are needed to reach the minimum light intensity (to all of the region of interest), and they are operated with a duty cycle. The duty cycle ratio is reduced by a factor which is greater than the factor by which the number of light sources is increased above the minimum number, so that energy savings are obtained as well as an increased lifetime of the system.
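The energy arithmetic behind this claim can be made concrete with a worked example. The numbers below are illustrative, not from the abstract: 10 sources at full duty just reach the minimum intensity, the source count is doubled, and the duty-cycle ratio is reduced by a larger factor of 2.5.

```python
# Worked numeric example of the energy saving described above.
minimum_sources = 10      # sources needed at full duty for minimum intensity
count_factor = 2.0        # sources increased by 2x: 20 sources installed
duty_factor = 2.5         # duty ratio reduced by 2.5x: each source on 40% of the time

sources = minimum_sources * count_factor
duty_ratio = 1.0 / duty_factor

# Energy relative to running the minimum set continuously:
relative_energy = (sources * duty_ratio) / minimum_sources
# -> 0.8, i.e. a 20% energy saving; each source also runs only 40% of the
#    time, which is where the increased system lifetime comes from.
```

Because the duty-reduction factor (2.5) exceeds the count-increase factor (2.0), the ratio is below 1 and energy is saved, exactly as the abstract states.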
A method for preparing immersive video data prior to processing into an immersive video. The method comprises receiving immersive video data containing one or more images of a scene, the scene comprising one or more image regions, and obtaining a relevance factor for at least one of the image regions, the relevance factor being indicative of the relative importance of the image region to a viewer. The immersive video data is separated into one or more sets of region data, wherein each set of region data corresponds to the data of one or more image regions and, based on the relevance factor of an image region, a bitrate is selected at which the set of region data corresponding to the image region is to be sent to an external processing unit.
H04N 19/85 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using pre-processing or post-processing specially adapted for video compression
H04N 21/218 - Source of audio or video content, e.g. local disk arrays
H04N 21/2343 - Processing of video elementary streams, e.g. splicing of video streams or manipulating MPEG-4 scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
H04N 19/115 - Selection of the code volume for a coding unit prior to coding
H04N 19/136 - Incoming video signal characteristics or properties
H04N 19/164 - Feedback from the receiver or from the transmission channel
H04N 19/167 - Position within a video image, e.g. region of interest [ROI]
H04N 19/17 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object
H04N 19/597 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding specially adapted for multi-view video sequence encoding
9.
DEPTH ORDERS FOR MULTI-VIEW FRAME STORING AND RENDERING
A method for storing multi-view data with depth order data. The method comprises obtaining image frames of a scene from an imaging system with a plurality of cameras, obtaining depth maps from the imaging system and/or the image frames and obtaining qualitative depth information relating to the depth of at least one object present in the scene relative to other objects in the scene, the qualitative depth information being additional to the information conveyed by the depth map. A depth order is determined for a set of at least two objects present in the scene based on the qualitative depth information, wherein the depth order determines the depth of an object relative to other objects with different depth orders. The image frames of the scene, the corresponding depth maps and the depth order for the objects in the scene are then stored as the multi-view data.
An image synthesis apparatus comprises a first receiver (201) receiving three dimensional image data describing at least part of a three dimensional scene and a second receiver (203) receiving a view pose for a viewer. An image region circuit (207) determines at least a first image region in the three dimensional image data and a depth circuit (209) determines a depth indication for the first image region from depth data of the three dimensional image data. A region circuit (211) determines a first region for the first image region. A view synthesis circuit (205) generates a view image from the three dimensional image data, the view image representing a view of the three dimensional scene from the view pose. The view synthesis circuit (205) is arranged to adapt a transparency for the first image region in the view image in response to the depth indication and a distance between the view pose and the first region.
H04N 13/117 - Transformation of image signals corresponding to virtual viewpoints, e.g. spatial image interpolation the virtual viewpoint locations being selected by the viewers or determined by viewer tracking
The processing of a depth map comprises for at least a first pixel of the depth map performing the steps of: determining a set of candidate depth values (105) including other depth values of the depth map; determining (107) a cost value for each of the candidate depth values in response to a cost function; selecting (109) a first depth value in response to the cost values for the set of candidate depth values; and determining (111) an updated depth value for the first pixel in response to the first depth value. The set of candidate depth values comprises a first candidate depth value along a first direction which is further away from the first pixel than at least one pixel along the first direction which is not included in the set of candidate depth values or which has a higher cost value than the first candidate depth value.
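The candidate-selection step above can be sketched in 1D for clarity. The cost function and sampling offsets below are placeholders of my own choosing; note how the candidate set includes a value further along a direction (offset 3) while skipping a nearer pixel (offset 2), as the text describes.

```python
# Minimal 1D sketch of the candidate-based depth update described above.
# Cost function and offsets are illustrative placeholders.

def update_depth(depths, i, cost):
    # Candidate offsets along one direction: offset 2 is skipped, offset 3
    # is included, i.e. a further pixel is a candidate while a nearer one
    # is not.
    offsets = [-1, 1, 3]
    candidates = [depths[i + o] for o in offsets if 0 <= i + o < len(depths)]
    candidates.append(depths[i])  # the current value is also a candidate
    # Select the candidate with the lowest cost as the updated value
    return min(candidates, key=cost)

depths = [1.0, 1.1, 5.0, 1.2, 1.3, 1.25]   # pixel 2 is an outlier
# Placeholder cost: deviation from a neighbourhood-consistent value of 1.2
updated = update_depth(depths, 2, lambda d: abs(d - 1.2))
# -> 1.2: the outlier is replaced by a low-cost candidate from its neighbours
```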
A teat for a drinking bottle has a nipple opening in the form of a slit arrangement. The slit arrangement has end portions which comprise curved paths. These curved paths resist tearing.
A method for transitioning from a first set of video tracks, VT1, to a second set of video tracks, VT2, when rendering a multi-track video, wherein each video track has a corresponding rendering priority. The method comprises receiving an instruction to transition from the first set of video tracks VT1 to the second set of video tracks VT2, obtaining the video tracks VT2 and, if the video tracks VT2 are different from the video tracks VT1, applying a lowering function to the rendering priority of one or more of the video tracks in the first set VT1 and/or an increase function to the rendering priority of one or more video tracks in the second set VT2. The lowering function and the increase function respectively decrease and increase the rendering priority over time. The rendering priority is used in determining the weighting of a video track and/or elements of a video track used to render the multi-track video.
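The priority ramps can be sketched as follows. Linear functions are an assumption for illustration; the abstract does not specify the shape of the lowering and increase functions.

```python
# Sketch of the priority transition described above, with linear ramps
# assumed for illustration.

def lowering(priority, t, duration):
    """Linearly decrease a rendering priority toward 0 over the transition."""
    return priority * max(0.0, 1.0 - t / duration)

def increase(priority, t, duration):
    """Linearly raise a rendering priority from 0 to its target value."""
    return priority * min(1.0, t / duration)

# Halfway through a 2-second transition from VT1 (priority 1.0) to VT2:
w_out = lowering(1.0, t=1.0, duration=2.0)   # outgoing tracks at weight 0.5
w_in = increase(1.0, t=1.0, duration=2.0)    # incoming tracks at weight 0.5
```

Feeding these time-varying priorities into the renderer's weighting yields a gradual cross-fade between the two sets of tracks rather than an abrupt cut.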
H04N 21/218 - Source of audio or video content, e.g. local disk arrays
H04N 21/431 - Generation of visual interfaces; Content or additional data rendering
H04N 13/117 - Transformation of image signals corresponding to virtual viewpoints, e.g. spatial image interpolation the virtual viewpoint locations being selected by the viewers or determined by viewer tracking
A method for storing data representative of virtual objects on a computer storage system. The method comprises storing constant data corresponding to physical properties of the virtual objects which will remain constant when the data is read. The constant data comprises one or more constant elements representative of physical properties of one or more of the virtual objects. The method also comprises storing variable data corresponding to physical properties of the virtual objects which are uncertain at the time of storing the data. The variable data comprises one or more variable elements representative of uncertain physical properties of one or more of the virtual objects and wherein each variable element comprises a range of values and a probability function for the range of values.
A bottle analysis system receives image data of a bottle to be analyzed, and the data is processed to identify a shape of the bottle, and optionally any identifying markings. A bottle type is then determined. Image analysis is used to determine a liquid level in the bottle and thereby determine a liquid volume in the bottle.
G01F 23/02 - Indicating or measuring liquid level or level of fluent solid material, e.g. indicating in terms of volume or indicating by means of an alarm by gauge glasses or other apparatus involving a window or transparent tube for directly observing the level to be measured or the level of a liquid column in free communication with the main body of the liquid
16.
A METHOD AND APPARATUS FOR ENCODING AND DECODING ONE OR MORE VIEWS OF A SCENE
Methods are provided for encoding and decoding image or video data comprising two or more views (10) of a scene. The encoding method comprises obtaining (11), for each of the two or more views, a respective block segmentation mask (12) of the view and block image data (13) of the view. The method further comprises generating (14) at least one packed frame (40) containing the two or more block segmentation masks and the block image data of the two or more views; and encoding (15) the at least one packed frame into at least one bitstream (16). Each view is divided into blocks of pixels (30), and the block segmentation mask indicates which blocks of pixels belong to an area of interest (31) in the view. The block image data comprises the blocks of pixels that belong to the area of interest. Also provided are a corresponding encoder, decoder, and bitstream.
H04N 19/132 - Sampling, masking or truncation of coding units, e.g. adaptive resampling, frame skipping, frame interpolation or high-frequency transform coefficient masking
H04N 19/167 - Position within a video image, e.g. region of interest [ROI]
H04N 19/176 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a block, e.g. a macroblock
H04N 19/46 - Embedding additional information in the video signal during the compression process
H04N 19/597 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding specially adapted for multi-view video sequence encoding
Concepts for encoding and decoding multi-view data for immersive video are disclosed. In an encoding method, metadata is generated comprising a field indicating if a patch data unit of the multi-view data comprises in-painted data for representing missing data. The generated metadata provides a means of distinguishing patch data units comprising original texture and depth data from patch data units comprising in-painted data (e.g. in-painted texture and depth data). The provision of such information within the metadata of immersive video may address problems associated with blending and pruned view reconstruction. Also provided are an encoder and a decoder for multi-view data for immersive video, and a corresponding bitstream, comprising metadata.
H04N 19/597 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding specially adapted for multi-view video sequence encoding
18.
ANTI-FOULING UNIT AND METHOD OF APPLYING A PLURALITY OF ANTI-FOULING UNITS TO A SURFACE
An anti-fouling unit (1) is configured to be arranged on a surface and comprises at least one electric circuit (30) including a light-emitting arrangement (31) configured to emit anti-fouling light. Further, the anti-fouling unit (1) comprises a carrier slab (40) carrying the at least one electric circuit (30), which carrier slab (40) includes at least one active slab zone (42) where the at least one electric circuit (30) is located and at least one passive slab zone (43) outside the active slab zone (42) that is configured to allow a division of the anti-fouling unit (1) into separate pieces without deteriorating the anti-fouling functionality, and the light-emitting arrangement (31) of the at least one electric circuit (30) is configured to realize the anti-fouling functionality both at a position of the at least one active slab zone (42) and at a position of the at least one passive slab zone (43).
An encoder, decoder, encoding method and decoding method for 3DoF+ video are disclosed. The encoding method comprises receiving (110) multi-view image or video data comprising a basic view and at least a first additional view of a scene. The method proceeds by identifying (220) pixels in the first additional view that need to be encoded because they contain scene-content that is not visible in the basic view. The first additional view is divided (230) into a plurality of first blocks of pixels. First blocks containing at least one of the identified pixels are retained (240); and first blocks that contain none of the identified pixels are discarded. The retained blocks are rearranged (250) so that they are contiguous in at least one dimension. A packed additional view is generated (260) from the rearranged first retained blocks and encoded (264).
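The retain/discard/rearrange steps above can be sketched with a flat list of blocks. The block contents, flags, and names below are illustrative; a real implementation would also record block positions so the decoder can undo the packing.

```python
# Sketch of block retention and packing as described above. A flat list of
# blocks is used for brevity; contents and names are illustrative.

def pack_blocks(blocks, contains_identified_pixels):
    """Keep only blocks flagged as containing needed pixels and make them
    contiguous, remembering each block's original index for the decoder."""
    retained = [(i, b)
                for i, (b, needed) in enumerate(zip(blocks, contains_identified_pixels))
                if needed]
    packed = [b for _, b in retained]       # contiguous packed additional view
    index_map = [i for i, _ in retained]    # metadata to undo the packing
    return packed, index_map

blocks = ["b0", "b1", "b2", "b3", "b4"]
# Blocks b1 and b4 contain scene content not visible in the basic view:
needed = [False, True, False, False, True]
packed, index_map = pack_blocks(blocks, needed)
# packed -> ["b1", "b4"]; index_map -> [1, 4]
```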
H04N 19/129 - Scanning of coding units, e.g. zig-zag scan of transform coefficients or flexible macroblock ordering [FMO]
H04N 19/132 - Sampling, masking or truncation of coding units, e.g. adaptive resampling, frame skipping, frame interpolation or high-frequency transform coefficient masking
H04N 19/134 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
H04N 19/176 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a block, e.g. a macroblock
H04N 19/182 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being a pixel
H04N 19/46 - Embedding additional information in the video signal during the compression process
H04N 19/597 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding specially adapted for multi-view video sequence encoding
20.
A SYSTEM AND METHOD FOR PROVIDING ASSISTANCE DURING BOTTLE-FEEDING
A system provides assistance during bottle-feeding. Video images of a subject bottle-feeding an infant are captured and displayed. Using image analysis, a reorientation of the bottle and/or infant is determined that is required in order to reach a desired bottle orientation and/or infant orientation. Reorientation instructions are provided in combination with the video images to assist the subject in reorienting the bottle and/or the infant to achieve the desired bottle orientation.
An audio apparatus for generating a diffuse reverberation signal comprises a receiver (501) receiving audio signals representing sound sources and metadata comprising a diffuse reverberation signal to total source relationship indicative of a level of diffuse reverberation sound relative to total emitted sound in the environment. The metadata also comprises, for each audio signal, a signal level indication and directivity data indicative of the directivity of sound radiation from the sound source represented by the audio signal. A circuit (505, 507) determines a total emitted energy indication based on the signal level indication and the directivity data, and a downmix coefficient based on the total emitted energy and the diffuse reverberation signal to total source relationship. A downmixer (509) generates a downmix signal by combining signal components for each audio signal, generated by applying the downmix coefficient for each audio signal to the audio signal. A reverberator (407) generates the diffuse reverberation signal for the environment from the downmix signal.
G10L 19/008 - Multichannel audio signal coding or decoding using interchannel correlation to reduce redundancy, e.g. joint-stereo, intensity-coding or matrixing
H04S 7/00 - Indicating arrangements; Control arrangements, e.g. balance control
22.
EXTENDED REALITY-BASED USER INTERFACE ADD-ON, SYSTEM AND METHOD FOR REVIEWING 3D OR 4D MEDICAL IMAGE DATA
The invention relates to a system (1) for reviewing 3D or 4D medical image data (2), the system (1) comprising (a) a medical review application (MRA) (4) comprising a processing module (6) configured to process a 3D or 4D dataset (2) to generate 3D content (8), and a 2D user interface (16); wherein the 2D user interface (16) is configured to display the 3D content (8) and to allow a user (30) to generate user input (18) commands; (b) an extended reality (XR)-based user interface add-on (XRA) (100); and (c) a data exchange channel (10), the data exchange channel (10) being configured to interface the processing module (6) with the XRA (100); wherein the XRA (100) is configured to interpret and process the 3D content (8) and convert it to XR content displayable to the user (30) in an XR environment (48); wherein the XR environment (48) is configured to allow a user to generate user input (18) events, and the XRA (100) is configured to process the user input (18) events and convert them to user input (18) commands readable by the MRA (4). The invention also relates to an extended reality-based user interface add-on (100), a related method for analysing a 3D or 4D dataset (2), and a related computer program.
A method for calibrating at least one of the six degrees of freedom of all or some of the cameras in a formation positioned for scene capturing, the method comprising a step of initial calibration before the scene capturing. The step comprises creating a reference video frame which comprises a reference image of a stationary reference object. During scene capturing, the method further comprises a step of further calibration, wherein the position of the reference image of the stationary reference object within a captured scene video frame is compared to its position within the reference video frame, and a step of adapting the at least one of the six degrees of freedom of multiple cameras of the formation if needed, in order to obtain improved scene capturing after the further calibration.
Presented are concepts for generating an input image dataset that may be used for training the alignment of multiple synthesized images. Such concepts may be based on the idea of generating an input image dataset from copies of an arbitrary reference image that are shifted in various directions. In this way, a single arbitrary image may be used to create an artificially misaligned input sample (i.e. an input image dataset) that can be used to train a neural network.
An internal element (310) for a feeding bottle (100) is provided, the feeding bottle comprising a teat component (110), and a container component (120), which together define a bottle volume extending longitudinally between a base end of the container component, and a top end of the teat component. The internal element (310) comprises a disc element (620) configured to be positioned within the bottle volume extending transverse the longitudinal axis, and further comprises one or more tab elements (640) protruding from an outer periphery (630) of the disc element for being received between interfacing parts of a coupling arrangement (340, 342) of the bottle.
An arrangement (300) for a feeding bottle is provided, the feeding bottle comprising a teat component (110), and a container component (120), which together define an internal bottle volume extending longitudinally between a base end of the container component, and a top end of the teat component. The arrangement comprises an internal element (310) for positioning inside the bottle volume, and a protruding element (320) arranged for extending from the internal element to an outside of the bottle when the bottle is in an assembled state with the internal element in position, for providing an interconnection between inside and outside of the bottle.
A partitioning component (210) for dividing a feeding bottle (110) into two sections: one (125) associated with a container (120) part of the bottle and one (115) associated with a teat part (110) of the bottle. The partition allows for at least partial retention of liquid in the teat part even when the bottle is tipped to a horizontal position, the more natural position for feeding a user such as a baby or toddler. To enable flow of fluid between the two sections, the partitioning component comprises a passageway arrangement (215) which comprises one or more openings (225), the passageway arrangement being configured to enable flow of both liquid and air across the partition in different directions. This allows liquid to pass into, and air to pass out of, the teat section (115) during filling of the teat. To enable maximal retention of liquid inside the teat section when the bottle is tilted to the horizontal position, the openings of the passageway arrangement are all confined to a single region of the partitioning component which, in use, is arranged offset on one diametric side of the bottle volume or of the teat volume.
A wireless device initializes (3210) a candidate resource set. A first resource is excluded (3220) from the candidate resource set based on: the first resource being offset from a second resource by one or more reservation periods; and the second resource not being monitored in a sensing window. Sidelink control information (SCI) indicating (3230) a resource reservation of the first resource is received. A sidelink transmission is transmitted (3240) via the first resource.
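The exclusion rule above can be sketched as follows: a candidate is excluded when it sits a whole number of reservation periods after a resource that was not monitored in the sensing window, since an unseen reservation there could periodically collide with it. Slot numbering, the period value, and names are illustrative, not taken from the abstract.

```python
# Sketch of the candidate-exclusion rule described above. Slot numbers,
# period, and the reservation horizon are illustrative.

def exclude_candidates(candidates, unmonitored, period, max_reservations):
    """Remove candidates that may collide with a reservation the device
    could not observe because the reserving slot was not monitored."""
    excluded = set()
    for second in unmonitored:
        # Offsets of 1..N reservation periods from the unmonitored resource
        for k in range(1, max_reservations + 1):
            excluded.add(second + k * period)
    return [c for c in candidates if c not in excluded]

candidates = list(range(100, 110))        # initialized candidate resource set
# Slot 4 was not monitored; with a period of 50, slots 54 and 104 are suspect
remaining = exclude_candidates(candidates, unmonitored=[4], period=50,
                               max_reservations=2)
# -> slot 104 is excluded; the other nine candidates remain
```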
A breast pump device (1) comprises i) a fluid pressure arrangement (2, 3) configured to interact with a breast from which milk is to be extracted and to realize a pressure cycle at a position where the fluid pressure arrangement (2, 3) is to face the breast, ii) a controller configured to execute an operation software program for controlling operation of the breast pump device (1), the operation software program including instructions to cause the fluid pressure arrangement (2, 3) to realize the pressure cycle, and iii) a monitoring arrangement configured to monitor functioning of the operation software program and to cause release of underpressure in case malfunctioning of the operation software program is detected. As a result, malfunctioning of the operation software program cannot persist, and any vacuum that may have built up in such a situation is automatically released.
Methods of encoding and decoding video data are provided. In an encoding method, source video data comprising one or more source views is encoded into a video bitstream. Depth data of at least one of the source views is nonlinearly filtered and downsampled prior to encoding. After decoding, the decoded depth data is up-sampled and nonlinearly filtered.
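The decode-side path above (up-sample, then nonlinearly filter) can be sketched in 1D. A median filter is used here as one example of a nonlinear filter; the abstract does not mandate a specific one, and the names below are illustrative.

```python
# Sketch of the decode-side depth path described above: nearest-neighbour
# up-sampling followed by a nonlinear (median) filter. 1D for brevity.

def upsample_nearest(depth, factor):
    """Nearest-neighbour up-sampling of a 1D depth row."""
    return [d for d in depth for _ in range(factor)]

def median3(depth):
    """3-tap median filter; unlike a linear filter, it preserves depth
    edges instead of smearing them into intermediate depth values."""
    padded = [depth[0]] + depth + [depth[-1]]
    return [sorted(padded[i:i + 3])[1] for i in range(len(depth))]

decoded = [1.0, 1.0, 4.0, 4.0]                  # low-resolution decoded depth
restored = median3(upsample_nearest(decoded, 2))
# -> [1.0, 1.0, 1.0, 1.0, 4.0, 4.0, 4.0, 4.0]: the depth edge stays sharp,
#    with no spurious intermediate values such as 2.5
```

The sharp edge matters for depth maps because intermediate depth values created by linear resampling would render as phantom geometry when views are synthesized.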
H04N 21/218 - Source of audio or video content, e.g. local disk arrays
H04N 21/2343 - Processing of video elementary streams, e.g. splicing of video streams or manipulating MPEG-4 scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
H04N 21/2365 - Multiplexing of several video streams
H04N 21/2662 - Controlling the complexity of the video stream, e.g. by scaling the resolution or bitrate of the video stream based on the client capabilities
H04N 21/4402 - Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to MPEG-4 scene graphs involving reformatting operations of video signals for household redistribution, storage or real-time display
H04N 21/4728 - End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification or for manipulating displayed content for selecting a ROI [Region Of Interest], e.g. for requesting a higher resolution version of a selected region
H04N 19/117 - Filters, e.g. for pre-processing or post-processing
H04N 19/154 - Measured or subjectively estimated visual quality after decoding, e.g. measurement of distortion
H04N 19/59 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving spatial sub-sampling or interpolation, e.g. alteration of picture size or resolution
H04N 19/597 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding specially adapted for multi-view video sequence encoding
A pump arrangement is provided for a breast pump system, having a pump motor assembly and a damping arrangement that provides a coupling between an outer surface of the pump motor assembly and an inner surface of a housing for the pump arrangement. A set of one or more limbs extends outwardly from a frame. For example, cushions are formed by a pair of radially extending limbs and a connecting piece connecting the pair of limbs. This arrangement provides positioning within the housing as well as damping.
A61M 1/00 - Suction or pumping devices for medical purposes; Devices for carrying-off, for treatment of, or for carrying-over, body-liquids; Drainage systems
F04B 17/03 - Pumps characterised by combination with, or adaptation to, specific driving engines or motors driven by electric motors
F04B 53/00 - Component parts, details or accessories not provided for in, or of interest apart from, groups or
A monitoring system is provided for monitoring an infant during bottle feeding. Based on orientation information and movement information in respect of the feeding bottle during feeding, orientation information in respect of the infant is obtained.
A61B 5/00 - Measuring for diagnostic purposes; Identification of persons
G16H 40/60 - ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices for the operation of medical equipment or devices
A61B 5/107 - Measuring physical dimensions, e.g. size of the entire body or parts thereof
A monitoring system for monitoring milk flow during breast feeding or milk expression uses a flow sensor arrangement to monitor milk flow levels from different regions of the breast. A map of milk flow levels for different regions of the breast is then generated and displayed.
An image synthesis system comprises receivers (201, 203, 205) receiving scene data describing at least part of a scene, object data describing a 3D object to be viewed from a viewing zone having a relative pose with respect to the object, and a view pose in the scene. A pose determiner circuit (207) determines an object pose for the object in the scene in response to the scene data and the view pose; and a view synthesis circuit (209) generates a view image of the object from the object data, the object pose, and the view pose. A circuit (211) determines a viewing region in the scene which corresponds to the viewing zone for the object being at the object pose. The pose determiner circuit (207) determines a distance measure for the view pose relative to the viewing region and changes the object pose if the distance measure meets a criterion including a requirement that a distance between the view pose and a pose of the viewing region exceeds a threshold.
A wireless device receives configuration parameters of a bandwidth part comprising resource block (RB) sets, wherein control channel elements (CCEs) are across the RB sets and a subset of the CCEs, within each RB set of the RB sets, are indexed from a same initial value. Control information is received via one or more CCEs of a first subset, of the CCEs, within an RB set of the RB sets. The wireless device transmits a signal via an uplink resource based on an index of a CCE of the one or more CCEs.
A method is provided for selecting a transparency setting and color values of pixels in a virtual image. The virtual image can be formed by combining reference images taken at different angles to produce the virtual image that views an object at a new, uncaptured angle. The method includes determining, for each pixel of the virtual image, what information it carries from the reference view images. The information of the pixel is used to define a pixel category, and the category is used to select, based on logical conditions, what information will be displayed by the pixel and to set the color of the pixel.
A method for compressing data includes obtaining a compression schema customized to a format of a delimited text file, and using the compression schema to parse the delimited text file into a plurality of data blocks, split each of the data blocks into a plurality of data units for efficient selective access, and compress the plurality of data units in the plurality of data blocks using different compression algorithms for improved compression ratio. The delimited file is split into a plurality of data blocks based on the region definitions in the schema. Each of the plurality of data blocks is split into the plurality of data units based on its respective data unit size specified in the schema. The plurality of data units in each of the plurality of data blocks are compressed using the different compression algorithms indicated by the compression instructions in the schema. The compressed file consists of the compressed data blocks, the compression schema and various metadata for data decompression, file reconstruction and functionalities such as data security and search query. The delimited text file may include genomic information or another type of information.
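As a rough illustration of the block/unit split described above, the sketch below parses a tab-delimited file into per-column data blocks, splits each block into fixed-size data units, and compresses the units with different algorithms. The schema layout, the column-per-block mapping, and the zlib/bz2 choice are assumptions made for illustration, not the claimed schema format.

```python
import bz2
import zlib

# Hypothetical schema: the delimiter, data-unit size, and per-block
# compression choices below are illustrative assumptions.
SCHEMA = {
    "delimiter": "\t",
    "unit_size": 2,                  # rows per data unit
    "block_algos": ["zlib", "bz2"],  # algorithm assigned per data block
}

def compress_delimited(text: str, schema: dict) -> list:
    """Split a delimited file into blocks and units, compressing each unit."""
    rows = [r for r in text.splitlines() if r]
    # In this sketch, each column region becomes one data block.
    columns = list(zip(*(r.split(schema["delimiter"]) for r in rows)))
    compressed_blocks = []
    for block_idx, col in enumerate(columns):
        algo = schema["block_algos"][block_idx % len(schema["block_algos"])]
        codec = zlib.compress if algo == "zlib" else bz2.compress
        units = []
        n = schema["unit_size"]
        for i in range(0, len(col), n):
            unit = "\n".join(col[i:i + n]).encode()
            units.append(codec(unit))  # independent units allow selective access
        compressed_blocks.append(units)
    return compressed_blocks
```

Because each unit is compressed independently, a reader can decompress a single unit without touching the rest of the file, which is the selective-access property the abstract mentions.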
A method for storing, by a processor, a genome graph representing a plurality of individual genomes, including: storing a linear representation of a reference genome in a data storage; receiving a first genome; identifying variations in the first genome from the reference genome; generating graph edges for each variation in the first genome from the reference genome; generating for each generated graph edge: an edge identifier that uniquely identifies the current edge in the genome graph; a start edge identifier that identifies the edge from which the current edge branches out; a start position that indicates the position on the start edge that serves as an anchoring point for the current edge; an end edge identifier that identifies the edge into which the current edge joins; an end position that indicates the position on the end edge that serves as an anchoring point for the current edge; and a sequence indicating the nucleotide sequence of the current edge; and storing the edge identifier, start edge identifier, start position, end edge identifier, end position, and sequence for each generated graph edge in the data storage. Based on this genome graph data structure, we further propose a scheme for specifying a path, which may traverse one or more edges, and the ways to extend existing genomic data formats such as SAM, VCF and MPEG-G to support the use of genome graph reference using our proposed coordinate system.
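The per-edge record listed above can be sketched as a simple data structure. The field names follow the abstract, but the concrete layout, the reference-edge convention, and the helper function are illustrative assumptions rather than the proposed format.

```python
from dataclasses import dataclass

# Illustrative sketch of the edge record described in the abstract.
@dataclass(frozen=True)
class GraphEdge:
    edge_id: int     # uniquely identifies this edge in the genome graph
    start_edge: int  # edge from which this edge branches out
    start_pos: int   # anchoring position on the start edge
    end_edge: int    # edge into which this edge joins
    end_pos: int     # anchoring position on the end edge
    sequence: str    # nucleotide sequence carried by this edge

# Assumed convention for this sketch: edge 0 holds the linear reference genome.
REFERENCE_EDGE_ID = 0

def edge_for_variant(edge_id, ref_pos_start, ref_pos_end, alt_seq):
    """Create an edge for a variation anchored on the reference genome."""
    return GraphEdge(
        edge_id=edge_id,
        start_edge=REFERENCE_EDGE_ID,
        start_pos=ref_pos_start,
        end_edge=REFERENCE_EDGE_ID,
        end_pos=ref_pos_end,
        sequence=alt_seq,
    )
```

A path through the graph could then be specified as an ordered list of `(edge_id, offset)` pairs, which is the kind of coordinate the abstract suggests for extending formats such as SAM and VCF.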
Methods of encoding and decoding immersive video are provided. In an encoding method, source video data comprising a plurality of source views is encoded into a video bitstream. At least one of the source views is down-sampled prior to encoding. A metadata bitstream associated with the video stream comprises metadata describing a configuration of the down-sampling, to assist a decoder to decode the video bitstream. It is believed that the use of down-sampled views may help to reduce coding artifacts, compared with a patch-based encoding approach. Also provided are an encoder and a decoder for immersive video, and an immersive video bitstream.
A computer-implemented system and method are provided for alerting an expectant mother to a medical risk during pregnancy. A profile of the expectant mother is used, and reports are received from the expectant mother identifying experienced symptoms. In response to a report at a particular time, any reports received over a subsequent time window are monitored, and based on the combination of reports received during the time window and the profile of the expectant mother, the need for a risk alert is determined. The user is more willing to report symptoms, because a risk alert (which functions as reporting symptoms to a medical expert) only takes place when a real risk is identified.
G16H 50/30 - ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for individual health risk assessment
G16H 50/20 - ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for computer-aided diagnosis, e.g. based on medical expert systems
A61B 5/00 - Measuring for diagnostic purposes; Identification of persons
G08B 21/02 - Alarms for ensuring the safety of persons
42.
METHOD AND SYSTEM FOR PROTECTING A SURFACE AGAINST BIOFOULING
An anti-fouling system is used for protecting a surface against biofouling. Inductive power transfer is used to power an anti-fouling light source arrangement and a voltage multiplier is used at the receiver (secondary) side. The voltage multiplier enables a reduction in the optical impact of the secondary coils in the panel.
B08B 17/02 - Preventing deposition of fouling or of dust
B01J 19/12 - Processes employing the direct application of electric or wave energy, or particle radiation; Apparatus therefor employing electromagnetic waves
Methods of encoding and decoding depth data are disclosed. In an encoding method, depth values and occupancy data are both encoded into a depth map. The method adapts how the depth values and occupancy data are converted to map values in the depth map. For example, it may adaptively select a threshold, above or below which all values represent unoccupied pixels. By adapting how the depth and occupancy are encoded, based on analysis of the depth values, the method can enable more effective encoding and transmission of the depth data and occupancy data. The encoding method outputs metadata defining the adaptive encoding. This metadata can be used by a corresponding decoding method, to decode the map values. Also provided are an encoder and a decoder for depth data, and a corresponding bitstream, comprising a depth map and its associated metadata.
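A minimal sketch of the idea in the preceding abstract: depth values and occupancy are packed into a single map, with a threshold below which all map values mean "unoccupied", and the threshold is chosen adaptively from the data. The threshold heuristic and the linear scaling here are illustrative assumptions, not the claimed adaptation.

```python
def encode_depth_map(depths, max_value=255):
    """Encode depth values (None = unoccupied) into map values plus metadata.

    Map values <= threshold mean 'unoccupied'; occupied depths are scaled
    linearly into (threshold, max_value].
    """
    occupied = [d for d in depths if d is not None]
    # Adaptive choice (illustrative heuristic): reserve a larger guard band
    # for occupancy when fewer pixels are occupied.
    threshold = 16 if len(occupied) > len(depths) // 2 else 64
    d_min, d_max = min(occupied), max(occupied)
    span = (d_max - d_min) or 1
    values = []
    for d in depths:
        if d is None:
            values.append(0)  # unoccupied pixel
        else:
            values.append(threshold + 1 +
                          round((d - d_min) / span * (max_value - threshold - 1)))
    # Metadata defining the adaptive encoding, for use by the decoder.
    metadata = {"threshold": threshold, "d_min": d_min, "d_max": d_max}
    return values, metadata

def decode_depth_map(values, metadata, max_value=255):
    """Recover depth values (or None for unoccupied pixels) from map values."""
    t, d_min, d_max = metadata["threshold"], metadata["d_min"], metadata["d_max"]
    span = (d_max - d_min) or 1
    return [None if v <= t else
            d_min + (v - t - 1) / (max_value - t - 1) * span
            for v in values]
```

The metadata dictionary plays the role of the signalled metadata in the abstract: without it, the decoder could not tell which map values denote occupancy and how the remaining range maps back to depth.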
An apparatus for evaluating a quality for image capture comprises a store (101) for storing a model of a scene and a capture circuit (105) for generating virtual captured images for a camera configuration by rendering from the model. A depth generation circuit (107) generates model depth data from the model and a depth estimation circuit (111) generates estimated depth data from the virtual captured images. A first synthesis circuit (109) and a second synthesis circuit (113) generate first and second view images for test poses by processing the virtual captured images based on the model depth data or estimated depth data respectively. A reference circuit (103) generates reference images for the test poses by rendering based on the model. A quality circuit (115) generates a quality metric based on a comparison of the first view images, the second view images, and the reference images.
H04N 21/218 - Source of audio or video content, e.g. local disk arrays
G06T 7/55 - Depth or shape recovery from multiple images
G01B 11/245 - Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures using a plurality of fixed, simultaneously operating transducers
G01C 11/02 - Picture-taking arrangements specially adapted for photogrammetry or photographic surveying, e.g. controlling overlapping of pictures
G01S 3/00 - Direction-finders for determining the direction from which infrasonic, sonic, ultrasonic, or electromagnetic waves, or particle emission, not having a directional significance, are being received
H04N 13/00 - PICTORIAL COMMUNICATION, e.g. TELEVISION - Details thereof
45.
METHOD FOR ANALYSING MEDICAL IMAGE DATA IN A VIRTUAL MULTI-USER COLLABORATION, A COMPUTER PROGRAM, A USER INTERFACE AND A SYSTEM
There is provided a method for analysing medical image data (34) in a virtual multi-user collaboration, wherein the medical image data (34) is analysed by at least two users (A, N, C, S), each user having his/her own workspace (30), wherein the workspace is a VR- and/or AR- and/or MR-workspace, the method including the steps of providing medical image data (34) including 3D or 4D image information, loading the medical image data (34) into the workspace (30) of each user so as to simultaneously display a visualization of the medical image data (34) to each user, allowing each user to individually and independently from each other change the visualization of the medical image data (34), so as to obtain an individual visualization of the medical image data (34) in each workspace (30) pertaining to each user, allowing at least one user to execute an analysing process of the medical image data (34) in his/her workspace, displaying the result of the analysing process in the workspace (30) in which the analysing process was carried out, and synchronizing the result of the analysing process in real-time with the at least one other workspace (30) such that each workspace (30) displays the result of the analysing process in the respective individual visualization of the medical image data (34). Further, there is provided a computer program relating to the above method. In addition, a user interface and a system used during execution of the above method are provided.
An electrical breast pump (10) comprising a vacuum source (100) having a vacuum pump (120) with an electrical motor (121) and an aerate valve (150) is provided. Furthermore, a controller (110) controls an operation of the vacuum pump (120) and the aerate valve (150). Each pumping cycle comprises a pumping period (PP) of the vacuum pump (120) and an aerate period (AP) during which the aerate valve (150) is switched on, the vacuum pump (120) is inactive and the electrical motor (121) is switched off. A drive circuit (140) supplies a motor supply voltage for the electrical motor (121) under the control of the controller (110). The drive circuit (140) detects an electromotive force induced voltage at the electrical motor (121) when the motor supply voltage is switched off. The controller adapts the control of the operation of the vacuum pump (120) based on the detected induced voltage.
A two-phase recommendation system for a recommendation device, employing both an external recommendation process and a recommendation process internal to the recommendation device. In particular, a processing unit uses a first data file, which is modifiable by an external source, and a second data file to recommend one or more content items to a user. The first and second data files are stored in a memory unit of the recommendation device.
48.
PRESSURE SENSOR FOR BEING INTRODUCED INTO THE CIRCULATORY SYSTEM OF A HUMAN BEING
The invention relates to a passive pressure sensor (501) for being introduced into the circulatory system of a human being and for being wirelessly read out by an outside reading system. The pressure sensor comprises a casing (502) with a diffusion blocking layer for maintaining a predetermined pressure within the casing and a magneto-mechanical oscillator with a magnetic object (508) providing a permanent magnetic moment. The magneto-mechanical oscillator transduces an external magnetic or electromagnetic excitation field into a mechanical oscillation of the magnetic object, wherein at least a part of the casing is flexible, allowing external pressure changes to be transduced into changes of the mechanical oscillation of the magnetic object. The pressure sensor can be very small and nevertheless provide high quality pressure sensing.
A61B 5/0215 - Measuring pressure in heart or blood vessels by means inserted into the body
G01L 9/00 - Measuring steady or quasi-steady pressure of a fluid or a fluent solid material by electric or magnetic pressure-sensitive elements; Transmitting or indicating the displacement of mechanical pressure-sensitive elements, used to measure the steady or quasi-steady pressure of a fluid or fluent solid material, by electric or magnetic means
A61B 5/00 - Measuring for diagnostic purposes; Identification of persons
49.
TRACKING SYSTEM AND MARKER DEVICE TO BE TRACKED BY THE TRACKING SYSTEM
A tracking system for tracking a marker device for being attached to a medical device is provided, whereby the marker device includes a sensing unit comprising a magnetic object which may be excited by an external magnetic or electromagnetic excitation field into a mechanical oscillation of the magnetic object, and the tracking system comprises a field generator for generating a predetermined magnetic or electromagnetic excitation field for inducing mechanical oscillations of the magnetic object, a transducer for transducing a magnetic or electromagnetic field generated by the induced mechanical oscillations of the magnetic object into one or more electrical response signals, and a position determination unit for determining the position of the marker device on the basis of the one or more electrical response signals.
A61B 5/06 - Devices, other than using radiation, for detecting or locating foreign bodies
A61B 34/20 - Surgical navigation systems; Devices for tracking or guiding surgical instruments, e.g. for frameless stereotaxis
A61B 90/00 - Instruments, implements or accessories specially adapted for surgery or diagnosis and not covered by any of the groups , e.g. for luxation treatment or for protecting wound edges
A61B 5/00 - Measuring for diagnostic purposes; Identification of persons
A61M 25/01 - Introducing, guiding, advancing, emplacing or holding catheters
50.
ASSESSING AT LEAST ONE STRUCTURAL FEATURE OF AN ANTI-BIOFOULING ARRANGEMENT
In an anti-biofouling context, an anti-biofouling system (20) is provided, which is configured to emit anti-biofouling light in an activated state thereof and to be applied to an object (10). Further, the anti-biofouling system (20) comprises a sensor system (30) that is configured to obtain measurement data relating to at least one structural feature of an anti-biofouling arrangement (1) including both the anti-biofouling system (20) and the object (10), in an actual case of the anti-biofouling system (20) being in place on the object (10). By including the sensor system (30) in the anti-biofouling system (20), one or more structural aspects of the anti-biofouling arrangement (1) may be checked/monitored without a need for providing separate means for that purpose.
A wireless device receives configuration parameters indicating: a first downlink bandwidth part for activation of a secondary cell, and a second downlink bandwidth part for transitioning from a dormant state to a non-dormant state of the secondary cell. The wireless device activates the first downlink bandwidth part in response to receiving a medium access control activation command indicating activation of the secondary cell. The wireless device transitions the secondary cell from the non-dormant state to the dormant state based on a command or a timer. The wireless device receives a downlink control information comprising a field indicating transitioning the secondary cell from the dormant state to the non-dormant state. The wireless device activates the second downlink BWP as an active downlink BWP in response to the transitioning the secondary cell to the non-dormant state.
Generating an image signal comprises a receiver (401) receiving source images representing a scene. A combined image generator (403) generates combined images from the source images. Each combined image is derived from only parts of at least two images of the source images. An evaluator (405) determines prediction quality measures for elements of the source images where the prediction quality measure for an element of a first source image is indicative of a difference between pixel values in the first source image and predicted pixel values for pixels in the element. The predicted pixel values are pixel values resulting from prediction of pixels from the combined images. A determiner (407) determines segments of the source images comprising elements for which the prediction quality measure is indicative of a difference above a threshold. An image signal generator (409) generates an image signal comprising image data representing the combined images and the segments of the source images.
An image source (407) provides an image divided into segments of different sizes with only a subset of these comprising image data. A metadata generator (409) generates metadata structured in accordance with a tree data structure where each node is linked to a segment of the image. Each node is either a branch node, linking the parent node to child nodes whose segments are subdivisions of the parent node's segment, or a leaf node, which has no children. A leaf node is either an unused leaf node linked to a segment for which the first image comprises no image data or a used leaf node linked to a segment for which the first image comprises image data. The metadata indicates whether each node is a branch node, a used leaf node, or an unused leaf node. An image signal generator (405) generates an image signal comprising the image data of the first image and the metadata.
H04N 19/597 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding specially adapted for multi-view video sequence encoding
H04N 19/88 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using pre-processing or post-processing specially adapted for video compression involving rearrangement of data among different coding units, e.g. shuffling, interleaving, scrambling or permutation of pixel data or permutation of transform coefficient data among different blocks
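The branch/used-leaf/unused-leaf structure of the metadata in the preceding abstract can be sketched as a depth-first serialization of the segment tree. The one-character node codes and the pre-order traversal are illustrative assumptions, not the signalled format.

```python
# Assumed node codes for this sketch: B = branch, U = used leaf (image data
# present for the segment), E = unused leaf (no image data for the segment).
BRANCH, USED_LEAF, UNUSED_LEAF = "B", "U", "E"

def serialize(node):
    """Depth-first pre-order serialization of the segment tree metadata."""
    kind, children = node
    if kind == BRANCH:
        out = [BRANCH]
        for child in children:  # children cover subdivisions of this segment
            out += serialize(child)
        return out
    return [kind]  # leaf: image data present (U) or absent (E)

# Example: one 2x2 subdivision where only two quadrants carry image data.
tree = (BRANCH, [(USED_LEAF, None), (UNUSED_LEAF, None),
                 (UNUSED_LEAF, None), (USED_LEAF, None)])
```

Because unused leaves carry no payload, a decoder walking this serialization knows exactly which segments of the image to expect data for, without any per-segment position fields.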
A method of processing depth maps comprises receiving (301) images and corresponding depth maps. The depth values of a first depth map of the corresponding depth maps are updated (303) based on depth values of at least a second depth map of the corresponding depth maps. The updating is based on a weighted combination of candidate depth values determined from the other depth maps. A weight for a candidate depth value from the second depth map is determined based on the similarity between a pixel value in the first image corresponding to the depth value being updated and a pixel value in a third image at a position determined by projecting the position of the depth value being updated to the third image using the candidate depth value. More consistent depth maps may be generated in this way.
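The weighted combination described above can be sketched in simplified 1-D form: each candidate depth value is weighted by the colour similarity between the pixel being updated and the pixel it projects to in a third image. The Gaussian weight kernel and the precomputed projection indices are illustrative assumptions, not the claimed method.

```python
import math

def update_depth(d_current, candidates, colour_first, colours_third, sigma=10.0):
    """Weighted combination of the current depth value and candidate depths.

    candidates:    list of (candidate_depth, projected_index) pairs, where the
                   index is assumed to come from projecting with that candidate
    colour_first:  colour of the pixel being updated in the first image
    colours_third: colour samples of the third image, indexed by projection
    """
    num = d_current  # the current value enters with weight 1 in this sketch
    den = 1.0
    for d_cand, idx in candidates:
        diff = colour_first - colours_third[idx]
        # Similar colours -> weight near 1; dissimilar colours -> weight near 0.
        w = math.exp(-(diff * diff) / (2 * sigma * sigma))
        num += w * d_cand
        den += w
    return num / den
```

The effect is that a candidate depth only pulls the updated value towards itself when the projection it implies lands on a similar-looking pixel, which is what makes the resulting depth maps more consistent across views.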
An apparatus comprises receivers (201, 203) receiving texture maps and meshes representing a scene from a first and second view point. An image generator (205) determines a light intensity image for a third view point based on the received data. A first view transformer (207) determines first image positions and depth values in the image for vertices of the first mesh and a second view transformer (209) determines second image positions and depth values for vertices of the second mesh. A first shader (211) determines a first light intensity value and a first depth value based on the first image positions and depth values, and a second shader (213) determines a second light intensity value and a second depth value from the second image positions and depth values. A combiner (215) generates an output value as a weighted combination of the first and second light intensity values where the weighting of a light intensity value increases for an increasing depth value.
An apparatus comprises a receiver (301) for receiving an image representation of a scene. A determiner (305) determines viewer poses for a viewer with respect to a viewer coordinate system. An aligner (307) aligns a scene coordinate system with the viewer coordinate system by aligning a scene reference position with a viewer reference position in the viewer coordinate system. A renderer (303) renders view images for different viewer poses in response to the image representation and the alignment of the scene coordinate system with the viewer coordinate system. An offset processor (309) determines the viewer reference position in response to an alignment viewer pose where the viewer reference position is dependent on an orientation of the alignment viewer pose and has an offset with respect to a viewer eye position for the alignment viewer pose. The offset includes an offset component in a direction opposite to a view direction of the viewer eye position.
An apparatus comprises a receiver (301) receiving an image signal representing a scene. The image signal includes image data comprising a number of images where each image comprises pixels that represent an image property of the scene along a ray having a ray direction from a ray origin. The ray origins are different positions for at least some pixels. The image signal further comprises a plurality of parameters describing a variation of the ray origins and/or the ray directions for pixels as a function of pixel image positions. A renderer (303) renders images from the number of images based on the plurality of parameters.
H04N 13/117 - Transformation of image signals corresponding to virtual viewpoints, e.g. spatial image interpolation the virtual viewpoint locations being selected by the viewers or determined by viewer tracking
H04N 13/178 - Metadata, e.g. disparity information
59.
TEAT FOR USE WITH A CONTAINER FOR CONTAINING LIQUID
A teat (1) comprises a hollow teat body (10) including a deformable hollow mouthpiece (11) and a normally-closed valve (30) arranged at a level of the mouthpiece (11) or a level more downstream, the valve (30) being openable under the influence of suction forces exerted on the mouthpiece (11) by a user of the teat (1) during a liquid intake action. The valve (30) is included in a valve area (32) of a valve body (31) that is configured and arranged to prevent a closed-opened condition of the valve (30) from being changed under the influence of deformation of the mouthpiece (11) inflicted by a user of the teat (1) during a liquid intake action, so that the closed-opened condition of the valve (30) is controllable by means of suction forces exerted on the mouthpiece (11) by a user of the teat (1) during a liquid intake action.
The invention relates to an apparatus for generating or processing an image signal. A first image property pixel structure is a two-dimensional non-rectangular pixel structure representing a surface of a view sphere for the viewpoint. A second image property pixel structure is a two-dimensional rectangular pixel structure and is generated by a processor (305) to have a central region derived from a central region of the first image property pixel structure and at least a first corner region derived from a first border region of the first image property pixel structure. The first border region is a region proximal to one of an upper border and a lower border of the first image property pixel structure. The image signal is generated to include the second image property pixel structure and the image signal may be processed by a receiver to recover the first image property pixel structure.
H04N 19/132 - Sampling, masking or truncation of coding units, e.g. adaptive resampling, frame skipping, frame interpolation or high-frequency transform coefficient masking
H04N 19/167 - Position within a video image, e.g. region of interest [ROI]
H04N 19/176 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a block, e.g. a macroblock
H04N 19/182 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being a pixel
H04N 19/597 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding specially adapted for multi-view video sequence encoding
H04N 19/88 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using pre-processing or post-processing specially adapted for video compression involving rearrangement of data among different coding units, e.g. shuffling, interleaving, scrambling or permutation of pixel data or permutation of transform coefficient data among different blocks
61.
ANTI-BIOFOULING ARRANGEMENT AND METHOD OF DESIGNING SUCH AN ARRANGEMENT
In an anti-biofouling context, an arrangement (1) is provided which comprises an object (10) and a light-emitting system (20) arranged on at least a main surface (11a) of the object (10). The light-emitting system (20) includes a plurality of light sources (21) for emitting anti-biofouling light and is configured to emit the light in a direction away from the object (10). The light sources (21) are arranged in the light-emitting system (20) in at least two different light-emitting groups (22) having respective main directions of emission (23) of the anti-biofouling light, the main directions of emission (23) of at least two light-emitting groups (22) having different spatial orientations when viewed on an unfolded and flattened area of the main surface (11a) where the light-emitting groups (22) are located. Consequently, the area that is reached by the anti-biofouling light during operation of the arrangement (1) can be adjusted to particular requirements.
A baby bottle device (100) is provided which comprises a teat (110) having a teat volume (115), a container (120) having a container volume (125) and a partitioning element (300) between the teat volume (115) and the container volume (125). The partitioning element (300) comprises a plurality of openings (310) for letting fluid in the container volume (125) flow into the teat volume (115). The baby bottle device comprises a floater (400) having a buoyancy, being coupled to the partitioning element (300) and being adapted to close at least one of the plurality of openings (310) in the partitioning element (300).
A baby bottle device (100) is provided which comprises a container (110) with a container volume (115), a teat (120) with a teat volume (125) and a valve unit (200). The teat comprises a first valve (112) for inletting air into the inside volume of the bottle (100). The inside volume of the bottle corresponds to the volume of the container (110) and the teat volume (125). The valve unit (200) with a second valve (210, 230) is arranged outside the teat volume (125). The opening threshold of the valve unit (200) is lower than that of the first valve (112) in the teat.
A drinking behavior monitoring device is provided which comprises a stress level detection sensor (300) detecting a stress level of a baby (20), a suction frequency detection sensor (200) detecting a suction frequency during a feeding of the baby and an analyzer (400) analyzing a drinking behavior of the baby based on the detected stress level and the detected suction frequency. The analyzer (400) compares the analyzed drinking behavior with a typical or predetermined drinking behavior and can output a notification based on the analyzed drinking behavior.
The present invention relates to a cutter assembly (10) for a hair cutting appliance (100). The cutter assembly comprises a cutting element (20), and a clamping element (30). The cutter assembly is configured to contact skin of a user of the hair cutting appliance. The clamping element is configured to clamp hair of the user that is growing out of the skin. The clamping element is configured to move within the cutter assembly to pull the clamped hair away from the skin of the user. The cutting element is configured to cut the clamped hair that has been pulled away from the skin of the user.
B26B 19/42 - Clippers or shavers operating with a plurality of cutting edges, e.g. hair clippers, dry shavers - Details of, or accessories for, hair clippers or dry shavers, e.g. housings, casings, grips or guards providing for tensioning the skin, e.g. by means of rollers, ledges
Various embodiments of the present disclosure include a thermal ablation probabilistic controller (30) employing an ablation probability model (32) trained to render a pixel ablation probability for each pixel of an ablation scan image illustrative of a static anatomical ablation. In operation, the thermal ablation probabilistic controller (30) spatially aligns a temporal sequence of ablation scan datasets representative of a dynamic anatomical ablation, and applies the ablation probability model (32) to the spatial alignment of the temporal sequence of ablation scan datasets to render the pixel ablation probability for each pixel of the ablation scan image illustrative of the static anatomical ablation.
A61B 5/05 - Detecting, measuring or recording for diagnosis by means of electric currents or magnetic fields; Measuring using microwaves or radio waves
A61B 8/08 - Detecting organic movements or changes, e.g. tumours, cysts, swellings
A61B 18/12 - Surgical instruments, devices or methods for transferring non-mechanical forms of energy to or from the body by heating by passing a current through the tissue to be heated, e.g. high-frequency current
A baby bottle device (100) is provided which comprises at least one movement sensor (140, 150) for detecting a movement of the baby bottle device (100). The movement data from the movement sensor (140, 150) is analyzed in an analyzer (200) to perform a suck-swallow-breathe analysis during a drinking phase of the baby based on the movement data from the movement sensor (140, 150). Thus, a drinking behavior of a baby can be efficiently analyzed.
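One building block of such a suck-swallow-breathe analysis is estimating the suction rate from the movement trace. The sketch below is a hypothetical simplification (a full analysis would segment sucks, swallows and breathing pauses): it merely takes the dominant spectral peak of a synthetic accelerometer trace as a proxy for the suction frequency.

```python
import numpy as np

def estimate_suction_frequency(signal, sample_rate_hz):
    """Estimate the dominant suction frequency (Hz) from a movement trace.

    Hypothetical sketch: the dominant FFT peak is used as a proxy for the
    suction rate; DC (gravity) is removed by mean subtraction.
    """
    samples = np.asarray(signal, dtype=float)
    samples = samples - samples.mean()              # remove the DC offset
    spectrum = np.abs(np.fft.rfft(samples))
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / sample_rate_hz)
    return freqs[np.argmax(spectrum)]

# Synthetic accelerometer trace: a 1.5 Hz sucking motion plus sensor noise.
rng = np.random.default_rng(0)
t = np.arange(0, 20, 1 / 50)                        # 20 s sampled at 50 Hz
trace = np.sin(2 * np.pi * 1.5 * t) + 0.1 * rng.standard_normal(t.size)
print(round(estimate_suction_frequency(trace, 50), 2))  # → 1.5
```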
A61B 5/11 - Measuring movement of the entire body or parts thereof, e.g. head or hand tremor or mobility of a limb
G16H 40/00 - ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices
A61B 5/00 - Measuring for diagnostic purposes ; Identification of persons
An image synthesis apparatus comprises a receiver (301) for receiving image parts and associated depth data of images representing a scene from different view poses from an image source. A store (311) stores a depth transition metric for each image part of a set of image parts where the depth transition metric for an image part is indicative of a direction of a depth transition in the image part. A determiner (305) determines a rendering view pose and an image synthesizer (303) synthesizes at least one image from received image parts. A selector is arranged to select a first image part of the set of image parts in response to the depth transition metric and a retriever (309) retrieves the first image part from the image source. The synthesis of an image part for the rendering view pose is based on the first image part.
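The selection step can be sketched as follows. This is a simplified stand-in for the selector, under the assumption that each stored metric is a 2-D direction vector of the depth transition and that the part whose transition faces the viewer's displacement exposes the most disoccluded background:

```python
import numpy as np

def select_image_part(parts, view_offset):
    """Pick the image part whose depth-transition direction best matches the
    displacement of the rendering view pose from the reference pose.

    `parts` maps hypothetical part ids to unit 2-D depth-transition
    direction vectors; the dot product with the view offset is maximised.
    """
    view_offset = np.asarray(view_offset, dtype=float)
    return max(parts, key=lambda pid: float(np.dot(parts[pid], view_offset)))

parts = {
    "left_edge":  np.array([-1.0, 0.0]),    # depth step facing left
    "right_edge": np.array([1.0, 0.0]),     # depth step facing right
    "top_edge":   np.array([0.0, 1.0]),
}
print(select_image_part(parts, view_offset=(0.3, 0.05)))  # → right_edge
```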
H04N 21/218 - Source of audio or video content, e.g. local disk arrays
H04N 21/231 - Content storage operation, e.g. caching movies for short term storage, replicating data over plural servers or prioritizing data for deletion
H04N 21/6587 - Control parameters, e.g. trick play commands or viewpoint selection
H04N 13/117 - Transformation of image signals corresponding to virtual viewpoints, e.g. spatial image interpolation the virtual viewpoint locations being selected by the viewers or determined by viewer tracking
An antifouling system for reducing and/or preventing fouling of an object exposed to fouling conditions when in use, comprising a plurality of antifouling devices (26) for providing an antifouling radiation to at least part of the object and/or at least part of the antifouling system; wherein the antifouling system further comprises: - a power transmission system comprising: - an inductive power emitter (10) comprising at least one inductive emitter element (12); and - a plurality of inductive power receivers (24) each one comprising at least one inductive receiver element; wherein the inductive power emitter and the plurality of inductive power receivers are for mounting on the object in a fixed configuration with respect to each other, thereby to provide an inductive coupling between each one of the at least one inductive receiver elements and the at least one inductive emitter element such that power may be inductively transmitted when the power transmission system is in use; and wherein the plurality of antifouling devices (26) are configured to be driven using transmitted power from at least one of the plurality of inductive power receivers when the system is in use.
In the context of anti-biofouling of marine objects, a light emitting unit is configured to be applied to a surface area of a marine object and comprises at least one light source (12, 13) configured to emit anti-fouling light, and two electrically conductive plates (14, 15), wherein the at least one light source (12, 13) is electrically connected at one side to one of the plates (14, 15) and at the other side to an electric energy distribution arrangement of the light emitting unit. The plates (14, 15) are arranged to constitute respective capacitors (21, 22) with an electrically conductive surface area of, or over, the marine object, said capacitors (21, 22) being connected in series through the electrically conductive surface area once the light emitting unit is actually applied to a surface area of a marine object.
An anti-fouling lighting system is used for protecting a surface (16) against biofouling while the surface (16) is submerged in water. A non-contact water sensor (60) is used for sensing water thereby to detect whether or not a light source arrangement (26), or a portion of the light source arrangement (26), is submerged in water. The light source arrangement (26), or the portion of the light source arrangement (26), is controlled in dependence on the water sensor (60) output.
At least some applications in the total HDR video chain desire some more sophisticated approach, such as a high dynamic range video encoder (900), arranged to receive via an image input (920) an input high dynamic range image (MsterHDR) which has a first maximum pixel luminance (PB_C_H50), the encoder being arranged to receive via a metadata input (921) a master luma mapping function (FL_50t1), which luma mapping function defines the relationship between normalized lumas of the input high dynamic range image and normalized lumas of a corresponding standard dynamic range image (Im_LDR) having a maximum pixel luminance of preferably 100 nit, characterized in that the encoder further comprises a metadata input (923) to receive a second maximum pixel luminance (PB_CH), and the encoder further being characterized in that it comprises: - an HDR function generation unit (901) arranged to apply a standardized algorithm to transform the master luma mapping function (FL_50t1) into an adapted luma mapping function (F_H2hCI), which relates normalized lumas of the input high dynamic range image to normalized luminances of an intermediate dynamic range image (IDR) which is characterized by having a maximum possible luminance equal to the second maximum pixel luminance (PB_CH); - an IDR image calculation unit (902) arranged to apply the adapted luma mapping function (F_H2hCI) to lumas of pixels of the input high dynamic range image (MsterHDR) to obtain lumas of pixels of the intermediate dynamic range image (IDR); and - an IDR mapping function generator (903) arranged to derive on the basis of the master luma mapping function (FL_50t1) and the adapted luma mapping function (F_H2hCI) a channel luma mapping function (F_I2sCI), which defines as output the respective normalized lumas of the standard dynamic range image (Im_LDR) when given as input the respective normalized lumas of the intermediate dynamic range image (IDR); the encoder being further characterized to have as output:
the intermediate dynamic range image (IDR), as first metadata the second maximum pixel luminance (PB_CH), as second metadata the channel luma mapping function (F_I2sCI); and as third metadata the first maximum pixel luminance (PB_C_H50).
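The relation between the three luma mapping functions can be illustrated numerically. The power-law curves and exponents below are hypothetical stand-ins for the standardized algorithm; the sketch only demonstrates the structural property that chaining the adapted mapping (master HDR to IDR) with the derived channel mapping (IDR to SDR) reproduces the master mapping (master HDR to SDR):

```python
import numpy as np

# Hypothetical exponents; real mappings are content-dependent curves.
g_master = 0.4    # stand-in for FL_50t1 (master HDR -> 100-nit SDR)
g_adapted = 0.7   # stand-in for F_H2hCI (master HDR -> IDR at PB_CH)

def fl_50t1(y):       # master luma mapping, on normalized lumas
    return y ** g_master

def f_h2hci(y):       # adapted mapping to the intermediate dynamic range
    return y ** g_adapted

def f_i2sci(y):       # channel mapping derived from the two above
    return y ** (g_master / g_adapted)

y_hdr = np.linspace(0.01, 1.0, 50)       # normalized master-HDR lumas
y_idr = f_h2hci(y_hdr)                   # lumas of the transmitted IDR image
y_sdr = f_i2sci(y_idr)                   # decoder-side SDR reconstruction
assert np.allclose(y_sdr, fl_50t1(y_hdr))  # chain equals the master mapping
```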
H04N 19/179 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being a scene or a shot
The invention relates to a feeding bottle device, feeding method and a partitioning component (210) for a feeding bottle device (100), comprising a teat component (110) defining a teat volume (115) therein and a container component (120) defining a container volume (125) therein, the teat component (110) being attachable to the container component (120) by means of an attachment component (130). The partitioning component (210) comprises a first passage (212) allowing a passage of air and liquid between the container volume (125) and the teat volume (115) and a second passage (214) allowing a passage of liquid and preventing a passage of air between the teat volume (115) and the container volume (125). The solutions increase the user convenience when operating the feeding bottle device without increasing the risk of colic-like symptoms for the infant while feeding in a horizontal or near-horizontal feeding position.
An apparatus comprises a store (209) storing a set of anchor poses for a scene, as well as typically 3D image data for the scene. A receiver (201) receives viewer poses for a viewer and a render pose processor (203) determines a render pose in the scene for a current viewer pose of the viewer pose where the render pose is determined relative to a reference anchor pose. A retriever (207) retrieves 3D image data for the reference anchor pose and a synthesizer (205) synthesizes images for the render pose in response to the 3D image data. A selector selects the reference anchor pose from the set of anchor poses and is arranged to switch the reference anchor pose from a first anchor pose of the set of anchor poses to a second anchor pose of the set of anchor poses in response to the viewer poses.
A wireless pressure sensing unit (20) comprises a membrane (25) forming an outer wall portion of a cavity and two permanent magnets (26,28) inside the cavity. One magnet is coupled to the membrane, and at least one magnet is free to oscillate with a rotational movement. The oscillation takes place at a resonance frequency, which is a function of the sensed pressure, which pressure influences the spacing between the two permanent magnets. This oscillation frequency can be sensed remotely by measuring a magnetic field altered by the oscillation. The wireless pressure sensing unit may be provided on a catheter (21) or guidewire.
The present invention proposes an apparatus and method for estimating a level of thermal ablation. The apparatus (110) comprises a data interface (111) configured to receive the three-dimensional ultrasound echo data of the tissue region; and a data processor (113) configured to measure in-plane strain and out-of-plane motion regarding the tissue region on the basis of the received ultrasound echo data. The data processor (113) is further configured to estimate the level of tissue ablation for the tissue region on the basis of the in-plane strain, the out-of-plane motion, and a predetermined model, the predetermined model at least reflecting a first causal relationship between the in-plane strain and the level of tissue ablation and a second causal relationship between the out-of-plane motion and the level of tissue ablation. While conventionally the out-of-plane motion is merely regarded as an artifact to be compensated for, the present invention makes use of the causal relationship between the level of tissue ablation and the out-of-plane motion, and thus the estimation is more accurate and/or reliable.
A controller for registering a magnetic resonance imaging (MRI) image to a tracking space includes a memory that stores instructions; and a processor that executes the instructions. The instructions cause the controller to execute a process that results in generating an image registration of a 3-dimensional magnetic resonance imaging volume in the tracking space based on 2-dimensional coordinates of a midsagittal plane of an organ, an image registration of the midsagittal plane of the organ, and a tracking position in the tracking space of an ultrasound image of the midsagittal plane.
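The core of such a registration is chaining coordinate transforms: a volume-to-plane registration composed with a plane-to-tracking registration places the whole MRI volume in tracking space. The sketch below uses hypothetical 2-D homogeneous rigid transforms as simplified stand-ins for the full 3-D registrations:

```python
import numpy as np

def rigid(angle_deg, tx, ty):
    """2-D homogeneous rigid transform (rotation plus translation); a
    simplified stand-in for a 3-D registration transform."""
    a = np.deg2rad(angle_deg)
    return np.array([[np.cos(a), -np.sin(a), tx],
                     [np.sin(a),  np.cos(a), ty],
                     [0.0,        0.0,       1.0]])

# Hypothetical registrations: MRI volume -> midsagittal-plane coordinates,
# and midsagittal plane -> tracking space (via the tracked ultrasound).
T_vol_to_plane = rigid(30.0, 5.0, -2.0)
T_plane_to_track = rigid(-10.0, 1.0, 4.0)

# Composing the two registrations yields volume -> tracking space.
T_vol_to_track = T_plane_to_track @ T_vol_to_plane

p_vol = np.array([1.0, 2.0, 1.0])            # a homogeneous point in the volume
p_track = T_vol_to_track @ p_vol             # the same point in tracking space
assert np.allclose(p_track, T_plane_to_track @ (T_vol_to_plane @ p_vol))
```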
A system (200) comprising a light source (220) configured to generate light source radiation (221), wherein the light source radiation (221) at least comprises UV radiation, wherein the system (200) further comprises a luminescent material (400) configured to convert part of the light source radiation (221) into luminescent material radiation (401), wherein the luminescent material radiation (401) comprises one or more of visible light and infrared radiation, wherein the system (200) is configured to generate system light (201) comprising the light source radiation (221) and the luminescent material radiation (401).
B08B 7/00 - Cleaning by methods not provided for in a single other subclass or a single group in this subclass
F21K 9/64 - Optical arrangements integrated in the light source, e.g. for improving the colour rendering index or the light extraction using wavelength conversion means distinct or spaced from the light-generating element, e.g. a remote phosphor layer
F21V 9/30 - Elements containing photoluminescent material distinct from or spaced from the light source
F21V 9/32 - Elements containing photoluminescent material distinct from or spaced from the light source characterised by the arrangement of the photoluminescent material
Disclosed is an in vitro method for assessing whether a human patient suffering from periodontitis has mild periodontitis or advanced periodontitis. The method is based on the insight to determine a selection of two biomarker proteins. Accordingly, in a sample of saliva of a patient suffering from periodontitis, the concentrations are measured of the proteins Pyruvate Kinase (PK) and at least one of Matrix metalloproteinase-9 (MMP9), S100 calcium-binding protein A8 (S100A8), and Hemoglobin subunit beta (Hb-beta); or of the proteins Matrix metalloproteinase-9 (MMP9) and at least one of S100 calcium-binding protein A8 (S100A8) and S100 calcium-binding protein A9 (S100A9). Based on the concentrations as measured, a value is determined reflecting the joint concentrations for said proteins. This value is compared with a threshold value reflecting in the same manner the joint concentrations associated with advanced periodontitis. The comparison allows assessing whether the testing value is indicative of the presence of advanced periodontitis or of mild periodontitis in said patient. Thereby, typically, a testing value reflecting a joint concentration below the joint concentration reflected by the threshold value is indicative for mild periodontitis in said patient, and a testing value reflecting a joint concentration at or above the joint concentration reflected by the threshold value is indicative for advanced periodontitis in said patient.
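The decision logic of the method reduces to forming a joint value and comparing it against a threshold. The abstract does not specify how the joint value is formed, so the weighted sum, the weights, and the threshold below are all hypothetical placeholders:

```python
def joint_concentration(concs, weights):
    """Combine biomarker concentrations into a single testing value.

    Hypothetical illustration: a weighted sum stands in for whatever
    combination rule the method actually uses.
    """
    return sum(weights[p] * concs[p] for p in weights)

# Hypothetical saliva concentrations (arbitrary units), weights, threshold.
weights = {"PK": 1.0, "MMP9": 0.5}
threshold = 10.0

value = joint_concentration({"PK": 6.0, "MMP9": 9.0}, weights)
# 6.0 + 0.5 * 9.0 = 10.5, at or above the threshold:
print("advanced" if value >= threshold else "mild")  # → advanced
```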
Disclosed is an in vitro method for assessing whether a human patient has periodontitis. The method is based on the insight to determine biomarker proteins. Accordingly, in a sample of saliva of a patient, the concentrations are measured of the Free Light Chain κ protein and/or the Free Light Chain λ protein. Based on the concentration(s) as measured, a value is determined reflecting the concentration or joint concentrations for said protein or proteins. This value is compared with a threshold value reflecting in the same manner the concentration or joint concentrations associated with periodontitis. The comparison allows assessing whether the testing value is indicative of the presence of periodontitis in said patient. Thereby, typically, a testing value reflecting a concentration or joint concentration below the concentration or joint concentration reflected by the threshold value is indicative for absence of periodontitis in said patient, and a testing value reflecting a concentration or joint concentration at or above the concentration or joint concentration reflected by the threshold value is indicative for periodontitis in said patient.
An apparatus for generating view images for a scene comprises a store (101) which stores three dimensional scene data representing the scene from a viewing region. The three dimensional scene data may e.g. be images and depth maps captured from capture positions within the viewing region. A movement processor (105) receives motion data, such as head or eye tracking data, for a user and determines an observer viewing position and an observer viewing orientation from the motion data. A change processor (109) determines an orientation change measure for the observer viewing orientation and an adapter (111) is arranged to reduce a distance from the observer viewing position relative to the viewing region in response to the orientation change measure. An image generator (103) generates view images for the observer viewing position and the observer viewing orientation from the scene data.
An electric current supply system (20) is designed to be at least partially submerged in an electrically conductive liquid during operation thereof, and comprises at least one electrically conductive component (21, 22, 23, 24) enveloped in liquid-tight material (40). The component (21, 22, 23, 24) comprises sacrificial material that is capable of reacting electrochemically with the liquid. Further, the component (21, 22, 23, 24) comprises at least one gas trap portion (50) at which the sacrificial material occupies a space in the liquid-tight material (40) that is thereby defined with a gas trapping shape. If, in case of damage to the system (20) in an actually submerged state, the component (21, 22, 23, 24) gets exposed to the liquid, the electrochemical reaction occurring at the exposed area of the component (21, 22, 23, 24) and the outflow of electric current to the liquid are stopped.
In one embodiment, an apparatus (12) is presented that detects wireless signals from external devices (18) that uniquely identify each of the external devices, records, in memory (30), information about the external devices without access to an external database, and compares information from the external devices to determine a relative location of the wearable device without using additional, power-hungry position location functionality if there is a threshold match in the compared information. In some embodiments, the invention uses the determined relative location to trigger an action at another device. The invention, using self-contained functionality, enables improvements in same location or home location determination accuracy, memory conservation, and power consumption.
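The comparison step can be sketched as set matching over recorded device identifiers. The abstract only speaks of "a threshold match in the compared information"; Jaccard similarity of identifier sets, and the identifiers and threshold below, are hypothetical choices for illustration:

```python
def same_location(seen_now, seen_before, threshold=0.6):
    """Decide whether the device is back at a known location by comparing
    the currently detected device identifiers with a recorded set.

    Hedged sketch: Jaccard similarity of the two identifier sets is used
    as one plausible reading of a 'threshold match'.
    """
    now, before = set(seen_now), set(seen_before)
    if not now and not before:
        return False
    overlap = len(now & before) / len(now | before)
    return overlap >= threshold

# Hypothetical identifiers recorded at the home location.
home = {"tv-a1", "router-b2", "speaker-c3", "thermostat-d4"}
print(same_location({"tv-a1", "router-b2", "speaker-c3"}, home))  # → True
print(same_location({"router-b2", "phone-x9"}, home))             # → False
```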
G01S 5/02 - Position-fixing by co-ordinating two or more direction or position-line determinations; Position-fixing by co-ordinating two or more distance determinations using radio waves
A61B 5/00 - Measuring for diagnostic purposes ; Identification of persons
G08B 21/02 - Alarms for ensuring the safety of persons
G08B 25/01 - Alarm systems in which the location of the alarm condition is signalled to a central station, e.g. fire or police telegraphic systems characterised by the transmission medium
84.
DETERMINING FUNCTIONAL STATUS OF IMMUNE CELLS TYPES AND IMMUNE RESPONSE
A method for determining functional status of at least one immune cell type in at least one sample of a subject comprises determining the functional status of the at least one immune cell type based on activity of at least one signaling pathway in the at least one immune cell type in the at least one sample of the subject; and optionally providing the functional status of the at least one immune cell type in the at least one sample of the subject.
C12Q 1/6883 - Nucleic acid products used in the analysis of nucleic acids, e.g. primers or probes for diseases caused by alterations of genetic material
G01N 33/50 - Chemical analysis of biological material, e.g. blood, urine; Testing involving biospecific ligand binding methods; Immunological testing
85.
A LIGHT EMITTING DEVICE, COMPRISING LIGHT EMITTING UNITS BEING ARRANGED IN A PLANE FILLING PATTERN
A light emitting device (1) is provided that can be used in various contexts, including the context of realizing an anti-fouling action on surfaces. The light emitting device (1) comprises light emitting units (10) being arranged in a plane filling pattern (20) for covering at least a substantial portion of a surface. Individual light emitting units (10) are electrically interconnected through connection areas (12, 13) as present on the light emitting units (10) for providing electrical access to an internal electrical circuit (11) thereof, wherein the light emitting units (10) overlap at the positions of at least portions of the connection areas (12, 13) thereof. Further, at least one of the connection areas (12, 13) of the individual light emitting units (10) may be electrically connected simultaneously to respective connection areas (12, 13) of at least two other light emitting units (10).
A material for binding to a cell culturing protein is disclosed. The material contains a bulk-modified elastomer comprising a plurality of fatty acid moieties covalently bound to the elastomer bulk, wherein the carboxylic acid groups of said moieties are available to provide said binding. Also disclosed are a fluidic device module, a cell culturing scaffold, a fluidic device, the method of synthesizing such a material and a drug testing method. With such a material, a (monolithic) fluidic device module may be manufactured in as few as a single step injection molding process.
The invention provides a layer stack (500) comprising a first silicone layer (510), wherein the first silicone layer (510) has a first surface (511) and a second surface (512), wherein the first silicone layer (510) is transmissive for UV radiation having one or more wavelengths selected from the range of 200-380 nm, wherein the layer stack (500) further comprises one or more of: - a first layer element configured at a first side of the first surface (511), wherein the first layer element is associated by a chemical binding with the first surface (511) directly or via a first intermediate layer, which is transmissive for UV radiation having one or more wavelengths selected from the range of 200-380 nm, wherein the first layer element at least comprises a first layer differing in composition from the first silicone layer (510), and wherein the first layer element is transmissive for UV radiation having one or more wavelengths selected from the range of 200-380 nm; and - a second layer element (620) configured at a second side of the second surface (512) wherein the second layer element (620) is associated by a chemical binding with the second surface (512) directly or via a second intermediate layer, wherein the second layer element (620) at least comprises a second layer (1220) differing in composition from the first silicone layer (510).
The invention provides a system (200) comprising (i) a light source (220) configured to provide radiation (221), wherein the radiation (221) at least comprises UV radiation, (ii) a waveguide element (1210) comprising a radiation exit window (230), wherein the waveguide element (1210) is configured to receive at least part of the radiation (221) and to radiate at least part of the radiation (221) to the exterior of the waveguide element (1210) via the radiation exit window (230) and configured to internally reflect part of the radiation (221) at the radiation exit window (230), (iii) an optical sensor (310) configured to sense an internal reflection intensity (I) of the internally reflected radiation (221), and (iv) a control system (300), functionally coupled to the optical sensor, and configured to reduce the intensity of the radiation (221) as a function of reaching a predetermined first threshold of a reduction of the internal reflection intensity (I) over time.
The present invention relates to detecting objects in medical images. In order to provide an improved detection of objects in medical images, a medical image detection device (10) is provided that comprises an image data input (12) and a processing unit (14). The image data input is configured to receive image data of a biological sample. The processing unit comprises a detector (16) and a classifier (18). The detector is configured to detect objects of interest in the sample by detecting in the image data at least one predetermined object feature. The detected objects are candidate objects, which comprise true positives and possible false positives. Further, the classifier is configured to classify the possible false positives as false positives or as true positives. The classifier is a trained classifier, trained specifically to recognize the false positives of the detector.
G06V 20/69 - Microscopic objects, e.g. biological cells or cellular parts
G16H 30/40 - ICT specially adapted for the handling or processing of medical images for processing medical images, e.g. editing
G06V 10/764 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
A first electronic network node (110) is provided, configured for a key exchange (KEX) protocol. The first network node is configured to: obtain a shared matrix (A) shared with a second network node, entries in the shared matrix (A) being selected modulo a first modulus (q); generate a private key matrix (S_I), entries in the private key matrix being bounded in absolute value by a bound (s); and generate a public key matrix (P_I) by computing a matrix product between the shared matrix (A) and the private key matrix (S_I) modulo the first modulus (q) and scaling the entries in the matrix product down to a second modulus (p).
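The public-key computation can be sketched in a few lines. The parameters below are toy values (real lattice-based schemes use far larger dimensions and moduli), and the floor-based scaling from modulus q down to p is one common rounding choice, not necessarily the one the protocol specifies:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical toy parameters for illustration only.
n, q, p, s = 4, 2048, 256, 3

A = rng.integers(0, q, size=(n, n))           # shared matrix, entries mod q
S_I = rng.integers(-s, s + 1, size=(n, n))    # private key, |entries| <= s

# Public key: matrix product mod q, then scaled down to the second modulus p.
P_I = ((A @ S_I) % q) * p // q

assert P_I.shape == (n, n)
assert np.all((P_I >= 0) & (P_I < p))         # entries now live modulo p
```

Scaling down to the smaller modulus both compresses the public key and discards the low-order bits that would otherwise leak information about the private key.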
A base station and a user equipment for a wireless communication network having a plurality of logical radio access networks are described. The base station communicates with a plurality of users to be served by the base station for accessing one or more of the logical radio access networks, and selectively controls the physical resources of the wireless communication network assigned to the logical radio access networks and/or controls access of the users or user groups to one or more of the logical radio access networks. The user equipment, for accessing at least one of the logical radio access networks, receives and processes a control signal from the base station, which indicates the physical resources of the wireless communication network assigned to the logical radio access network and/or includes access control information for the user equipment for accessing the logical radio access network.
Van Strijp, Dianne Arnoldina Margaretha Wilhelmina
Van Brussel, Anne Godefrida Catharina
Wrobel, Janneke
Van Zon, Joannes Baptist Adrianus Dionisius
Den Biezen, Eveline Catharina Anna Clasina
Alves De Inda, Marcia
Methods are described for stratifying patient risk for patients with prostate cancer and for providing a treatment recommendation to a patient based on a phosphodiesterase 4D variant 7 (PDE4D7) risk score. A diagnostic kit and a computer program product for the analysis and determination of the PDE4D7 risk score are also described.
The present invention relates to certain target genes of the FOXO transcription factor family, which are markers for an oxidative stress state and can be used for inferring an oxidative stress state of a FOXO transcription factor element in the body of a medical subject. The invention further relates to methods for inferring an oxidative stress state of a FOXO transcription element and for inferring the activity of the FOXO/PI3K cellular signalling pathway based on expression levels of the target genes as well as products to perform the methods.
C07K 14/47 - Peptides having more than 20 amino acids; Gastrins; Somatostatins; Melanotropins; Derivatives thereof from humans from vertebrates from mammals
A first device and a second device are disclosed for reaching agreement on a secret value. Herein, the second device comprises a receiver configured to receive information indicative of reconciliation data h from the first device, and a processor configured to compute a common secret s based on an integer value b, an equation, and system parameters. The processor is configured to compute b based on a key exchange protocol. The first device has a number a in approximate agreement with the number b. The first device comprises a processor configured to determine a common secret s based on an integer value a, an equation, and system parameters, and to determine reconciliation data h. The first device further comprises a transmitter configured to transmit information indicative of the reconciliation data h to the second device.
In the field of wireless communication networks or systems in which a user equipment is configured with semi-persistent scheduling (SPS), a first aspect of the invention provides for continuous or non-interrupted SPS of the user equipment after a handover, and a second aspect of the invention provides an enhanced control signaling for a user equipment configured with SPS to reduce the signaling overhead.
A light emitting arrangement (100) for anti-fouling of a surface (30), comprises an optical medium (10) and at least one light source (20) for emitting anti-fouling light. A first zone (1) of the arrangement (100), which is closest to the light source (20), is arranged and configured to predominantly make the anti-fouling light reflect in a specular manner towards an emission surface (12) of the optical medium (10), through the optical medium (10), a second zone (2) of the arrangement (100) is arranged and configured to predominantly realize propagation of the anti-fouling light through the optical medium (10) by total internal reflection, and a third zone (3) of the arrangement (100), which is furthest away from the light source (20), is arranged and configured to predominantly make the anti-fouling light scatter out of the optical medium (10), through the emission surface (12) of the optical medium (10).
The invention provides a light guide element (1300) comprising a light guide (300), wherein the light guide (300) comprises a first light guide face (301) and a second light guide face (302) with UV radiation transmissive light guide material (305) between the first light guide face (301) and the second light guide face (302), wherein the light guide element (1300) further comprises one or more of: (i) a first layer element (30) in contact with the first light guide face (301), wherein the first layer element (30) is transmissive for UV radiation; and (ii) a second layer element (130) in contact with the second light guide face (302), wherein the second layer element (130) has one or more functionalities selected from the group consisting of (a) reflective for UV radiation, (b) adhesive for adhering the light guide (300) to an object, (c) reinforcing the light guide element (1300), and (d) protective for the light guide (300).
F21V 8/00 - Use of light guides, e.g. fibre optic devices, in lighting devices or systems
G02B 6/10 - Light guides; Structural details of arrangements comprising light guides and other optical elements, e.g. couplings of the optical waveguide type
99.
LIGHT GUIDES WITH LOW REFRACTIVE COATING TO BE USED IN WATER
The invention provides a light guide element (1300) comprising a light guide (300) and a layer element (30), wherein the light guide (300) comprises a light guide face (301) and wherein the layer element (30) comprises an optical layer (310), wherein said optical layer (310) is in contact with at least part of the light guide face (301), wherein the optical layer (310) has a first index of refraction (n1) smaller than 1.36 at 280 nm, wherein the light guide (300) comprises a UV radiation transmissive light guide material (305).
F21V 8/00 - Use of light guides, e.g. fibre optic devices, in lighting devices or systems
G02B 6/10 - Light guides; Structural details of arrangements comprising light guides and other optical elements, e.g. couplings of the optical waveguide type
A system and method are provided for change detection in medical images. A difference image representing intensity differences between a first medical image and a second medical image is generated. A mixture model is fitted to an intensity distribution of the difference image to identify a plurality of probability distributions which collectively model the intensity distribution. A plurality of intensity ranges is determined as a function of the plurality of probability distributions. Image data of the difference image is labeled by determining into which of the plurality of intensity ranges said labeled image data falls. Accordingly, more accurate change detection is obtained than with known systems and methods.
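The fitting-and-labeling pipeline can be sketched on synthetic data. The minimal two-component EM loop below is a stand-in for the mixture-model fit (the method itself does not fix the number of components or the fitting algorithm), and labeling by the most probable component stands in for the intensity-range step:

```python
import numpy as np

def fit_gmm_1d(x, n_iter=50):
    """Fit a two-component 1-D Gaussian mixture with a plain EM loop
    (a minimal stand-in for the mixture-model fit in the method)."""
    mu = np.array([x.min(), x.max()], dtype=float)   # spread the initial means
    var = np.full(2, x.var() + 1e-6)
    w = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: responsibility of each component for each sample.
        pdf = w * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) \
            / np.sqrt(2 * np.pi * var)
        r = pdf / pdf.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means and variances.
        nk = r.sum(axis=0)
        w = nk / len(x)
        mu = (r * x[:, None]).sum(axis=0) / nk
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk + 1e-6
    return w, mu, var

# Synthetic difference image: no-change noise around 0 plus a changed
# region of higher intensity difference around 10.
rng = np.random.default_rng(1)
diff = np.concatenate([rng.normal(0, 1, 900), rng.normal(10, 1, 100)])
w, mu, var = fit_gmm_1d(diff)

# Label each pixel by its most probable component; the component with the
# larger mean models the changed region.
pdf = w * np.exp(-0.5 * (diff[:, None] - mu) ** 2 / var) \
    / np.sqrt(2 * np.pi * var)
changed = pdf.argmax(axis=1) == mu.argmax()
print(int(changed.sum()))        # roughly 100 pixels labeled as changed
```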