There is provided an image processing apparatus and method that make it possible to suppress degradation of the encoding efficiency. In the case where primary transform, which is a transform process applied to a prediction residual (the difference between an image and a prediction image of the image), is to be skipped, secondary transform, which is a transform process applied to a primary transform coefficient obtained by the primary transform of the prediction residual, is also skipped. The present disclosure can be applied, for example, to an image processing apparatus, an image encoding apparatus, an image decoding apparatus, and so forth.
H04N 19/61 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding in combination with predictive coding
H04N 19/12 - Selection from among a plurality of transforms or standards, e.g. selection between discrete cosine transform [DCT] and sub-band transform or selection between H.263 and H.264
H04N 19/157 - Assigned coding mode, i.e. the coding mode being predefined or preselected to be further used for selection of another element or parameter
H04N 19/176 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a block, e.g. a macroblock
H04N 19/517 - Processing of motion vectors by encoding
H04N 19/70 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by syntax aspects related to video coding, e.g. related to compression standards
A watermark representing a link to an original video and/or metadata, such as haptic metadata associated with the original video, is embedded in the original video in such a way that a re-recording of the original video can still preserve the watermark. The watermark can be used to link to the original video or to the metadata related thereto.
H04N 19/467 - Embedding additional information in the video signal during the compression process characterised by the embedded information being invisible, e.g. watermarking
An affective gaming system includes: an administrator unit configured to host a session of a video game; a receiving unit configured to receive biometric data associated with a first user of a plurality of users participating in the session of a video game; a first generating unit configured to generate, based on at least part of the biometric data, current emotion data associated with the first user; a second generating unit configured to generate, based at least in part on the current emotion data associated with the first user, target emotion data associated with the first user; and a modifying unit configured to modify, responsive to the difference between the target emotion data associated with the first user and the current emotion data associated with the first user, one or more aspects of the video game that are specific to the first user.
A63F 13/65 - Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor automatically by game devices or servers from real world data, e.g. measurement in live racing competition
A63F 13/79 - Game security or game management aspects involving player-related data, e.g. identities, accounts, preferences or play histories
A lens system provides a user with a high-definition image in which the generation of concentric circles is reduced. The lens system has one or more Fresnel lenses. The lens surface of each Fresnel lens has a plurality of grooves formed concentrically. Both the pitch, which is the distance between two adjacent grooves, and the depth of each of the grooves vary with the distance from an optical axis that passes through the center of the lens system.
G02B 13/18 - Optical objectives specially designed for the purposes specified below with lenses having one or more non-spherical faces, e.g. for reducing geometrical aberration
G02B 3/04 - Simple or compound lenses with non-spherical faces with continuous faces that are rotationally symmetrical but deviate from a true sphere
G02B 3/08 - Simple or compound lenses with non-spherical faces with discontinuous faces, e.g. Fresnel lens
Provided is a cradle for supporting input devices having grips and tracked parts extending from the grips, the cradle including rear support parts that are capable of supporting rear parts of the grips, front support parts that are positioned forward of the rear support parts and are capable of supporting front parts of the grips of the input devices, and side support parts that are positioned on the outside, in a left-right direction, of the rear support parts and the front support parts and are capable of supporting the tracked parts.
Systems and methods are disclosed for determining that a first end-user entity has performed a task within a computer simulation for which a non-fungible token (NFT) is to be provided, where the NFT is associated with a digital asset. Responsive to the determination, the NFT is provided to the first end-user entity so that the digital asset may be used, via the NFT, across plural different computer simulations and/or across plural different computer simulation platforms. Ownership of the NFT may also be subsequently transferred to other end-user entities for their own use across different simulations and/or platforms.
A63F 13/69 - Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor by enabling or updating specific game elements, e.g. unlocking hidden features, items, levels or versions
A63F 13/355 - Performing operations on behalf of clients with restricted processing capabilities, e.g. servers transform changing game scene into an MPEG-stream for transmitting to a mobile phone or a thin client
A63F 13/58 - Controlling game characters or game objects based on the game progress by computing conditions of game characters, e.g. stamina, strength, motivation or energy level
A63F 13/79 - Game security or game management aspects involving player-related data, e.g. identities, accounts, preferences or play histories
For stability of a bit rate for groups of pictures (GOPs), a rate buffer bit controller feedback loop and a proportional integral derivative (PID) bit controller feedback loop may be used to maintain at least one video buffer.
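The abstract above does not disclose an implementation, but the PID feedback idea can be sketched as follows; the class name, gains, and bit-budget interface are all assumptions for illustration, not taken from the publication:

```python
# Hypothetical sketch of a PID bit controller that keeps a video rate
# buffer near a target fullness by adjusting the per-frame bit budget.
# Gains and the budget interface are illustrative assumptions.

class PIDBitController:
    def __init__(self, target_fullness, kp=0.5, ki=0.1, kd=0.2):
        self.target = target_fullness   # desired buffer fullness (bits)
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, buffer_fullness, base_budget):
        """Return an adjusted bit budget for the next frame."""
        error = buffer_fullness - self.target  # positive => buffer too full
        self.integral += error
        derivative = error - self.prev_error
        self.prev_error = error
        correction = (self.kp * error +
                      self.ki * self.integral +
                      self.kd * derivative)
        # Spend fewer bits when the buffer runs too full, more when it drains.
        return max(0.0, base_budget - correction)
```

Feeding back the encoded size of each frame, the budget would oscillate around the target over a GOP rather than drifting.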
An electronic device and method for generation of reflectance maps for relightable 3D models is disclosed. The electronic device acquires multi-view image data that includes a set of images of an object and generates a 3D mesh of the object based on the multi-view image data. The electronic device obtains a set of motion-corrected images based on a minimization of a rigid motion associated with the object between images of the set of images and generates texture maps in a UV space based on the set of motion-corrected images and the 3D mesh. The electronic device obtains specular and diffuse reflectance maps based on a separation of specular and diffuse reflectance components from the texture maps, and obtains a relightable 3D model of the object based on the specular and diffuse reflectance maps.
Computer game developers can implicitly create haptic assets from audio assets. A low pass filter passes (302) only audio assets with frequencies less than a threshold to a mapping module. The audio assets are then mapped (304) to haptic assets that can be output (306) by an ERM (208/700) of a computer game controller (206). The haptic output can be in synchronization with play of the audio assets on speakers.
G05G 9/047 - Manually-actuated control mechanisms provided with one single controlling member co-operating with two or more controlled members, e.g. selectively, simultaneously the controlling member being movable in different independent ways, movement in each individual way actuating one controlled member only in which movement in two or more ways can occur simultaneously the controlling member being movable by hand about orthogonal axes, e.g. joysticks
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
G06F 3/038 - Control and interface arrangements therefor, e.g. drivers or device-embedded control circuitry
A63F 13/20 - Input arrangements for video game devices
A63F 13/285 - Generating tactile feedback signals via the game input device, e.g. force feedback
G06F 3/0481 - Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
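The pass-then-map pipeline described above (low-pass filtering the audio, then mapping it to ERM motor intensity) might be sketched as below; the single-pole filter, the cutoff, and the 0-255 intensity scale are assumptions for illustration, not taken from the publication:

```python
import math

def low_pass(samples, cutoff_hz, sample_rate_hz):
    """Single-pole IIR low-pass: attenuates content above cutoff_hz."""
    dt = 1.0 / sample_rate_hz
    rc = 1.0 / (2.0 * math.pi * cutoff_hz)
    alpha = dt / (rc + dt)
    out, y = [], 0.0
    for x in samples:
        y += alpha * (x - y)   # move a fraction alpha toward each new sample
        out.append(y)
    return out

def map_to_haptics(filtered, max_intensity=255):
    """Map filtered amplitude (in [-1, 1]) to an ERM drive level 0..max."""
    return [min(max_intensity, int(abs(x) * max_intensity)) for x in filtered]
```

Low-frequency rumble passes through nearly unattenuated while high-frequency content is suppressed, so only bass-heavy audio drives the motor.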
10.
GENERATION OF REFLECTANCE MAPS FOR RELIGHTABLE 3D MODELS
An electronic device and method for generation of reflectance maps for relightable 3D models is disclosed. The electronic device acquires multi-view image data that includes a set of images of an object and generates a 3D mesh of the object based on the multi-view image data. The electronic device obtains a set of motion-corrected images based on a minimization of a rigid motion associated with the object between images of the set of images and generates texture maps in a UV space based on the set of motion-corrected images and the 3D mesh. The electronic device obtains specular and diffuse reflectance maps based on a separation of specular and diffuse reflectance components from the texture maps, and obtains a relightable 3D model of the object based on the specular and diffuse reflectance maps.
G01B 11/245 - Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures using a plurality of fixed, simultaneously operating transducers
G01B 11/25 - Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures by projecting a pattern, e.g. moiré fringes, on the object
G06T 7/55 - Depth or shape recovery from multiple images
An affective gaming system includes: an administrator unit configured to host a session of a video game; a receiving unit configured to receive biometric data associated with two or more users participating in the session of a video game; a first generating unit configured to generate, based on at least part of the biometric data, current emotion data associated with each of the two or more users; a selecting unit configured to select a first user based on at least part of the current emotion data; a second generating unit configured to generate, based at least in part on the current emotion data that is associated with a second user, target emotion data associated with the first user; and a modifying unit configured to modify, responsive to the difference between the target emotion data and the current emotion data that is associated with the first user, one or more aspects of the video game that are specific to the first user.
A63F 13/67 - Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor adaptively or by learning from player actions, e.g. skill level adjustment or by storing successful combat sequences for re-use
A63F 13/212 - Input arrangements for video game devices characterised by their sensors, purposes or types using sensors worn by the player, e.g. for measuring heart beat or leg activity
A63F 13/213 - Input arrangements for video game devices characterised by their sensors, purposes or types comprising photodetecting means, e.g. cameras, photodiodes or infrared cells
A63F 13/215 - Input arrangements for video game devices characterised by their sensors, purposes or types comprising means for detecting acoustic signals, e.g. using a microphone
A63F 13/79 - Game security or game management aspects involving player-related data, e.g. identities, accounts, preferences or play histories
To enhance the sensory experience of voice, in some cases at a later time than when the speech was spoken so as to enable reliving emotions and experiences, vocal sounds captured by a microphone are processed by a computer game controller API. The API plays back the vocal sounds at a later time in haptic format on the controller. The vocal sounds may be computer game dialogue, party chat, or vocal sounds of the user as demanded by the computer game.
A63F 13/285 - Generating tactile feedback signals via the game input device, e.g. force feedback
A63F 13/215 - Input arrangements for video game devices characterised by their sensors, purposes or types comprising means for detecting acoustic signals, e.g. using a microphone
A63F 13/87 - Communicating with other players during game play, e.g. by e-mail or chat
H04R 3/04 - Circuits for transducers for correcting frequency response
13.
SYSTEM, INFORMATION PROCESSING METHOD, AND INFORMATION PROCESSING PROGRAM
A system includes a first image sensor that generates a first image signal by synchronously scanning all pixels at a prescribed timing, a second image sensor including an event-driven type vision sensor that, upon detecting a change in an intensity of incident light on each of the pixels, generates a second image signal asynchronously, an inertial sensor that acquires attitude information on the first image sensor and the second image sensor, a first computation processing device that recognizes a user on the basis of at least the second image signal and calculates coordinate information regarding the user on the basis of at least the second image signal, a second computation processing device that performs coordinate conversion on the coordinate information on the basis of the attitude information, and an image generation device that generates a display image which indicates a condition of the user, on the basis of the converted coordinate information.
G06V 40/20 - Movements or behaviour, e.g. gesture recognition
A63F 13/211 - Input arrangements for video game devices characterised by their sensors, purposes or types using inertial sensors, e.g. accelerometers or gyroscopes
A63F 13/213 - Input arrangements for video game devices characterised by their sensors, purposes or types comprising photodetecting means, e.g. cameras, photodiodes or infrared cells
Methods and systems for reconstructing a game world of a video game include tracking the status of game objects in the game world to detect wear on one or more game objects exceeding a predefined threshold. An option to rebuild the one or more game objects is provided to a user, and tools to rebuild the one or more game objects are provided in response to the user selecting the option. The rebuilt game objects are used during subsequent gameplay of the video game.
A63F 13/69 - Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor by enabling or updating specific game elements, e.g. unlocking hidden features, items, levels or versions
A63F 13/577 - Simulating properties, behaviour or motion of objects in the game world, e.g. computing tyre load in a car race game using determination of contact between game characters or objects, e.g. to avoid collision between virtual racing cars
A63F 13/847 - Cooperative playing, e.g. requiring coordinated actions from several players to achieve a common goal
H04L 9/00 - Arrangements for secret or secure communications; Network security protocols
15.
EFFICIENT MAPPING COORDINATE CREATION AND TRANSMISSION
A method is disclosed to generate (u,v) coordinates at the decoder side by using parameters of orthographic projection functions, transmitted via an atlas bitstream. With the parameters for orthographic projection, the decoder is able to generate the (u,v) coordinates efficiently and avoid the expense of encoding them explicitly.
The generation of a texture map using orthographic projections is performed in a fast and efficient manner. A method to generate texture maps that takes significantly less time, and that allows the maps to exploit the correlation between the content of different frames in time, is described herein. The texture mapping can be used for automatic generation of volumetric content or for more efficient compression of dynamic meshes. The texture map generation described herein includes ways to generate a texture atlas using orthographic projections. A novel stretch metric for orthographic projections is described, and a merging algorithm is devised to optimally cluster triangles into a single patch. Additionally, packing techniques can be applied to mesh patches to optimize size and temporal stability.
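As a rough illustration of how orthographic projection parameters could let a decoder regenerate (u,v) coordinates instead of decoding them, here is a minimal sketch; the axis-dropping projection and bounding-box normalization are assumptions, not the disclosed method:

```python
def orthographic_uv(vertices, axis, bbox_min, bbox_max):
    """Project 3D vertices orthographically by dropping `axis`
    (0=x, 1=y, 2=z) and normalizing the two remaining coordinates
    to [0, 1] with the patch bounding box. These few parameters
    (axis, bbox) are all a decoder would need to rebuild the (u,v)s."""
    keep = [i for i in (0, 1, 2) if i != axis]
    uvs = []
    for v in vertices:
        uv = []
        for i in keep:
            span = bbox_max[i] - bbox_min[i]
            uv.append((v[i] - bbox_min[i]) / span if span else 0.0)
        uvs.append(tuple(uv))
    return uvs
```

Transmitting the projection axis and bounding box per patch is far cheaper than coding one (u,v) pair per vertex.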
To enhance the sensory experience of voice, in some cases at a later time than when the speech was spoken (300) so as to enable reliving emotions and experiences, vocal sounds captured by a microphone are processed (304) by a computer game controller API. The API plays back (306) the vocal sounds at a later time in haptic format on the controller. The vocal sounds may be computer game dialogue, party chat, or vocal sounds of the user as demanded by the computer game.
G10L 25/48 - Speech or voice analysis techniques not restricted to a single one of groups specially adapted for particular use
G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
G10L 25/00 - Speech or voice analysis techniques not restricted to a single one of groups
Methods and systems for reconstructing a game world of a video game include tracking the status of game objects in the game world to detect wear on one or more game objects exceeding a predefined threshold. An option to rebuild the one or more game objects is provided to a user, and tools to rebuild the one or more game objects are provided in response to the user selecting the option. The rebuilt game objects are used during subsequent gameplay of the video game.
A63F 13/63 - Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor by the player, e.g. authoring using a level editor
19.
ALTERING AUDIO AND/OR PROVIDING NON-AUDIO CUES ACCORDING TO LISTENER'S AUDIO DEPTH PERCEPTION
The 3D audio perception of a listener such as a computer gamer is tested "stereoscopically" and the results input to a source of audio such as a computer game. Audio (802) from the source of audio (such as a head-mounted display of a computer game system or speaker outputting audio from a game console) may be altered (810) to account for the listener's measured 3D audio acuity. In addition, or alternatively, visual or haptic cues may be provided (814) to alert the listener of 3D audio events.
Groups of people control a computer game using teamwork. This can be done by eye tracking (400) of each person to detect where each person is looking on screen at objects such as game control objects. The control action of the object looked at by the most people (404) in a "heat map" style of data collection is implemented (408) by the game.
A63F 13/213 - Input arrangements for video game devices characterised by their sensors, purposes or types comprising photodetecting means, e.g. cameras, photodiodes or infrared cells
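The "heat map" selection of the object looked at by the most people might be sketched as follows; the rectangular object regions and the voting helper below are hypothetical, not from the publication:

```python
from collections import Counter

def majority_gaze_object(gaze_points, object_regions):
    """Given each player's on-screen gaze point (x, y) and labeled
    rectangular object regions {name: (x0, y0, x1, y1)}, return the
    object looked at by the most players, or None if no gaze point
    lands on any object."""
    votes = Counter()
    for gx, gy in gaze_points:
        for name, (x0, y0, x1, y1) in object_regions.items():
            if x0 <= gx <= x1 and y0 <= gy <= y1:
                votes[name] += 1   # one vote per player per frame
                break
    return votes.most_common(1)[0][0] if votes else None
```

The winning object's control action would then be the one the game implements for that frame of group input.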
Eye tracking (1100) of the wearer of a virtual reality headset is used to customize/personalize (1102) VR video. Based on eye tracking, the VR scene may present different types of trees (302, 304, 306) for different types of gaze directions. As another example, based on gaze direction, a VR scene can be augmented with additional objects (502) based on gaze direction at a particular related object. A friend's gaze-dependent personalization may be imported (1104) into the wearer's system to increase companionship and user engagement. Customized options can be recorded and sold to other players.
A63F 13/525 - Changing parameters of virtual cameras
A63F 13/67 - Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor adaptively or by learning from player actions, e.g. skill level adjustment or by storing successful combat sequences for re-use
41 - Education, entertainment, sporting and cultural services
Goods & Services
ENTERTAINMENT SERVICES IN THE NATURE OF AN ON-GOING REALITY TELEVISION SERIES; ENTERTAINMENT SERVICES IN THE NATURE OF AN ON-GOING REALITY TELEVISION SERIES PROVIDED THROUGH CABLE, SATELLITE, AND INTERNET TRANSMISSION; PROVIDING NON-DOWNLOADABLE TELEVISION PROGRAMS VIA VIDEO-ON-DEMAND SERVICE; ENTERTAINMENT SERVICES, NAMELY, DISTRIBUTION OF ONGOING REALITY TELEVISION SERIES; PROVIDING A WEBSITE FEATURING ENTERTAINMENT INFORMATION; PROVIDING INFORMATION ABOUT A TELEVISION SERIES VIA A WEBSITE
23.
INFORMATION PROCESSING DEVICE, CONTROL METHOD OF INFORMATION PROCESSING DEVICE, AND PROGRAM
An information processing device obtains information regarding the position of each fingertip of a user in a real space, and determines contact between a virtual object set within a virtual space and a finger of the user. The information processing device sets the virtual object in a partly deformed state such that the part of the virtual object corresponding to the position of any finger determined to be in contact with the object is located farther on the far side from the user than that finger, and displays the virtual object having the set shape as an image in the virtual space on a display device.
A63F 13/213 - Input arrangements for video game devices characterised by their sensors, purposes or types comprising photodetecting means, e.g. cameras, photodiodes or infrared cells
A63F 13/25 - Output arrangements for video game devices
A63F 13/52 - Controlling the output signals based on the game progress involving aspects of the displayed game scene
A63F 13/533 - Controlling the output signals based on the game progress involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD] or displaying a laser sight in a shooting game for prompting the player, e.g. by displaying a game menu
A63F 13/577 - Simulating properties, behaviour or motion of objects in the game world, e.g. computing tyre load in a car race game using determination of contact between game characters or objects, e.g. to avoid collision between virtual racing cars
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
G06T 19/00 - Manipulating 3D models or images for computer graphics
G09G 5/36 - Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators characterised by the display of individual graphic patterns using a bit-mapped memory
A processing device includes: a detection unit configured to detect input information representative of a sequence of input signals for a video game that are input by a user using one or more input controls; an identification unit configured to identify one or more input signal variations in dependence upon one or more differences between the detected input information and predetermined input information representative of one or more predetermined sequences of input signals for the video game; a generation unit configured to generate assistance information in dependence upon the one or more identified input signal variations; and a provision unit configured to provide the generated assistance information to the user.
An image generation apparatus increases an adjustment amount of a luminance distribution to a target value B at a time t0 at which an amount of light entering the eyes of a user changes to such a degree that the change has an influence on an action of photoreceptor cells, to thereby cause a head-mounted display to display an image 310b having a luminance increased from that of an original image 310a. The image generation apparatus gradually decreases the adjustment amount of the luminance distribution during a restoration period Δt in such a manner that an image 310c having the original luminance distribution is displayed at a later time t1.
G06V 10/60 - Extraction of image or video features relating to illumination properties, e.g. using a reflectance or lighting model
G06V 40/18 - Eye characteristics, e.g. of the iris
H04N 9/69 - Circuits for processing colour signals for controlling the amplitude of colour signals, e.g. automatic chroma control circuits for modifying the colour signals by gamma correction
26.
CUSTOMIZABLE VIRTUAL REALITY SCENES USING EYE TRACKING
Eye tracking of the wearer of a virtual reality headset is used to customize/personalize VR video. Based on eye tracking, the VR scene may present different types of trees for different types of gaze directions. As another example, based on gaze direction, a VR scene can be augmented with additional objects based on gaze direction at a particular related object. A friend's gaze-dependent personalization may be imported into the wearer's system to increase companionship and user engagement. Customized options can be recorded and sold to other players.
A63F 13/52 - Controlling the output signals based on the game progress involving aspects of the displayed game scene
A63F 13/212 - Input arrangements for video game devices characterised by their sensors, purposes or types using sensors worn by the player, e.g. for measuring heart beat or leg activity
A63F 13/533 - Controlling the output signals based on the game progress involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD] or displaying a laser sight in a shooting game for prompting the player, e.g. by displaying a game menu
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
To avoid startling a computer game player immersed in virtual reality, for example, active noise cancelation is gradually introduced. As an alternative, ambient noise is gradually increased to conceal loud external sounds. The noise cancelation or ambient noise generation is established according to sound exceeding a background threshold as detected by a microphone. The noise cancelation or ambient noise generation can be established according to images of a noisy object as imaged by a camera.
G10K 11/178 - Methods or devices for protecting against, or for damping, noise or other acoustic waves in general using interference effects; Masking sound by electro-acoustically regenerating the original acoustic waves in anti-phase
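The gradual introduction of noise cancelation described above could be modeled as a per-frame gain ramp; the ramp length and the linear schedule below are assumptions for illustration only:

```python
def update_anc_gain(gain, noise_db, threshold_db, ramp_frames=120):
    """Move the active-noise-cancelation gain one frame toward 1.0 when
    microphone noise exceeds the background threshold, and toward 0.0
    otherwise. Spreading the change over `ramp_frames` frames avoids
    startling the player with an abrupt acoustic shift."""
    step = 1.0 / ramp_frames
    target = 1.0 if noise_db > threshold_db else 0.0
    if gain < target:
        return min(target, gain + step)
    return max(target, gain - step)
```

Calling this once per audio frame with the current microphone level yields a smooth fade-in and fade-out of the cancelation signal.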
28.
ALTERING AUDIO AND/OR PROVIDING NON-AUDIO CUES ACCORDING TO LISTENER'S AUDIO DEPTH PERCEPTION
The 3D audio perception of a listener such as a computer gamer is tested “stereoscopically” and the results input to a source of audio such as a computer game. Audio from the source of audio (such as a head-mounted display of a computer game system or speaker outputting audio from a game console) may be altered to account for the listener's measured 3D audio acuity. In addition, or alternatively, visual or haptic cues may be provided to alert the listener of 3D audio events.
H04S 7/00 - Indicating arrangements; Control arrangements, e.g. balance control
A63F 13/54 - Controlling the output signals based on the game progress involving acoustic signals, e.g. for simulating revolutions per minute [RPM] dependent engine sounds in a driving game or reverberation against a virtual wall
29.
METHOD AND SYSTEM FOR AUTO-PLAYING PORTIONS OF A VIDEO GAME
A method for providing an auto-play mode option to a user during gameplay of a video game includes accessing, by a server, a user play model, which incorporates extracted features related to gameplay by the user and classification of the extracted features. The accessing of the model is triggered at a current time during gameplay. The method also includes identifying, by the server, predicted interactive activity that is predicted to occur ahead of the current time of gameplay. The method further includes identifying, by the server, at least part of the predicted interactive activity to be anticipated grinding content (AGC). The method also includes providing a notification, by the server, to a display screen of a user device, where the notification identifies the AGC in upcoming gameplay and provides the user with an option to use the auto-play mode during gameplay of the AGC.
A63F 13/56 - Computing the motion of game characters with respect to other game characters, game objects or elements of the game scene, e.g. for simulating the behaviour of a group of virtual soldiers or for path finding
A63F 13/533 - Controlling the output signals based on the game progress involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD] or displaying a laser sight in a shooting game for prompting the player, e.g. by displaying a game menu
30.
SYSTEMS AND METHODS FOR APPLYING A MODIFICATION MICROSERVICE TO A GAME INSTANCE
A method for implementing a modification microservice with a game cloud system is described. The method includes executing a game instance of a game. The game instance is executed using a plurality of microservices assembled for the game instance. The method further includes accessing a modification microservice engineered to be executed with the game instance. The modification microservice adds a compute capability to the game instance. The modification microservice is executed outside of a server system in which the plurality of microservices is assembled for the game instance. Also, the modification microservice is accessed by one or more application programming interface (API) calls that obtain results data from said execution of the modification microservice. The one or more API calls are managed via a modification interface that manages the access to the modification microservice and use of the results data by the game instance.
A63F 13/77 - Game security or game management aspects involving data related to game devices or game servers, e.g. configuration data, software version or amount of memory
A63F 13/355 - Performing operations on behalf of clients with restricted processing capabilities, e.g. servers transform changing game scene into an MPEG-stream for transmitting to a mobile phone or a thin client
A63F 13/73 - Authorising game programs or game devices, e.g. checking authenticity
31.
SIGNAL PROCESSING CIRCUIT, SIGNAL PROCESSING METHOD, AND PROGRAM
Provided is a signal processing circuit for processing event signals generated by an event-based vision sensor (EVS), the signal processing circuit comprising a memory for storing program code and a processor for executing operations according to the program code, wherein the operations include: detecting at least one line segment or curve formed by a set of in-block positions of event signals generated in blocks into which an EVS detection area is divided; and correcting at least one of a first line segment or a first curve detected in a first block, or a second line segment or a second curve detected in a second block adjacent to the first block, so that a first end point of the first line segment or the first curve overlaps a second end point of the second line segment or the second curve.
A method for providing an auto-play mode option to a user during gameplay of a video game includes accessing, by a server, a user play model, which incorporates extracted features related to gameplay by the user and classification of the extracted features. The accessing of the model is triggered at a current time during gameplay. The method also includes identifying, by the server, predicted interactive activity that is predicted to occur ahead of the current time of gameplay. The method further includes identifying, by the server, at least part of the predicted interactive activity to be anticipated grinding content (AGC). The method also includes providing a notification, by the server, to a display screen of a user device, where the notification identifies the AGC in upcoming gameplay and provides the user with an option to use the auto-play mode during gameplay of the AGC.
A63F 13/5375 - Controlling the output signals based on the game progress involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD] or displaying a laser sight in a shooting game using indicators, e.g. showing the condition of a game character on screen for graphically or textually suggesting an action, e.g. by displaying an arrow indicating a turn in a driving game
A63F 13/67 - Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor adaptively or by learning from player actions, e.g. skill level adjustment or by storing successful combat sequences for re-use
A63F 13/79 - Game security or game management aspects involving player-related data, e.g. identities, accounts, preferences or play histories
A63F 13/35 - Interconnection arrangements between game servers and game devices; Interconnection arrangements between game devices; Interconnection arrangements between game servers - Details of game servers
A63F 13/497 - Partially or entirely replaying previous game actions
33.
ORTHOATLAS: TEXTURE MAP GENERATION FOR DYNAMIC MESHES USING ORTHOGRAPHIC PROJECTIONS
The generation of a texture map using orthographic projections is performed in a fast and efficient manner. A method to generate texture maps taking significantly less time and also allowing maps to exploit the correlation between content of different frames in time is described herein. The texture mapping is able to be used for automatic generation of volumetric content or for more efficient compression of dynamic meshes. The texture map generation described herein includes ways to generate a texture atlas using orthographic projections. A novel stretch metric for orthographic projections is described, and a merging algorithm is devised to optimally cluster triangles into a single patch. Additionally, packing techniques are able to be used for mesh patches that try to optimize size and temporal stability.
H04N 19/597 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding specially adapted for multi-view video sequence encoding
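The dominant-axis orthographic projection underlying texture-atlas patch generation can be illustrated with a short sketch. This is an illustrative assumption about one common way such projections are formed (drop the coordinate axis most aligned with the face normal), not the patented algorithm; the function name is hypothetical.

```python
import numpy as np

def orthographic_uv(vertices, normal):
    """Project 3D vertices to (u, v) by discarding the axis most aligned
    with the face normal (a dominant-axis orthographic projection)."""
    drop = int(np.argmax(np.abs(normal)))      # axis to discard
    keep = [a for a in range(3) if a != drop]  # the two remaining axes
    return vertices[:, keep]

# A triangle facing mostly along +z is flattened onto the x/y plane.
tri = np.array([[0.0, 0.0, 1.0],
                [1.0, 0.0, 1.1],
                [0.0, 1.0, 0.9]])
uv = orthographic_uv(tri, normal=np.array([0.05, 0.05, 0.99]))
```

Because the projection is a pure axis selection, the mapping introduces no distortion in the two kept axes; stretch arises only from the discarded depth component, which is what a stretch metric for orthographic projections would measure.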
34.
TEXT MESSAGE OR APP FALLBACK DURING NETWORK FAILURE IN A VIDEO GAME
A method for managing gameplay of a video game is provided, including: executing a session of a video game by a cloud gaming resource; streaming video generated by the session over a network to a client device associated with a player of the video game, to enable gameplay of the session by the player; detecting a loss of network connectivity between the client device and the session; and, responsive to detecting the loss of network connectivity, initiating transmission of updates regarding the session, via an alternative communication channel, to a secondary device associated with the player.
A63F 13/335 - Interconnection arrangements between game servers and game devices; Interconnection arrangements between game devices; Interconnection arrangements between game servers using wide area network [WAN] connections using Internet
A63F 13/355 - Performing operations on behalf of clients with restricted processing capabilities, e.g. servers transform changing game scene into an MPEG-stream for transmitting to a mobile phone or a thin client
A condition information acquisition section of an image generation device acquires a condition of communication and condition information of a head-mounted display. An image generation section generates a display image including distorted images for a left eye and a right eye. A reduction processing section converts the display image to data in which different regions have different reduction ratios in accordance with the condition of communication, etc., and transmits the data through an output section. An image size restoration section of the head-mounted display restores the transmitted data to the display image in an original size to cause the display image to be displayed by a display section.
A method of improving accessibility for the user operation of a first application on a computer includes the steps of taking one or more measurements of a current user's interaction with an application on the computer, comparing the one or more measurements with expectations derived from measurements from a first corpus of users, characterising one or more needs of the current user based upon the comparison, and modifying at least a first property of the first application responsive to the characterised need or needs.
A63F 13/67 - Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor adaptively or by learning from player actions, e.g. skill level adjustment or by storing successful combat sequences for re-use
A63F 13/44 - Processing input control signals of video game devices, e.g. signals generated by the player or derived from the environment involving timing of operations, e.g. performing an action within a time slot
A63F 13/79 - Game security or game management aspects involving player-related data, e.g. identities, accounts, preferences or play histories
37.
GROUP CONTROL OF COMPUTER GAME USING AGGREGATED AREA OF GAZE
Groups of people control a computer game using teamwork. This can be done by eye tracking of each person to detect where each person is looking on screen at objects such as game control objects. The control action of the object looked at by the most people in a “heat map” style of data collection is implemented by the game.
A63F 13/213 - Input arrangements for video game devices characterised by their sensors, purposes or types comprising photodetecting means, e.g. cameras, photodiodes or infrared cells
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
38.
HAPTIC ASSET GENERATION FOR ECCENTRIC ROTATING MASS (ERM) FROM LOW FREQUENCY AUDIO CONTENT
Computer game developers can implicitly create haptic assets from audio assets. A low pass filter passes only audio assets with frequencies less than a threshold to a mapping module. The audio assets are then mapped to haptic assets that can be output by an ERM of a computer game controller. The haptic output can be in synchronization with play of the audio assets on speakers.
A63F 13/285 - Generating tactile feedback signals via the game input device, e.g. force feedback
A63F 13/24 - Constructional details thereof, e.g. game controllers with detachable joystick handles
A63F 13/424 - Processing input control signals of video game devices, e.g. signals generated by the player or derived from the environment by mapping the input signals into game commands, e.g. mapping the displacement of a stylus on a touch screen to the steering angle of a virtual vehicle involving acoustic input signals, e.g. by using the results of pitch or rhythm extraction or voice recognition
A63F 13/533 - Controlling the output signals based on the game progress involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD] or displaying a laser sight in a shooting game for prompting the player, e.g. by displaying a game menu
39.
SYSTEMS AND METHODS FOR INTEGRATING REAL-WORLD CONTENT IN A GAME
A method for integration of real-world content into a game is described. The method includes receiving a request to play the game and accessing overlay multimodal data generated from a portion of real-world multimodal data received as user generated content (RGC). The overlay multimodal data relates to authored multimodal data generated for the game. The method includes replacing the authored multimodal data in one or more scenes of the game with the overlay multimodal data.
A63F 13/537 - Controlling the output signals based on the game progress involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD] or displaying a laser sight in a shooting game using indicators, e.g. showing the condition of a game character on screen
A63F 13/65 - Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor automatically by game devices or servers from real world data, e.g. measurement in live racing competition
40.
TEXT MESSAGE OR APP FALLBACK DURING NETWORK FAILURE IN A VIDEO GAME
A method for managing gameplay of a video game is provided, including: executing a session of a video game by a cloud gaming resource; streaming video generated by the session over a network to a client device associated with a player of the video game, to enable gameplay of the session by the player; detecting a loss of network connectivity between the client device and the session; and, responsive to detecting the loss of network connectivity, initiating transmission of updates regarding the session, via an alternative communication channel, to a secondary device associated with the player.
A63F 13/358 - Adapting the game course according to the network or server load, e.g. for reducing latency due to different connection speeds between clients
A63F 13/67 - Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor adaptively or by learning from player actions, e.g. skill level adjustment or by storing successful combat sequences for re-use
41.
SYSTEMS AND METHODS FOR APPLYING A MODIFICATION MICROSERVICE TO A GAME INSTANCE
A method for implementing a modification microservice with a game cloud system is described. The method includes executing a game instance of a game. The game instance is executed using a plurality of microservices assembled for the game instance. The method further includes accessing a modification microservice engineered to be executed with the game instance. The modification microservice adds a compute capability to the game instance. The modification microservice is executed outside of a server system in which the plurality of microservices is assembled for the game instance. Also, the modification microservice is accessed by one or more application programming interface (API) calls that obtain results data from said execution of the modification microservice. The one or more API calls are managed via a modification interface that manages the access to the modification microservice and use of the results data by the game instance.
A63F 13/352 - Interconnection arrangements between game servers and game devices; Interconnection arrangements between game devices; Interconnection arrangements between game servers - Details of game servers involving special game server arrangements, e.g. regional servers connected to a national server or a plurality of servers managing partitions of the game world
A63F 13/335 - Interconnection arrangements between game servers and game devices; Interconnection arrangements between game devices; Interconnection arrangements between game servers using wide area network [WAN] connections using Internet
A63F 13/355 - Performing operations on behalf of clients with restricted processing capabilities, e.g. servers transform changing game scene into an MPEG-stream for transmitting to a mobile phone or a thin client
A63F 13/67 - Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor adaptively or by learning from player actions, e.g. skill level adjustment or by storing successful combat sequences for re-use
A63F 13/73 - Authorising game programs or game devices, e.g. checking authenticity
42.
SYSTEMS AND METHODS FOR INTEGRATING REAL-WORLD CONTENT IN A GAME
A method for integration of real-world content into a game is described. The method includes receiving a request to play the game and accessing overlay multimodal data generated from a portion of real-world multimodal data received as user generated content (RGC). The overlay multimodal data relates to authored multimodal data generated for the game. The method includes replacing the authored multimodal data in one or more scenes of the game with the overlay multimodal data.
A63F 13/213 - Input arrangements for video game devices characterised by their sensors, purposes or types comprising photodetecting means, e.g. cameras, photodiodes or infrared cells
A63F 13/63 - Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor by the player, e.g. authoring using a level editor
A63F 13/65 - Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor automatically by game devices or servers from real world data, e.g. measurement in live racing competition
43.
EFFICIENT MAPPING COORDINATE CREATION AND TRANSMISSION
A method is disclosed for generating (u,v) coordinates at the decoder side using parameters of orthographic projection functions transmitted via an atlas bitstream. With the parameters for orthographic projection, the decoder is able to efficiently generate (u,v) coordinates and avoid their expensive explicit coding.
H04N 19/597 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding specially adapted for multi-view video sequence encoding
H04N 19/70 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by syntax aspects related to video coding, e.g. related to compression standards
H04N 19/46 - Embedding additional information in the video signal during the compression process
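The decoder-side coordinate generation described in this abstract can be sketched as follows. The parameter set shown (dropped projection axis, patch offset, patch scale) is a hypothetical minimal example of what an atlas bitstream might carry, not the disclosed syntax.

```python
import numpy as np

def generate_uv(positions, proj_axis, offset, scale):
    """Regenerate per-vertex (u, v) at the decoder from orthographic
    projection parameters (hypothetical set: dropped axis, patch
    offset, patch scale) instead of decoding explicit coordinates."""
    keep = [a for a in range(3) if a != proj_axis]
    return (positions[:, keep] - offset) * scale

# Decoded 3D positions; (u, v) is recomputed rather than transmitted.
pos = np.array([[2.0, 4.0, 7.0],
                [3.0, 5.0, 7.5]])
uv = generate_uv(pos, proj_axis=2, offset=np.array([2.0, 4.0]), scale=0.5)
```

The bit savings come from the fact that a handful of per-patch parameters replaces a per-vertex (u,v) pair in the bitstream.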
A harassment detection apparatus includes: an executing unit configured to execute a session of a shared environment; an input unit configured to receive biometric data, the biometric data being associated with a plurality of users participating in the executed session of the shared environment; a generating unit configured to generate, based on at least a part of the biometric data, emotion data associated with the plurality of users, the emotion data comprising a valence value and/or an arousal value associated with each of the plurality of users; a detection unit configured to detect, responsive to at least a first part of the emotion data satisfying one or more of a first set of criteria, one or more first users associated with the at least first part of the emotion data; and a modifying unit configured to modify, responsive to the detection of the one or more first users, one or more aspects of the shared environment.
A cloud-based gaming system generates first and second instances of a virtual world of an online game for first and second players, respectively. First and second video streams of the first and second instances of the virtual world, respectively, are transmitted to the first and second players, respectively. The second video stream includes a ghosted version of a feature within the first instance of the virtual world. A request is received from the second player to merge the first and second instances of the virtual world. With the first player's approval, a merged instance of the virtual world is automatically generated by the cloud-gaming system as a combination of the first and second instances of the virtual world. Third and fourth video streams of the merged instance of the virtual world are transmitted to the first and second players, respectively, in lieu of the first and second video streams, respectively.
A63F 13/355 - Performing operations on behalf of clients with restricted processing capabilities, e.g. servers transform changing game scene into an MPEG-stream for transmitting to a mobile phone or a thin client
A63F 13/57 - Simulating properties, behaviour or motion of objects in the game world, e.g. computing tyre load in a car race game
An Image Activated Cell Sorting (IACS) classification workflow includes: employing a neural network-based feature encoder (or extractor) to extract features of cell images; automatically clustering cells based on the extracted cell features; identifying which cluster(s) to sort based on the cell images; fine-tuning a classification network based on the selected cluster(s); and, once refined, using the classification network to sort cells for real-time live sorting.
G06F 18/2415 - Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
An information processing device includes a control unit that performs control to output information regarding an image of a user viewpoint in a virtual space, in which the control unit performs control to switch a display corresponding to a performer included in the image between a live-action image generated based on a captured image of the performer and a character image corresponding to the performer, according to a result of detecting a state of at least one user in the virtual space or a state of the virtual space.
A cloud-based gaming system generates first and second instances of a virtual world of an online game for first and second players, respectively. First and second video streams of the first and second instances of the virtual world, respectively, are transmitted to the first and second players, respectively. The second video stream includes a ghosted version of a feature within the first instance of the virtual world. A request is received from the second player to merge the first and second instances of the virtual world. With the first player's approval, a merged instance of the virtual world is automatically generated by the cloud-gaming system as a combination of the first and second instances of the virtual world. Third and fourth video streams of the merged instance of the virtual world are transmitted to the first and second players, respectively, in lieu of the first and second video streams, respectively.
A63F 13/355 - Performing operations on behalf of clients with restricted processing capabilities, e.g. servers transform changing game scene into an MPEG-stream for transmitting to a mobile phone or a thin client
A63F 13/47 - Controlling the progress of the video game involving branching, e.g. choosing one of several possible scenarios at a given point in time
A63F 13/493 - Resuming a game, e.g. after pausing, malfunction or power failure
A63F 13/69 - Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor by enabling or updating specific game elements, e.g. unlocking hidden features, items, levels or versions
49.
RAPID GENERATION OF 3D HEADS WITH NATURAL LANGUAGE
Two-dimensional images are converted (302) to a 3D neural radiance field (NeRF), which is modified (402) based on text input to resemble the type of character demanded by the text. An open-source "CLIP" model scores (404) how well an image matches a line of text to produce a final 3D NeRF, which may be converted (408) to a polygonal mesh and imported into a computer simulation such as a computer game.
G06N 3/006 - Artificial life, i.e. computing arrangements simulating life based on simulated virtual individual or collective life forms, e.g. social simulations or particle swarm optimisation [PSO]
A63F 13/67 - Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor adaptively or by learning from player actions, e.g. skill level adjustment or by storing successful combat sequences for re-use
A data processing apparatus includes circuitry configured to: receive a first signal indicative of one or more words communicated from a first user of online content to one or more second users of online content; classify the one or more words using the first signal; receive one or more second signals indicative of one or more physiological characteristics of the first user within a time period of a start of the communication of the one or more words; classify the one or more physiological characteristics using the one or more second signals; based on a classification of the one or more words and a classification of the one or more physiological characteristics, generate an action signal indicating that an action associated with the first user of the online content is to be taken, the action signal indicating a characteristic of the action determined based on a combination of the classification of the one or more words and the classification of the one or more physiological characteristics; and output the action signal.
A data processing apparatus includes circuitry configured to: receive a first signal indicative of one or more words communicated from a first user of online content to one or more second users of online content; classify the one or more words using the first signal; receive one or more second signals indicative of one or more physiological characteristics of the one or more second users in response to the communicated one or more words; classify the one or more physiological characteristics of the one or more second users using the one or more second signals; determine, based on a classification of the one or more words and a classification of the one or more physiological characteristics of the one or more second users, whether to generate an action signal, the action signal indicating that an action associated with the first user of the online content is to be taken; and when it is determined an action signal is to be generated, generate and output the action signal.
A method includes receiving, at an optimizer server from a device over a network, a plurality of game assets of a video game. The method includes generating at least one combined game asset to represent the plurality of game assets. The method includes sending the at least one combined game asset to the device for use in the video game.
A63F 13/57 - Simulating properties, behaviour or motion of objects in the game world, e.g. computing tyre load in a car race game
A63F 13/355 - Performing operations on behalf of clients with restricted processing capabilities, e.g. servers transform changing game scene into an MPEG-stream for transmitting to a mobile phone or a thin client
Deep learning techniques such as vector graphics (300) are used to create 3D content and assets for metaverse applications. Vector graphics is a scalable format that provides rich 3D content. A vector graphics encoder (302), such as a deep neural network, e.g. a recurrent neural network (RNN) or transformer, receives (400) vector graphics and generates (402) an encoded output. The encoded output is decoded (404) by a 3D decoder, such as another deep neural network, that outputs 2D graphics for comparison with the original image. Loss is computed (408) between the original and the output of the 3D decoder. The loss is back-propagated (410) to train the vector graphics encoder to generate 3D content.
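The encode-decode-loss-backpropagate loop this abstract describes can be sketched with a toy linear encoder and decoder trained by reconstruction loss. This is a minimal stand-in, assuming simple matrix layers in place of the RNN/transformer encoder and 3D decoder named in the abstract; the variable names and dimensions are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(8, 6))          # stand-in for rasterized vector graphics
W_enc = rng.normal(size=(6, 3)) * 0.1  # toy "vector graphics encoder"
W_dec = rng.normal(size=(3, 6)) * 0.1  # toy "3D decoder"

lr = 0.1
losses = []
for _ in range(200):
    z = x @ W_enc                    # encode the input
    x_hat = z @ W_dec                # decode back to 2D graphics
    err = x_hat - x                  # compare with the original image
    losses.append(float((err ** 2).mean()))
    # back-propagate the reconstruction loss to both networks
    g_dec = z.T @ err / x.shape[0]
    g_enc = x.T @ (err @ W_dec.T) / x.shape[0]
    W_dec -= lr * g_dec
    W_enc -= lr * g_enc
```

The falling reconstruction loss is what drives the encoder toward a latent code from which 3D content can be generated.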
Deep learning is used to dynamically adapt virtual humans (300) in metaverse applications. The adaptation can be according to user preferences (400). In addition or alternatively, virtual humans and pets (302) can be adapted for metaverse applications based on demographics (408) of the user. The user's personal demographics may be used to establish (410) the costume, skin color, emotion, voice, and behavior of the virtual humans. Similar considerations may be used to adapt virtual pets to the user's experience of the metaverse.
A63F 13/655 - Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor automatically by game devices or servers from real world data, e.g. measurement in live racing competition by importing photos, e.g. of the player
A63F 13/213 - Input arrangements for video game devices characterised by their sensors, purposes or types comprising photodetecting means, e.g. cameras, photodiodes or infrared cells
To help a computer game player in understanding a computer game, upon pausing (300, 500) the game, visual subtitles may be presented (304). In addition, or alternatively, Braille representing subtitles may be output (506) as a series of vibrations on a touch pad of the controller. When the person's finger reaches the edge of the touch pad, a new series of Braille subtitles may be presented (510). Depending on where the player is in reading the subtitles and how fast the player reads them, the game video may be slowed down (310) from normal speed.
A method includes receiving, at an optimizer server from a device over a network, a plurality of game assets of a video game. The method includes generating at least one combined game asset to represent the plurality of game assets. The method includes sending the at least one combined game asset to the device for use in the video game.
A63F 13/355 - Performing operations on behalf of clients with restricted processing capabilities, e.g. servers transform changing game scene into an MPEG-stream for transmitting to a mobile phone or a thin client
A63F 13/60 - Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor
57.
SIGNAL PROCESSING CIRCUIT, SIGNAL PROCESSING METHOD, AND PROGRAM
Provided is a signal processing circuit which processes an event signal generated by an event-based vision sensor (EVS) and which comprises a memory for storing a program code and a processor for executing an operation according to the program code. The operation includes: using a first method to detect a relationship between positions within a block, from event signals generated in blocks obtained by dividing an EVS detection region, if the ratio of the eigenvalues of the variance-covariance matrix of the positions exceeds a threshold value; and using a second method, different from the first method, to detect the relationship between the positions if the ratio of the eigenvalues does not exceed the threshold value.
An Image Activated Cell Sorting (IACS) classification workflow includes: employing a neural network-based feature encoder (or extractor) to extract features of cell images; automatically clustering cells based on the extracted cell features; identifying which cluster(s) to sort based on the cell images; fine-tuning a classification network based on the selected cluster(s); and, once refined, using the classification network to sort cells for real-time live sorting.
G06V 20/69 - Microscopic objects, e.g. biological cells or cellular parts
G06V 10/774 - Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
G06V 10/44 - Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
G06V 10/82 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
Provided is a game presenting system having a plurality of game systems, each executing a process of a game in which a plurality of users participate, and a game presenting machine for presenting a situation of the game executed by each game system. The game presenting machine obtains a motion image related to the game executed by each of the plurality of game systems and produces a game presenting screen image showing, as a list, at least some of the plurality of motion images obtained.
A63F 13/5252 - Changing parameters of virtual cameras using two or more virtual cameras concurrently or sequentially, e.g. automatically switching between fixed virtual cameras when a character changes room or displaying a rear-mirror view in a car-driving game
G07F 17/32 - Coin-freed apparatus for hiring articles; Coin-freed facilities or services for games, toys, sports, or amusements
68.
AI PLAYER MODEL GAMEPLAY TRAINING AND HIGHLIGHT REVIEW
Methods and systems for engaging an AI player of a user to play a video game on behalf of the user include creating the AI player for the user using at least some of the attributes of the user, training the AI player using inputs provided by the user during game play of the video game, and providing the AI player with access to the video game for game play. The access allows the AI player to provide inputs to the video game that substantially mimic a play style of the user. Control of the game play of the video game can be transitioned to the user at any time during the game play of the AI player. The user can also control the game play of the AI player from a video recording of the game play.
A63F 13/67 - Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor adaptively or by learning from player actions, e.g. skill level adjustment or by storing successful combat sequences for re-use
A63F 13/355 - Performing operations on behalf of clients with restricted processing capabilities, e.g. servers transform changing game scene into an MPEG-stream for transmitting to a mobile phone or a thin client
A63F 13/493 - Resuming a game, e.g. after pausing, malfunction or power failure
A63F 13/497 - Partially or entirely replaying previous game actions
A63F 13/798 - Game security or game management aspects involving player-related data, e.g. identities, accounts, preferences or play histories for assessing skills or for ranking players, e.g. for generating a hall of fame
An apparatus comprising circuitry configured to perform a transport channel processing chain, the transport channel processing chain comprising a sub-carrier puncturing function, the sub-carrier puncturing function comprising puncturing, in each subframe of a composite transmission time interval, a set of subcarriers from at least one mapped physical resource block.
A terminal device for use with a wireless telecommunications network, the terminal device comprising: storage configured to store ancillary information not essential to every connection between the terminal device and the wireless telecommunications network; a controller configured to produce data indicative of the stored ancillary information; a transmitter configured to transmit the produced data to the wireless telecommunication network; and a receiver configured to receive an indication from the wireless telecommunication network to transmit the ancillary information to the wireless telecommunication network, wherein in response to the indication, the transmitter is configured to transmit the ancillary information.
An input device for controlling a computing system includes one or more sensors configured to sense a change in weight distribution of a user positioned on the input device in use, and a transmitter configured to transmit a signal based on the sensed change in weight distribution, for use in a virtual joystick input to a computing system.
Gaze tracking data representing a user's gaze is analyzed to determine one or more regions of interest. One or more gaze tracking parameters are determined from the gaze tracking data. Adjusted foveation data is determined representing an adjusted size and/or shape of one or more regions of interest in one or more images to be subsequently presented to the user based on the one or more gaze tracking parameters. The compression of the one or more transmitted images is adjusted so that fewer bits are needed to transmit data for portions of an image outside the one or more regions of interest than for portions of the image within the one or more regions of interest. Adjusting compression includes eliminating the region(s) of interest from images that are presented to the user during a saccade or blink.
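The foveated-compression idea above can be sketched as follows: the region-of-interest size is adjusted from gaze parameters (collapsing entirely during a blink), and blocks outside the region get a coarser quality level, i.e. fewer bits. The function names, quality factors, and radius formula are illustrative, not taken from the source.

```python
# Hypothetical sketch of gaze-based foveated compression.

def roi_radius(base_radius, gaze_speed, blink):
    """Grow the region of interest with gaze speed; drop it during a blink."""
    if blink:
        return 0.0  # region of interest eliminated during a blink
    return base_radius * (1.0 + 0.5 * gaze_speed)

def block_quality(block_center, gaze_point, radius):
    """High quality inside the region of interest, coarse quality outside."""
    dx = block_center[0] - gaze_point[0]
    dy = block_center[1] - gaze_point[1]
    inside = (dx * dx + dy * dy) ** 0.5 <= radius
    return 90 if inside else 30  # e.g. JPEG-style quality factors (assumed)
```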
Methods and apparatus provide for a head-mounted display to be worn by a user located within a physical environment and for engaging in an interactive experience; an entertainment device including a processing section that executes an interactive application that is manipulated in part through receiving inputs from the user; a communication partner operating to relay video and audio signals outputted from the entertainment device to the head-mounted display; continuously acquiring information of the environment at a predetermined frame rate; at least one camera on the head-mounted display, which captures images of the physical environment, where the communication partner is connected to the entertainment device with wired communication, and the communication partner is connected to the head-mounted display with wireless communication.
To help a computer game player in understanding a computer game, upon pausing the game, visual subtitles may be presented. In addition, or alternatively, Braille representing subtitles may be output as a series of vibrations on a touch pad of the controller. When the person's finger reaches the edge of the touch pad, a new series of Braille subtitles may be presented. Depending on where the player is in reading the subtitles and how fast the player reads them, the game video may be slowed down from normal speed.
A63F 13/285 - Generating tactile feedback signals via the game input device, e.g. force feedback
A63F 13/214 - Input arrangements for video game devices characterised by their sensors, purposes or types for locating contacts on a surface, e.g. floor mats or touch pads
A63F 13/49 - Saving the game status; Pausing or ending the game
A63F 13/53 - Controlling the output signals based on the game progress involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD] or displaying a laser sight in a shooting game
G09B 21/00 - Teaching, or communicating with, the blind, deaf or mute
Deep learning is used to dynamically adapt virtual humans in metaverse applications. The adaptation can be according to user preferences. In addition or alternatively, virtual humans and pets can be adapted for metaverse applications based on demographics of the user. The user's personal demographics may be used to establish the costume, skin color, emotion, voice, and behavior of the virtual humans. Similar considerations may be used to adapt virtual pets to the user's experience of the metaverse.
Deep learning techniques such as vector graphics are used to create 3D content and assets for metaverse applications. Vector graphics is a scalable format that provides rich 3D content. A vector graphics encoder such as a deep neural network such as a recurrent neural network (RNN) or transformer receives vector graphics and generates an encoded output. The encoded output is decoded by a 3D decoder such as another deep neural network that outputs 2D graphics for comparison with the original image. Loss is computed between the original and the output of the 3D decoder. The loss is back propagated to train the vector graphics encoder to generate 3D content.
A method for modifying user sentiment is described. The method includes analyzing behavior of a group of players during a play of a game. The behavior of the group of players is indicative of a sentiment of the group of players during the play of the game. The method includes accessing a non-player character (NPC) during the play of the game. The NPC has a characteristic that influences a change in the sentiment of the group of players. The method includes placing the NPC into one or more scenes of the game during the play of the game for a period of time until the change in the sentiment of the group of players is determined. The change in the sentiment of the group of players is determined by analyzing the behavior of the group of players during said play of the game.
A63F 13/213 - Input arrangements for video game devices characterised by their sensors, purposes or types comprising photodetecting means, e.g. cameras, photodiodes or infrared cells
A63F 13/215 - Input arrangements for video game devices characterised by their sensors, purposes or types comprising means for detecting acoustic signals, e.g. using a microphone
A63F 13/65 - Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor automatically by game devices or servers from real world data, e.g. measurement in live racing competition
A63F 13/67 - Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor adaptively or by learning from player actions, e.g. skill level adjustment or by storing successful combat sequences for re-use
79.
AI PLAYER MODEL GAMEPLAY TRAINING AND HIGHLIGHT REVIEW
Methods and systems for engaging an AI player of a user to play a video game on behalf of the user include creating the AI player for the user using at least some of the attributes of the user, training the AI player using inputs provided by the user during game play of the video game, and providing access to the video game for game play to the AI player. The access allows the AI player to provide inputs to the video game that substantially mimic a play style of the user. Control of the game play of the video game can be transitioned to the user at any time during the game play of the AI player. The user can also control the game play of the AI player from a video recording of the game play.
A63F 13/355 - Performing operations on behalf of clients with restricted processing capabilities, e.g. servers transform changing game scene into an MPEG-stream for transmitting to a mobile phone or a thin client
A63F 13/493 - Resuming a game, e.g. after pausing, malfunction or power failure
A63F 13/497 - Partially or entirely replaying previous game actions
A63F 13/56 - Computing the motion of game characters with respect to other game characters, game objects or elements of the game scene, e.g. for simulating the behaviour of a group of virtual soldiers or for path finding
A63F 13/67 - Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor adaptively or by learning from player actions, e.g. skill level adjustment or by storing successful combat sequences for re-use
A63F 13/86 - Watching games played by other players
Methods and apparatus provide for acquiring position information about a head-mounted display; performing information processing using the position information about the head-mounted display; generating and outputting data of an image to be displayed as a result of the information processing; and generating and outputting data of an image of a user guide indicating position information about a user in a real space using the position information about the head-mounted display, where the image of the user guide represents a state of the real space in which the user is physically located, as viewed obliquely.
A63F 13/53 - Controlling the output signals based on the game progress involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD] or displaying a laser sight in a shooting game
Ambisonics audio such as may be used for computer simulations such as computer games is improved by using multi-order optimizations that frame an optimization problem that minimizes a cost function across a subset of Ambisonics orders for a chosen Ambisonics order “N”. In a simple form, this cost function minimizes error across all orders (0<=n<=N), and additional weighting is applied to emphasize or de-emphasize particular orders. The cost functions and optimization criteria may be different for binaural and speaker outputs.
G10L 19/008 - Multichannel audio signal coding or decoding using interchannel correlation to reduce redundancy, e.g. joint-stereo, intensity-coding or matrixing
G10L 19/005 - Correction of errors induced by the transmission channel, if related to the coding algorithm
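The multi-order optimization described above reduces, in its simplest form, to minimizing a weighted sum of per-order errors across orders 0 <= n <= N. A minimal sketch, with hypothetical error and weight inputs:

```python
# Hypothetical sketch of the weighted multi-order cost function.

def multi_order_cost(errors, weights):
    """cost = sum_n w_n * e_n over orders 0..N; weights emphasize or
    de-emphasize particular orders."""
    assert len(errors) == len(weights)
    return sum(w * e for w, e in zip(weights, errors))

def best_candidate(candidates, weights):
    """Pick the candidate (a list of per-order errors) with minimal cost."""
    return min(candidates, key=lambda errs: multi_order_cost(errs, weights))
```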
A technique for encoding Ambisonics audio includes inputting audio to multiple Ambisonics encoders producing respective Ambisonics soundfields. Prior to mixing the soundfields, each soundfield is weighted to mitigate artifacts from order-truncation. After weighting, the soundfields are mixed to produce Ambisonics audio.
H04S 7/00 - Indicating arrangements; Control arrangements, e.g. balance control
G10L 19/008 - Multichannel audio signal coding or decoding using interchannel correlation to reduce redundancy, e.g. joint-stereo, intensity-coding or matrixing
H04R 5/02 - Spatial or constructional arrangements of loudspeakers
H04S 5/00 - Pseudo-stereo systems, e.g. in which additional channel signals are derived from monophonic signals by means of phase shifting, time delay or reverberation
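The order-weighting step described above can be sketched as scaling each soundfield's coefficients by a per-order weight before a channel-wise mix. ACN-style ordering, with 2n+1 coefficients at order n, is assumed here; the abstract does not specify the channel ordering or the weight values.

```python
# Hypothetical sketch: weight each soundfield per order, then mix channel-wise.

def weight_soundfield(coeffs, order_weights):
    """Scale each Ambisonics coefficient by the weight of its order
    (2n+1 coefficients at order n, ACN-style ordering assumed)."""
    out = []
    i = 0
    for n, w in enumerate(order_weights):
        for _ in range(2 * n + 1):
            out.append(coeffs[i] * w)
            i += 1
    return out

def mix(soundfields):
    """Sum the weighted soundfields channel-by-channel into one signal."""
    return [sum(ch) for ch in zip(*soundfields)]
```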
84.
SYSTEMS AND METHODS FOR MODIFYING USER SENTIMENT FOR PLAYING A GAME
A method for modifying user sentiment is described. The method includes analyzing behavior of a group of players during a play of a game. The behavior of the group of players is indicative of a sentiment of the group of players during the play of the game. The method includes accessing a non-player character (NPC) during the play of the game. The NPC has a characteristic that influences a change in the sentiment of the group of players. The method includes placing the NPC into one or more scenes of the game during the play of the game for a period of time until the change in the sentiment of the group of players is determined. The change in the sentiment of the group of players is determined by analyzing the behavior of the group of players during said play of the game.
A63F 13/67 - Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor adaptively or by learning from player actions, e.g. skill level adjustment or by storing successful combat sequences for re-use
Provided is an operation device (10) capable of expressing a pseudo weight. This operation device (10) includes: a plurality of link shafts (SF); a plurality of node mechanism units (ND) that form a grid with the plurality of link shafts (SF), each of the node mechanism units (ND) respectively holding one end of two or more link shafts (SF) among the plurality of link shafts (SF) in a manner that allows changing of the orientations of the two or more link shafts (SF); a placement table (90) on which the plurality of node mechanism units (ND) are placed; and a pulling mechanism (80) that pulls the node mechanism units (ND) in a direction for returning to a predetermined reference position on the placement table (90).
Interactive display of virtual trophies includes scanning a surface for one or more location anchor points. A trophy rack location is determined using the location anchor points. A trophy rack mesh is applied over an image frame of the surface using the determined trophy rack location. One or more trophy models are displayed over the trophy rack mesh with a display device. Trophy rack layout information is generated from the one or more trophy models and the trophy rack mesh, and finally the trophy rack layout information is stored or transmitted.
A technique for encoding Ambisonics audio includes inputting audio to multiple Ambisonics encoders producing respective Ambisonics soundfields. Prior to mixing the soundfields, each soundfield is weighted to mitigate artifacts from order-truncation. After weighting, the soundfields are mixed to produce Ambisonics audio. Accordingly, an apparatus includes at least one processor configured with instructions which are executable to receive mono audio sources with direction and target Ambisonics order respectively and send respective mono audio with respective direction to a respective Ambisonics encoder to cause the encoder to output a respective soundfield of respective Ambisonics order.
G10L 19/008 - Multichannel audio signal coding or decoding using interchannel correlation to reduce redundancy, e.g. joint-stereo, intensity-coding or matrixing
H04S 3/00 - Systems employing more than two channels, e.g. quadraphonic
H04N 21/2368 - Multiplexing of audio and video streams
88.
DISPLAY CONTROL APPARATUS, DISPLAY CONTROL METHOD, AND PROGRAM
Provided is a display control apparatus including a state detection unit configured to detect a state of a user who observes an image, and a display control unit configured to cause a display to display the image in which a plurality of display content items are superimposed on a photographed image, and to control a behavior of each of the display content items according to the state of the user.
G06T 19/00 - Manipulating 3D models or images for computer graphics
G06F 1/16 - Constructional details or arrangements
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
G06F 3/0346 - Pointing devices displaced or positioned by the user; Accessories therefor with detection of the device orientation or free movement in a 3D space, e.g. 3D mice, 6-DOF [six degrees of freedom] pointers using gyroscopes, accelerometers or tilt-sensors
G06F 3/0481 - Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
A method of controlling the complexity levels of dialogues in a video game includes the steps of: loading a first dialogue from a game database according to a parameter related to complexity level of dialogue in user settings; outputting the first dialogue; accepting a user operation in response to the first dialogue; adjusting the parameter related to complexity level of dialogues in user settings based on the user operation; loading a second dialogue from the game database according to the adjusted parameter related to complexity level of dialogue; and outputting the second dialogue.
A63F 13/63 - Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor by the player, e.g. authoring using a level editor
A63F 13/533 - Controlling the output signals based on the game progress involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD] or displaying a laser sight in a shooting game for prompting the player, e.g. by displaying a game menu
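The complexity-parameter loop above might look like the following sketch, in which the tier names, database shape, and user actions are all illustrative assumptions:

```python
# Hypothetical sketch of a dialogue-complexity parameter and database lookup.

COMPLEXITY_LEVELS = {0: "simple", 1: "normal", 2: "advanced"}  # assumed tiers

def adjust_complexity(level, user_action):
    """Nudge the complexity parameter based on the user's response."""
    if user_action == "ask_simpler":
        return max(0, level - 1)
    if user_action == "skip_quickly":  # user reads fast: raise complexity
        return min(max(COMPLEXITY_LEVELS), level + 1)
    return level

def load_dialogue(db, dialogue_id, level):
    """Load the dialogue variant matching the complexity parameter."""
    return db[dialogue_id][COMPLEXITY_LEVELS[level]]
```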
Interactive display of virtual trophies includes scanning a surface for one or more location anchor points. A trophy rack location is determined using the location anchor points. A trophy rack mesh is applied over an image frame of the surface using the determined trophy rack location. One or more trophy models are displayed over the trophy rack mesh with a display device. Trophy rack layout information is generated from the one or more trophy models and the trophy rack mesh, and finally the trophy rack layout information is stored or transmitted.
Ambisonics audio such as may be used for computer simulations such as computer games is improved by using multi-order optimizations that frame an optimization problem that minimizes a cost function (602) across a subset of Ambisonics orders for a chosen Ambisonics order "N". In a simple form, this cost function minimizes error across all orders (0 <= n <= N), and additional weighting (604) is applied to emphasize or de-emphasize particular orders. The cost functions and optimization criteria may be different for binaural and speaker outputs.
G10L 19/00 - Speech or audio signal analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis
G06F 16/60 - Information retrieval; Database structures therefor; File system structures therefor of audio data
92.
SYSTEMS AND METHODS OF PROTECTING PERSONAL SPACE IN MULTI-USER VIRTUAL ENVIRONMENT
A method for protecting personal space in a multi-user virtual environment includes the steps of generating an avatar for a target user in the multi-user virtual environment, determining a relationship score between the target user and a peer user, creating a personal space around the avatar of the target user, wherein the dimensions of the personal space are computed based on the relationship score with the peer user, detecting the peer user's avatar crossing the boundary of the personal space, and applying rules to the peer user to restrict his/her interactions with the target user.
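A minimal sketch of the steps above, assuming a relationship score in [0, 1] (1 = closest) and an illustrative rule set; the radius formula and restrictions are not specified by the source:

```python
# Hypothetical sketch: personal-space radius from a relationship score,
# boundary-crossing detection, and restriction rules.

def personal_space_radius(relationship_score, base=2.0):
    """Closer relationships shrink the protected radius; strangers get `base`."""
    return base * (1.0 - relationship_score)

def crosses_boundary(target_pos, peer_pos, radius):
    """True when the peer avatar is inside the target's personal space."""
    d = sum((a - b) ** 2 for a, b in zip(target_pos, peer_pos)) ** 0.5
    return d < radius

def apply_rules(crossed):
    """Restrictions applied to the peer while inside the personal space."""
    return ["mute_voice", "hide_gestures"] if crossed else []
```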
An input device includes: a plurality of input members; an upper surface having a right region in which a part of the plurality of input members is disposed, a left region in which another part of the plurality of input members is disposed, and a center region that is a region between the right region and the left region; and a light emitting region formed along an outer edge of the center region. The light emitting region includes a first light emitting portion configured to indicate identification information assigned to a plurality of input devices connected to an information processing apparatus, and a second light emitting portion configured to emit light based on information different from the identification information.
A63F 13/24 - Constructional details thereof, e.g. game controllers with detachable joystick handles
A63F 13/21 - Input arrangements for video game devices characterised by their sensors, purposes or types
F21V 33/00 - Structural combinations of lighting devices with other articles, not otherwise provided for
G06F 3/0338 - Pointing devices displaced or positioned by the user; Accessories therefor with detection of limited linear or angular displacement of an operating part of the device from a neutral position, e.g. isotonic or isometric joysticks
94.
OVERLAPPING RENDERING, STREAMOUT, AND DISPLAY AT A CLIENT OF RENDERED SLICES OF A VIDEO FRAME
A method of cloud gaming is disclosed. The method including receiving an encoded video frame at a client, wherein a server executes an application to generate a rendered video frame which is then encoded at an encoder at the server as the encoded video frame, wherein the encoded video frame includes one or more encoded slices that are compressed. The method including decoding the one or more encoded slices at a decoder of the client to generate one or more decoded slices. The method including rendering the one or more decoded slices for display at the client. The method including beginning to display the one or more decoded slices that are rendered before fully receiving the one or more encoded slices at the client.
A63F 13/355 - Performing operations on behalf of clients with restricted processing capabilities, e.g. servers transform changing game scene into an MPEG-stream for transmitting to a mobile phone or a thin client
A63F 13/335 - Interconnection arrangements between game servers and game devices; Interconnection arrangements between game devices; Interconnection arrangements between game servers using wide area network [WAN] connections using Internet
A63F 13/358 - Adapting the game course according to the network or server load, e.g. for reducing latency due to different connection speeds between clients
A63F 13/44 - Processing input control signals of video game devices, e.g. signals generated by the player or derived from the environment involving timing of operations, e.g. performing an action within a time slot
G07F 17/32 - Coin-freed apparatus for hiring articles; Coin-freed facilities or services for games, toys, sports, or amusements
H04L 67/10 - Protocols in which an application is distributed across nodes in the network
H04L 67/1095 - Replication or mirroring of data, e.g. scheduling or transport for data synchronisation between network nodes
H04L 67/131 - Protocols for games, networked simulations or virtual reality
H04N 21/2343 - Processing of video elementary streams, e.g. splicing of video streams or manipulating MPEG-4 scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
H04N 21/242 - Synchronization processes, e.g. processing of PCR [Program Clock References]
H04N 21/43 - Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronizing decoder's clock; Client middleware
H04N 21/4402 - Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to MPEG-4 scene graphs involving reformatting operations of video signals for household redistribution, storage or real-time display
H04N 21/442 - Monitoring of processes or resources, e.g. detecting the failure of a recording device, monitoring the downstream bandwidth, the number of times a movie has been viewed or the storage space available from the internal hard disk
H04N 21/478 - Supplemental services, e.g. displaying phone caller identification or shopping application
H04N 21/8547 - Content authoring involving timestamps for synchronizing content
95.
TRACKING HISTORICAL GAME PLAY OF ADULTS TO DETERMINE GAME PLAY ACTIVITY AND COMPARE TO ACTIVITY BY A CHILD, TO IDENTIFY AND PREVENT CHILD FROM PLAYING ON AN ADULT ACCOUNT
Methods and systems for warning of misuse of a user account of an adult user include tracking use of the user account. Interactions at the user account are monitored, and when the content accessed by a user is adult content and the user is determined to be a child, an alert is provided to the adult user informing the adult user that the child is accessing age-inappropriate content.
A63F 13/533 - Controlling the output signals based on the game progress involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD] or displaying a laser sight in a shooting game for prompting the player, e.g. by displaying a game menu
A63F 13/355 - Performing operations on behalf of clients with restricted processing capabilities, e.g. servers transform changing game scene into an MPEG-stream for transmitting to a mobile phone or a thin client
A63F 13/79 - Game security or game management aspects involving player-related data, e.g. identities, accounts, preferences or play histories
Provided is an operation device (10) capable of expressing haptic perception. The operation device (10) comprises: a plurality of link shafts (SF); a plurality of node mechanism parts (ND) forming a lattice shape with the plurality of link shafts (SF), each of the node mechanism parts (ND) holding one end of at least two or more link shafts (SF) among the plurality of link shafts (SF) so that it is possible to change the orientation of the two or more link shafts (SF); and a vibration unit that vibrates the operation device (10) according to the state of at least one of the plurality of node mechanism parts (ND).
Protocol enhancements are provided for IEEE 802.11 handling of Buffer Status Reports (BSRs) to provide real-time information on Quality of Service (QoS) requirements. A non-AP STA can report QoS status about portions of a buffer, which then allows the AP to schedule transmissions for those buffers toward satisfying their QoS requirements. The non-AP STA can also change parameters of the QoS Characteristics element of an existing SCS to allow the AP to schedule transmission for the SCS traffic with the new parameters immediately. The non-AP STA can also report on buffers that should soon arrive, allowing the AP to trigger the transmission before those buffers arrive. Additional benefits are also provided.
Sony Interactive Entertainment Europe Limited (United Kingdom)
Inventor
Brislin, Simon Andrew St John
Ryan, Nicholas Anthony Edward
Abstract
A method for identifying a cutscene in gameplay footage, the method comprising: receiving a first video signal and a second video signal each comprising a plurality of images; creating a first video fingerprint comprising a plurality of signatures, each signature of the plurality of signatures based on at least one image of the plurality of images in the first video signal; creating a second video fingerprint comprising a plurality of signatures, each signature of the plurality of signatures based on at least one image of the plurality of images in the second video signal; comparing the first video fingerprint with the second video fingerprint; and identifying a cutscene when at least a portion of the first video fingerprint has at least a threshold level of similarity with at least a portion of the second video fingerprint.
A63F 13/355 - Performing operations on behalf of clients with restricted processing capabilities, e.g. servers transform changing game scene into an MPEG-stream for transmitting to a mobile phone or a thin client
G06F 17/17 - Function evaluation by approximation methods, e.g. interpolation or extrapolation, smoothing or least mean square method
G06V 20/40 - Scenes; Scene-specific elements in video content
H04L 9/32 - Arrangements for secret or secure communications; Network security protocols including means for verifying the identity or authority of a user of the system
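The fingerprint comparison above can be sketched with an average-hash-style per-frame signature and a bitwise similarity measure. The signature function and the threshold are assumptions; the abstract does not specify either.

```python
# Hypothetical sketch: per-frame signatures, video fingerprints, and a
# threshold-based cutscene match.

def frame_signature(frame):
    """Average-hash-style signature for one grayscale frame (2D list of ints):
    1 where a pixel is at or above the frame mean, else 0."""
    flat = [p for row in frame for p in row]
    mean = sum(flat) / len(flat)
    return tuple(1 if p >= mean else 0 for p in flat)

def fingerprint(frames):
    """A video fingerprint is the list of its frames' signatures."""
    return [frame_signature(f) for f in frames]

def similarity(sig_a, sig_b):
    """Fraction of matching bits between two signatures."""
    return sum(a == b for a, b in zip(sig_a, sig_b)) / len(sig_a)

def has_cutscene(fp1, fp2, threshold=0.9):
    """A cutscene is identified when any signature pair is similar enough."""
    return any(similarity(a, b) >= threshold for a in fp1 for b in fp2)
```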
99.
IMAGE DISPLAYING SYSTEM, DISPLAY APPARATUS, AND IMAGE DISPLAYING METHOD
A state information acquisition section of an image generation apparatus acquires state information of the head of a user. An image generation section generates a display image corresponding to a visual field. A down-sampling section down-samples image data and transmits the down-sampled image data from a transmission section. A distortion correction section of a head-mounted display performs, after an up-sampling section up-samples the data, correction according to aberration of the eyepiece for each primary color, and causes the resulting data to be displayed on a display section.
An information processing device connected to a display device and to a sensor which detects relative positions of a user and the display device is provided. This information processing device acquires information indicating the relative positions of the user and the display device and detected by the sensor, and controls a position or a posture of at least one virtual object as a control target within data of a video on the basis of the acquired information indicating the relative positions of the user and the display device. Thereafter, the information processing device outputs the data of the video generated on the basis of information associated with a virtual space where the virtual object is arranged to the display device, and causes the display device to display the data.