Snap Inc.

United States of America

1-100 of 4,642 results for Snap Inc. and 4 subsidiaries

Aggregations
IP Type
        Patent 4,282
        Trademark 360
Jurisdiction
        United States 3,502
        World 937
        Canada 111
        Europe 92
Owner / Subsidiary
[Owner] Snap Inc. 4,604
Snapchat, Inc. 35
Bitstrips Inc. 1
Flite, Inc. 1
Verbify Inc. 1
Date
New (last 4 weeks) 66
2024 April (MTD) 47
2024 March 119
2024 February 73
2024 January 71
IPC Class
G06T 19/00 - Manipulating 3D models or images for computer graphics 622
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer 492
G02B 27/01 - Head-up displays 453
H04L 12/58 - Message switching systems 399
G06F 3/0482 - Interaction with lists of selectable items, e.g. menus 350
NICE Class
09 - Scientific and electric apparatus and instruments 245
42 - Scientific, technological and industrial services, research and design 139
41 - Education, entertainment, sporting and cultural services 132
35 - Advertising and business services 102
38 - Telecommunications services 60
Status
Pending 1,073
Registered / In Force 3,569

1.

SNAP AI

      
Application Number 1788117
Status Registered
Filing Date 2023-09-27
Registration Date 2023-09-27
Owner Snap Inc. (USA)
NICE Classes
  • 09 - Scientific and electric apparatus and instruments
  • 35 - Advertising and business services
  • 42 - Scientific, technological and industrial services, research and design

Goods & Services

Downloadable computer programs and downloadable computer software using artificial intelligence for natural language processing, generation, understanding and analysis; downloadable computer programs and downloadable computer software for machine learning; downloadable computer programs and downloadable computer software for image recognition and generation; downloadable computer programs and downloadable computer software using artificial intelligence for music generation and suggestions; downloadable computer programs and downloadable computer software for artificial intelligence, namely, computer software for developing, running and analyzing algorithms that are able to learn to analyze, classify, and take actions in response to exposure to data; downloadable computer software using artificial intelligence for image and video editing and retouching; downloadable computer software using artificial intelligence for the generation of text, images, photos, videos, audio, and multimedia content; downloadable computer software using artificial intelligence for connecting consumers with targeted promotional advertisements; downloadable computer software using artificial intelligence for the generation of advertisements and promotional materials; downloadable computer software using artificial intelligence for creating and generating text; downloadable computer software using artificial intelligence for translating words or text from one language to another; downloadable chatbot software for image recognition and generation; downloadable chatbot software for music generation; downloadable chatbot software for image and video editing and retouching; downloadable chatbot software for the generation of text, images, photos, videos, audio and multimedia content; downloadable chatbot software for connecting consumers with promotional messaging; downloadable chatbot software for simulating human conversations; downloadable chatbot software for suggesting image, video, audio, text, and multimedia content; downloadable chatbot software for responding to oral and written prompts. Advertising, marketing, and promotion services; marketing, advertising, and promotional services using artificial intelligence software, chatbot software, and augmented reality software; dissemination of advertising for others via computer and other communication networks; online retail store services featuring a wide variety of consumer goods of others; promoting the goods and services of others by providing an internet website portal featuring links to the websites of others; facilitating the exchange and sale of services and products of third parties via computer and communication networks, namely, operating on-line marketplaces for sellers and buyers of goods and services; consumer profiling for commercial or marketing purposes; providing consumer information and advice for consumers in the selection of products to buy. 
Research and development in the field of artificial intelligence; providing online non-downloadable software using artificial intelligence for natural language processing, generation, understanding, and analysis; providing online non-downloadable software for developing, running and analyzing algorithms that are able to learn to analyze, classify, and take actions in response to exposure to data; software as a service (SaaS) services featuring software for using language models; providing online non-downloadable software for machine-learning based language and speech processing; providing online non-downloadable software for the translation of text from one language to another; providing on-line non-downloadable software using artificial intelligence for image recognition and generation; providing on-line non-downloadable software using artificial intelligence for text recognition and generation; providing online non-downloadable software for the generation of advertisements and promotional materials; providing on-line non-downloadable software using artificial intelligence for music generation and suggestions; providing on-line non-downloadable software using artificial intelligence for image and video editing and retouching; providing on-line non-downloadable software using artificial intelligence for the generation of text, images, photos, videos, audio, and multimedia content; providing on-line non-downloadable software using artificial intelligence for connecting consumers with promotional advertisements; providing temporary use of online non-downloadable chatbot software using artificial intelligence for image recognition and generation; providing temporary use of online non-downloadable chatbot software using artificial intelligence for text recognition and generation; providing temporary use of online non-downloadable chatbot software using artificial intelligence for music recognition and generation; providing temporary use of online non-downloadable chatbot software using artificial intelligence for the generation of text, images, photos, videos, audio, and multimedia content; providing temporary use of online non-downloadable chatbot software using artificial intelligence for connecting consumers with advertisements; providing temporary use of online non-downloadable chatbot software using artificial intelligence for simulating human conversations; providing temporary use of online non-downloadable chatbot software using artificial intelligence for responding to oral and written prompts.

2.

PHONE CASE FOR TRACKING AND LOCALIZATION

      
Application Number 17970274
Status Pending
Filing Date 2022-10-19
First Publication Date 2024-04-25
Owner Snap Inc. (USA)
Inventor
  • Canberk, Ilteris Kaan
  • Hallberg, Matthew
  • Zhuang, Richard

Abstract

A case for a portable device like a smartphone includes light sources such as LEDs, which, when illuminated, can be detected and tracked by a head-worn augmented or virtual reality device. The light sources may be located at the corners of the case and may emit infrared light. A relative pose between the smartphone and the head-worn device can be determined based on computer vision techniques performed on images captured by the head-worn device that include light from the light sources. Relative movement between the smartphone and the head-worn device can be used to provide user input to the head-worn device, as can touch input on the portable device. In some instances, the case is powered inductively from the portable device.
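
The relative-pose step described here is a standard perspective-n-point (PnP) problem once the four LED blobs are detected in the headset image. Below is a minimal sketch using OpenCV's planar PnP solver; the LED layout, dimensions, and function names are hypothetical illustrations, not details from the filing.

```python
# Hypothetical sketch: recover the phone-case pose from four detected corner LEDs.
import cv2
import numpy as np

# Assumed 3D positions of the four corner LEDs in the case's own frame (metres).
LED_MODEL = np.array([
    [-0.035, -0.075, 0.0],   # bottom-left
    [ 0.035, -0.075, 0.0],   # bottom-right
    [ 0.035,  0.075, 0.0],   # top-right
    [-0.035,  0.075, 0.0],   # top-left
], dtype=np.float64)

def case_pose_from_leds(led_pixels, camera_matrix, dist_coeffs):
    """Estimate the case pose relative to the headset camera.

    led_pixels: (4, 2) detected LED centroids in the headset image,
    ordered to match LED_MODEL.
    Returns (rvec, tvec): rotation (Rodrigues) and translation of the case.
    """
    ok, rvec, tvec = cv2.solvePnP(
        LED_MODEL, np.asarray(led_pixels, dtype=np.float64),
        camera_matrix, dist_coeffs, flags=cv2.SOLVEPNP_IPPE)
    if not ok:
        raise RuntimeError("PnP failed; check LED correspondences")
    return rvec, tvec
```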

IPC Classes

  • H04M 1/72409 - User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality by interfacing with external accessories
  • H04B 1/3888 - Arrangements for carrying or protecting transceivers
  • H04M 1/72454 - User interfaces specially adapted for cordless or mobile telephones with means for adapting the functionality of the device according to specific conditions according to context-related or environment-related conditions

3.

3D SPACE CARVING USING HANDS FOR OBJECT CAPTURE

      
Application Number 17973167
Status Pending
Filing Date 2022-10-24
First Publication Date 2024-04-25
Owner Snap Inc. (USA)
Inventor
  • Micusik, Branislav
  • Evangelidis, Georgios
  • Wolf, Daniel

Abstract

A method for carving a 3D space using hand tracking is described. In one aspect, a method includes accessing a first frame from a camera of a display device, tracking, using a hand tracking algorithm operating at the display device, hand pixels corresponding to one or more user hands depicted in the first frame, detecting, using a sensor of the display device, depths of the hand pixels, identifying a 3D region based on the depths of the hand pixels, and applying a 3D reconstruction engine to the 3D region.
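
The carving step can be pictured as back-projecting the tracked hand pixels through a pinhole camera model and bounding the resulting points. A minimal sketch, assuming known camera intrinsics (fx, fy, cx, cy); all names are illustrative.

```python
import numpy as np

def hand_region_from_depths(hand_pixels, depths, fx, fy, cx, cy, margin=0.05):
    """Back-project tracked hand pixels to 3D and bound the region they carve.

    hand_pixels: (N, 2) array of (u, v) pixel coordinates from the hand tracker.
    depths: (N,) metric depths for those pixels from the depth sensor.
    Returns (min_xyz, max_xyz): an axis-aligned 3D box around the hands,
    to which a 3D reconstruction engine could then be restricted.
    """
    u, v = hand_pixels[:, 0], hand_pixels[:, 1]
    z = depths
    x = (u - cx) * z / fx          # pinhole back-projection
    y = (v - cy) * z / fy
    pts = np.stack([x, y, z], axis=1)
    return pts.min(axis=0) - margin, pts.max(axis=0) + margin
```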

IPC Classes

  • G06T 7/292 - Multi-camera tracking
  • G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
  • G06T 7/564 - Depth or shape recovery from multiple images from contours
  • G06T 19/00 - Manipulating 3D models or images for computer graphics
  • G06V 20/64 - Three-dimensional objects

4.

PHONE CASE FOR TRACKING AND LOCALIZATION

      
Application Number US2023077192
Publication Number 2024/086645
Status In Force
Filing Date 2023-10-18
Publication Date 2024-04-25
Owner SNAP INC. (USA)
Inventor
  • Canberk, Ilteris Kaan
  • Hallberg, Matthew
  • Zhuang, Richard

Abstract

A case for a portable device like a smartphone includes light sources such as LEDs, which, when illuminated, can be detected and tracked by a head-worn augmented or virtual reality device. The light sources may be located at the corners of the case and may emit infrared light. A relative pose between the smartphone and the head-worn device can be determined based on computer vision techniques performed on images captured by the head-worn device that include light from the light sources. Relative movement between the smartphone and the head-worn device can be used to provide user input to the head-worn device, as can touch input on the portable device. In some instances, the case is powered inductively from the portable device.

IPC Classes

  • G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
  • G06V 40/20 - Movements or behaviour, e.g. gesture recognition
  • G02B 27/01 - Head-up displays
  • G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements

5.

MULTIFUNCTIONAL CASE FOR ELECTRONICS-ENABLED EYEWEAR

      
Application Number 18400191
Status Pending
Filing Date 2023-12-29
First Publication Date 2024-04-25
Owner Snap Inc. (USA)
Inventor
  • Steger, Stephen Andrew
  • Tsao, Tiffany Ming
  • Huang, Qiaokun

Abstract

A carry case for an electronics-enabled eyewear device has incorporated therein electronic components for connection to the eyewear device while storing the eyewear device. The case comprises a rigid frame structure defining an openable holding space for the pair of smart glasses, and a compressible shock-resistant protective cover on the frame structure. The exterior of the case may be predominantly defined by the shock resistant protective cover.

IPC Classes

  • A45C 11/04 - Spectacle cases; Pince-nez cases
  • H02J 7/00 - Circuit arrangements for charging or depolarising batteries or for supplying loads from batteries
  • H02J 7/34 - Parallel operation in networks using both storage and other dc sources, e.g. providing buffering

6.

SERVICE MANAGER ON A WEARABLE DEVICE

      
Application Number 18049174
Status Pending
Filing Date 2022-10-23
First Publication Date 2024-04-25
Owner Snap Inc. (USA)
Inventor
  • Gajulapally, Adithya
  • Gurgul, Piotr
  • Ly, Andrew
  • Moll, Sharon

Abstract

Systems, methods, and computer readable media for a service manager to manage services on a wearable device are disclosed. The service manager remains active in memory and listens for requests for services. The service manager then determines which services to run and which to stop to respond to the requests for services. After running a service, the service manager calls the service to respond to the request and sends a response to the request to the sender of the request. The service manager may be resident on a different processor than a processor from which the requests for services originate. The service manager maintains priorities of the services to determine which services to stop or remove from memory.

IPC Classes

7.

GENERATING GROUND TRUTH DATASETS FOR VIRTUAL REALITY EXPERIENCES

      
Application Number 18400289
Status Pending
Filing Date 2023-12-29
First Publication Date 2024-04-25
Owner Snap Inc. (USA)
Inventor
  • Zhou, Kai
  • Qi, Qi
  • Hol, Jeroen

Abstract

Systems and methods of generating ground truth datasets for producing virtual reality (VR) experiences, for testing simulated sensor configurations, and for training machine-learning algorithms. In one example, a recording device with one or more cameras and one or more inertial measurement units captures images and motion data along a real path through a physical environment. A SLAM application uses the captured data to calculate the trajectory of the recording device. A polynomial interpolation module uses Chebyshev polynomials to generate a continuous time trajectory (CTT) function. The method includes identifying a virtual environment and assembling a simulated sensor configuration, such as a VR headset. Using the CTT function, the method includes generating a ground truth output dataset that represents the simulated sensor configuration in motion along a virtual path through the virtual environment. The virtual path is closely correlated with the motion along the real path as captured by the recording device. Accordingly, the output dataset produces a realistic and life-like VR experience. In addition, the methods described can be used to generate multiple output datasets, at various sample rates, which are useful for training the machine-learning algorithms which are part of many VR systems.
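
The continuous-time trajectory (CTT) idea maps directly onto NumPy's Chebyshev utilities: fit the discrete SLAM poses once, then sample the fitted polynomials at whatever rate a simulated sensor requires. A sketch under that assumption (per-axis fitting; names are hypothetical):

```python
import numpy as np
from numpy.polynomial import Chebyshev

def fit_ctt(timestamps, positions, degree=12):
    """Fit one Chebyshev polynomial per axis to discrete SLAM positions,
    yielding a continuous-time trajectory samplable at any rate."""
    return [Chebyshev.fit(timestamps, positions[:, axis], degree)
            for axis in range(positions.shape[1])]

def sample_ctt(ctt, t):
    """Evaluate the continuous-time trajectory at arbitrary times t."""
    return np.stack([c(t) for c in ctt], axis=-1)

# e.g. resample a 30 Hz recorded trajectory at a simulated sensor's 90 Hz:
# ctt = fit_ctt(t_30hz, xyz_30hz)
# xyz_90hz = sample_ctt(ctt, t_90hz)
```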

IPC Classes

8.

HEAD PROPERTY DETECTION IN DISPLAY-ENABLED WEARABLE DEVICES

      
Application Number US2023077092
Publication Number 2024/086580
Status In Force
Filing Date 2023-10-17
Publication Date 2024-04-25
Owner SNAP INC. (USA)
Inventor
  • Olgun, Ugur
  • You, Choonshin
  • Zhang, Bo Ya

Abstract

A display-enabled eyewear device has an integrated head sensor that dynamically and continuously measures or detects various cephalic parameters of a wearer's head. The head sensor includes a loop coupler system integrated in a lens-carrying frame to sense proximate ambient RF absorption influenced by head presence, size, and/or distance. Autonomous device management dynamically adjusts or causes adjustment of selected device features based on current detected values for the cephalic parameters, which can include wear status, head size, and frame-head spacing.

IPC Classes

  • G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer

9.

STYLIZING A WHOLE-BODY OF A PERSON

      
Application Number US2023076997
Publication Number 2024/086534
Status In Force
Filing Date 2023-10-16
Publication Date 2024-04-25
Owner SNAP INC. (USA)
Inventor
  • Rami Koujan, Mohammad
  • Kokkinos, Iason

Abstract

Methods and systems are disclosed for performing real-time stylizing operations. The system receives an image that includes a depiction of a whole body of a real-world person. The system applies a machine learning model to the image to generate a stylized version of the whole body of the real-world person corresponding to a given style, the machine learning model being trained using training data to establish a relationship between a plurality of training images depicting synthetically rendered whole bodies of persons and corresponding ground-truth stylized versions of the whole bodies of the persons of the given style. The system replaces the depiction of the whole body of the real-world person in the image with the generated stylized version of the whole body of the real-world person.

IPC Classes

  • G06T 11/00 - 2D [Two Dimensional] image generation

10.

SIGN LANGUAGE INTERPRETATION WITH COLLABORATIVE AGENTS

      
Application Number US2023077007
Publication Number 2024/086538
Status In Force
Filing Date 2023-10-16
Publication Date 2024-04-25
Owner SNAP INC. (USA)
Inventor
  • Zhou, Kai
  • Pounds, Jennica
  • Robotka, Zsolt
  • Kajtár, Márton Gergely

Abstract

A method for recognizing sign language using collaborative augmented reality devices is described. In one aspect, a method includes accessing a first image generated by a first augmented reality device and a second image generated by a second augmented reality device, the first image and the second image depicting a hand gesture of a user of the first augmented reality device, synchronizing the first augmented reality device with the second augmented reality device, in response to the synchronizing, distributing one or more processes of a sign language recognition system between the first and second augmented reality devices, collecting results from the one or more processes from the first and second augmented reality devices, and displaying, in near real-time in a first display of the first augmented reality device, text indicating a sign language translation of the hand gesture based on the results.

IPC Classes

  • G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
  • G06V 40/20 - Movements or behaviour, e.g. gesture recognition
  • G06T 19/00 - Manipulating 3D models or images for computer graphics

11.

Providing reduced availability modes in messaging

      
Application Number 16729629
Grant Number 11968157
Status In Force
Filing Date 2019-12-30
First Publication Date 2024-04-23
Grant Date 2024-04-23
Owner Snap Inc. (USA)
Inventor
  • Voss, Jeremy
  • Heikkinen, Christie Marie
  • Rakhamimov, Daniel
  • Desserrey, Laurent
  • Territo, Susan Marie
  • Koai, Edward

Abstract

Aspects of the present disclosure involve a system comprising a computer-readable storage medium storing a program and method for providing reduced availability modes in messaging. The program and method provide for maintaining a count of consecutive time periods in which message content has been exchanged between a first user and a second user in a messaging application; receiving, from a device associated with the first user, a request to set an availability mode for the first user to a reduced availability mode with respect to the messaging application; setting, in response to receiving the request, the availability mode for the first user to the reduced availability mode; and refraining from updating the count while the availability mode is set to the reduced availability mode.
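
The claimed behaviour reduces to a counter whose update is suppressed while the reduced availability mode is set. A toy model, with all names invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class StreakCounter:
    """Toy model of the claim: a count of consecutive periods with message
    exchange that is frozen while reduced availability is set."""
    count: int = 0
    reduced_availability: bool = False

    def on_period_end(self, messages_exchanged: bool) -> None:
        if self.reduced_availability:
            return                      # refrain from updating the count
        self.count = self.count + 1 if messages_exchanged else 0

    def set_reduced_availability(self, enabled: bool) -> None:
        self.reduced_availability = enabled
```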

IPC Classes

  • H04L 51/043 - Real-time or near real-time messaging, e.g. instant messaging [IM] using or handling presence information
  • H04L 51/224 - Monitoring or handling of messages providing notification on incoming messages, e.g. pushed notifications of received messages
  • H04L 51/52 - User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail for supporting social networking services

12.

HEAD PROPERTY DETECTION IN DISPLAY-ENABLED WEARABLE DEVICES

      
Application Number 17968289
Status Pending
Filing Date 2022-10-18
First Publication Date 2024-04-18
Owner Snap Inc. (USA)
Inventor
  • Olgun, Ugur
  • You, Choonshin
  • Zhang, Bo Ya

Abstract

A display-enabled eyewear device has an integrated head sensor that dynamically and continuously measures or detects various cephalic parameters of a wearer's head. The head sensor includes a loop coupler system integrated in a lens-carrying frame to sense proximate ambient RF absorption influenced by head presence, size, and/or distance. Autonomous device management dynamically adjusts or causes adjustment of selected device features based on current detected values for the cephalic parameters, which can include wear status, head size, and frame-head spacing.

IPC Classes

  • G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
  • G09G 3/00 - Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes

13.

SIGN LANGUAGE INTERPRETATION WITH COLLABORATIVE AGENTS

      
Application Number 17967209
Status Pending
Filing Date 2022-10-17
First Publication Date 2024-04-18
Owner Snap Inc. (USA)
Inventor
  • Zhou, Kai
  • Pounds, Jennica
  • Robotka, Zsolt
  • Kajtár, Márton Gergely

Abstract

A method for recognizing sign language using collaborative augmented reality devices is described. In one aspect, a method includes accessing a first image generated by a first augmented reality device and a second image generated by a second augmented reality device, the first image and the second image depicting a hand gesture of a user of the first augmented reality device, synchronizing the first augmented reality device with the second augmented reality device, in response to the synchronizing, distributing one or more processes of a sign language recognition system between the first and second augmented reality devices, collecting results from the one or more processes from the first and second augmented reality devices, and displaying, in near real-time in a first display of the first augmented reality device, text indicating a sign language translation of the hand gesture based on the results.

IPC Classes

  • G06F 40/58 - Use of machine translation, e.g. for multi-lingual retrieval, for server-side translation for client devices or for real-time translation
  • G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
  • G06F 3/0346 - Pointing devices displaced or positioned by the user; Accessories therefor with detection of the device orientation or free movement in a 3D space, e.g. 3D mice, 6-DOF [six degrees of freedom] pointers using gyroscopes, accelerometers or tilt-sensors
  • G06V 10/26 - Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
  • G06V 20/20 - Scenes; Scene-specific elements in augmented reality scenes
  • G06V 40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
  • G06V 40/20 - Movements or behaviour, e.g. gesture recognition

14.

ENERGY-EFFICIENT ADAPTIVE 3D SENSING

      
Application Number 18299923
Status Pending
Filing Date 2023-04-13
First Publication Date 2024-04-18
Owner Snap Inc. (USA)
Inventor
  • Wang, Jian
  • Ma, Sizhuo
  • Tilmon, Brevin
  • Wu, Yicheng
  • Krishnan Gorumkonda, Gurunandan
  • Zahreddine, Ramzi
  • Evangelidis, Georgios

Abstract

An energy-efficient adaptive 3D sensing system. The adaptive 3D sensing system includes one or more cameras and one or more projectors. The adaptive 3D sensing system captures images of a real-world scene using the one or more cameras and computes depth estimates and depth estimate confidence values for pixels of the images. The adaptive 3D sensing system computes an attention mask based on the one or more depth estimate confidence values and commands the one or more projectors to send a distributed laser beam into one or more areas of the real-world scene based on the attention mask. The adaptive 3D sensing system captures 3D sensing image data of the one or more areas of the real-world scene and generates 3D sensing data for the real-world scene based on the 3D sensing image data.
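
The attention-mask step is essentially a confidence threshold over the passive depth estimate: only low-confidence regions receive the projector's laser pattern. A minimal sketch, with the threshold value chosen arbitrarily:

```python
import numpy as np

def attention_mask(confidence, threshold=0.6):
    """Mark pixels whose passive depth estimate is unreliable; the projector
    is steered only into these regions, saving illumination energy."""
    return confidence < threshold

# conf = np.random.rand(480, 640)   # per-pixel depth-estimate confidence
# mask = attention_mask(conf)
# fraction_lit = mask.mean()        # share of the scene needing active sensing
```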

IPC Classes

  • G02B 27/01 - Head-up displays
  • G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
  • G06T 19/00 - Manipulating 3D models or images for computer graphics

15.

SUSPICIOUS GROUP DETECTION

      
Application Number 18530502
Status Pending
Filing Date 2023-12-06
First Publication Date 2024-04-18
Owner Snap Inc. (USA)
Inventor
  • Shah, Neil
  • Nilforoshan-Dardashti, Hamed

Abstract

Systems, devices, media, and methods are presented for determining a level of abusive network behavior suspicion for groups of entities and for identifying suspicious entity groups. A suspiciousness metric is developed and used to evaluate a multi-view graph across multiple views where entities are associated with nodes of the graph and attributes of the entities are associated with levels of the graph.

IPC Classes

16.

DATA RETRIEVAL USING REINFORCED CO-LEARNING FOR SEMI-SUPERVISED RANKING

      
Application Number 18543330
Status Pending
Filing Date 2023-12-18
First Publication Date 2024-04-18
Owner Snap Inc. (USA)
Inventor
  • He, Shibi
  • Li, Yanen
  • Xu, Ning

Abstract

A computer-implemented method comprises: training a classifier with labeled data from a dataset; classifying, by the trained classifier, unlabeled data from the dataset; providing, by the classifier to a policy gradient, a reward signal for each data/query pair; transferring, by the classifier to a ranker, learning; training, by the policy gradient, the ranker; ranking data from the dataset based on a query; and retrieving data from the ranked data in response to the query.
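
As a rough flavour of the pipeline, a classifier trained on labeled pairs can pseudo-label unlabeled pairs, with a confidence filter standing in for the reward signal and a plain regressor standing in for the policy-gradient-trained ranker. This is a simplified sketch, not the claimed method:

```python
# Toy pseudo-labelling loop in the spirit of the abstract; sklearn usage is
# illustrative only and does not reproduce the reinforcement-learning step.
import numpy as np
from sklearn.linear_model import LogisticRegression, Ridge

def co_learn(X_labeled, y_labeled, X_unlabeled, confidence=0.9):
    classifier = LogisticRegression().fit(X_labeled, y_labeled)
    proba = classifier.predict_proba(X_unlabeled)[:, 1]      # relevance scores
    sure = (proba > confidence) | (proba < 1 - confidence)   # reward-like filter
    X_rank = np.vstack([X_labeled, X_unlabeled[sure]])
    y_rank = np.concatenate([np.asarray(y_labeled, float), proba[sure].round()])
    ranker = Ridge().fit(X_rank, y_rank)   # ranker scores data for a query
    return classifier, ranker
```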

IPC Classes

17.

WEB DOCUMENT ENHANCEMENT

      
Application Number 18535988
Status Pending
Filing Date 2023-12-11
First Publication Date 2024-04-18
Owner Snap Inc. (USA)
Inventor
  • Rotem, Efrat
  • Krieger, Ariel
  • Merali, Emmanuel

Abstract

A method for enhancing a presentation of a network document by a client terminal with real time social media content. The method comprises analyzing content in a web document to identify a relation to a first of a plurality of multi participant events documented in an event dataset, each of the plurality of multi participant events being held in a geographical venue which hosts an audience of a plurality of participants, matching a plurality of event indicating tags of each of a plurality of user uploaded media content files with at least one feature of the first multi participant event to identify a group of user uploaded media content files selected from the plurality of user uploaded media content files, and forwarding at least some members of the group to a simultaneous presentation on a browser running on a client terminal and presenting the web document.

IPC Classes

  • G06F 16/9536 - Search customisation based on social or collaborative filtering
  • G06F 16/28 - Databases characterised by their database models, e.g. relational or object models
  • G06F 16/951 - Indexing; Web crawling techniques
  • G06F 16/9535 - Search customisation based on user profiles and personalisation
  • G06F 16/9538 - Presentation of query results
  • G06F 16/958 - Organisation or management of web site content, e.g. publishing, maintaining pages or automatic linking
  • G06F 40/20 - Natural language analysis
  • G06Q 50/00 - Systems or methods specially adapted for specific business sectors, e.g. utilities or tourism
  • H04L 67/02 - Protocols based on web technology, e.g. hypertext transfer protocol [HTTP]

18.

TRACTABLE BODY-BASED AR SYSTEM INPUT

      
Application Number US2023034559
Publication Number 2024/081152
Status In Force
Filing Date 2023-10-05
Publication Date 2024-04-18
Owner SNAP INC. (USA)
Inventor
  • Alvarez, Attila
  • Kajtár, Márton Gergely
  • Pocsi, Peter
  • Pounds, Jennica
  • Retek, David
  • Robotka, Zsolt

Abstract

A hand-tracking platform generates gesture components for use as user inputs into an application of an Augmented Reality (AR) system. In some examples, the hand-tracking platform generates real-world scene environment frame data based on gestures being made by a user of the AR system using a camera component of the AR system. The hand-tracking platform recognizes a gesture component based on the real-world scene environment frame data and generates gesture component data based on the gesture component. The application utilizes the gesture component data as user input in a user interface of the application.

IPC Classes

  • G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
  • G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
  • G06T 19/00 - Manipulating 3D models or images for computer graphics

19.

ENERGY-EFFICIENT ADAPTIVE 3D SENSING

      
Application Number US2023034564
Publication Number 2024/081154
Status In Force
Filing Date 2023-10-05
Publication Date 2024-04-18
Owner SNAP INC. (USA)
Inventor
  • Wang, Jian
  • Ma, Sizhuo
  • Tilmon, Brevin
  • Wu, Yicheng
  • Krishnan Gorumkonda, Gurunandan
  • Zahreddine, Ramzi
  • Evangelidis, Georgios

Abstract

An energy-efficient adaptive 3D sensing system. The adaptive 3D sensing system includes one or more cameras and one or more projectors. The adaptive 3D sensing system captures images of a real-world scene using the one or more cameras and computes depth estimates and depth estimate confidence values for pixels of the images. The adaptive 3D sensing system computes an attention mask based on the one or more depth estimate confidence values and commands the one or more projectors to send a distributed laser beam into one or more areas of the real-world scene based on the attention mask. The adaptive 3D sensing system captures 3D sensing image data of the one or more areas of the real-world scene and generates 3D sensing data for the real-world scene based on the 3D sensing image data.

IPC Classes

  • H04N 5/222 - Studio circuitry; Studio devices; Studio equipment
  • G01B 11/25 - Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures by projecting a pattern, e.g. moiré fringes, on the object
  • H04N 23/56 - Cameras or camera modules comprising electronic image sensors; Control thereof provided with illuminating means
  • G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
  • H04N 13/128 - Adjusting depth or disparity

20.

SYSTEMS, METHODS AND DEVICES FOR PROVIDING SEQUENCE BASED DISPLAY DRIVERS

      
Application Number 18393052
Status Pending
Filing Date 2023-12-21
First Publication Date 2024-04-18
Owner Snap Inc. (USA)
Inventor
  • Goetz, Howard V.
  • Sands, Glen R.

Abstract

A display driver device (210) receives a downloadable “sequence” for dynamically reconfiguring displayed image characteristics in an image system. The display driver device comprises one or more storage devices, for example, memory devices, for storing image data (218) and portions of drive sequences (219) that are downloadable and/or updated in real time depending on various inputs (214).

IPC Classes

  • G09G 3/20 - Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes for presentation of an assembly of a number of characters, e.g. a page, by composing the assembly by combination of individual elements arranged in a matrix

21.

HYPEREXTENDING HINGE FOR WEARABLE ELECTRONIC DEVICE

      
Application Number 18399378
Status Pending
Filing Date 2023-12-28
First Publication Date 2024-04-18
Owner Snap Inc. (USA)
Inventor
  • Ryner, Michael
  • Steger, Stephen

Abstract

Eyewear having a frame, a hinge, and a hyperextendable temple. An extender is coupled to the hinge and the temple, and the extender extends with respect to the hinge allowing hyperextension of the temple with respect to the frame. The extender may include a bushing and a spring that allows the temple hyperextension, and which also creates a bias force to urge the temple against a user's head during use.

IPC Classes

22.

STYLIZING A WHOLE-BODY OF A PERSON

      
Application Number 17967230
Status Pending
Filing Date 2022-10-17
First Publication Date 2024-04-18
Owner Snap Inc. (USA)
Inventor
  • Koujan, Mohammad Rami
  • Kokkinos, Iason

Abstract

Methods and systems are disclosed for performing real-time stylizing operations. The system receives an image that includes a depiction of a whole body of a real-world person. The system applies a machine learning model to the image to generate a stylized version of the whole body of the real-world person corresponding to a given style, the machine learning model being trained using training data to establish a relationship between a plurality of training images depicting synthetically rendered whole bodies of persons and corresponding ground-truth stylized versions of the whole bodies of the persons of the given style. The system replaces the depiction of the whole body of the real-world person in the image with the generated stylized version of the whole body of the real-world person.

IPC Classes

  • G06T 19/20 - Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
  • G06T 7/11 - Region-based segmentation
  • G06T 17/20 - Wire-frame description, e.g. polygonalisation or tessellation

23.

VOICE CONTROLLED UIS FOR AR WEARABLE DEVICES

      
Application Number 18397786
Status Pending
Filing Date 2023-12-27
First Publication Date 2024-04-18
Owner Snap Inc. (USA)
Inventor
  • Moll, Sharon
  • Gurgul, Piotr

Abstract

Systems, methods, and computer readable media for voice-controlled user interfaces (UIs) for augmented reality (AR) wearable devices are disclosed. Embodiments are disclosed that enable a user to interact with the AR wearable device without using physical user interface devices. An application has a non-voice-controlled UI mode and a voice-controlled UI mode. The user selects the mode of the UI. The application running on the AR wearable device displays UI elements on a display of the AR wearable device. The UI elements have types. Predetermined actions are associated with each of the UI element types. The predetermined actions are displayed with other information and used by the user to invoke the corresponding UI element.

IPC Classes

  • G06F 3/16 - Sound input; Sound output
  • G06F 3/04817 - Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance using icons
  • G06F 3/0484 - Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
  • G10L 15/16 - Speech classification or search using artificial neural networks
  • G10L 15/22 - Procedures used during a speech recognition process, e.g. man-machine dialog

24.

REMOTE ANNOTATION AND NAVIGATION USING AN AR WEARABLE DEVICE

      
Application Number 18046367
Status Pending
Filing Date 2022-10-13
First Publication Date 2024-04-18
Owner Snap Inc. (USA)
Inventor
  • Canberk, Ilteris Kaan
  • Hallberg, Matthew
  • Jung, Bernhard

Abstract

Systems, methods, and computer readable media for remote annotations, drawings, and navigation instructions sent to an augmented reality (AR) wearable device from a computing device are disclosed. The AR wearable device captures images and sends them to the remote computing device to provide a real-time view of what the user of the AR wearable device sees. A user of the remote computing device can add navigation instructions and can select an image to annotate or draw on. The AR wearable device provides 3-dimensional (3D) coordinate information within a 3D world of the AR wearable device for the selected image. The user of the remote computing device then annotates or draws on the selected image. The remote computing device determines 3D coordinates for the annotations and drawings within the 3D world of the AR wearable device. The annotations and drawings are sent to the AR wearable device with associated 3D coordinates.
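
Determining 3D coordinates for an annotation placed on a 2D image amounts to unprojecting the annotated pixel using the depth and camera pose the AR device recorded at capture time. A minimal sketch, with hypothetical names and a 4x4 camera-to-world matrix convention assumed:

```python
import numpy as np

def annotation_to_world(u, v, depth, K, cam_to_world):
    """Lift a 2D annotation (u, v) on a captured image into the AR device's
    3D world, given the pixel's depth, intrinsics K (3x3), and the camera
    pose (4x4 camera-to-world matrix) at capture time."""
    x = (u - K[0, 2]) * depth / K[0, 0]   # pinhole unprojection
    y = (v - K[1, 2]) * depth / K[1, 1]
    p_cam = np.array([x, y, depth, 1.0])  # homogeneous camera-frame point
    return (cam_to_world @ p_cam)[:3]
```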

IPC Classes

  • G06T 19/00 - Manipulating 3D models or images for computer graphics
  • G06T 7/70 - Determining position or orientation of objects or cameras

25.

REMOTE ANNOTATION AND NAVIGATION IN AUGMENTED REALITY

      
Application Number US2023034731
Publication Number 2024/081184
Status In Force
Filing Date 2023-10-09
Publication Date 2024-04-18
Owner SNAP INC. (USA)
Inventor
  • Canberk, Ilteris Kaan
  • Hallberg, Matthew
  • Jung, Bernhard

Abstract

Systems, methods, and computer readable media for remote annotations, drawings, and navigation instructions sent to an augmented reality (AR) wearable device from a computing device are disclosed. The AR wearable device captures images and sends them to the remote computing device to provide a real-time view of what the user of the AR wearable device sees. A user of the remote computing device can add navigation instructions and can select an image to annotate or draw on. The AR wearable device provides 3-dimensional (3D) coordinate information within a 3D world of the AR wearable device for the selected image. The user of the remote computing device then annotates or draws on the selected image. The remote computing device determines 3D coordinates for the annotations and drawings within the 3D world of the AR wearable device. The annotations and drawings are sent to the AR wearable device with associated 3D coordinates.

IPC Classes

  • G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
  • G06F 3/048 - Interaction techniques based on graphical user interfaces [GUI]

26.

Antenna system for unmanned aerial vehicles with propellers

      
Application Number 17082438
Grant Number 11958603
Status In Force
Filing Date 2020-10-28
First Publication Date 2024-04-16
Grant Date 2024-04-16
Owner Snap Inc. (USA)
Inventor
  • Boals, Justin
  • Olgun, Ugur
  • Shukla, Ashutosh Y.

Abstract

A UAV having a wireless front end including propellers that are dual-purposed to function as ground communication antenna elements. This design reduces the weight and size of the UAV, hence enabling a compact design with the capability of handling a heavier payload.

IPC Classes

  • B64C 39/02 - Aircraft not otherwise provided for characterised by special use
  • B64U 10/13 - Flying platforms
  • B64U 101/30 - UAVs specially adapted for particular uses or applications for imaging, photography or videography
  • H01Q 1/28 - Adaptation for use in or on aircraft, missiles, satellites, or balloons
  • H01Q 1/48 - Earthing means; Earth screens; Counterpoises
  • H01Q 15/14 - Reflecting surfaces; Equivalent structures

27.

9-DOF OBJECT TRACKING

      
Application Number 17937153
Status Pending
Filing Date 2022-09-30
First Publication Date 2024-04-11
Owner Snap Inc. (USA)
Inventor
  • Berger, Itamar
  • Dudovitch, Gal
  • Harel, Peleg
  • Mishin Shuvi, Ma'Ayan

Abstract

Aspects of the present disclosure involve a system for presenting AR items. The system receives a video that includes a depiction of a real-world object in a real-world environment. The system generates a three-dimensional (3D) bounding box for the real-world object and stabilizes the 3D bounding box based on one or more sensors of the device. The system determines a position, orientation, and dimensions of the real-world object based on the stabilized 3D bounding box and renders a display of an augmented reality (AR) item within the video based on the position, orientation, and dimensions of the real-world object.
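
The nine degrees of freedom in the title are the box's 3D position, orientation, and dimensions (3 + 3 + 3). A toy representation with exponential smoothing standing in for the sensor-based stabilization (Euler angles used for brevity; the filing does not specify a parameterization):

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class Box9DoF:
    """The nine tracked degrees of freedom of a 3D bounding box."""
    position: np.ndarray      # (3,) translation
    orientation: np.ndarray   # (3,) Euler angles, simplified
    dimensions: np.ndarray    # (3,) width, height, depth

def stabilize(prev: Box9DoF, new: Box9DoF, alpha: float = 0.2) -> Box9DoF:
    """Exponential smoothing as a naive stand-in for sensor-based
    stabilization (note: averaging Euler angles is only valid for
    small frame-to-frame rotations)."""
    mix = lambda a, b: (1 - alpha) * a + alpha * b
    return Box9DoF(mix(prev.position, new.position),
                   mix(prev.orientation, new.orientation),
                   mix(prev.dimensions, new.dimensions))
```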

IPC Classes

  • G06T 19/00 - Manipulating 3D models or images for computer graphics
  • G06T 7/20 - Analysis of motion
  • G06T 7/70 - Determining position or orientation of objects or cameras
  • G06T 19/20 - Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
  • G06V 10/25 - Determination of region of interest [ROI] or a volume of interest [VOI]
  • G06V 20/20 - Scenes; Scene-specific elements in augmented reality scenes

28.

AUGMENTED REALITY EXPERIENCE EVENT METRICS SYSTEM

      
Application Number 18543475
Status Pending
Filing Date 2023-12-18
First Publication Date 2024-04-11
Owner Snap Inc. (USA)
Inventor
  • Grover, Benjamin Todd
  • Lazarenko, Taras
  • Lewis, Elliot
  • Powell, Michael Aubrey
  • Zhao, Jialu

Abstract

Methods and systems are disclosed for generating AR experiences on a messaging platform. The methods and systems receive, from a client device, a request to access an augmented reality (AR) experience and access a list of event types associated with the AR experience used to generate one or more metrics. The methods and systems determine that an interaction associated with the AR experience corresponds to a first event type of the list of event types and generate interaction data for the first event type representing the interaction. In response to receiving a request to terminate the AR experience, the systems and methods transmit the interaction data to a remote server.
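
The flow reduces to filtering interactions against the experience's registered event types and flushing the buffered interaction data when the experience terminates. A toy sketch, with all names invented:

```python
class ARMetricsSession:
    """Toy event-metrics collector: only event types registered for the AR
    experience are recorded; everything is flushed on terminate."""
    def __init__(self, tracked_event_types, transport):
        self.tracked = set(tracked_event_types)
        self.transport = transport       # e.g. a callable posting to a server
        self.buffer = []

    def on_interaction(self, event_type, payload):
        if event_type in self.tracked:   # matches a listed event type
            self.buffer.append({"type": event_type, **payload})

    def terminate(self):
        self.transport(self.buffer)      # transmit interaction data on exit
        self.buffer = []
```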

IPC Classes

  • H04L 41/50 - Network service management, e.g. ensuring proper service fulfilment according to agreements
  • H04L 67/50 - Network services

29.

AR SYSTEM BENDING CORRECTION

      
Application Number US2023034375
Publication Number 2024/076571
Status In Force
Filing Date 2023-10-03
Publication Date 2024-04-11
Owner SNAP INC. (USA)
Inventor
  • Kalkgruber, Matthias
  • Pereira Torres, Tiago Miguel
  • Welge, Weston
  • Zahreddine, Ramzi

Abstract

A system for deformation or bending correction in an Augmented Reality (AR) system. Sensors are positioned in a frame of a head-worn AR system to sense forces or pressure acting on the frame by temple pieces attached to the frame. The sensed forces or pressure are used in conjunction with a model of the frame to determine a corrected model of the frame. The corrected model is used to correct video data captured by the AR system and to correct a video virtual overlay that is provided to a user wearing the head-worn AR system.

IPC Classes

  • G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
  • G02B 27/01 - Head-up displays
  • G02C 5/22 - Hinges
  • H04N 13/327 - Calibration thereof
  • H04N 13/344 - Displays for viewing with the aid of special glasses or head-mounted displays [HMD] with head-mounted left-right displays

30.

EXTERNAL SCREEN STREAMING FOR AN EYEWEAR DEVICE

      
Application Number 17960627
Status Pending
Filing Date 2022-10-05
First Publication Date 2024-04-11
Owner Snap Inc. (USA)
Inventor
  • Canberk, Ilteris Kaan
  • Jung, Bernhard
  • Kang, Shin Hwun
  • Skrypnyk, Daria
  • Sun, Tianyi
  • Tran, Lien Le Hong

Abstract

Systems and methods are provided for performing operations on an augmented reality (AR) device using an external screen streaming system. The system establishes, by one or more processors of an AR device, a communication with an external client device. The system causes overlay of, by the AR device, a first AR object on a real-world environment being viewed using the AR device. The system receives, by the AR device, a first image from the external client device. The system, in response to receiving the first image from the external client device, overlays the first image on the first AR object by the AR device.

IPC Classes

  • G06T 19/00 - Manipulating 3D models or images for computer graphics

31.

SYSTEM AND METHOD FOR IMAGE PROJECTION MAPPING

      
Application Number 18390608
Status Pending
Filing Date 2023-12-20
First Publication Date 2024-04-11
Owner Snap Inc. (USA)
Inventor
  • Nielsen, Simon Saito Haagen
  • Conatser, Zachary Collins

Abstract

A system including a drone having a projector to project an image from a projection origin. The drone also has a navigation unit to determine location information for the drone. A processor coupled to the drone includes a memory. Execution of programming by the processor configures the system to obtain a projection surface architecture for a projection surface. The projection surface architecture includes reference points that correspond to physical locations on the projection surface. Each reference point is associated with relationship data with respect to an architecture origin. The system also receives location information for the drone, adapts the relationship data responsive to change in the location information, adjusts the image using the adapted relationship data, and projects the adjusted image onto the projection surface.
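
Adjusting the image against the surface's reference points is, in the simplest planar case, a perspective pre-warp. A minimal sketch using OpenCV, assuming four surface reference points expressed in projector pixel coordinates; names are illustrative:

```python
import cv2
import numpy as np

def warp_for_surface(image, surface_corners_px):
    """Pre-warp the image so it lands undistorted on a planar projection
    surface. surface_corners_px: the four surface reference points as seen
    from the drone's (adapted) position, in projector pixel coordinates."""
    h, w = image.shape[:2]
    src = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
    H = cv2.getPerspectiveTransform(src, np.float32(surface_corners_px))
    return cv2.warpPerspective(image, H, (w, h))
```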

IPC Classes

  • G03B 21/14 - Projectors or projection-type viewers; Accessories therefor - Details
  • G06F 9/30 - Arrangements for executing machine instructions, e.g. instruction decode
  • G06T 7/194 - Segmentation; Edge detection involving foreground-background segmentation

32.

ANIMATED EXPRESSIVE ICON

      
Application Number 18541369
Status Pending
Filing Date 2023-12-15
First Publication Date 2024-04-11
Owner Snap Inc. (USA)
Inventor
  • Al Majid, Newar Husam
  • Boyd, Nathan Kenneth
  • Chang, Sheldon
  • Samaranayake, Chamal
  • Voss, Jeremy

Abstract

Embodiments described herein include an expressive icon system to present an animated graphical icon, wherein the animated graphical icon is generated by capturing facial tracking data at a client device. In some embodiments, the system may track and capture facial tracking data of a user via a camera associated with a client device (e.g., a front facing camera, or a paired camera), and process the facial tracking data to animate a graphical icon.

IPC Classes

  • G06T 13/40 - 3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings
  • G06F 3/04817 - Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance using icons
  • G06V 40/16 - Human faces, e.g. facial parts, sketches or expressions
  • H04L 67/306 - User profiles
  • H04M 1/72427 - User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality for supporting games or graphical animations
  • H04M 1/7243 - User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality with interactive means for internal management of messages
  • H04M 1/72469 - User interfaces specially adapted for cordless or mobile telephones for operating the device by selecting functions from two or more displayed items, e.g. menus or icons

33.

EMOTION RECOGNITION FOR WORKFORCE ANALYTICS

      
Application Number 18541970
Status Pending
Filing Date 2023-12-15
First Publication Date 2024-04-11
Owner Snap Inc. (USA)
Inventor
  • Shaburov, Victor
  • Monastyrshyn, Yurii

Abstract

Methods and systems for videoconferencing include generating work quality metrics based on emotion recognition of an individual such as a call center agent. The work quality metrics allow for workforce optimization. One example method includes the steps of receiving a video including a sequence of images, detecting an individual in one or more of the images, locating feature reference points of the individual, aligning a virtual face mesh to the individual in one or more of the images based at least in part on the feature reference points, dynamically determining over the sequence of images at least one deformation of the virtual face mesh, determining that the at least one deformation refers to at least one facial emotion selected from a plurality of reference facial emotions, and generating quality metrics including at least one work quality parameter associated with the individual based on the at least one facial emotion.

IPC Classes

  • G06Q 10/0639 - Performance analysis of employees; Performance analysis of enterprise or organisation operations
  • G06V 40/16 - Human faces, e.g. facial parts, sketches or expressions
  • H04N 21/4402 - Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to MPEG-4 scene graphs involving reformatting operations of video signals for household redistribution, storage or real-time display

34.

REAL-TIME MACHINE LEARNING BASED IN-PAINTING

      
Application Number US2023033907
Publication Number 2024/076486
Status In Force
Filing Date 2023-09-27
Publication Date 2024-04-11
Owner SNAP INC. (USA)
Inventor
  • Dudovitch, Gal
  • Harel, Peleg
  • Mishin Shuvi, Ma'Ayan
  • Sasson, Gal
  • Zohar, Matan

Abstract

Aspects of the present disclosure involve a system for performing real-time in-painting using machine learning techniques. The system receives a video that includes a depiction of a real-world object in a real-world environment. The system accesses a segmentation associated with the real-world object and removes a depiction of the real-world object from a region of a first frame of the video. The system processes, by a machine learning model, the first frame and one or more previous frames of the video that precede the first frame to generate a new frame in which portions of the first frame have been blended into the region from which the depiction of the real-world object has been removed.
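
A crude stand-in for the learned blending is to fill the masked region with a weighted average of the preceding frames; this conveys the data flow even though the filing uses a machine learning model. A sketch, with invented names:

```python
import numpy as np

def blend_inpaint(frame, mask, prev_frames, weights=None):
    """Naive temporal fill standing in for the learned model: pixels inside
    the removal mask are replaced by a weighted blend of earlier frames.

    frame: (H, W, C) current frame; mask: (H, W) boolean removal region;
    prev_frames: list of (H, W, C) frames preceding the current one.
    """
    prev = np.stack(prev_frames).astype(np.float32)
    if weights is None:
        weights = np.full(len(prev_frames), 1.0 / len(prev_frames))
    filled = np.tensordot(weights, prev, axes=1)   # weighted average frame
    out = frame.astype(np.float32).copy()
    out[mask] = filled[mask]                       # blend into masked region
    return out.astype(frame.dtype)
```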

IPC Classes

  • G06T 5/50 - Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
  • G06T 5/60 - using machine learning, e.g. neural networks
  • G06T 5/77 - Retouching; Inpainting; Scratch removal

35.

EXTERNAL SCREEN STREAMING FOR AN EYEWEAR DEVICE

      
Application Number US2023034437
Publication Number 2024/076613
Status In Force
Filing Date 2023-10-04
Publication Date 2024-04-11
Owner SNAP INC. (USA)
Inventor
  • Canberk, Ilteris Kaan
  • Jung, Bernhard
  • Kang, Shin Hwun
  • Skrypnyk, Daria
  • Sun, Tianyi
  • Tran, Lien Le Hong

Abstract

Systems and methods are provided for performing operations on an augmented reality (AR) device using an external screen streaming system. The system establishes, by one or more processors of an AR device, a communication with an external client device. The system causes overlay of, by the AR device, a first AR object on a real-world environment being viewed using the AR device. The system receives, by the AR device, a first image from the external client device. The system, in response to receiving the first image from the external client device, overlays the first image on the first AR object by the AR device.

IPC Classes

  • G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer

36.

GENERATING USER INTERFACES IN AUGMENTED REALITY ENVIRONMENTS

      
Application Number US2023075829
Publication Number 2024/076986
Status In Force
Filing Date 2023-10-03
Publication Date 2024-04-11
Owner SNAP INC. (USA)
Inventor
  • Kang, Shin Hwun
  • Tran, Lien Le Hong

Abstract

An augmented reality (AR) content system is provided. The AR content system may analyze audio input obtained from a user to generate a search request. The AR content system may obtain search results in response to the search request and determine a layout by which to display the search results. The search results may be displayed in a user interface within an AR environment according to the layout. The AR content system may also analyze audio input to detect commands to perform with respect to content displayed in the user interface.

IPC Classes

  • G06F 3/16 - Sound input; Sound output
  • G02B 27/00 - Optical systems or apparatus not provided for by any of the groups G06B 1/00 – G02B 26/00, G02B 30/00
  • G02B 27/01 - Head-up displays
  • G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer

37.

GENERATING USER INTERFACES IN AUGMENTED REALITY ENVIRONMENTS

      
Application Number 17959985
Status Pending
Filing Date 2022-10-04
First Publication Date 2024-04-04
Owner Snap Inc. (USA)
Inventor
  • Kang, Shin Hwun
  • Tran, Lien Le Hong

Abstract

An augmented reality (AR) content system is provided. The AR content system may analyze audio input obtained from a user to generate a search request. The AR content system may obtain search results in response to the search request and determine a layout by which to display the search results. The search results may be displayed in a user interface within an AR environment according to the layout. The AR content system may also analyze audio input to detect commands to perform with respect to content displayed in the user interface.

IPC Classes

  • G06T 11/60 - Editing figures and text; Combining figures or text
  • G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
  • G06F 3/0482 - Interaction with lists of selectable items, e.g. menus
  • G06T 7/70 - Determining position or orientation of objects or cameras

38.

PERSONAL VEHICLE LINEAR SENSOR DATA FOR AR ODOMETRY SENSOR FUSION

      
Application Number 18536920
Status Pending
Filing Date 2023-12-12
First Publication Date 2024-04-04
Owner Snap Inc. (USA)
Inventor
  • Brown, Edmund Graves
  • Lucas, Benjamin
  • Rodriguez II, Jonathan M.
  • Zhuang, Richard

Abstract

A method of providing an interactive personal mobility system, performed by one or more processors, comprises determining an initial pose by visual-inertial odometry performed on images and inertial measurement unit (IMU) data generated by a wearable augmented reality device. Sensor data transmitted from a personal mobility system is received, and sensor fusion is performed on the data received from the personal mobility system to provide an updated pose. Augmented reality effects are displayed on the wearable augmented reality device based on the updated pose.
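
One simple way to picture the fusion is a complementary filter: keep the high-rate visual-inertial velocity direction but pull its magnitude toward the vehicle's wheel-encoder speed. This is an illustrative simplification, not the claimed algorithm:

```python
import numpy as np

def fuse_speed(vio_velocity, wheel_speed, gain=0.98):
    """Complementary-filter stand-in for the claimed sensor fusion: trust
    the visual-inertial velocity at high rate, and pull its magnitude toward
    the personal mobility system's wheel-encoder speed to suppress drift."""
    v = np.asarray(vio_velocity, dtype=np.float64)
    speed = np.linalg.norm(v)
    if speed < 1e-6:
        return v
    fused_speed = gain * speed + (1 - gain) * wheel_speed
    return v * (fused_speed / speed)   # rescale magnitude, keep VIO direction
```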

IPC Classes

  • G06T 19/00 - Manipulating 3D models or images for computer graphics
  • B60L 3/12 - Recording operating variables
  • B60L 15/20 - Methods, circuits or devices for controlling the propulsion of electrically-propelled vehicles, e.g. their traction-motor speed, to achieve a desired performance; Adaptation of control equipment on electrically-propelled vehicles for remote actuation from a stationary place, from alternative parts of the vehicle or from alternative vehicles of the same vehicle train for control of the vehicle or its driving motor to achieve a desired performance, e.g. speed, torque, programmed variation of speed
  • G01C 22/02 - Measuring distance traversed on the ground by vehicles, persons, animals or other moving solid bodies, e.g. using odometers or using pedometers by conversion into electric waveforms and subsequent integration, e.g. using tachometer generator
  • G02B 27/01 - Head-up displays

39.

3D GARMENT GENERATION FROM 2D SCRIBBLE IMAGES

      
Application Number US2023029082
Publication Number 2024/072550
Status In Force
Filing Date 2023-07-31
Publication Date 2024-04-04
Owner SNAP INC. (USA)
Inventor
  • Achlioptas, Panagiotis
  • Chai, Menglei
  • Lee, Hsin-Ying
  • Olszewski, Kyle
  • Ren, Jian
  • Tulyakov, Sergey

Abstract

A system and method are described for generating 3D garments from two-dimensional (2D) scribble images drawn by users. The system includes a conditional 2D generator, a conditional 3D generator, and two intermediate media including dimension-coupling color-density pairs and flat point clouds that bridge the gap between dimensions. Given a scribble image, the 2D generator synthesizes dimension-coupling color-density pairs including the RGB projection and density map from the front and rear views of the scribble image. A density-aware sampling algorithm converts the 2D dimension-coupling color-density pairs into a 3D flat point cloud representation, where the depth information is ignored. The 3D generator predicts the depth information from the flat point cloud. Dynamic variations per garment due to deformations resulting from a wearer's pose as well as irregular wrinkles and folds may be bypassed by taking advantage of 2D generative models to bridge the dimension gap in a non-parametric way.
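
The density-aware sampling step can be pictured as drawing pixel locations with probability proportional to the density map and attaching the RGB projection's colors, leaving depth flat for the 3D generator to predict. A toy version, with invented names:

```python
import numpy as np

def density_aware_sample(density, rgb, n_points=20000, rng=None):
    """Convert a color-density pair into a flat point cloud: pixel locations
    are drawn with probability proportional to the density map, and each
    sampled point carries the RGB projection's color (depth left at zero)."""
    rng = rng or np.random.default_rng()
    h, w = density.shape
    p = density.ravel() / density.sum()
    idx = rng.choice(h * w, size=n_points, p=p)
    ys, xs = np.unravel_index(idx, (h, w))
    xyz = np.stack([xs, ys, np.zeros_like(xs)], axis=1).astype(np.float32)
    colors = rgb[ys, xs]
    return xyz, colors
```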

IPC Classes  ?

40.

MIXED REALITY MEDIA CONTENT

      
Application Number US2023075529
Publication Number 2024/073675
Status In Force
Filing Date 2023-09-29
Publication Date 2024-04-04
Owner SNAP INC. (USA)
Inventor
  • Moll, Sharon
  • Gurgul, Piotr
  • Zhang, Dawei

Abstract

A mixed-reality media content system may be configured to perform operations that include: causing display of image data at a client device, the image data comprising a depiction of an object that includes a graphical code at a position upon the object; detecting the graphical code at the position upon the depiction of the object based on the image data; accessing media content within a media repository based on the graphical code scanned by the client device; and causing display of a presentation of the media content at the position of the graphical code upon the depiction of the object at the client device.

IPC Classes  ?

  • G06Q 50/10 - Services
  • G06T 19/00 - Manipulating 3D models or images for computer graphics
  • G06T 19/20 - Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
  • G06K 19/06 - Record carriers for use with machines and with at least a part designed to carry digital markings characterised by the kind of the digital marking, e.g. shape, nature, code
  • G06F 3/0481 - Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance

41.

REAL-TIME MACHINE LEARNING BASED IN-PAINTING

      
Application Number 17937734
Status Pending
Filing Date 2022-10-03
First Publication Date 2024-04-04
Owner Snap Inc. (USA)
Inventor
  • Dudovitch, Gal
  • Harel, Peleg
  • Mishin Shuvi, Ma'Ayan
  • Sasson, Gal
  • Zohar, Matan

Abstract

Aspects of the present disclosure involve a system for performing real-time in-painting using machine learning techniques. The system receives a video that includes a depiction of a real-world object in a real-world environment. The system accesses a segmentation associated with the real-world object and removes a depiction of the real-world object from a region of a first frame of the video. The system processes, by a machine learning model, the first frame and one or more previous frames of the video that precede the first frame to generate a new frame in which portions of the first frame have been blended into the region from which the depiction of the real-world object has been removed.
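
A hedged sketch of the final compositing step, blending model output into the region vacated by the removed object. In the described system the model itself is conditioned on previous frames for temporal consistency, which this illustration omits:

    import numpy as np

    def blend_inpainted_region(frame: np.ndarray, model_output: np.ndarray,
                               mask: np.ndarray) -> np.ndarray:
        """Composite a model's in-painted pixels into the masked region.

        frame, model_output: (H, W, 3) float images.
        mask: (H, W) array, 1.0 where the removed object used to be.
        """
        mask3 = mask[..., None]  # broadcast over the color channels
        # Keep original pixels outside the mask; use the model's inside.
        return frame * (1 - mask3) + model_output * mask3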

IPC Classes  ?

  • G06T 5/00 - Image enhancement or restoration
  • G06T 5/50 - Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
  • G06T 7/11 - Region-based segmentation
  • G06T 7/20 - Analysis of motion
  • G06T 11/00 - 2D [Two Dimensional] image generation

42.

AR SYSTEM BENDING CORRECTION

      
Application Number 17937950
Status Pending
Filing Date 2022-10-04
First Publication Date 2024-04-04
Owner Snap Inc. (USA)
Inventor
  • Kalkgruber, Matthias
  • Pereira Torres, Tiago Miguel
  • Welge, Weston
  • Zahreddine, Ramzi

Abstract

A system for deformation or bending correction in an Augmented Reality (AR) system. Sensors are positioned in a frame of a head-worn AR system to sense forces or pressure acting on the frame by temple pieces attached to the frame. The sensed forces or pressure are used in conjunction with a model of the frame to determine a corrected model of the frame. The corrected model is used to correct video data captured by the AR system and to correct a video virtual overlay that is provided to a user wearing the head-worn AR system.

IPC Classes  ?

  • G02B 27/01 - Head-up displays
  • G02B 27/00 - Optical systems or apparatus not provided for by any of the groups
  • G06T 19/00 - Manipulating 3D models or images for computer graphics

43.

Prohibited Content Propagation using a Social Network Data Structure

      
Application Number 18540657
Status Pending
Filing Date 2023-12-14
First Publication Date 2024-04-04
Owner Snap Inc. (USA)
Inventor
  • Hammer, Mette F.M.
  • Harpur, Liam
  • Jaquinta, Joseph M.
  • Nurmenkari, Pauli P.O.

Abstract

A method for prohibiting email content propagation receives, at a server, an email message. At the server, at least one email address associated with the email message which is designated not to receive the content of the email message is identified. At the server, the email message is modified by selectively removing the content of the email message to be conveyed to the at least one email address. The server conveys the modified email message to the at least one email address. The server conveys the email message to one or more recipient email addresses except the at least one email address. Consequently, the server has sent a submitted message to multiple email addresses while modifying the content sent to a subset of those addresses.
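
As illustration only, a minimal Python sketch of the routing idea using the standard library's email.message; the helper name and the placeholder body text are invented:

    from email.message import EmailMessage

    def split_deliveries(message: EmailMessage, recipients: list[str],
                         blocked: set[str]) -> dict[str, EmailMessage]:
        """Map each recipient to the variant it should receive: the full
        message, or a copy with the content selectively removed."""
        skip = ("content-type", "content-transfer-encoding", "mime-version")
        redacted = EmailMessage()
        for header, value in message.items():
            if header.lower() not in skip:
                redacted[header] = value
        redacted.set_content("[Content withheld for this recipient.]")
        return {addr: redacted if addr in blocked else message
                for addr in recipients}

    msg = EmailMessage()
    msg["Subject"] = "Quarterly update"
    msg.set_content("Confidential details...")
    out = split_deliveries(msg, ["a@example.com", "b@example.com"],
                           {"b@example.com"})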

IPC Classes  ?

  • H04L 51/212 - Monitoring or handling of messages using filtering or selective blocking
  • G06F 3/04847 - Interaction techniques to control parameter settings, e.g. interaction with sliders or dials
  • G06Q 10/107 - Computer-aided management of electronic mailing [e-mailing]
  • G06Q 50/00 - Systems or methods specially adapted for specific business sectors, e.g. utilities or tourism
  • H04L 51/214 - Monitoring or handling of messages using selective forwarding
  • H04L 51/48 - Message addressing, e.g. address format or anonymous messages, aliases
  • H04L 51/52 - User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail for supporting social networking services

44.

9-DOF OBJECT TRACKING

      
Application Number US2023033853
Publication Number 2024/072885
Status In Force
Filing Date 2023-09-27
Publication Date 2024-04-04
Owner SNAP INC. (USA)
Inventor
  • Berger, Itamar
  • Dudovitch, Gal
  • Harel, Peleg
  • Mishin Shuvi, Ma'Ayan

Abstract

Aspects of the present disclosure involve a system for presenting AR items. The system receives a video that includes a depiction of a real-world object in a real-world environment. The system generates a three-dimensional (3D) bounding box for the real-world object and stabilizes the 3D bounding box based on one or more sensors of the device. The system determines a position, orientation, and dimensions of the real-world object based on the stabilized 3D bounding box and renders a display of an augmented reality (AR) item within the video based on the position, orientation, and dimensions of the real-world object.
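
The title's nine degrees of freedom are three each for position, orientation, and dimensions. A hedged sketch of the per-frame stabilization step, substituting simple exponential smoothing for the unspecified sensor-based method:

    import numpy as np
    from dataclasses import dataclass

    @dataclass
    class Box9DoF:
        """Nine degrees of freedom: 3 position, 3 orientation (Euler), 3 size."""
        position: np.ndarray
        orientation: np.ndarray
        dimensions: np.ndarray

    def stabilize(prev: Box9DoF, new: Box9DoF, smoothing: float = 0.7) -> Box9DoF:
        """Exponentially smooth per-frame box estimates to reduce jitter;
        a stand-in for the sensor-based stabilization in the abstract."""
        lerp = lambda a, b: smoothing * a + (1 - smoothing) * b
        return Box9DoF(lerp(prev.position, new.position),
                       lerp(prev.orientation, new.orientation),
                       lerp(prev.dimensions, new.dimensions))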

IPC Classes  ?

  • G06T 7/73 - Determining position or orientation of objects or cameras using feature-based methods

45.

UAV with manual flight mode selector

      
Application Number 16833526
Grant Number 11945579
Status In Force
Filing Date 2020-03-28
First Publication Date 2024-04-02
Grant Date 2024-04-02
Owner Snap Inc. (USA)
Inventor
  • Nielsen, Simon
  • Patton, Russell Douglas

Abstract

A UAV having a manual gimbal including a camera, and a flight mode selector configured both to select a flight mode and to manually establish a camera position as a function of the selected flight mode. A controller responds to a position of the gimbal or selector to establish the flight mode. The flight mode is selected from several available modes, for example, a horizontal flight mode, a 45-degree flight mode, and a vertical (aerial) flight mode. The flight mode selector is mechanically coupled to the gimbal and establishes a pitch angle of the gimbal, and thus the angle of the camera attached to the gimbal.
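
A minimal sketch of the selector-to-mode mapping; the pitch thresholds are invented, since the abstract only states that the mechanically coupled selector establishes both the camera pitch and the flight mode together:

    def flight_mode_from_gimbal(pitch_deg: float) -> str:
        """Map the mechanically coupled gimbal pitch angle to a flight
        mode. Thresholds are illustrative assumptions only."""
        if pitch_deg < 22.5:
            return "horizontal"
        if pitch_deg < 67.5:
            return "45-degree"
        return "vertical"

    assert flight_mode_from_gimbal(50.0) == "45-degree"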

IPC Classes  ?

  • B64C 39/02 - Aircraft not otherwise provided for characterised by special use
  • B64C 19/00 - Aircraft control not otherwise provided for
  • B64U 101/30 - UAVs specially adapted for particular uses or applications for imaging, photography or videography
  • G05D 1/00 - Control of position, course, altitude, or attitude of land, water, air, or space vehicles, e.g. automatic pilot
  • G05D 1/08 - Control of attitude, i.e. control of roll, pitch, or yaw
  • G06F 3/0362 - Pointing devices displaced or positioned by the user; Accessories therefor with detection of 1D translations or rotations of an operating part of the device, e.g. scroll wheels, sliders, knobs, rollers or belts
  • B64U 50/14 - Propulsion using external fans or propellers ducted or shrouded

46.

Mixed reality media content

      
Application Number 17956603
Grant Number 11949969
Status In Force
Filing Date 2022-09-29
First Publication Date 2024-04-02
Grant Date 2024-04-02
Owner Snap Inc. (USA)
Inventor
  • Moll, Sharon
  • Gurgul, Piotr
  • Zhang, Dawei

Abstract

A mixed-reality media content system may be configured to perform operations that include: causing display of image data at a client device, the image data comprising a depiction of an object that includes a graphical code at a position upon the object; detecting the graphical code at the position upon the depiction of the object based on the image data; accessing media content within a media repository based on the graphical code scanned by the client device; and causing display of a presentation of the media content at the position of the graphical code upon the depiction of the object at the client device.

IPC Classes  ?

  • H04N 21/8545 - Content authoring for generating interactive applications
  • G06F 3/04817 - Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance using icons
  • G06K 19/06 - Record carriers for use with machines and with at least a part designed to carry digital markings characterised by the kind of the digital marking, e.g. shape, nature, code

47.

Privacy-preserving multi-touch attribution

      
Application Number 17477383
Grant Number 11949778
Status In Force
Filing Date 2021-09-16
First Publication Date 2024-04-02
Grant Date 2024-04-02
Owner Snap Inc. (USA)
Inventor
  • Chopra, Samarth
  • Datta, Amit
  • Deshpande, Apoorvaa

Abstract

Systems and methods herein describe privacy-preserving multi-touch attribution. The described systems access a plurality of impression events and a plurality of conversion events, each associated with a user identifier. For each impression event and each conversion event, the described systems generate a hashed user identifier based on the associated user identifier, initiate a key agreement protocol comprising a key, generate an encrypted identifier by encrypting the hashed user identifier with the key, and store the encrypted identifier.
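
A hedged sketch of the hash-then-encrypt step. HMAC under a shared key stands in for the unspecified encryption, and secrets.token_bytes stands in for the key agreement protocol; both substitutions are assumptions, but they show how matching ciphertexts can be joined without exposing raw identifiers:

    import hashlib
    import hmac
    import secrets

    def encrypt_identifier(user_id: str, shared_key: bytes) -> bytes:
        """Hash a user identifier, then produce a keyed form so that raw
        identifiers never leave the party that holds them."""
        hashed = hashlib.sha256(user_id.encode()).digest()
        # Deterministic keyed MAC: equal outputs reveal a match between
        # an impression and a conversion without revealing the identifier.
        return hmac.new(shared_key, hashed, hashlib.sha256).digest()

    key = secrets.token_bytes(32)  # stand-in for the key agreement protocol
    imp = encrypt_identifier("user-123", key)   # from an impression event
    conv = encrypt_identifier("user-123", key)  # from a conversion event
    assert imp == conv  # same user can be joined for attribution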

IPC Classes  ?

  • G06F 21/00 - Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
  • H04L 9/08 - Key distribution
  • H04L 9/30 - Public key, i.e. encryption algorithm being computationally infeasible to invert and users' encryption keys not requiring secrecy
  • H04L 9/32 - Arrangements for secret or secure communications; Network security protocols including means for verifying the identity or authority of a user of the system
  • H04L 29/06 - Communication control; Communication processing characterised by a protocol

48.

DYNAMICALLY ASSIGNING PARTICIPANT VIDEO FEEDS WITHIN VIRTUAL CONFERENCING SYSTEM

      
Application Number 17948508
Status Pending
Filing Date 2022-09-20
First Publication Date 2024-03-28
Owner Snap Inc. (USA)
Inventor
  • Cho, Emily
  • Lin, Andrew Cheng-Min

Abstract

Aspects of the present disclosure involve a system comprising a computer-readable storage medium storing a program and method for dynamically assigning participant video feeds within a virtual conferencing system. The program and method provide, in association with designing a virtual space for virtual conferencing, an interface for configuring a set of rooms, each room being associated with a different number of participant video elements assignable to respective participant video feeds; receive, via the interface, an indication of user input for setting properties for the set of rooms; determine, in association with virtual conferencing, a first number of participants for a room; select a first room corresponding to the first number of participants; provide display of the first room; and assign, for each of the first number of participants, a participant video feed corresponding to the participant with a respective participant video element in the first room.
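
The room-selection step lends itself to a compact illustration. A hedged sketch, assuming rooms are keyed by name to their configured number of participant video elements; the names and the smallest-room-that-fits policy are invented:

    def select_room(rooms: dict[str, int], participant_count: int) -> str:
        """Pick the configured room whose participant-video-element count
        best fits the participant count (smallest room that fits)."""
        fitting = {name: slots for name, slots in rooms.items()
                   if slots >= participant_count}
        if not fitting:
            return max(rooms, key=rooms.get)  # fall back to the largest room
        return min(fitting, key=fitting.get)

    rooms = {"huddle": 4, "classroom": 12, "town-hall": 50}
    assert select_room(rooms, 9) == "classroom"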

IPC Classes  ?

  • G06F 3/04815 - Interaction with a metaphor-based environment or interaction object displayed as three-dimensional, e.g. changing the user viewpoint with respect to the environment or object
  • G06F 3/0482 - Interaction with lists of selectable items, e.g. menus
  • G06F 3/0484 - Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range

49.

OPACITY CONTROL OF AUGMENTED REALITY DEVICES

      
Application Number 17950923
Status Pending
Filing Date 2022-09-22
First Publication Date 2024-03-28
Owner Snap Inc. (USA)
Inventor Zare Seisan, Farid

Abstract

An augmented reality (AR) eyewear device has a lens system which includes an optical screening mechanism that enables switching the lens system between a conventional see-through state and an opaque state in which the lens system screens or functionally blocks out the wearer's view of the external environment. Such a screening mechanism allows for expanded use cases of the AR glasses compared to conventional devices, e.g.: as a sleep mask; to view displayed content like movies or sports events against a visually non-distracting background instead of against the external environment; and/or to enable VR functionality.

IPC Classes  ?

  • G02B 27/01 - Head-up displays
  • G06T 19/00 - Manipulating 3D models or images for computer graphics
  • G06T 19/20 - Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
  • G06V 10/60 - Extraction of image or video features relating to illumination properties, e.g. using a reflectance or lighting model
  • G06V 10/70 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning

50.

STEERABLE CAMERA FOR AR HAND TRACKING

      
Application Number 18357607
Status Pending
Filing Date 2023-07-24
First Publication Date 2024-03-28
Owner Snap Inc. (USA)
Inventor
  • Colascione, Daniel
  • Simons, Patrick Timothy Mcsweeney
  • Welge, Weston
  • Zahreddine, Ramzi

Abstract

A system for hand tracking for an Augmented Reality (AR) system. The AR system uses a camera of the AR system to capture tracking video frame data of a hand of a user of the AR system. The AR system generates a skeletal model based on the tracking video frame data and determines a location of the hand of the user based on the skeletal model. The AR system causes a steerable camera of the AR system to focus on the hand of the user.

IPC Classes  ?

  • G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
  • G02B 27/01 - Head-up displays
  • G06T 19/00 - Manipulating 3D models or images for computer graphics

51.

AUGMENTED EXPRESSION SYSTEM

      
Application Number 18528098
Status Pending
Filing Date 2023-12-04
First Publication Date 2024-03-28
Owner Snap Inc. (USA)
Inventor
  • Cao, Chen
  • Gao, Yang
  • Xue, Zehao

Abstract

Embodiments described herein relate to an augmented expression system to generate and cause display of a specially configured interface to present an augmented reality perspective. The augmented expression system receives image and video data of a user and tracks facial landmarks of the user based on the image and video data in real time, to generate and present a three-dimensional (3D) Bitmoji of the user.

IPC Classes  ?

  • G06T 13/20 - 3D [Three Dimensional] animation
  • G06T 7/73 - Determining position or orientation of objects or cameras using feature-based methods
  • G06T 13/40 - 3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings
  • G06T 19/00 - Manipulating 3D models or images for computer graphics
  • G06T 19/20 - Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
  • G06V 40/16 - Human faces, e.g. facial parts, sketches or expressions

52.

SOFTWARE APPLICATION MANAGER FOR MESSAGING APPLICATIONS

      
Application Number 18528429
Status Pending
Filing Date 2023-12-04
First Publication Date 2024-03-28
Owner Snap Inc. (USA)
Inventor
  • Eirinberg, Dylan Shane
  • Son, Aaron Daniel
  • Wu, William

Abstract

Among other things, embodiments of the present disclosure improve the functionality of electronic messaging systems by enabling users in an electronic chat conversation to run applications together. In some embodiments, when one user in a chat launches an application, an icon or other visual representation of the application appears in a portion of the chat window (e.g., in a “chat dock”) for other users in the chat to access.

IPC Classes  ?

  • H04L 65/401 - Support for services or applications wherein the services involve a main real-time session and one or more additional parallel real-time or time sensitive sessions, e.g. white board sharing or spawning of a subconference
  • H04L 9/40 - Network security protocols
  • H04L 51/04 - Real-time or near real-time messaging, e.g. instant messaging [IM]
  • H04L 51/046 - Interoperability with other network applications or services
  • H04L 51/18 - Commands or executable codes
  • H04L 65/1089 - In-session procedures by removing media
  • H04L 65/403 - Arrangements for multi-party communication, e.g. for conferences
  • H04L 67/00 - Network arrangements or protocols for supporting network services or applications
  • H04L 67/1095 - Replication or mirroring of data, e.g. scheduling or transport for data synchronisation between network nodes

53.

AUGMENTED REALITY SPATIAL AUDIO EXPERIENCE

      
Application Number 18532679
Status Pending
Filing Date 2023-12-07
First Publication Date 2024-03-28
Owner Snap Inc. (USA)
Inventor
  • Canberk, Ilteris
  • Kang, Shin Hwun

Abstract

Devices, media, and methods are presented for an immersive augmented reality (AR) experience using an eyewear device with spatial audio. The eyewear device has a processor, a memory, an image sensor, and a speaker system. The eyewear device captures image information for an environment surrounding the device and identifies an object location within the same environment. The eyewear device then associates a virtual object with the identified object location. The eyewear device monitors the position of the device with respect to the virtual object and presents audio signals to alert the user that the identified object is in the environment.

IPC Classes  ?

  • H04S 7/00 - Indicating arrangements; Control arrangements, e.g. balance control
  • G02B 27/01 - Head-up displays

54.

FACIAL SYNTHESIS IN AUGMENTED REALITY CONTENT FOR ONLINE COMMUNITIES

      
Application Number 18534258
Status Pending
Filing Date 2023-12-08
First Publication Date 2024-03-28
Owner Snap Inc. (USA)
Inventor
  • Golobokov, Roman
  • Marinenko, Alexandr
  • Mashrabov, Aleksandr
  • Bromot, Aleksei
  • Tkachenko, Grigoriy

Abstract

The subject technology captures first image data by a computing device, the first image data comprising a target face of a target actor and facial expressions of the target actor, the facial expressions including lip movements. The subject technology generates, based at least in part on frames of a source media content, sets of source pose parameters. The subject technology receives a selection of a particular facial expression from a set of facial expressions. The subject technology generates, based at least in part on sets of source pose parameters and the selection of the particular facial expression, an output media content. The subject technology provides augmented reality content based at least in part on the output media content for display on the computing device.

IPC Classes  ?

  • G06V 40/16 - Human faces, e.g. facial parts, sketches or expressions
  • G06T 19/00 - Manipulating 3D models or images for computer graphics

55.

PROVIDING BOT PARTICIPANTS WITHIN A VIRTUAL CONFERENCING SYSTEM

      
Application Number 18534341
Status Pending
Filing Date 2023-12-08
First Publication Date 2024-03-28
Owner Snap Inc. (USA)
Inventor
  • Lin, Andrew Cheng-Min
  • Lin, Walton

Abstract

Aspects of the present disclosure involve a system comprising a computer-readable storage medium storing a program and method for providing bot participants for virtual conferencing. The program and method provide, in association with designing a virtual space, a first interface for configuring plural participant video elements, each being assignable to a respective participant; receive, via the first interface, an indication of user input for setting first properties for the plural participant video elements; provide a second interface for configuring a bot participant for simulating an actual participant in association with a participant video element of the plural participant video elements; receive, via the second interface, an indication of second user input for setting second properties for the bot participant; and provide, in association with designing the virtual space, display of the virtual space based on the first and second properties, the bot participant being assigned to the participant video element.

IPC Classes  ?

  • G06F 3/04847 - Interaction techniques to control parameter settings, e.g. interaction with sliders or dials
  • G06F 3/04815 - Interaction with a metaphor-based environment or interaction object displayed as three-dimensional, e.g. changing the user viewpoint with respect to the environment or object
  • G06F 3/16 - Sound input; Sound output
  • G06T 5/00 - Image enhancement or restoration
  • G06T 5/20 - Image enhancement or restoration by the use of local operators
  • H04L 51/02 - User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail using automatic reactions or user delegation, e.g. automatic replies or chatbot-generated messages
  • H04L 65/403 - Arrangements for multi-party communication, e.g. for conferences

56.

MENU HIERARCHY NAVIGATION ON ELECTRONIC MIRRORING DEVICES

      
Application Number 18535771
Status Pending
Filing Date 2023-12-11
First Publication Date 2024-03-28
Owner Snap Inc. (USA)
Inventor
  • Eirinberg, Dylan Shane
  • Goodrich, Kyle
  • Mcphee, Andrew James
  • Moreno, Daniel

Abstract

Systems and methods are provided for performing operations comprising: capturing, by an electronic mirroring device, a video feed received from a camera of the electronic mirroring device, the video feed depicting a user; displaying, by one or more processors of the electronic mirroring device, one or more menu options on the video feed that depicts the user, the one or more menu options relating to a first level in a hierarchy of levels; detecting a gesture performed by the user in the video feed; and in response to detecting the gesture, displaying a set of options related to a given option of the one or more menu options, the set of options relating to a second level in the hierarchy of levels.

IPC Classes  ?

  • G06V 40/20 - Movements or behaviour, e.g. gesture recognition
  • G02B 27/01 - Head-up displays
  • G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
  • G06F 3/0482 - Interaction with lists of selectable items, e.g. menus
  • G06F 3/14 - Digital output to display device

57.

REMOTELESS CONTROL OF DRONE BEHAVIOR

      
Application Number 18536951
Status Pending
Filing Date 2023-12-12
First Publication Date 2024-03-28
Owner Snap Inc. (USA)
Inventor
  • Meisenholder, David
  • Horowitz, Steven

Abstract

A drone system is configured to capture an audio stream that includes voice commands from an operator, to process the audio stream for identification of the voice commands, and to perform operations based on the identified voice commands. The drone system can identify a particular voice stream in the audio stream as an operator voice, and perform the command recognition with respect to the operator voice to the exclusion of other voice streams present in the audio stream. The drone can include a directional camera that is automatically and continuously focused on the operator to capture a video stream usable in disambiguation of different voice streams captured by the drone.

IPC Classes  ?

  • G05D 1/12 - Target-seeking control
  • G05D 1/00 - Control of position, course, altitude, or attitude of land, water, air, or space vehicles, e.g. automatic pilot
  • G05D 1/10 - Simultaneous control of position or course in three dimensions
  • G06T 7/246 - Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
  • G06T 7/73 - Determining position or orientation of objects or cameras using feature-based methods
  • G10L 15/22 - Procedures used during a speech recognition process, e.g. man-machine dialog
  • G10L 15/24 - Speech recognition using non-acoustical features
  • G10L 15/25 - Speech recognition using non-acoustical features using position of the lips, movement of the lips or face analysis
  • G10L 17/00 - Speaker identification or verification

58.

AR GLASSES AS IOT REMOTE CONTROL

      
Application Number US2023029076
Publication Number 2024/063865
Status In Force
Filing Date 2023-07-31
Publication Date 2024-03-28
Owner SNAP INC. (USA)
Inventor
  • Moll, Sharon
  • Gurgul, Piotr

Abstract

AR-enabled wearable electronic devices such as smart glasses are adapted for use as an Internet of Things (IoT) remote control device where the user can control a pointer on a television screen, computer screen, or other IoT-enabled device to select items by looking at them and making selections using gestures. Built-in six-degrees-of-freedom (6DoF) tracking capabilities are used to move the pointer on the screen to facilitate navigation. The display screen is tracked in real-world coordinates to determine the point of intersection of the user's view with the screen using raycasting techniques. Hand and head gesture detection are used to allow the user to execute a variety of control actions by performing different gestures. The techniques are particularly useful for smart displays that offer AR-enhanced content that can be viewed in the displays of the AR-enabled wearable electronic devices.
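
The raycasting step reduces to a ray-plane intersection. A minimal sketch; the head position, gaze direction, and screen geometry in the example are invented:

    import numpy as np

    def ray_plane_intersection(origin, direction, plane_point, plane_normal):
        """Where the user's gaze ray meets the display plane, in world
        coordinates. Returns None for a parallel or backward-facing ray."""
        denom = np.dot(direction, plane_normal)
        if abs(denom) < 1e-6:
            return None
        t = np.dot(plane_point - origin, plane_normal) / denom
        return origin + t * direction if t > 0 else None

    hit = ray_plane_intersection(np.array([0., 1.6, 0.]),   # head position
                                 np.array([0., 0., -1.]),   # gaze direction
                                 np.array([0., 1.5, -2.]),  # point on screen
                                 np.array([0., 0., 1.]))    # screen normal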

IPC Classes  ?

  • G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
  • G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
  • G06T 19/00 - Manipulating 3D models or images for computer graphics
  • G02B 27/01 - Head-up displays
  • H04L 67/125 - Protocols specially adapted for proprietary or special-purpose networking environments, e.g. medical networks, sensor networks, networks in vehicles or remote metering networks involving control of end-device applications over a network

59.

MOBILE DEVICE RESOURCE OPTIMIZED KIOSK MODE

      
Application Number US2023029078
Publication Number 2024/063866
Status In Force
Filing Date 2023-07-31
Publication Date 2024-03-28
Owner SNAP INC. (USA)
Inventor
  • Moll, Sharon
  • Wawruch, Pawel
  • Razafindrabe, Neken Aritia Symphonie

Abstract

A resource-optimized kiosk mode that improves the mobile experience for creators and users of mobile devices such as an augmented reality (AR)-enabled wearable eyewear device. An eyewear device enters a kiosk mode by receiving a kiosk mode request for an application and, in response to the request, determining which services and application programming interfaces (APIs) are required to execute the selected application. An identification of the determined services and APIs required to execute the selected application is stored, and the eyewear device is rebooted. After reboot, the selected application is started, and only the identified services and APIs are enabled. To determine which services and APIs are required to execute the selected application, metadata may be associated with the selected application specifying the services and/or APIs that the selected application requires when in operation.
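
A hedged sketch of the metadata-driven service gating; the manifest key required_services and the registry shape are assumptions, not the device's actual format:

    import json
    from typing import Callable

    def required_services(manifest_path: str) -> set[str]:
        """Read the kiosk app's metadata to find the services and APIs it
        declares, so only those are enabled after the reboot."""
        with open(manifest_path) as f:
            return set(json.load(f).get("required_services", []))

    def start_enabled_services(registry: dict[str, Callable[[], None]],
                               enabled: set[str]) -> None:
        # Everything not on the allow-list stays down, saving resources.
        for name, start in registry.items():
            if name in enabled:
                start()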

IPC Classes  ?

60.

VISUAL AND AUDIO WAKE COMMANDS

      
Application Number US2023033063
Publication Number 2024/064094
Status In Force
Filing Date 2023-09-18
Publication Date 2024-03-28
Owner SNAP INC. (USA)
Inventor
  • Colascione, Daniel
  • Hanover, Matthew
  • Korolev, Sergei
  • Marr, Michael David
  • Myers, Scott
  • Powderly, James

Abstract

A gesture-based wake process for an AR system is described herein. The AR system places a hand-tracking input pipeline of the AR system in a suspended mode. A camera component of the hand-tracking input pipeline detects a possible visual wake command being made by a user of the AR system. On the basis of detecting the possible visual wake command, the AR system wakes the hand-tracking input pipeline and places the camera component in a fully operational mode. If the AR system, using the hand-tracking input pipeline, verifies the possible visual wake command as an actual wake command, the AR system initiates execution of an AR application.
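
The two-stage wake flow can be read as a small state machine. A sketch under that reading; the state names and single-step transition function are invented:

    from enum import Enum, auto

    class PipelineState(Enum):
        SUSPENDED = auto()  # low power; camera watches for a possible wake gesture
        VERIFYING = auto()  # pipeline awake, camera fully on, confirming it
        RUNNING = auto()    # wake confirmed; AR application launched

    def step(state: PipelineState, possible_wake: bool,
             verified: bool) -> PipelineState:
        """One transition of the hypothesized wake-gesture state machine."""
        if state is PipelineState.SUSPENDED and possible_wake:
            return PipelineState.VERIFYING
        if state is PipelineState.VERIFYING:
            return PipelineState.RUNNING if verified else PipelineState.SUSPENDED
        return state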

IPC Classes  ?

61.

INTELLIPIX

      
Serial Number 98473333
Status Pending
Filing Date 2024-03-28
Owner Snap Inc.
NICE Classes  ?
  • 09 - Scientific and electric apparatus and instruments
  • 42 - Scientific, technological and industrial services, research and design

Goods & Services

Spatial Light Modulators; Displays, namely, liquid crystal displays and liquid crystal-on-silicon displays; Microdisplays, namely, liquid crystal microdisplays and liquid crystal-on-silicon microdisplays; Emissive displays, namely, OLED (Organic light emitting diode) display panels and OLED microdisplays; Micro Light Emitting Diode displays (microLED displays); Liquid Crystal Devices, namely, liquid crystal displays and liquid crystal microdisplays; Liquid Crystal displays; Display Panels, namely, liquid crystal display panels and microdisplay panels, LED display panels and microdisplay panels; Liquid Crystal Modules, namely, liquid crystal displays; Liquid Crystal-on-Silicon (LCoS) devices, namely, liquid crystal-on-silicon (LCOS) panels and micro panels to project digital images and video; Driver Integrated Circuits; computer hardware, namely, microchips, integrated circuits, semiconductor chips, and circuit boards for modulation of electromagnetic radiation (light) and/or image display; driver software, namely, downloadable and/or recorded computer software for allowing communication with a liquid crystal microdisplay, liquid crystal-on-silicon display, emissive display (an OLED or LED display), or Micro Light Emitting Diode display (microLED display); systems software, namely, downloadable and/or recorded software for managing a liquid crystal microdisplay, liquid crystal-on-silicon display, emissive display (an OLED or LED display), or Micro Light Emitting Diode display (microLED display) via the driver; applications software, namely, downloadable or recorded software for playing of application content on a display and for configuring a liquid crystal microdisplay, liquid crystal-on-silicon display, emissive display (an OLED or LED display), or Micro Light Emitting Diode display (microLED display); control software, namely, downloadable, recorded, and/or embedded software to control a liquid crystal microdisplay, liquid crystal-on-silicon display, emissive display (an OLED or LED display), or Micro Light Emitting Diode display (microLED display) and to interpret commands from the driver; configuration software, namely, downloadable and/or recorded software for creating operation configurations for a liquid crystal microdisplay, liquid crystal-on-silicon display, emissive display (an OLED or LED display), or Micro Light Emitting Diode display (microLED display), and for calibrating the operations performed by a liquid crystal microdisplay, liquid crystal-on-silicon display, emissive display (an OLED or LED display), or Micro Light Emitting Diode display (microLED display). Configuration software, namely, providing temporary use of online non-downloadable software for generating configuration parameters (for generating drive sequences) for a liquid crystal microdisplay, liquid crystal-on-silicon display, emissive display (an OLED or LED display), or Micro Light Emitting Diode display (microLED display).

62.

TEXT-GUIDED CAMEO GENERATION

      
Application Number 17950945
Status Pending
Filing Date 2022-09-22
First Publication Date 2024-03-28
Owner Snap Inc. (USA)
Inventor
  • Ghosh, Arnab
  • Ren, Jian
  • Savchenkov, Pavel
  • Tulyakov, Sergey

Abstract

A method of generating an image for use in a conversation taking place in a messaging application is disclosed. Conversation input text is received from a user of a portable device that includes a display. Model input text is generated from the conversation input text, which is processed with a text-to-image model to generate an image based on the model input text. The coordinates of a face in the image are determined, and the face of the user or another person is added to the image at that location. The final image is displayed on the portable device, and user input is received to transmit the image to a remote recipient.

IPC Classes  ?

  • G06T 11/00 - 2D [Two Dimensional] image generation
  • G06F 40/289 - Phrasal analysis, e.g. finite state techniques or chunking
  • G06F 40/35 - Discourse or dialogue representation
  • G06V 40/16 - Human faces, e.g. facial parts, sketches or expressions

63.

GENERATING A CONTEXTUAL SEARCH STREAM

      
Application Number 18523697
Status Pending
Filing Date 2023-11-29
First Publication Date 2024-03-28
Owner Snap Inc. (USA)
Inventor Lo, Bobby

Abstract

Systems and methods are provided for retrieving first query result data associated with a first user account and rendering the first query result data into a first result item, generating a shareable search result stream comprising the first result item associated with the first user account, retrieving second query result data associated with a second user account and rendering the second query result data into a second result item, adding the second result item to the shareable search result stream associated with the first user account, and providing the shareable search result stream comprising the first result item and the second result item to a first computing device associated with the first user account and a second computing device associated with the second user account.

IPC Classes  ?

64.

AR GRAPHICAL ASSISTANCE WITH TASKS

      
Application Number US2023033058
Publication Number 2024/064092
Status In Force
Filing Date 2023-09-18
Publication Date 2024-03-28
Owner SNAP INC. (USA)
Inventor
  • Gurgul, Piotr
  • Moll, Sharon

Abstract

Systems, methods, and computer-readable media for graphical assistance with tasks using an augmented reality (AR) wearable device are disclosed. Embodiments capture an image of a first user view of a real-world scene and access indications of surfaces and locations of the surfaces detected in the image. The AR wearable device displays indications of the surfaces on a display of the AR wearable device, where the locations of the indications are based on the locations of the surfaces and a second user view of the real-world scene. The locations of the surfaces are indicated with 3D world coordinates. The user views are determined based on a location of the user. The AR wearable device enables a user to add graphics to the surfaces and select tasks to perform. Tools such as a bubble level or a measuring tool are available for the user to utilize to perform the task.

IPC Classes  ?

  • G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer

65.

STEERABLE CAMERA FOR AR HAND TRACKING

      
Application Number US2023033134
Publication Number 2024/064130
Status In Force
Filing Date 2023-09-19
Publication Date 2024-03-28
Owner SNAP INC. (USA)
Inventor
  • Colascione, Daniel
  • Simons, Patrick Timothy Mcsweeney
  • Welge, Weston
  • Zahreddine, Ramzi

Abstract

A system for hand tracking for an Augmented Reality (AR) system. The AR system uses a camera of the AR system to capture tracking video frame data of a hand of a user of the AR system. The AR system generates a skeletal model based on the tracking video frame data and determines a location of the hand of the user based on the skeletal model. The AR system causes a steerable camera of the AR system to focus on the hand of the user.

IPC Classes  ?

  • G02B 27/01 - Head-up displays
  • G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
  • G06F 3/03 - Arrangements for converting the position or the displacement of a member into a coded form
  • G09B 21/00 - Teaching, or communicating with, the blind, deaf or mute
  • H04N 23/60 - Control of cameras or camera modules
  • G06F 1/16 - Constructional details or arrangements

66.

TEXT-GUIDED CAMEO GENERATION

      
Application Number US2023074762
Publication Number 2024/064806
Status In Force
Filing Date 2023-09-21
Publication Date 2024-03-28
Owner SNAP INC. (USA)
Inventor
  • Ghosh, Arnab
  • Ren, Jian
  • Savchenkov, Pavel
  • Tulyakov, Sergey

Abstract

A method of generating an image for use in a conversation taking place in a messaging application is disclosed. Conversation input text is received from a user of a portable device that includes a display. Model input text is generated from the conversation input text, which is processed with a text-to-image model to generate an image based on the model input text. The coordinates of a face in the image are determined, and the face of the user or another person is added to the image at that location. The final image is displayed on the portable device, and user input is received to transmit the image to a remote recipient.

IPC Classes  ?

  • G06T 11/60 - Editing figures and text; Combining figures or text
  • H04L 51/04 - Real-time or near real-time messaging, e.g. instant messaging [IM]
  • G06T 13/80 - 2D animation, e.g. using sprites

67.

OPACITY CONTROL OF AUGMENTED REALITY DEVICES

      
Application Number US2023074771
Publication Number 2024/064812
Status In Force
Filing Date 2023-09-21
Publication Date 2024-03-28
Owner SNAP INC. (USA)
Inventor Zare Seisan, Farid

Abstract

An augmented reality (AR) eyewear device has a lens system which includes an optical screening mechanism that enables switching the lens system between a conventional see-through state and an opaque state in which the lens system screens or functionally blocks out the wearer's view of the external environment. Such a screening mechanism allows for expanded use cases of the AR glasses compared to conventional devices, e.g.: as a sleep mask; to view displayed content like movies or sports events against a visually non-distracting background instead of against the external environment; and/or to enable VR functionality.

IPC Classes  ?

68.

CONFIGURING A 3D MODEL WITHIN A VIRTUAL CONFERENCING SYSTEM

      
Application Number 17948480
Status Pending
Filing Date 2022-09-20
First Publication Date 2024-03-21
Owner Snap Inc. (USA)
Inventor
  • Chou, William
  • Lin, Andrew Cheng-Min

Abstract

Aspects of the present disclosure involve a system comprising a computer-readable storage medium storing a program and method for configuring a three-dimensional (3D) model within a virtual conferencing system. The program and method provide, in association with designing a room for virtual conferencing, an interface for configuring a 3D model; receive, via the interface, an indication of user input for setting properties for the 3D model, the properties specifying image data for projecting onto the 3D model; and, in association with virtual conferencing, provide display of the room based on the properties for the 3D model and cause the image data to be projected onto the 3D model within the room.

IPC Classes  ?

  • G06F 3/04815 - Interaction with a metaphor-based environment or interaction object displayed as three-dimensional, e.g. changing the user viewpoint with respect to the environment or object
  • G06F 3/04845 - Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range for image manipulation, e.g. dragging, rotation, expansion or change of colour
  • G06F 3/04847 - Interaction techniques to control parameter settings, e.g. interaction with sliders or dials
  • G06T 19/00 - Manipulating 3D models or images for computer graphics
  • G06T 19/20 - Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts

69.

DEVICE AND METHOD FOR COMPENSATING EFFECTS OF PANTOSCOPIC TILT OR WRAP/SWEEP TILT ON AN IMAGE PRESENTED ON AN AUGMENTED REALITY OR VIRTUAL REALITY DISPLAY

      
Application Number 18263837
Status Pending
Filing Date 2021-12-07
First Publication Date 2024-03-21
Owner Snap Inc. (USA)
Inventor
  • Valera, Mohmed Salim
  • Poussin, David Louis Maxime

Abstract

An optical device is disclosed for use in an augmented reality or virtual reality display, comprising a waveguide (12; 22; 32) and an input diffractive optical element (H0; H3; 34) positioned in or on the waveguide, configured to receive light from a projector and couple it into the waveguide so that it is captured within the waveguide under total internal reflection. The input diffractive optical element has an input grating vector (G0; Gig) in the plane of the waveguide. The device includes a first diffractive optical element (H1; H4) and a second diffractive optical element (H2; H5) having first and second grating vectors (G2, G3; GV1, GV2) respectively in the plane of the waveguide, wherein the first diffractive optical element is configured to receive light from the input diffractive optical element and to couple it towards the second diffractive optical element, and wherein the second diffractive optical element is configured to receive light from the first diffractive optical element and to couple it out of the waveguide towards a viewer. The input grating vector, the first grating vector and the second grating vector have different respective magnitudes, and a vector addition of the input grating vector, the first grating vector and the second grating vector sums to zero.
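
The closed-vector-triangle condition at the end of the abstract can be checked numerically. A small sketch; the example vectors are invented to satisfy the stated constraints of differing magnitudes and zero sum:

    import numpy as np

    # Hypothetical in-plane grating vectors chosen so that they differ in
    # magnitude yet sum to zero, as the abstract requires.
    g_input = np.array([2.0, 0.0])
    g_first = np.array([-0.5, 1.2])
    g_second = -(g_input + g_first)  # forces the closed triangle

    assert not np.isclose(np.linalg.norm(g_input), np.linalg.norm(g_first))
    assert np.allclose(g_input + g_first + g_second, 0.0)
    # A zero sum means light diffracted by all three elements in turn exits
    # the waveguide parallel to how it entered, preserving the image.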

IPC Classes  ?

  • G02B 27/01 - Head-up displays
  • F21V 8/00 - Use of light guides, e.g. fibre optic devices, in lighting devices or systems
  • G02B 27/42 - Diffraction optics

70.

VISUAL AND AUDIO WAKE COMMANDS

      
Application Number 18367278
Status Pending
Filing Date 2023-09-12
First Publication Date 2024-03-21
Owner Snap Inc. (USA)
Inventor
  • Colascione, Daniel
  • Hanover, Matthew
  • Korolev, Sergei
  • Marr, Michael David
  • Myers, Scott
  • Powderly, James

Abstract

A gesture-based wake process for an AR system is described herein. The AR system places a hand-tracking input pipeline of the AR system in a suspended mode. A camera component of the hand-tracking input pipeline detects a possible visual wake command being made by a user of the AR system. On the basis of detecting the possible visual wake command, the AR system wakes the hand-tracking input pipeline and places the camera component in a fully operational mode. If the AR system, using the hand-tracking input pipeline, verifies the possible visual wake command as an actual wake command, the AR system initiates execution of an AR application.

IPC Classes  ?

  • G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
  • G06T 19/00 - Manipulating 3D models or images for computer graphics
  • G06V 10/82 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
  • G06V 40/20 - Movements or behaviour, e.g. gesture recognition

71.

AUGMENTING IMAGE CONTENT WITH SOUND

      
Application Number 18519735
Status Pending
Filing Date 2023-11-27
First Publication Date 2024-03-21
Owner Snap Inc. (USA)
Inventor
  • Boyd, Nathan Kenneth
  • Brody, Jonathan Dale
  • Cooper, Andrew Grosvenor
  • Francis, Brandon
  • Heikkinen, Christie Marie
  • Lankage, Ranidu

Abstract

Aspects of the present disclosure involve a system and a method for performing operations comprising: receiving, by a messaging application implemented on a client device, input that selects a sound option to add sound to one or more images; in response to receiving the input, presenting a sound editing user interface element that visually indicates a played portion of the sound and separately visually indicates an un-played portion of the sound; receiving an interaction with the sound editing user interface element to modify a start point of the sound; embedding a graphical element representing the sound in the one or more images; playing, by the messaging application, the sound associated with the graphical element starting from the start point together with displaying the one or more images.

IPC Classes  ?

  • G06F 3/16 - Sound input; Sound output
  • G06F 3/04847 - Interaction techniques to control parameter settings, e.g. interaction with sliders or dials

72.

CIRCUITS AND METHODS FOR WEARABLE DEVICE CHARGING AND WIRED CONTROL

      
Application Number 18520094
Status Pending
Filing Date 2023-11-27
First Publication Date 2024-03-21
Owner Snap Inc. (USA)
Inventor
  • Tham, Yu Jiang
  • Larson, Nicolas
  • Brook, Peter
  • Patton, Russell Douglas
  • Alhaideri, Miran
  • Hong, Zhihao

Abstract

Methods and devices for wired charging and communication with a wearable device are described. In one embodiment, a symmetrical contact interface comprises a first contact pad and a second contact pad, and particular wired circuitry is coupled to the first and second contact pads to enable charging as well as receive and transmit communications via the contact pads as part of various device states.

IPC Classes  ?

  • H02J 7/00 - Circuit arrangements for charging or depolarising batteries or for supplying loads from batteries
  • G02C 5/14 - Side-members
  • G02C 11/00 - Non-optical adjuncts; Attachment thereof
  • H01L 27/02 - Devices consisting of a plurality of semiconductor or other solid-state components formed in or on a common substrate including integrated passive circuit elements with at least one potential-jump barrier or surface barrier
  • H01R 13/62 - Means for facilitating engagement or disengagement of coupling parts or for holding them in engagement
  • H02J 7/04 - Regulation of the charging current or voltage
  • H02J 7/34 - Parallel operation in networks using both storage and other dc sources, e.g. providing buffering
  • H03K 19/0185 - Coupling arrangements; Interface arrangements using field-effect transistors only
  • H04B 3/56 - Circuits for coupling, blocking, or by-passing of signals

73.

USER INTERFACE FOR POSE DRIVEN VIRTUAL EFFECTS

      
Application Number 18520255
Status Pending
Filing Date 2023-11-27
First Publication Date 2024-03-21
Owner Snap Inc. (USA)
Inventor
  • Alavi, Amir
  • Rykhliuk, Olha
  • Shi, Xintong
  • Solichin, Jonathan
  • Voronova, Olesia
  • Yagodin, Artem

Abstract

Systems and methods herein describe a method for capturing a video in real time by an image capture device. The system provides a plurality of visual pose hints, identifies first pose information in the video while capturing the video, applies a first series of virtual effects to the video, identifies second pose information, and applies a second series of virtual effects to the video, the second series of virtual effects based on the first series of virtual effects.

IPC Classes  ?

  • H04N 5/262 - Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects
  • G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
  • G06T 7/246 - Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
  • G06T 11/00 - 2D [Two Dimensional] image generation
  • G06V 40/20 - Movements or behaviour, e.g. gesture recognition
  • H04N 23/611 - Control of cameras or camera modules based on recognised objects where the recognised objects include parts of the human body
  • H04N 23/63 - Control of cameras or camera modules by using electronic viewfinders

74.

MEDIA CONTENT PLAYBACK AND COMMENTS MANAGEMENT

      
Application Number 18521428
Status Pending
Filing Date 2023-11-28
First Publication Date 2024-03-21
Owner Snap Inc. (USA)
Inventor
  • Boyd, Nathan Kenneth
  • Voss, Jeremy Baker

Abstract

A method and a system include receiving a request from a client device to view a media content item, determining at least one comment associated with a respective user profile from a set of connected profiles, generating a summary comments selectable item based at least in part on the respective user profile, causing a display of playback of the media content item and the summary comments selectable item in response to the request to view the media content item, and, during the playback of the media content item at a particular time, causing a display of the at least one comment.

IPC Classes  ?

  • H04L 51/52 - User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail for supporting social networking services
  • G06F 3/04817 - Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance using icons
  • H04L 67/306 - User profiles

75.

GEO-FENCE AUTHORIZATION PROVISIONING

      
Application Number 18521752
Status Pending
Filing Date 2023-11-28
First Publication Date 2024-03-21
Owner Snap Inc. (USA)
Inventor
  • Allen, Nicholas Richard
  • Chang, Sheldon

Abstract

A system includes a communication module that receives a request to post content to an event gallery associated with an event. The request in turn includes geo-location data for a device sending the content, and identification data identifying the device or a user of the device. The system further has an event gallery module to perform a first authorization operation that includes determining that the geo-location data corresponds to a geo-location fence associated with an event. The event gallery module also performs a second authorization operation that includes using the identification data to verify an attribute of the user. Finally, based on the first and second authorization operations, the event gallery module may selectively authorize the device to post the content to the event gallery.
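
A minimal sketch of the two authorization operations, using a haversine distance check for a circular geo-fence; the circular fence shape, field names, and verified-user set are assumptions (a fence could equally be a polygon):

    import math

    def within_geofence(lat, lon, fence_lat, fence_lon, radius_m: float) -> bool:
        """First authorization check: is the posting device inside the
        event's geo-fence? (Haversine distance, Earth radius 6371 km.)"""
        phi1, phi2 = math.radians(lat), math.radians(fence_lat)
        dphi = math.radians(fence_lat - lat)
        dlmb = math.radians(fence_lon - lon)
        a = (math.sin(dphi / 2) ** 2
             + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2)
        return 6_371_000 * 2 * math.asin(math.sqrt(a)) <= radius_m

    def authorize_post(device, event, verified_users: set[str]) -> bool:
        # Second check: identification data must verify a user attribute.
        return (within_geofence(device["lat"], device["lon"],
                                event["lat"], event["lon"], event["radius_m"])
                and device["user_id"] in verified_users)

    device = {"lat": 40.7128, "lon": -74.0060, "user_id": "u42"}
    event = {"lat": 40.7130, "lon": -74.0060, "radius_m": 100.0}
    assert authorize_post(device, event, {"u42"})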

IPC Classes  ?

  • H04L 9/40 - Network security protocols
  • H04L 51/222 - Monitoring or handling of messages using geographical location information, e.g. messages transmitted or received in proximity of a certain spot or area
  • H04L 51/52 - User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail for supporting social networking services
  • H04W 4/02 - Services making use of location information
  • H04W 4/021 - Services related to particular areas, e.g. point of interest [POI] services, venue services or geofences
  • H04W 4/029 - Location-based management or tracking services
  • H04W 4/18 - Information format or content conversion, e.g. adaptation by the network of the transmitted or received information for the purpose of wireless delivery to users or terminals
  • H04W 12/06 - Authentication
  • H04W 12/64 - Location-dependent; Proximity-dependent using geofenced areas

76.

SELECTING ITEMS DISPLAYED BY A HEAD-WORN DISPLAY DEVICE

      
Application Number 18523197
Status Pending
Filing Date 2023-11-29
First Publication Date 2024-03-21
Owner Snap Inc. (USA)
Inventor
  • Stolzenberg, Karen
  • Meisenholder, David
  • Vignau, Mathieu Emmanuel
  • Park, Sana
  • Sun, Tianyi
  • Fortier, Joseph Timothy
  • Anvaripour, Kaveh
  • Moreno, Daniel
  • Goodrich, Kyle

Abstract

Disclosed is a method of receiving and processing content-sending inputs received by a head-worn device system including one or more display devices, one or more cameras and a vertically-arranged touchpad. The method includes displaying a content item on the one or more display devices, receiving a touch input on the touchpad corresponding to a send instruction, displaying a carousel of potential recipients, receiving a horizontal touch input on the touchpad, scrolling the carousel left or right on the one or more display devices in response to the horizontal touch input, receiving a tap touch input on the touchpad to select a particular recipient, receiving a further touch input, and in response to the further touch input, transmitting the content item to the selected recipient.

IPC Classes  ?

  • G06F 3/04815 - Interaction with a metaphor-based environment or interaction object displayed as three-dimensional, e.g. changing the user viewpoint with respect to the environment or object
  • G02B 27/01 - Head-up displays
  • G06F 3/0485 - Scrolling or panning
  • G06F 3/0488 - Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
  • G06T 19/00 - Manipulating 3D models or images for computer graphics

77.

REAL-TIME UPPER-BODY GARMENT EXCHANGE

      
Application Number 18525285
Status Pending
Filing Date 2023-11-30
First Publication Date 2024-03-21
Owner Snap Inc. (USA)
Inventor
  • Assouline, Avihay
  • Berger, Itamar
  • Malbin, Nir
  • Sasson, Gal

Abstract

Methods and systems are disclosed for performing operations for transferring garments from one real-world object to another in real time. The operations comprise receiving a first video that includes a depiction of a first person wearing a first upper-body garment in a first pose and obtaining a second video that includes a depiction of a second person wearing a second upper-body garment in a second pose. A pose of the second person depicted in the second video is modified to match the first pose of the first person depicted in the first video. The operations comprise generating an upper-body segmentation of the second upper-body garment which the second person is wearing in the second video in the modified pose and replacing the first upper-body garment worn by the first person in the first video with the second upper-body garment based on the upper-body segmentation.

IPC Classes  ?

  • G06T 19/00 - Manipulating 3D models or images for computer graphics
  • G06T 7/215 - Motion-based segmentation
  • G06T 7/70 - Determining position or orientation of objects or cameras
  • G06T 19/20 - Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts

78.

MULTIPATH OPTICAL DEVICE

      
Application Number EP2023075362
Publication Number 2024/056832
Status In Force
Filing Date 2023-09-14
Publication Date 2024-03-21
Owner
  • SNAP, INC. (USA)
  • SNAP GROUP LIMITED (United Kingdom)
Inventor
  • Crai, Alexandra
  • Webber, Alexander James Lewarne
  • Valera, Mohmed Salim

Abstract

An optical device for use in an augmented reality or virtual reality display, comprising: a waveguide; an input diffractive optical element, DOE, configured to receive light from a projector and to couple the received light into the waveguide along a plurality of optical paths; an output DOE offset from the input DOE along a first direction and configured to couple the received light out of the waveguide and towards a viewer; a first turning DOE offset from the input DOE along a second direction different from the first direction; wherein the input DOE is configured to couple a first portion of the received light in the second direction towards the first turning DOE and the first turning DOE is configured to diffract the first portion of the received light towards the output DOE, and the input DOE is configured to couple a second portion of the received light in the first direction towards the output DOE.

IPC Classes  ?

  • G02B 27/01 - Head-up displays
  • G02B 27/00 - Optical systems or apparatus not provided for by any of the groups G02B 1/00 - G02B 26/00, G02B 30/00
  • G02B 5/18 - Diffracting gratings

79.

EYEWEAR WITH STRAIN GAUGE WEAR DETECTION

      
Application Number US2023029066
Publication Number 2024/058870
Status In Force
Filing Date 2023-07-31
Publication Date 2024-03-21
Owner SNAP INC. (USA)
Inventor
  • Heger, Jason
  • Kalkgruber, Matthias
  • Mendez, Erick Mendez

Abstract

An eyewear device including a strain gauge sensor to determine when the eyewear device is manipulated by a user, such as being put on, taken off, or interacted with. A processor identifies a signature event based on sensor signals received from the strain gauge sensor and a data table of strain gauge sensor measurements corresponding to signature events. The processor controls the eyewear device as a function of the identified signature event, such as powering on a display of the eyewear device as the eyewear device is being put on a user's head, and then turning off the display when the eyewear device is removed from the user's head.
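
A toy sketch of the signature-matching idea, assuming a small data table of reference strain profiles; the profiles, distance metric, and event names are invented for illustration.

```python
# Minimal sketch of signature-event matching from strain-gauge readings,
# assuming a data table of reference measurements as the abstract
# describes. Profiles and event names are invented.

SIGNATURE_TABLE = {
    "don":    [0.0, 0.4, 0.9, 1.0],   # strain profile while putting glasses on
    "doff":   [1.0, 0.9, 0.4, 0.0],   # profile while taking them off
    "adjust": [1.0, 0.8, 1.0, 1.0],   # brief dip while repositioning
}

def classify(samples):
    """Return the signature event whose reference profile is closest
    (sum of squared differences) to the observed strain samples."""
    def distance(profile):
        return sum((s - p) ** 2 for s, p in zip(samples, profile))
    return min(SIGNATURE_TABLE, key=lambda k: distance(SIGNATURE_TABLE[k]))

def on_event(event, display):
    # Control the device as a function of the identified event.
    if event == "don":
        display["power"] = True    # power on display when worn
    elif event == "doff":
        display["power"] = False   # power off when removed

if __name__ == "__main__":
    display = {"power": False}
    on_event(classify([0.1, 0.5, 0.8, 1.0]), display)
    print(display)  # {'power': True}
```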

IPC Classes  ?

  • G02C 11/00 - Non-optical adjuncts; Attachment thereof

80.

WATERPROOF UAV FOR CAPTURING IMAGES

      
Application Number US2023029071
Publication Number 2024/058872
Status In Force
Filing Date 2023-07-31
Publication Date 2024-03-21
Owner SNAP INC. (USA)
Inventor
  • Moll, Sharon
  • Zhang, Dawei

Abstract

A waterproof UAV that records camera footage while traveling through air and while submerged in water. The UAV alters speed and direction of propellers dependent on the medium that the UAV is traveling through to provide control of the UAV. The propellers are capable of spinning in both directions to enable the UAV to change its depth and orientation in water. A machine learning (ML) model is used to identify humans and objects underwater. A housing coupled to the UAV makes the UAV positively buoyant to float in water and to control buoyancy while submerged.
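
A minimal, hypothetical sketch of medium-dependent propeller control; the rpm constants and the simple reversal rule are illustrative assumptions, not values from the application.

```python
# Sketch of medium-dependent propeller control, following the abstract's
# idea that the UAV alters propeller speed and direction depending on
# whether it travels through air or water. Constants are illustrative.

from dataclasses import dataclass

@dataclass
class PropellerCommand:
    rpm: float
    reversed: bool  # reversed thrust drives the positively buoyant UAV deeper

def propeller_command(medium, climb):
    """Return a propeller command for the current medium.

    medium: "air" or "water"; climb: +1 ascend, -1 descend. Water is
    roughly 800x denser than air, so far lower rpm yields comparable
    thrust, and reversing thrust changes depth.
    """
    if medium == "air":
        return PropellerCommand(rpm=9000.0, reversed=False)
    # In water: low rpm; reverse to descend against positive buoyancy.
    return PropellerCommand(rpm=600.0, reversed=(climb < 0))

if __name__ == "__main__":
    print(propeller_command("air", +1))
    print(propeller_command("water", -1))
```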

IPC Classes  ?

  • B64U 10/14 - Flying platforms with four distinct rotor axes, e.g. quadcopters
  • B64U 30/26 - Ducted or shrouded rotors
  • B64U 20/70 - Constructional aspects of the UAV body
  • B64U 60/10 - Undercarriages specially adapted for use on water
  • B64U 20/87 - Mounting of imaging devices, e.g. mounting of gimbals
  • B64C 39/02 - Aircraft not otherwise provided for characterised by special use
  • G06N 20/00 - Machine learning
  • B64U 101/30 - UAVs specially adapted for particular uses or applications for imaging, photography or videography

81.

DEFORMING REAL-WORLD OBJECT USING IMAGE WARPING

      
Application Number US2023032181
Publication Number 2024/058966
Status In Force
Filing Date 2023-09-07
Publication Date 2024-03-21
Owner SNAP INC. (USA)
Inventor
  • Guler, Riza Alp
  • Tam, Himmy
  • Wang, Haoyang
  • Kakolyris, Antonios

Abstract

Methods and systems are disclosed for performing real-time deforming operations. The system receives an image that includes a depiction of a real-world object. The system applies a machine learning model to the image to generate a warping field and segmentation mask, the machine learning model trained to establish a relationship between a plurality of training images depicting real-world objects and corresponding ground-truth warping fields and segmentation masks associated with a target shape. The system applies the generated warping field and segmentation mask to the image to warp the real-world object depicted in the image to the target shape.
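
A small NumPy sketch of the final step, applying a predicted warping field and segmentation mask to an image; the model that would predict the field and mask is out of scope here, and the displacement-field convention is an assumption.

```python
# Minimal sketch of applying a warping field and segmentation mask as
# the abstract describes; the predicting model is mocked out. Uses a
# plain NumPy gather instead of a real renderer.

import numpy as np

def apply_warp(image, flow, mask):
    """Warp masked pixels by a per-pixel displacement field (dy, dx)."""
    h, w = image.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    src_y = np.clip(ys + flow[..., 0].round().astype(int), 0, h - 1)
    src_x = np.clip(xs + flow[..., 1].round().astype(int), 0, w - 1)
    warped = image[src_y, src_x]
    return np.where(mask[..., None], warped, image)  # warp only masked pixels

if __name__ == "__main__":
    img = np.arange(5 * 5 * 3, dtype=np.uint8).reshape(5, 5, 3)
    flow = np.zeros((5, 5, 2))
    flow[..., 1] = 1                       # sample one pixel to the right
    mask = np.ones((5, 5), dtype=bool)
    print(apply_warp(img, flow, mask).shape)
```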

IPC Classes  ?

  • G06T 3/00 - Geometric image transformation in the plane of the image
  • G06T 7/194 - Segmentation; Edge detection involving foreground-background segmentation
  • G06T 11/00 - 2D [Two Dimensional] image generation

82.

LOW LATENCY HAND-TRACKING IN AUGMENTED REALITY SYSTEMS

      
Application Number 17844541
Status Pending
Filing Date 2022-06-20
First Publication Date 2024-03-21
Owner SNAP INC. (USA)
Inventor
  • Bajana, Jan
  • Jung, Bernhard
  • Wagner, Daniel

Abstract

A method for reducing motion-to-photon latency for hand tracking is described. In one aspect, a method includes accessing a first frame from a camera of an Augmented Reality (AR) device, tracking a first image of a hand in the first frame, rendering virtual content based on the tracking of the first image of the hand in the first frame, accessing a second frame from the camera before the rendering of the virtual content is completed, the second frame immediately following the first frame, tracking, using the computer vision engine of the AR device, a second image of the hand in the second frame, generating an annotation based on tracking the second image of the hand in the second frame, forming an annotated virtual content based on the annotation and the virtual content, and displaying the annotated virtual content in a display of the AR device.
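
A compact sketch of the latency-hiding pattern described: render from frame N's pose on a worker thread while tracking frame N+1, then fold the fresher pose in as an annotation before display. All function names are placeholders.

```python
# Minimal sketch of the latency-hiding idea in the abstract. Rendering
# from the older pose overlaps with tracking the newer frame; the newer
# pose is applied as a cheap late-stage annotation.

import threading

def track_hand(frame):
    return {"frame": frame, "pose": f"pose@{frame}"}

def render(tracking):
    return f"content({tracking['pose']})"

def annotate(content, tracking):
    # Cheap late-stage correction using the newer pose.
    return f"{content}+annotated({tracking['pose']})"

def process(frames, display):
    prev = track_hand(frames[0])
    for nxt in frames[1:]:
        result = {}
        # Render from the older pose on a worker thread...
        worker = threading.Thread(
            target=lambda: result.update(content=render(prev)))
        worker.start()
        fresh = track_hand(nxt)       # ...while tracking the newer frame.
        worker.join()
        display(annotate(result["content"], fresh))
        prev = fresh

if __name__ == "__main__":
    process([0, 1, 2], print)
```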

IPC Classes  ?

  • G06T 19/00 - Manipulating 3D models or images for computer graphics
  • G06T 7/20 - Analysis of motion
  • G06T 15/00 - 3D [Three Dimensional] image rendering

83.

GRAPHICAL ASSISTANCE WITH TASKS USING AN AR WEARABLE DEVICE

      
Application Number 17947889
Status Pending
Filing Date 2022-09-19
First Publication Date 2024-03-21
Owner Snap Inc. (USA)
Inventor
  • Gurgul, Piotr
  • Moll, Sharon

Abstract

Systems, methods, and computer-readable media for graphical assistance with tasks using an augmented reality (AR) wearable device are disclosed. Embodiments capture an image of a first user view of a real-world scene and access indications of surfaces and locations of the surfaces detected in the image. The AR wearable device displays indications of the surfaces on a display of the AR wearable device, where the locations of the indications are based on the locations of the surfaces and a second user view of the real-world scene. The locations of the surfaces are indicated with 3D world coordinates. The user views are determined based on a location of the user. The AR wearable device enables a user to add graphics to the surfaces and select tasks to perform. Tools such as a bubble level or a measuring tool are available for the user to utilize to perform the task.

IPC Classes  ?

  • G06T 19/00 - Manipulating 3D models or images for computer graphics
  • G02B 27/01 - Head-up displays
  • G06T 7/70 - Determining position or orientation of objects or cameras
  • G06T 19/20 - Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
  • G06V 10/74 - Image or video pattern matching; Proximity measures in feature spaces
  • G06V 20/20 - Scenes; Scene-specific elements in augmented reality scenes

84.

VIDEO GENERATION SYSTEM TO RENDER FRAMES ON DEMAND USING A FLEET OF GPUS

      
Application Number 18520203
Status Pending
Filing Date 2023-11-27
First Publication Date 2024-03-21
Owner Snap Inc. (USA)
Inventor
  • Kotsopoulos, Bradley
  • Semory, Eli
  • Sheth, Rahul Bhupendra

Abstract

A content controller system to render frames on demand comprises a rendering server system that includes a plurality of graphics processing units (GPUs). The GPUs in the rendering server system render a set of media content item segments using a media content identification and a main user identification. Rendering the set of media content item segments includes retrieving metadata from a metadata database associated with the media content identification, rendering the set of media content item segments using the metadata, generating a main user avatar based on the main user identification, and incorporating the main user avatar into the set of media content item segments. The rendering server system then uploads the set of media content item segments to a segment database and updates segment states in a segment state database to indicate that the set of media content item segments are available. Other embodiments are disclosed herein.
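
A schematic of the described server flow with plain dictionaries standing in for the metadata, segment, and segment-state databases; the names and schema are invented.

```python
# Schematic of the rendering-server flow in the abstract: render segments
# from metadata, personalize with the user's avatar, upload segments, and
# flip the segment state to "available". Dicts stand in for databases.

metadata_db = {"content42": {"segments": 3, "template": "dance"}}
segment_db = {}
segment_state_db = {}

def generate_avatar(user_id):
    return f"avatar:{user_id}"

def render_segment(template, avatar, i):
    return f"{template}[{i}]<{avatar}>"

def render_on_demand(content_id, user_id):
    meta = metadata_db[content_id]                 # retrieve metadata
    avatar = generate_avatar(user_id)              # main user avatar
    for i in range(meta["segments"]):              # ideally one job per GPU
        seg = render_segment(meta["template"], avatar, i)
        segment_db[(content_id, user_id, i)] = seg          # upload segment
        segment_state_db[(content_id, user_id, i)] = "available"

if __name__ == "__main__":
    render_on_demand("content42", "user7")
    print(segment_state_db)
```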

IPC Classes  ?

  • H04N 21/262 - Content or additional data distribution scheduling, e.g. sending additional data at off-peak times, updating software modules, calculating the carousel transmission frequency, delaying a video stream transmission or generating play-lists
  • G06F 16/23 - Updating
  • G06F 16/43 - Querying
  • G06T 1/20 - Processor architectures; Processor configuration, e.g. pipelining
  • H04N 21/234 - Processing of video elementary streams, e.g. splicing of video streams or manipulating MPEG-4 scene graphs
  • H04N 21/235 - Processing of additional data, e.g. scrambling of additional data or processing content descriptors
  • H04N 21/239 - Interfacing the upstream path of the transmission network, e.g. prioritizing client requests
  • H04N 21/258 - Client or end-user data management, e.g. managing client capabilities, user preferences or demographics or processing of multiple end-users preferences to derive collaborative data
  • H04N 21/84 - Generation or processing of descriptive data, e.g. content descriptors

85.

MEDIA GALLERY SHARING AND MANAGEMENT

      
Application Number 18520365
Status Pending
Filing Date 2023-11-27
First Publication Date 2024-03-21
Owner Snap Inc. (USA)
Inventor
  • Kennedy, David James
  • Muñoz Escalante, Diego
  • Spool, Arianne
  • Xia, Yinghua David

Abstract

Various embodiments include systems, methods, and non-transitory computer-readable media for sharing and managing media galleries. Consistent with these embodiments, a method includes receiving a request from a first device to share a media gallery that includes a user avatar; generating metadata associated with the media gallery; generating a message associated with the media gallery, the message at least including the media gallery identifier and the identifier of the user avatar; and transmitting the message to a second device of the recipient user.
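
A tiny sketch of assembling such a share message; the field names are hypothetical, the only point being that the payload carries at least the gallery identifier and the avatar identifier.

```python
# Sketch of the share-message assembly the abstract describes. Field
# names are invented for illustration, not Snap's message schema.

import json
import uuid

def build_share_message(gallery_id, avatar_id, recipient_id):
    return {
        "message_id": str(uuid.uuid4()),
        "gallery_id": gallery_id,       # identifies the shared media gallery
        "avatar_id": avatar_id,         # identifies the sender's user avatar
        "to": recipient_id,
    }

if __name__ == "__main__":
    msg = build_share_message("gal-123", "avatar-7", "user-9")
    print(json.dumps(msg))              # transmitted to the second device
```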

IPC Classes  ?

  • H04L 51/10 - Multimedia information
  • G06F 3/0482 - Interaction with lists of selectable items, e.g. menus
  • G06F 3/0484 - Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
  • H04L 67/146 - Markers for unambiguous identification of a particular session, e.g. session cookie or URL-encoding

86.

PUSH NOTIFICATION MANAGEMENT

      
Application Number 18525658
Status Pending
Filing Date 2023-11-30
First Publication Date 2024-03-21
Owner Snap Inc. (USA)
Inventor
  • Castro, Alex Joseph
  • Murray, Michael Brian
  • Wu, William

Abstract

A push notification mechanism at a mobile user device provides for automated limiting of the rate of production of push notification alerts (such as an audible alert or a vibratory alert) and/or push notifications responsive to the occurrence of chat events relevant to a chat application hosted by the user device. Some chat events automatically trigger suppression periods during which push notification alerts are prevented for subsequent chat events that satisfy predefined suppression criteria. Such push notification and/or alert limiting can be performed separately for separate users, chat groups, and/or chat event types.
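
A minimal sketch of per-key alert suppression, assuming a fixed 60-second window; the window length and the (sender, chat, event type) key structure are illustrative assumptions.

```python
# Minimal sketch of per-conversation alert suppression as described in
# the abstract: the first chat event alerts and opens a suppression
# window; later matching events inside the window stay silent.

import time

SUPPRESSION_SECONDS = 60.0   # invented window length

class AlertLimiter:
    def __init__(self, now=time.monotonic):
        self._now = now
        self._suppress_until = {}   # key: (sender, chat, event_type)

    def should_alert(self, sender, chat, event_type):
        key = (sender, chat, event_type)   # limits applied separately per key
        t = self._now()
        if t < self._suppress_until.get(key, 0.0):
            return False                   # inside suppression period
        self._suppress_until[key] = t + SUPPRESSION_SECONDS
        return True

if __name__ == "__main__":
    fake = iter([0.0, 10.0, 70.0])
    limiter = AlertLimiter(now=lambda: next(fake))
    print(limiter.should_alert("alice", "chat1", "message"))  # True
    print(limiter.should_alert("alice", "chat1", "message"))  # False (t=10)
    print(limiter.should_alert("alice", "chat1", "message"))  # True  (t=70)
```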

IPC Classes  ?

  • H04L 67/55 - Push-based network services
  • H04L 51/04 - Real-time or near real-time messaging, e.g. instant messaging [IM]
  • H04L 51/224 - Monitoring or handling of messages providing notification on incoming messages, e.g. pushed notifications of received messages
  • H04L 51/52 - User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail for supporting social networking services

87.

THREE-DIMENSIONAL ASSET RECONSTRUCTION

      
Application Number US2023029068
Publication Number 2024/058871
Status In Force
Filing Date 2023-07-31
Publication Date 2024-03-21
Owner SNAP INC. (USA)
Inventor
  • Vasilkovskii, Mikhail
  • Demyanov, Sergey
  • Shakhrai, Vladislav

Abstract

A three-dimensional (3D) asset reconstruction technique for generating a 3D asset representing an object from images of the object. The images are captured from different viewpoints in a darkroom using one or more light sources having known locations. The system estimates camera poses for each of the captured images and then constructs a 3D surface mesh made up of surfaces using the captured images and their respective estimated camera poses. Texture properties for each of the surfaces of the 3D surface mesh are then refined to generate the 3D asset.
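
A high-level skeleton of the three named stages, with each stage stubbed out; the real algorithms (pose estimation, meshing, texture refinement) are substantial and not reproduced here.

```python
# Skeleton of the reconstruction stages named in the abstract. Each stub
# stands in for a substantial algorithm; none of this is Snap's code.

def estimate_camera_poses(images, light_positions):
    # Stub: e.g. structure-from-motion over the darkroom captures.
    return [{"image": im, "pose": i} for i, im in enumerate(images)]

def build_surface_mesh(posed_images):
    # Stub: e.g. multi-view stereo followed by surface extraction.
    return {"surfaces": [f"face{i}" for i in range(len(posed_images))]}

def refine_textures(mesh, posed_images, light_positions):
    # Stub: fit per-surface texture properties using the known lights.
    mesh["textures"] = {s: "refined" for s in mesh["surfaces"]}
    return mesh

def reconstruct(images, light_positions):
    posed = estimate_camera_poses(images, light_positions)
    mesh = build_surface_mesh(posed)
    return refine_textures(mesh, posed, light_positions)

if __name__ == "__main__":
    asset = reconstruct(["img0", "img1", "img2"], [(0, 1, 2)])
    print(asset["textures"])
```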

IPC Classes  ?

88.

FINGER GESTURE RECOGNITION VIA ACOUSTIC-OPTIC SENSOR FUSION

      
Application Number US2023032717
Publication Number 2024/059182
Status In Force
Filing Date 2023-09-14
Publication Date 2024-03-21
Owner SNAP INC. (USA)
Inventor
  • Krishnan Gorumkonda, Gurunandan
  • Nayar, Shree K.
  • Xu, Chenhan
  • Zhou, Bing

Abstract

A finger gesture recognition system is provided. The finger gesture recognition system includes one or more audio sensors and one or more optic sensors. The finger gesture recognition system captures, using the one or more audio sensors, audio signal data of a finger gesture being made by a user, and captures, using the one or more optic sensors, optic signal data of the finger gesture. The finger gesture recognition system recognizes the finger gesture based on the audio signal data and the optic signal data and communicates finger gesture data of the recognized finger gesture to an Augmented Reality/Combined Reality/Virtual Reality (XR) application.
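
A toy late-fusion sketch: score gestures per modality, then combine with a weight. The feature extractors and weights are invented, and the filing does not specify this particular fusion rule.

```python
# Toy late-fusion sketch for the acoustic-optic recognition idea. The
# per-modality scorers are placeholders for learned models.

GESTURES = ["tap", "swipe", "pinch"]

def audio_scores(audio_signal):
    # Placeholder: a real model scores gestures from the audio sensors.
    return {"tap": 0.7, "swipe": 0.2, "pinch": 0.1}

def optic_scores(optic_signal):
    # Placeholder: a real model scores gestures from the optic sensors.
    return {"tap": 0.5, "swipe": 0.1, "pinch": 0.4}

def fuse(audio_signal, optic_signal, w_audio=0.5):
    """Weighted late fusion of per-modality gesture scores."""
    a, o = audio_scores(audio_signal), optic_scores(optic_signal)
    fused = {g: w_audio * a[g] + (1 - w_audio) * o[g] for g in GESTURES}
    return max(fused, key=fused.get), fused

if __name__ == "__main__":
    gesture, scores = fuse(audio_signal=b"...", optic_signal=b"...")
    print(gesture, scores)   # recognized gesture, handed to the XR app
```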

IPC Classes  ?

  • G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer

89.

EGOCENTRIC HUMAN BODY POSE TRACKING

      
Application Number US2023032755
Publication Number 2024/059206
Status In Force
Filing Date 2023-09-14
Publication Date 2024-03-21
Owner SNAP INC. (USA)
Inventor
  • Arakawa, Riku
  • Krishnan Gorumkonda, Gurunandan
  • Nayar, Shree K.
  • Zhou, Bing

Abstract

A pose tracking system is provided. The pose tracking system includes an EMF tracking system having a user-worn head-mounted EMF source and one or more user-worn EMF tracking sensors attached to the wrists of the user. The EMF source is associated with a VIO tracking system such as AR glasses or the like. The pose tracking system determines a pose of the user's head and a ground plane using the VIO tracking system and a pose of the user's hands using the EMF tracking system to determine a full-body pose for the user. Metal interference with the EMF tracking system is minimized using an IMU mounted with the EMF tracking sensors. Long-term drift in the IMU and the VIO tracking system is minimized using the EMF tracking system.
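
A deliberately simplified, translation-only sketch of chaining the VIO head pose with EMF-measured wrist offsets; a real system would use full 6-DoF transforms plus the IMU-based interference correction the abstract mentions.

```python
# Schematic of the fusion idea: VIO supplies the head pose in the world
# frame, the head-mounted EMF source plus wrist sensors supply hand
# offsets in the head frame, and chaining the two yields world-frame
# hand positions. Translation-only math keeps the sketch short.

import numpy as np

def fuse_full_body(head_pose_world, wrist_offsets_head):
    """head_pose_world: (3,) head position from VIO (world frame).
    wrist_offsets_head: dict of (3,) EMF-measured offsets (head frame)."""
    return {name: head_pose_world + offset
            for name, offset in wrist_offsets_head.items()}

if __name__ == "__main__":
    head = np.array([0.0, 1.7, 0.0])                  # VIO estimate
    wrists = {"left": np.array([-0.3, -0.5, 0.2]),    # EMF estimates
              "right": np.array([0.3, -0.5, 0.2])}
    print(fuse_full_body(head, wrists))
```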

IPC Classes  ?

  • G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
  • G06F 3/0346 - Pointing devices displaced or positioned by the user; Accessories therefor with detection of the device orientation or free movement in a 3D space, e.g. 3D mice, 6-DOF [six degrees of freedom] pointers using gyroscopes, accelerometers or tilt-sensors

90.

Eyewear tether

      
Application Number 18141661
Grant Number 11934038
Status In Force
Filing Date 2023-05-01
First Publication Date 2024-03-19
Grant Date 2024-03-19
Owner SNAP INC. (USA)
Inventor
  • Ben-Haim, Yoav
  • Sehrawat, Varun
  • Dabov, Teodor
  • Ardisana, John Bernard

Abstract

Eyewear devices including a tether and methods for identifying proper installation of the tether are disclosed. An eyewear device includes transmission lines extending through the temples to electrical and electronic components positioned adjacent to edges of a frame. A tether is attached to the temples to enable power and communication flow between the electrical and electronic components rather than through the frame. Proper installation is identified based on communications passing between the electrical and electronic components via the tether.

IPC Classes  ?

  • G02C 11/00 - Non-optical adjuncts; Attachment thereof
  • G02C 3/00 - Special supporting arrangement for lens assemblies or monocles
  • G02C 5/00 - Constructions of non-optical parts
  • G02C 5/02 - Bridges; Browbars; Intermediate bars
  • G02C 5/14 - Side-members

91.

Controlling brightness based on eye tracking

      
Application Number 18051330
Grant Number 11935442
Status In Force
Filing Date 2022-10-31
First Publication Date 2024-03-19
Grant Date 2024-03-19
Owner SNAP INC. (USA)
Inventor Patton, Russell Douglas

Abstract

Methods and systems are disclosed for performing operations for controlling brightness in an AR device. The operations comprise displaying an image on an eyewear device worn by a user; detecting a gaze direction of a pupil of the user; identifying a first region of the image that corresponds to the gaze direction of the pupil; and modifying a brightness level or value of pixels in the image based on the gaze direction such that pixels in the first region of the image are set to a first brightness value and pixels in a second region of the image are set to a second brightness value that is lower than the first brightness value.
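
A NumPy sketch of the gaze-contingent dimming described, with an invented fovea radius and dim factor.

```python
# Sketch of gaze-contingent dimming per the abstract: pixels near the
# gaze point keep full brightness; everything else is dimmed. The radius
# and dim factor are illustrative parameters, not values from the patent.

import numpy as np

def apply_foveated_brightness(image, gaze_xy, radius=64, dim=0.4):
    """Scale pixel values so the region around gaze_xy stays at full
    brightness and the rest is reduced."""
    h, w = image.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    gx, gy = gaze_xy
    in_fovea = (xs - gx) ** 2 + (ys - gy) ** 2 <= radius ** 2
    out = image.astype(np.float32)
    out[~in_fovea] *= dim                      # second region: lower value
    return out.clip(0, 255).astype(np.uint8)

if __name__ == "__main__":
    frame = np.full((480, 640, 3), 200, dtype=np.uint8)
    dimmed = apply_foveated_brightness(frame, gaze_xy=(320, 240))
    print(dimmed[240, 320], dimmed[0, 0])      # [200 200 200] [80 80 80]
```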

IPC Classes  ?

  • G09G 3/00 - Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes
  • G02B 27/01 - Head-up displays
  • G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer

92.

DEFORMING REAL-WORLD OBJECT USING IMAGE WARPING

      
Application Number 17973295
Status Pending
Filing Date 2022-10-25
First Publication Date 2024-03-14
Owner Snap Inc. (USA)
Inventor
  • Guler, Riza Alp
  • Tam, Himmy
  • Wang, Haoyang
  • Kakolyris, Antonios

Abstract

Methods and systems are disclosed for performing real-time deforming operations. The system receives an image that includes a depiction of a real-world object. The system applies a machine learning model to the image to generate a warping field and segmentation mask, the machine learning model trained to establish a relationship between a plurality of training images depicting real-world objects and corresponding ground-truth warping fields and segmentation masks associated with a target shape. The system applies the generated warping field and segmentation mask to the image to warp the real-world object depicted in the image to the target shape.

IPC Classes  ?

  • G06T 19/20 - Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
  • G06T 3/00 - Geometric image transformation in the plane of the image
  • G06T 7/10 - Segmentation; Edge detection
  • G06T 19/00 - Manipulating 3D models or images for computer graphics

93.

SCULPTING AUGMENTED REALITY CONTENT USING GESTURES IN A MESSAGING SYSTEM

      
Application Number 17930927
Status Pending
Filing Date 2022-09-09
First Publication Date 2024-03-14
Owner Snap Inc. (USA)
Inventor
  • Goodrich, Kyle
  • Kaminski, Kurt
  • Mcphee, Andrew James
  • Moreno, Daniel

Abstract

The subject technology detects from a set of frames, a first gesture, the first gesture corresponding to a pinch gesture. The subject technology detects a first location and a first position of a first representation of a first finger from the first gesture and a second location and a second position of a second representation of a second finger from the first gesture. The subject technology detects a first collision event corresponding to a first collider and a second collider intersecting with a third collider of a first virtual object. The subject technology detects a first change in the first location and the first position and a second change in the second location and the second position. The subject technology modifies the first virtual object to include an additional augmented reality content based at least in part on the first change and the second change.
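
A minimal sphere-collider sketch of the collision events described; the radii and positions are illustrative.

```python
# Sphere-collider sketch for the collision events in the abstract: a
# pinch is "touching" a virtual object when both fingertip colliders
# intersect the object's collider. All values are illustrative.

from math import dist

def spheres_intersect(c1, r1, c2, r2):
    return dist(c1, c2) <= r1 + r2

def pinch_collides(thumb, index, obj_center, obj_radius, finger_radius=0.01):
    """Collision event: both finger colliders intersect the object's."""
    return (spheres_intersect(thumb, finger_radius, obj_center, obj_radius)
            and spheres_intersect(index, finger_radius, obj_center, obj_radius))

if __name__ == "__main__":
    print(pinch_collides((0.0, 0.0, 0.29), (0.02, 0.0, 0.3),
                         (0.0, 0.0, 0.35), 0.05))   # True -> start sculpting
```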

IPC Classes  ?

  • G06T 19/00 - Manipulating 3D models or images for computer graphics
  • G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
  • G06T 7/20 - Analysis of motion
  • G06T 7/70 - Determining position or orientation of objects or cameras
  • G06T 19/20 - Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
  • G06V 10/74 - Image or video pattern matching; Proximity measures in feature spaces
  • G06V 40/20 - Movements or behaviour, e.g. gesture recognition
  • H04L 51/046 - Interoperability with other network applications or services

94.

CLUSTERING VIDEOS USING A SELF-SUPERVISED DNN

      
Application Number 17939256
Status Pending
Filing Date 2022-09-07
First Publication Date 2024-03-14
Owner Snap Inc. (USA)
Inventor
  • Coskun, Huseyin
  • Zareian, Alireza
  • Moore, Joshua
  • Wang, Chen

Abstract

Systems and methods are provided for clustering videos. The system accesses a plurality of content items, the plurality of content items comprising a first set of RGB video frames and a second set of optical flow frames corresponding to the first set of RGB video frames. The system processes the first set of RGB video frames by a first machine learning model to generate a first optimal assignment for the first set of RGB video frames, the first optimal assignment representing initial clustering of the first set of RGB video frames. The system generates an updated first optimal assignment for the first set of RGB video frames based on the first optimal assignment for the first set of RGB video frames and a second optimal assignment of the second set of optical flow frames, the second optimal assignment representing initial clustering of the second set of optical flow frames.
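
A toy version of the cross-modal assignment update: soft cluster assignments per stream, combined by simple averaging. The averaging rule is an assumption for illustration; the filing's actual objective may differ.

```python
# Toy sketch of the cross-modal update in the abstract: each stream
# (RGB, optical flow) yields an initial soft cluster assignment, and
# the RGB assignment is updated using the flow assignment.

import numpy as np

def soft_assign(features, centroids, temperature=0.1):
    """Softmax over negative distances: rows are per-clip cluster
    probabilities (a stand-in for an 'optimal assignment')."""
    d = ((features[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)
    logits = -d / temperature
    logits -= logits.max(axis=1, keepdims=True)
    p = np.exp(logits)
    return p / p.sum(axis=1, keepdims=True)

rng = np.random.default_rng(0)
rgb_feats = rng.normal(size=(8, 16))    # 8 clips, 16-dim RGB features
flow_feats = rng.normal(size=(8, 16))   # matching optical-flow features
centroids = rng.normal(size=(3, 16))    # 3 clusters

q_rgb = soft_assign(rgb_feats, centroids)    # first optimal assignment
q_flow = soft_assign(flow_feats, centroids)  # second optimal assignment
q_updated = (q_rgb + q_flow) / 2             # updated RGB assignment
print(q_updated.argmax(axis=1))              # cluster id per clip
```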

IPC Classes  ?

  • G06V 10/762 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using clustering, e.g. of similar faces in social networks
  • G06T 5/50 - Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
  • G06V 10/82 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks

95.

Virtual object manipulation with gestures in a messaging system

      
Application Number 17941435
Grant Number 11948266
Status In Force
Filing Date 2022-09-09
First Publication Date 2024-03-14
Grant Date 2024-04-02
Owner SNAP INC. (USA)
Inventor
  • Goodrich, Kyle
  • Lazarov, Maxim Maximov
  • Mcphee, Andrew James
  • Moreno, Daniel

Abstract

The subject technology detects a first gesture and a second gesture, each gesture corresponding to an open trigger finger gesture. The subject technology detects a third gesture and a fourth gesture, each gesture corresponding to a closed trigger finger gesture. The subject technology selects a first virtual object in a first scene. The subject technology detects a first location and a first position of a first representation of a first finger from the third gesture and a second location and a second position of a second representation of a second finger from the fourth gesture. The subject technology detects a first change in the first location and the first position and a second change in the second location and the second position. The subject technology modifies a set of dimensions of the first virtual object to a different set of dimensions.
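
A small sketch of the resize rule implied by the abstract: scale the object's dimensions by the ratio of hand separations after and before the change. This particular mapping is an assumption for illustration.

```python
# Sketch of the resize interaction: once both hands close into trigger
# gestures on an object, the change in distance between the hands scales
# the object's dimensions. All values are illustrative.

from math import dist

def rescale(dimensions, hands_t0, hands_t1):
    """Scale dimensions by the ratio of hand separation after/before."""
    before = dist(*hands_t0)
    after = dist(*hands_t1)
    factor = after / before if before else 1.0
    return tuple(d * factor for d in dimensions)

if __name__ == "__main__":
    dims = (1.0, 0.5, 0.25)                       # object w, h, d
    t0 = ((0.0, 0.0, 0.3), (0.2, 0.0, 0.3))       # hand positions at grab
    t1 = ((-0.1, 0.0, 0.3), (0.3, 0.0, 0.3))      # hands pulled apart
    print(rescale(dims, t0, t1))                  # (2.0, 1.0, 0.5)
```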

IPC Classes  ?

  • G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
  • G06T 19/00 - Manipulating 3D models or images for computer graphics
  • G06T 19/20 - Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts

96.

GESTURES TO ENABLE MENUS USING AUGMENTED REALITY CONTENT IN A MESSAGING SYSTEM

      
Application Number 17941522
Status Pending
Filing Date 2022-09-09
First Publication Date 2024-03-14
Owner Snap Inc. (USA)
Inventor
  • Goodrich, Kyle
  • Lazarov, Maxim Maximov
  • Mcphee, Andrew James
  • Moreno, Daniel

Abstract

The subject technology detects a first location and a first position of a first representation of a first finger and a second location and a second position of a second representation of a second finger. The subject technology detects a first particular location and a first particular position of a first particular representation of a first particular finger and a second particular location and a second particular position of a second particular representation of a second particular finger. The subject technology detects a first change in the first location and the first position and a second change in the second location and the second position. The subject technology detects a first particular change in the first particular location and the first particular position and a second particular change in the second particular location and the second particular position. The subject technology generates a set of virtual objects.

IPC Classes  ?

  • G06T 19/00 - Manipulating 3D models or images for computer graphics
  • G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
  • G06T 7/20 - Analysis of motion
  • G06T 7/70 - Determining position or orientation of objects or cameras
  • G06T 19/20 - Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
  • G06V 10/74 - Image or video pattern matching; Proximity measures in feature spaces
  • G06V 20/20 - Scenes; Scene-specific elements in augmented reality scenes
  • G06V 40/20 - Movements or behaviour, e.g. gesture recognition
  • H04L 51/046 - Interoperability with other network applications or services

97.

MUTABLE GEO-FENCING SYSTEM

      
Application Number 18508771
Status Pending
Filing Date 2023-11-14
First Publication Date 2024-03-14
Owner Snap Inc. (USA)
Inventor
  • Azmoodeh, Farnaz
  • Sellis, Peter
  • Yang, Jinlin

Abstract

In various embodiments, boundaries of geo-fences can be made mutable based on principles described herein. The term “mutable” refers to the ability of a thing (in this case, the boundary of a geo-fence) to change and adjust. In a typical embodiment, a mutable geo-fence system is configured to generate and monitor a geo-fence that encompasses a region, in order to dynamically vary the boundary of the geo-fence based on a number of boundary variables. The term “geo-fence” as used herein describes a virtual perimeter (e.g., a boundary) for a real-world geographic area. A geo-fence could be a radius around a point (e.g., a store), or a set of predefined boundaries. Boundary variables, as used herein, refer to a set of variables utilized by the mutable geo-fence system in determining a location of the boundary of the geo-fence.
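
A toy model of a mutable circular geo-fence whose radius is recomputed from boundary variables; the variables and weights are invented to illustrate the idea of dynamically varying the boundary.

```python
# Toy mutable geo-fence: a circular boundary whose radius is recomputed
# from boundary variables. The variables and weights are invented;
# coordinates are in a local meter-based frame for simplicity.

from math import hypot

def mutable_radius(base_radius_m, boundary_vars):
    """Scale the fence radius by, e.g., crowd density and time of day."""
    scale = 1.0
    scale *= 1.0 + 0.5 * boundary_vars.get("crowd_density", 0.0)  # in [0, 1]
    if boundary_vars.get("after_dark", False):
        scale *= 0.8   # tighter fence at night (illustrative rule)
    return base_radius_m * scale

def inside_fence(point, center, radius_m):
    return hypot(point[0] - center[0], point[1] - center[1]) <= radius_m

if __name__ == "__main__":
    r = mutable_radius(100.0, {"crowd_density": 0.6, "after_dark": True})
    print(round(r, 1), inside_fence((50.0, 60.0), (0.0, 0.0), r))
```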

IPC Classes  ?

  • G06Q 30/0251 - Targeted advertisements
  • G06Q 30/0272 - Period of advertisement exposure
  • H04M 15/00 - Arrangements for metering, time-control or time-indication
  • H04W 4/021 - Services related to particular areas, e.g. point of interest [POI] services, venue services or geofences
  • H04W 4/24 - Accounting or billing

98.

Augmented reality guidance that generates guidance markers

      
Application Number 18510286
Status Pending
Filing Date 2023-11-15
First Publication Date 2024-03-14
Owner Snap Inc. (USA)
Inventor
  • Kang, Shin Hwun
  • Kucher, Dmytro
  • Hovorov, Dmytro
  • Canberk, Ilteris

Abstract

Augmented reality guidance for guiding a user through an environment using an eyewear device. The eyewear device includes a display system and a position detection system. A user is guided through an environment by monitoring a current position of the eyewear device within the environment, identifying marker positions within a threshold of the current position, the marker positions defined with respect to the environment and associated with guidance markers, registering the marker positions, generating an overlay image including the guidance markers, and presenting the overlay image on a display of the eyewear device.
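
A minimal sketch of the marker-registration step: select the markers within a threshold of the current position for overlay rendering. The positions and threshold are illustrative.

```python
# Sketch of the guidance loop in the abstract: find marker positions
# within a threshold of the device's current position and emit their
# guidance markers for the overlay image. Values are illustrative.

from math import dist

MARKERS = {(1.0, 0.0, 0.0): "turn right", (5.0, 0.0, 2.0): "door ahead"}

def guidance_overlay(current_pos, threshold_m=2.0, markers=MARKERS):
    """Return the guidance markers registered near the current position."""
    return [label for pos, label in markers.items()
            if dist(pos, current_pos) <= threshold_m]

if __name__ == "__main__":
    print(guidance_overlay((0.5, 0.0, 0.0)))   # ['turn right']
```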

IPC Classes  ?

  • G06T 19/00 - Manipulating 3D models or images for computer graphics
  • G02B 27/01 - Head-up displays
  • G06T 7/73 - Determining position or orientation of objects or cameras using feature-based methods

99.

SELECTING ADS FOR A VIDEO WITHIN A MESSAGING SYSTEM

      
Application Number 18514929
Status Pending
Filing Date 2023-11-20
First Publication Date 2024-03-14
Owner Snap Inc. (USA)
Inventor
  • Blackwood, John Cain
  • Lonkar, Chinmay
  • Lue, David B.
  • Penner, Kevin Lee

Abstract

Aspects of the present disclosure involve a system comprising a computer-readable storage medium storing a program and method for selecting ads for a video. The program and method provide for receiving a request for an ad to insert into a video playing on a client device, the request including a first content identifier that identifies a first type of content included in the video; determining a set of content identifiers associated with the first content identifier, the set of content identifiers identifying second types of content to filter with respect to providing the ad in response to the request; selecting an ad from among plural ads, by filtering ads tagged with a second content identifier included in the set of content identifiers; and providing the selected ad as a response to the request.
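
A sketch of the filtering rule described: map the video's first content identifier to a set of identifiers to exclude, then pick an ad with no excluded tag. The data and lookup table are invented.

```python
# Sketch of the ad-filtering rule in the abstract: from the video's
# content identifier, look up identifiers to filter, then select an ad
# not tagged with any of them. All data here is illustrative.

FILTER_MAP = {
    "cooking": {"diet_products", "alcohol"},   # don't pair these with food
}

ADS = [
    {"id": "ad1", "tags": {"alcohol"}},
    {"id": "ad2", "tags": {"sports_gear"}},
    {"id": "ad3", "tags": {"diet_products", "fitness"}},
]

def select_ad(first_content_id, ads=ADS):
    excluded = FILTER_MAP.get(first_content_id, set())
    for ad in ads:
        if not (ad["tags"] & excluded):   # keep ads with no excluded tag
            return ad
    return None

if __name__ == "__main__":
    print(select_ad("cooking"))   # -> {'id': 'ad2', ...}
```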

IPC Classes  ?

  • H04N 21/44 - Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to MPEG-4 scene graphs
  • G06F 16/245 - Query processing
  • H04N 21/4788 - Supplemental services, e.g. displaying phone caller identification or shopping application communicating with other users, e.g. chatting
  • H04N 21/81 - Monomedia components thereof
  • H04N 21/84 - Generation or processing of descriptive data, e.g. content descriptors

100.

CARRY CASE FOR RECHARGEABLE EYEWEAR DEVICES

      
Application Number 18515096
Status Pending
Filing Date 2023-11-20
First Publication Date 2024-03-14
Owner Snap Inc. (USA)
Inventor
  • Kim, Jinwoo
  • Lin, Jun

Abstract

A carry case for an electronics-enabled eyewear device, such as smart glasses, has charging contacts that are movable relative to a storage chamber in which the eyewear device is receivable. The charging contacts are connected to a battery carried by the case for charging the eyewear device via contact coupling of the charging contacts to corresponding contact formations on an exterior of the eyewear device. The charging contacts are in some instances mounted on respective flexible walls defining opposite extremities of the storage chamber. The contact formations on the eyewear device are in some instances provided by hinge assemblies that couple respective temples to a frame of the eyewear device.

IPC Classes  ?

  • A45C 11/04 - Spectacle cases; Pince-nez cases
  • H02J 7/00 - Circuit arrangements for charging or depolarising batteries or for supplying loads from batteries
  • H02J 7/34 - Parallel operation in networks using both storage and other dc sources, e.g. providing buffering