Adobe Inc.

United States of America


1-100 of 6,967 results for Adobe Inc. and 1 subsidiary

Refine by
IP Type
        Patent 6,420
        Trademark 547
Jurisdiction
        United States 6,590
        Europe 160
        International 126
        Canada 91
Owner / Subsidiary
Adobe Inc. 6,764
Adobe Systems Incorporated 203
Date
New (last 4 weeks) 64
June 2024 10
May 2024 88
April 2024 61
March 2024 24
IPC Class
G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints 547
G06K 9/62 - Methods or arrangements for recognition using electronic means 547
G06N 3/08 - Learning methods 475
G06F 17/30 - Information retrieval; Database structures therefor 461
G06T 11/60 - Editing figures and text; Combining figures or text 424
Nice Class
09 - Scientific and electrical apparatus and instruments 404
42 - Scientific, technological and industrial services, research and design 266
35 - Advertising; Business affairs 102
16 - Paper, cardboard and goods made from these materials 91
41 - Education, entertainment, sporting and cultural activities 65
Status
Pending 721
Registered / In force 6,246

1.

SURFACE NORMAL PREDICTION USING PAIR-WISE ANGULAR TRAINING

      
Application number 18076855
Status Pending
Filing date 2022-12-07
Date of first publication 2024-06-13
Owner ADOBE INC. (USA)
Inventor(s) Zhang, Jianming

Abstract

A surface normal model is trained to predict normal maps from single images using pair-wise angular losses. A training dataset comprising a training image and a ground truth normal map for the training image is received. To train the surface normal model using the training dataset, a predicted normal map is generated for the training image using the surface normal model. A loss is determined as a function of angular values between pairs of normal vectors for the predicted normal map and corresponding angular values between pairs of normal vectors for the ground truth normal map. The surface normal model is updated based on the loss.
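The pair-wise angular loss described above can be sketched in a few lines. The following is a minimal stdlib-only illustration (not Adobe's implementation), treating each normal map as a flat list of unit vectors:

```python
import math
from itertools import combinations

def angle_between(u, v):
    """Angle in radians between two unit-length normal vectors."""
    dot = max(-1.0, min(1.0, sum(a * b for a, b in zip(u, v))))
    return math.acos(dot)

def pairwise_angular_loss(pred, gt):
    """Mean squared difference between the angle of each pair of predicted
    normals and the angle of the corresponding ground-truth pair."""
    diffs = [
        (angle_between(pred[i], pred[j]) - angle_between(gt[i], gt[j])) ** 2
        for i, j in combinations(range(len(pred)), 2)
    ]
    return sum(diffs) / len(diffs)
```

Because only relative angles enter the loss, a prediction that is a global rotation of the ground truth incurs zero loss, which is the property a pair-wise formulation exploits.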

IPC Classes

  • G06T 7/60 - Analysis of geometric attributes
  • G06T 7/73 - Determining position or orientation of objects or cameras using feature-based methods

2.

EDITING NEURAL RADIANCE FIELDS WITH NEURAL BASIS DECOMPOSITION

      
Application number 18065456
Status Pending
Filing date 2022-12-13
Date of first publication 2024-06-13
Owner Adobe Inc. (USA)
Inventor(s)
  • Kuang, Zhengfei
  • Luan, Fujun
  • Bi, Sai
  • Shu, Zhixin
  • Sunkavalli, Kalyan K.

Abstract

Embodiments of the present disclosure provide systems, methods, and computer storage media for generating editable synthesized views of scenes by inputting image rays into neural networks using neural basis decomposition. In embodiments, a set of input images of a scene depicting at least one object are collected and used to generate a plurality of rays of the scene. The rays each correspond to three-dimensional coordinates and viewing angles taken from the images. A volume density of the scene is determined by inputting the three-dimensional coordinates from the neural radiance fields into a first neural network to generate a 3D geometric representation of the object. An appearance decomposition is produced by inputting the three-dimensional coordinates and the viewing angles of the rays into a second neural network.

IPC Classes

  • G06T 15/08 - Volume rendering
  • G06T 15/06 - Ray-tracing
  • G06T 15/50 - Lighting effects
  • G06T 15/80 - Shading
  • G06T 19/20 - Manipulating 3D models or images for computer graphics; Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts

3.

ANIMATED DISPLAY CHARACTERISTIC CONTROL FOR DIGITAL IMAGES

      
Application number 18064750
Status Pending
Filing date 2022-12-12
Date of first publication 2024-06-13
Owner Adobe Inc. (USA)
Inventor(s)
  • Gammon, Christopher James
  • Kroupa, Brandon

Abstract

Animated display characteristic control for digital images is described. In an implementation, a control is animated in a user interface as progressing through a plurality of values of a display characteristic. A digital image is displayed in the user interface as progressing through the plurality of values of the display characteristic as specified by the animating of the control. An input is received via the user interface and a particular value of the plurality of values is detected as indicated by the animating of the control. The digital image is displayed as having the particular value of the display characteristic as set by the input.

IPC Classes

  • G06T 13/80 - 2D [Two Dimensional] animation, e.g. using sprites
  • G06F 3/04845 - Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or transforming a displayed object, image or text element, setting a parameter value or selecting a range of values, for image manipulation, e.g. dragging, rotation, expansion or change of colour
  • G06F 3/04847 - Interaction techniques to control parameter settings, e.g. interaction with sliders or dials
  • G06F 3/04883 - Interaction techniques based on graphical user interfaces [GUI] using a touch-screen or digitiser, e.g. input of commands through traced gestures, for inputting data by handwriting, e.g. gestures or text

4.

PARAMETRIC COMPOSITE IMAGE HARMONIZATION

      
Application number 18076868
Status Pending
Filing date 2022-12-07
Date of first publication 2024-06-13
Owner ADOBE INC. (USA)
Inventor(s)
  • Gharbi, Michael Yanis
  • Xia, Zhihao
  • Shechtman, Elya
  • Wang, Ke
  • Zhang, He

Abstract

An image processing system employs a parametric model for image harmonization of composite images. The parametric model employs a two-stage approach to harmonize an input composite image. At a first stage, a color curve prediction model predicts color curve parameters for the composite image. At a second stage, the composite image with the color curve parameters is input to a shadow map prediction model, which predicts a shadow map. The predicted color curve parameters and shadow map are applied to the composite image to provide a harmonized composite image. In some aspects, the color curve parameters and shadow map are predicted using a lower-resolution composite image and up-sampled to apply to a higher-resolution version of the composite image. The harmonized composite image can be output with the predicted color curve parameters and/or shadow map, which can be modified by a user to further enhance the harmonized composite image.
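A rough sketch of the two-stage idea follows, assuming gamma-style color curves, a multiplicative shadow map, and nearest-neighbour up-sampling (the abstract does not specify the actual parametric forms):

```python
def upsample_nearest(low_res_map, factor):
    """Nearest-neighbour upsampling of a low-resolution parameter map
    so it can be applied to a higher-resolution composite."""
    rows = []
    for row in low_res_map:
        wide = [v for v in row for _ in range(factor)]
        rows.extend([list(wide) for _ in range(factor)])
    return rows

def harmonize_pixel(rgb, curve_gammas, shadow):
    """Stage 1: per-channel color curve (here a simple gamma curve).
    Stage 2: a multiplicative shadow-map value darkens the result."""
    curved = [c ** g for c, g in zip(rgb, curve_gammas)]
    return [c * shadow for c in curved]
```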

IPC Classes

  • G06T 3/40 - Scaling of a whole image or part of an image
  • G06T 5/00 - Image enhancement or restoration
  • G06T 7/90 - Determination of colour characteristics
  • G06T 11/00 - 2D [Two Dimensional] image generation

5.

GENERATION OF A 360-DEGREE OBJECT VIEW BY LEVERAGING AVAILABLE IMAGES ON AN ONLINE PLATFORM

      
Application number 18079579
Status Pending
Filing date 2022-12-12
Date of first publication 2024-06-13
Owner Adobe Inc. (USA)
Inventor(s)
  • Singhal, Gourav
  • Das, Tridib
  • Gupta, Sourabh

Abstract

In some embodiments, a computing system generates a 360-degree view of a target object based on available images from an online platform. The computing system identifies a plurality of images having the same target object from one or more image sources on the online platform. The computing system categorizes the plurality of images into multiple view categories. The computing system then determines a representative image for each view category and generates a processed object image of the target object from the representative image for each view category. The computing system then stitches multiple processed object images to create a 360-degree view of the target object.
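A minimal sketch of the view-categorization step, assuming each candidate image carries an estimated camera yaw and a quality score (hypothetical metadata; the abstract does not name the features used):

```python
def view_category(yaw_degrees, num_views=8):
    """Bucket a camera yaw into one of num_views equal angular sectors."""
    return int((yaw_degrees % 360) // (360 / num_views))

def representative_per_view(images, num_views=8):
    """images: list of (yaw_degrees, quality) pairs. Keep the
    highest-quality image in each view category; stitching these
    representatives in sector order would yield the 360-degree view."""
    best = {}
    for yaw, quality in images:
        cat = view_category(yaw, num_views)
        if cat not in best or quality > best[cat][1]:
            best[cat] = (yaw, quality)
    return best
```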

IPC Classes

  • G06T 15/20 - Perspective computation
  • G06T 3/40 - Scaling of a whole image or part of an image
  • G06T 5/00 - Image enhancement or restoration
  • G06T 7/00 - Image analysis
  • G06V 10/10 - Image acquisition
  • G06V 10/75 - Image or video pattern matching; Proximity measures in feature spaces using context analysis; Selection of dictionaries
  • G06V 10/82 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
  • G06V 20/50 - Context or environment of the image

6.

DISPLACEMENT-CENTRIC ACCELERATION FOR RAY TRACING

      
Application number 18439182
Status Pending
Filing date 2024-02-12
Date of first publication 2024-06-06
Owner Adobe Inc. (USA)
Inventor(s)
  • Thonat, Theo
  • Sun, Xin
  • Boubekeur, Tamy
  • Carr, Nathan
  • Beaune, Francois

Abstract

Aspects and features of the present disclosure provide a direct ray tracing operator with a low memory footprint for surfaces enriched with displacement maps. A graphics editing application can be used to manipulate displayed representations of a 3D object that include surfaces with displacement textures. The application creates an independent map of a displaced surface. The application ray-traces bounding volumes on the fly and uses the intersection of a query ray with a bounding volume to produce rendering information for a displaced surface. The rendering information can be used to generate displaced surfaces for various base surfaces without significant re-computation so that updated images can be rendered quickly, in real time or near real time.

IPC Classes

7.

Job Modification To Present A User Interface Based On A User Interface Update Rate

      
Application number 18442995
Status Pending
Filing date 2024-02-15
Date of first publication 2024-06-06
Owner Adobe Inc. (USA)
Inventor(s)
  • Jain, Mayuri
  • Mukul, Reetesh

Abstract

A job scheduling system determines a rate at which a user is providing user inputs to a user interface of a computing device. A set of jobs that is to be performed to display or otherwise present a current view of the user interface is identified in response to a user input. This set of jobs is modified by excluding from the set of jobs at least one job that is not estimated to run prior to the next user input. The user interface is displayed or otherwise presented as the modified set of jobs is performed.
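The scheduling decision can be sketched as a simple budget check, assuming each job carries a runtime estimate (producing that estimate is the system's real work; the names here are illustrative):

```python
def time_to_next_input(inputs_per_second):
    """Expected gap before the next user input, from the observed rate."""
    return 1.0 / inputs_per_second

def modify_job_set(jobs, budget_seconds):
    """jobs: (name, estimated_runtime) pairs in priority order. Exclude
    any job not estimated to finish before the next user input arrives."""
    kept = []
    remaining = budget_seconds
    for name, runtime in jobs:
        if runtime <= remaining:
            kept.append(name)
            remaining -= runtime
    return kept
```

With a fast input stream the low-priority, expensive jobs are skipped; when the user slows down, the full job set runs.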

IPC Classes

  • G06F 9/451 - Execution arrangements for user interfaces
  • G06F 3/04817 - Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance, using icons
  • G06F 3/0482 - Interaction with lists of selectable items, e.g. menus
  • G06F 3/0485 - Scrolling or panning
  • G06F 9/30 - Arrangements for executing machine instructions, e.g. instruction decode
  • G06F 9/38 - Concurrent instruction execution
  • G06F 9/48 - Program initiating; Program switching, e.g. by interrupt

8.

EFFICIENT RENDERING OF CLIPPING OBJECTS

      
Application number 18058120
Status Pending
Filing date 2022-11-22
Date of first publication 2024-06-06
Owner Adobe Inc. (USA)
Inventor(s)
  • Kumar, Harish
  • Kumar, Apurva

Abstract

In implementations of systems for efficient rendering of clipping objects, a computing device implements a clipping system to generate a clipping tree that includes a root node and a node for each clipping group included in a layer of an input render tree. The clipping system generates a segment buffer having rows that each represent coverage of a branch of the clipping tree and columns that each represent coverage of a level of the clipping tree. The segment buffer is mapped to a two-dimensional array, and the clipping system computes coverage for a clipping object of a clipping group included in the layer of the input render tree based on an identifier of a row of the two-dimensional array.

IPC Classes

9.

FINE-TUNING AND CONTROLLING DIFFUSION MODELS

      
Application number 18062314
Status Pending
Filing date 2022-12-06
Date of first publication 2024-06-06
Owner ADOBE INC. (USA)
Inventor(s)
  • Kumari, Nupur
  • Zhang, Richard
  • Zhu, Junyan
  • Shechtman, Elya

Abstract

Systems and methods for fine-tuning diffusion models are described. Embodiments of the present disclosure obtain an input text indicating an element to be included in an image; generate a synthetic image depicting the element based on the input text using a diffusion model trained by comparing synthetic images depicting the element to training images depicting elements similar to the element and updating selected parameters corresponding to an attention layer of the diffusion model based on the comparison.
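The "selected parameters" idea, updating only attention-layer weights while the rest of the diffusion model stays frozen, can be sketched with a plain parameter dictionary (illustrative names, not the actual model):

```python
def split_trainable(params):
    """Partition a name->weights mapping: only attention-layer weights
    are selected for fine-tuning; everything else stays frozen."""
    trainable = {k: v for k, v in params.items() if "attn" in k}
    frozen = {k: v for k, v in params.items() if "attn" not in k}
    return trainable, frozen
```

Restricting the update to a small, targeted parameter subset is what makes this style of fine-tuning cheap relative to retraining the whole model.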

IPC Classes

  • G06V 10/778 - Active pattern-learning, e.g. online learning of image or video features
  • G06V 10/75 - Image or video pattern matching; Proximity measures in feature spaces using context analysis; Selection of dictionaries
  • G06V 10/774 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation

10.

LEARNING PARAMETERS FOR AN IMAGE HARMONIZATION NEURAL NETWORK TO GENERATE DEEP HARMONIZED DIGITAL IMAGES

      
Application number 18440248
Status Pending
Filing date 2024-02-13
Date of first publication 2024-06-06
Owner Adobe Inc. (USA)
Inventor(s)
  • Zhang, He
  • Jiang, Yifan
  • Wang, Yilin
  • Zhang, Jianming
  • Sunkavalli, Kalyan
  • Kong, Sarah
  • Chen, Su
  • Amirghodsi, Sohrab
  • Lin, Zhe

Abstract

The present disclosure relates to systems, methods, and non-transitory computer readable media for accurately, efficiently, and flexibly generating harmonized digital images utilizing a self-supervised image harmonization neural network. In particular, the disclosed systems can implement, and learn parameters for, a self-supervised image harmonization neural network to extract content from one digital image (disentangled from its appearance) and appearance from another digital image (disentangled from its content). For example, the disclosed systems can utilize a dual data augmentation method to generate diverse triplets for parameter learning (including input digital images, reference digital images, and pseudo ground truth digital images), via cropping a digital image with perturbations using three-dimensional color lookup tables (“LUTs”). Additionally, the disclosed systems can utilize the self-supervised image harmonization neural network to generate harmonized digital images that depict content from one digital image having the appearance of another digital image.

IPC Classes

  • G06T 5/50 - Image enhancement or restoration using two or more images, e.g. averaging or subtraction
  • G06N 3/04 - Architecture, e.g. interconnection topology
  • G06N 3/08 - Learning methods
  • G06T 7/194 - Segmentation; Edge detection involving foreground-background segmentation
  • G06T 11/00 - 2D [Two Dimensional] image generation
  • G06T 11/60 - Editing figures and text; Combining figures or text

11.

LEARNING 2D TEXTURE MAPPING IN VOLUMETRIC NEURAL RENDERING

      
Application number 18426084
Status Pending
Filing date 2024-01-29
Date of first publication 2024-05-30
Owner Adobe Inc. (USA)
Inventor(s)
  • Xu, Zexiang
  • Hold-Geoffroy, Yannick
  • Hasan, Milos
  • Sunkavalli, Kalyan
  • Xiang, Fanbo

Abstract

Embodiments are disclosed for neural texture mapping. In some embodiments, a method of neural texture mapping includes obtaining a plurality of images of an object, determining a volumetric representation of a scene of the object using a first neural network, mapping 3D points of the scene to a 2D texture space using a second neural network, and determining radiance values for each 2D point in the 2D texture space from a plurality of viewpoints using a third neural network to generate a 3D appearance representation of the object.

IPC Classes

  • G06T 15/04 - Texture mapping
  • G06N 3/045 - Combinations of networks
  • G06N 3/08 - Learning methods
  • G06T 15/20 - Perspective computation
  • G06T 19/20 - Manipulating 3D models or images for computer graphics; Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts

12.

S

      
Serial number 98574510
Status Pending
Filing date 2024-05-29
Owner Adobe Inc.
Nice classes 09 - Scientific and electrical apparatus and instruments

Goods and services

Downloadable computer graphics software; downloadable software for creating, processing, editing, manipulating, and designing images, graphics, and text; downloadable computer-aided design software for creating, processing, editing, manipulating, and designing images, graphics, and text; downloadable software for digitizing photographic or cartoon images; generation and modification software for textures, particularly procedural textures, namely, downloadable software for generating and modifying procedural textures; downloadable software for the 3D modeling and creation of computer-generated graphics; downloadable computer software for designing animated films; downloadable software for creating digital animations; downloadable software for creating digital lighting; downloadable software for creating digital netting; downloadable software for image bank management; downloadable software for creating and planning 2D and 3D images, graphics, and text; Downloadable design software for creating, processing, editing, manipulating, and designing industrial designs; downloadable software for design of architectures, interiors, and interior layouts; downloadable computer software for planning architectural and interior design layouts; downloadable software for creating, processing, editing, manipulating, and designing 3D space layout; downloadable fashion design software for graphic design of fashion, planning and designing clothing; downloadable instruction books and user manuals for computer software; downloadable graphic art reproductions; downloadable graphic design templates; downloadable graphics featuring 3D models, textures, digital materials, digital lights, for use in digital graphics, videos, movies, video games, and digital materials

13.

S

      
Serial number 98574890
Status Pending
Filing date 2024-05-29
Owner Adobe Inc.
Nice classes 42 - Scientific, technological and industrial services, research and design

Goods and services

Software as a service (SaaS) featuring software for use in database management in the field of computer graphics, digital imaging; Rental of computer software for image modeling, editing, and creation of computer-generated graphics, document and image digitization, and providing design services; Electronic data storage; software as a service (SaaS), namely, software for generating and modifying textures in the field of graphics, animation, and computer games; Software as a service (SaaS) services for the 3D modeling and creation of computer-generated graphics in the field of graphics, animation, and computer games; Electronic storage of digital images, graphics and text; Software as a service (SAAS) featuring computer software platforms for downloading computer programs via a global computer network; Software as a service (SAAS) featuring collaborative software platforms for managing digital assets, sharing and downloading of digital and computer files, including but not limited to photos, videos and computer-generated graphics; providing temporary use of non-downloadable computer graphics software; providing temporary use of non-downloadable software for creating, processing, editing, manipulating, and designing images, graphics, and text; providing temporary use of non-downloadable computer-aided design software for creating, processing, editing, manipulating, and designing images, graphics, and text; providing temporary use of non-downloadable software for generating and modifying procedural textures; providing temporary use of non-downloadable software for creating and planning 2D and 3D images, graphics, and text; providing temporary use of non-downloadable design software for creating, processing, editing, manipulating, and designing industrial designs; providing temporary use of non-downloadable software for design of architectures, interiors, and interior layouts; providing temporary use of non-downloadable computer software for planning architectural and interior design layouts; providing temporary use of non-downloadable software for creating, processing, editing, manipulating, and designing 3D space layout; providing temporary use of non-downloadable fashion design software for graphic design of fashion, planning and designing clothing

14.

Language-guided document editing

      
Application number 18165579
Patent number 11995394
Status Granted - in force
Filing date 2023-02-07
Date of first publication 2024-05-28
Grant date 2024-05-28
Owner ADOBE INC. (USA)
Inventor(s)
  • Morariu, Vlad Ion
  • Mathur, Puneet
  • Jain, Rajiv Bhawanji
  • Gu, Jiuxiang
  • Dernoncourt, Franck

Abstract

Systems and methods for document editing are provided. One aspect of the systems and methods includes obtaining a document and a natural language edit request. Another aspect of the systems and methods includes generating a structured edit command using a machine learning model based on the document and the natural language edit request. Yet another aspect of the systems and methods includes generating a modified document based on the document and the structured edit command, where the modified document includes a revision of the document that incorporates the natural language edit request.
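The pipeline (natural-language request → structured edit command → modified document) can be sketched as follows; a regex parser stands in for the machine learning model, and the command schema is an assumption:

```python
import re

def parse_edit_request(request):
    """Toy stand-in for the ML model: map one request pattern to a
    structured edit command."""
    m = re.match(r"replace '(.+)' with '(.+)'", request)
    if not m:
        raise ValueError("unsupported request")
    return {"action": "replace", "target": m.group(1), "value": m.group(2)}

def apply_edit(document, command):
    """Apply a structured edit command to the document text."""
    if command["action"] == "replace":
        return document.replace(command["target"], command["value"])
    raise ValueError("unknown action: " + command["action"])
```

The value of the intermediate structured command is that the edit becomes deterministic and auditable, even when the parser in front of it is a learned model.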

IPC Classes

  • G06F 40/166 - Text processing; Editing, e.g. insertion or deletion
  • G06F 3/16 - Sound input; Sound output
  • G06F 40/284 - Lexical analysis, e.g. tokenisation or collocates
  • G06N 20/00 - Machine learning
  • G10L 15/22 - Procedures used during a speech recognition process, e.g. man-machine dialogue
  • G10L 15/26 - Speech-to-text systems

15.

MOVING OBJECTS CASTING A SHADOW AND GENERATING PROXY SHADOWS WITHIN A DIGITAL IMAGE

      
Application number 18460150
Status Pending
Filing date 2023-09-01
Date of first publication 2024-05-23
Owner Adobe Inc. (USA)
Inventor(s)
  • Kim, Soo Ye
  • Lin, Zhe
  • Cohen, Scott
  • Zhang, Jianming
  • Figueroa, Luis
  • Ding, Zhihong

Abstract

The present disclosure relates to systems, methods, and non-transitory computer-readable media that provide a graphical user interface experience to move objects and generate new shadows within a digital image scene. For instance, in one or more embodiments, the disclosed systems receive a digital image depicting a scene. The disclosed systems receive a selection to position an object in a first location within the scene. Further, the disclosed systems composite an image by placing the object at the first location within the scene of the digital image. Moreover, the disclosed systems generate a modified digital image having a shadow of the object, where the shadow is consistent with the scene, and provide the modified digital image to the client device.

IPC Classes

  • G06T 11/60 - Editing figures and text; Combining figures or text
  • G06F 3/0481 - Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
  • G06F 3/04845 - Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or transforming a displayed object, image or text element, setting a parameter value or selecting a range of values, for image manipulation, e.g. dragging, rotation, expansion or change of colour
  • G06F 3/0486 - Drag-and-drop
  • G06T 5/00 - Image enhancement or restoration
  • G06T 11/00 - 2D [Two Dimensional] image generation

16.

PREDICTIVE AGENTS FOR MULTI-ROUND CONVERSATIONAL RECOMMENDATIONS OF BUNDLED ITEMS

      
Application number 17980790
Status Pending
Filing date 2022-11-04
Date of first publication 2024-05-23
Owner Adobe Inc. (USA)
Inventor(s)
  • Zhao, Handong
  • He, Zhankui
  • Yu, Tong
  • Du, Fan
  • Kim, Sungchul

Abstract

Techniques for predicting and recommending item bundles in a multi-round conversation to discover a target item bundle that would be accepted by a client. An example method includes receiving an input response in reply to a first item bundle that includes one or more items. A state model is updated to reflect the input response to the first item bundle. A machine-learning (ML) conversation module is applied to the state model to determine an action type as a follow-up to the input response to the first item bundle. Based on selection of a recommendation action as the action type, an ML bundling module is applied to the state model to generate a second item bundle different than the first item bundle. The second item bundle is then recommended.
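The multi-round loop can be sketched as a state that accumulates client responses, with the conversation module choosing between asking again and recommending (the threshold and bundle policy here are illustrative toys; the real modules are learned):

```python
def update_state(state, response):
    """Record the client's response to the last recommended bundle."""
    state = dict(state)
    state["history"] = state["history"] + [response]
    return state

def choose_action(state):
    """Toy conversation policy: recommend a new bundle once two rounds
    of feedback have been gathered, otherwise keep asking."""
    return "recommend" if len(state["history"]) >= 2 else "ask"

def next_bundle(catalog, rejected):
    """Toy bundling policy: first catalog bundle not yet rejected."""
    for bundle in catalog:
        if bundle not in rejected:
            return bundle
    return None
```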

IPC Classes

  • G06Q 30/06 - Buying, selling or leasing transactions

17.

MODELING INTERFACES WITH PROGRESSIVE CLOTH SIMULATION

      
Application number 17989239
Status Pending
Filing date 2022-11-17
Date of first publication 2024-05-23
Owner Adobe Inc. (USA)
Inventor(s)
  • Kaufman, Daniel
  • Fei, Yun
  • Dumas, Jeremie
  • Jacobson, Alec
  • Zhang, Jiayi

Abstract

A system accesses a virtual scene including a three-dimensional (3D) object and a mesh object that models a cloth object. The system performs a refinement simulation to model a drape of the cloth object over the 3D object. Performing the refinement simulation includes, for each of a sequence of mesh resolutions: determining a configuration of the mesh model that minimizes a proxy energy function of a finest mesh resolution of the sequence of mesh resolutions. The system generates, for display via a user interface during the refinement simulation, an editable preview object comprising the mesh object at a coarsest level mesh resolution. The system receives a modification to the editable preview object and displays the modified editable preview object. A configuration of a finest level mesh resolution of the mesh object in the refinement simulation is geometrically consistent with a configuration of the modified editable preview object.

IPC Classes

  • G06T 17/20 - Wire-frame description, e.g. polygonalisation or tessellation

18.

NEURAL NETWORKS TO RENDER TEXTURED MATERIALS ON CURVED SURFACES

      
Application number 17993854
Status Pending
Filing date 2022-11-23
Date of first publication 2024-05-23
Owner
  • Adobe Inc. (USA)
  • The Regents of the University of California (USA)
Inventor(s)
  • Lakshminarayana, Krishna Bhargava Mullia
  • Xu, Zexiang
  • Hasan, Milos
  • Luan, Fujun
  • Kuznetsov, Alexandr
  • Wang, Xuezheng
  • Ramamoorthi, Ravi

Abstract

A scene modeling system accesses a three-dimensional (3D) scene including a 3D object. The scene modeling system applies a silhouette bidirectional texture function (SBTF) model to the 3D object to generate an output image of a textured material rendered as a surface of the 3D object. Applying the SBTF model includes determining a bounding geometry for the surface of the 3D object. Applying the SBTF model includes determining, for each pixel of the output image, a pixel value based on the bounding geometry. The scene modeling system displays, via a user interface, the output image based on the determined pixel values.

IPC Classes

19.

PRODUCT OF VARIATIONS IN IMAGE GENERATIVE MODELS

      
Application number 18056579
Status Pending
Filing date 2022-11-17
Date of first publication 2024-05-23
Owner ADOBE INC. (USA)
Inventor(s)
  • Nitzan, Yotam
  • Park, Taesung
  • Gharbi, Michaël
  • Zhang, Richard
  • Zhu, Junyan
  • Shechtman, Elya

Abstract

Systems and methods for image generation include obtaining an input image and an attribute value representing an attribute of the input image to be modified; computing a modified latent vector for the input image by applying the attribute value to a basis vector corresponding to the attribute in a latent space of an image generation network; and generating a modified image based on the modified latent vector using the image generation network, wherein the modified image includes the attribute based on the attribute value.
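The core edit is a linear move in the generator's latent space. A minimal sketch with plain lists follows (in practice the basis vector is learned per attribute and the modified vector is decoded by the image generation network):

```python
def edit_latent(z, basis, alpha):
    """Shift latent code z along the attribute's basis direction by the
    requested attribute value alpha; the generator then decodes the
    result into the modified image."""
    return [zi + alpha * bi for zi, bi in zip(z, basis)]
```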

IPC Classes

  • G06T 11/60 - Editing figures and text; Combining figures or text
  • G06V 10/774 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation

20.

WARPING ARTWORK TO PERSPECTIVE CYLINDERS WITH BICUBIC PATCHES

      
Application number 18057374
Status Pending
Filing date 2022-11-21
Date of first publication 2024-05-23
Owner Adobe Inc. (USA)
Inventor(s) Peterson, John

Abstract

Embodiments are disclosed for warping artwork. The artwork is warped to fit an existing image of a cylinder. A method of warping artwork may include receiving a request to wrap an image onto a cylindrical surface. In response to the request, a set of adjacent warping patches are generated. The set of adjacent warping patches includes a first zero-width patch corresponding to a left edge of the image and a second zero-width patch corresponding to a right edge of the image. The image is mapped to the set of adjacent warping patches based on a viewing perspective of the cylindrical surface. User control handles are provided to adjust the warp to fit the existing image of the cylinder, including its position, curvature, and perspective.

IPC Classes

  • G06T 3/00 - Geometric image transformations in the plane of the image
  • G06T 15/04 - Texture mapping
  • G06T 17/30 - Surface description, e.g. polynomial surface description
  • G06T 19/20 - Transformation of 3D models or images for computer graphics; Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts

21.

MULTI-MODAL IMAGE GENERATION

      
Application number 18057857
Status Pending
Filing date 2022-11-22
First publication date 2024-05-23
Owner ADOBE INC. (USA)
Inventor(s)
  • Zeng, Yu
  • Lin, Zhe
  • Zhang, Jianming
  • Liu, Qing
  • Kuen, Jason Wen Yong
  • Collomosse, John Philip

Abstract

Systems and methods for multi-modal image generation are provided. One or more aspects of the systems and methods includes obtaining a text prompt and layout information indicating a target location for an element of the text prompt within an image to be generated and computing a text feature map including a plurality of values corresponding to the element of the text prompt at pixel locations corresponding to the target location. Then the image is generated based on the text feature map using a diffusion model. The generated image includes the element of the text prompt at the target location.
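Computing a text feature map as described, placing an element's encoding at the pixels of its target location, can be sketched as follows; the toy embedding and box coordinates are illustrative stand-ins for the encoded prompt element and the layout information:

```python
import numpy as np

def text_feature_map(h, w, embed, box):
    """Place a text element's embedding at the pixels of its target box.

    'embed' stands in for the encoded prompt element; 'box' is the
    (top, left, bottom, right) layout information for that element.
    """
    fmap = np.zeros((h, w, embed.shape[0]))
    t, l, b, r = box
    fmap[t:b, l:r] = embed        # broadcast the element feature into the region
    return fmap

emb = np.array([1.0, -2.0, 0.5])             # toy embedding for one prompt element
fmap = text_feature_map(8, 8, emb, (2, 2, 5, 6))
```

The resulting map would be fed to the diffusion model as spatial conditioning alongside the prompt.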

IPC Classes

  • G06T 11/60 - Editing figures and text; Combining figures or text
  • G06F 40/295 - Named entity recognition
  • G06T 7/11 - Region-based segmentation
  • G06V 10/774 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation; Bootstrap methods, e.g. bagging or boosting
  • G06V 10/776 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation; Performance evaluation

22.

TIME-SERIES ANOMALY DETECTION

      
Application number 18057883
Status Pending
Filing date 2022-11-22
First publication date 2024-05-23
Owner Adobe Inc. (USA)
Inventor(s)
  • Zhang, Wei
  • Arbour, David Thomas

Abstract

In implementations of systems for time-series anomaly detection, a computing device implements an anomaly system to receive, via a network, time-series data describing continuously observed values separated by a period of time. The anomaly system computes updated estimated parameters of a predictive model for the time-series data by performing a rank one update on previously estimated parameters of the predictive model. An uncertainty interval for a future observed value is generated using the predictive model with the updated estimated parameters. The anomaly system determines that an observed value corresponding to the future observed value is outside of the uncertainty interval. An indication is generated that the observed value is an anomaly.
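A rank-one parameter update of this kind can be realized with recursive least squares via the Sherman-Morrison identity. The sketch below fits a toy AR(1) predictive model and flags observations outside an uncertainty interval; the AR order, prior scale, and interval width `k` are assumptions for illustration, not the patent's choices:

```python
import numpy as np

class RankOneAR:
    """Tiny recursive least-squares AR(1) model: parameters are refreshed
    with a rank-one (Sherman-Morrison) update instead of being refit from
    scratch, and an uncertainty interval flags anomalous observations."""

    def __init__(self, dim=2):
        self.w = np.zeros(dim)          # [bias, AR coefficient]
        self.P = np.eye(dim) * 1e3      # running inverse of the design matrix
        self.resid2, self.n = 0.0, 0

    def update(self, x, y):
        Px = self.P @ x
        self.P -= np.outer(Px, Px) / (1.0 + x @ Px)   # rank-one update
        err = y - x @ self.w
        self.w += self.P @ x * err
        self.resid2 += err * err
        self.n += 1

    def interval(self, x, k=4.0):
        sigma = (self.resid2 / max(self.n, 1)) ** 0.5
        mu = x @ self.w
        return mu - k * sigma, mu + k * sigma

model = RankOneAR()
prev = 0.0
for _ in range(200):                    # well-behaved series y = 0.5 + 0.9 * prev
    y = 0.5 + 0.9 * prev
    model.update(np.array([1.0, prev]), y)
    prev = y

lo, hi = model.interval(np.array([1.0, prev]))
expected = 0.5 + 0.9 * prev
is_anomaly = not (lo <= expected + 50.0 <= hi)   # a +50 spike falls outside
```

Each new observation costs O(dim²) to absorb, which is what makes continuously arriving time-series data cheap to track.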

IPC Classes

23.

AFFORDANCE-BASED REPOSING OF AN OBJECT IN A SCENE

      
Application number 18058528
Status Pending
Filing date 2022-11-23
First publication date 2024-05-23
Owner ADOBE INC. (USA)
Inventor(s)
  • Kulal, Sumith
  • Singh, Krishna Kumar
  • Yang, Jimei
  • Lu, Jingwan
  • Efros, Alexei

Abstract

Systems and methods for inserting an object into a background are described. Examples of the systems and methods include obtaining a background image including a region for inserting the object, and encoding the background image to obtain an encoded background. A modified image is then generated based on the encoded background using a diffusion model. The modified image depicts the object within the region.

IPC Classes

  • G06V 10/774 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation; Bootstrap methods, e.g. bagging or boosting
  • G06T 5/50 - Image enhancement or restoration using two or more images, e.g. averaging or subtraction
  • G06T 7/194 - Segmentation; Edge detection involving foreground-background segmentation
  • G06T 7/70 - Determining position or orientation of objects or cameras

24.

REMOVING DISTRACTING OBJECTS FROM DIGITAL IMAGES

      
Application number 18058554
Status Pending
Filing date 2022-11-23
First publication date 2024-05-23
Owner Adobe Inc. (USA)
Inventor(s)
  • Figueroa, Luis
  • Ding, Zhihong
  • Cohen, Scott
  • Lin, Zhe
  • Liu, Qing

Abstract

The present disclosure relates to systems, methods, and non-transitory computer-readable media that modify digital images via scene-based editing using image understanding facilitated by artificial intelligence. For instance, in one or more embodiments, the disclosed systems provide, for display within a graphical user interface of a client device, a digital image displaying a plurality of objects, the plurality of objects comprising a plurality of different types of objects. The disclosed systems generate, utilizing a segmentation neural network and without user input, an object mask for objects of the plurality of objects. The disclosed systems determine, utilizing a distractor detection neural network, a classification for the objects of the plurality of objects. The disclosed systems remove at least one object from the digital image, based on classifying the at least one object as a distracting object, by deleting the object mask for the at least one object.

IPC Classes

  • H04N 23/63 - Control of cameras or camera modules using electronic viewfinders
  • G06V 10/26 - Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
  • G06V 10/764 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
  • G06V 10/82 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
  • G06V 10/94 - Hardware or software architectures specially adapted for image or video understanding
  • H04N 5/262 - Studio circuits, e.g. for mixing, switching-over, change of character of image, for other special effects

25.

DETECTING OBJECT RELATIONSHIPS AND EDITING DIGITAL IMAGES BASED ON THE OBJECT RELATIONSHIPS

      
Application number 18058630
Status Pending
Filing date 2022-11-23
First publication date 2024-05-23
Owner Adobe Inc. (USA)
Inventor(s)
  • Cohen, Scott
  • Lin, Zhe
  • Ding, Zhihong
  • Figueroa, Luis
  • Kafle, Kushal

Abstract

The present disclosure relates to systems, methods, and non-transitory computer-readable media that modify digital images via scene-based editing using image understanding facilitated by artificial intelligence. For instance, in one or more embodiments, the disclosed systems detect, via a graphical user interface of a client device, a user selection of an object portrayed within a digital image. The disclosed systems determine, in response to detecting the user selection of the object, a relationship between the object and an additional object portrayed within the digital image. The disclosed systems receive one or more user interactions for modifying the object. The disclosed systems modify the digital image in response to the one or more user interactions by modifying the object and the additional object based on the relationship between the object and the additional object.

IPC Classes

  • G06T 5/00 - Image enhancement or restoration
  • G06F 3/04842 - Selection of displayed objects or displayed text elements
  • G06F 3/04845 - Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range, for image manipulation, e.g. dragging, rotation, expansion or change of colour
  • G06T 3/20 - Linear translation of whole images or parts thereof, e.g. panning
  • G06V 10/70 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning
  • G06V 10/86 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using graph matching

26.

ESTIMATING TEMPORAL OCCURRENCE OF A BINARY STATE CHANGE

      
Application number 17989362
Status Pending
Filing date 2022-11-17
First publication date 2024-05-23
Owner Adobe Inc. (USA)
Inventor(s)
  • Zhang, Luwan
  • Yan, Zhenyu
  • He, Jun
  • Yang, Hsiang-Yu
  • Zhong, Cheng

Abstract

In implementations of systems for estimating temporal occurrence of a binary state change, a computing device implements an occurrence system to compute a posterior probability distribution for temporal occurrences of binary state changes associated with client computing devices included in a group of client computing devices. The occurrence system determines probabilities of a binary state change associated with a client computing device included in the group of client computing devices based on the posterior probability distribution, and the probabilities correspond to future periods of time. A future period of time is identified based on a probability of the binary state change associated with the client computing device. The occurrence system generates a communication based on a communications protocol for transmission to the client computing device via a network at a period of time that corresponds to the future period of time.
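One simple reading of the posterior computation is a conjugate Beta-Bernoulli model over group-level counts per candidate period; the conjugate prior and the counts below are purely illustrative, not the patent's model:

```python
from math import isclose

def posterior_change_probs(changes, totals, alpha=1.0, beta=1.0):
    """For each future period, the probability of a state change is the
    posterior mean of a Beta(alpha, beta) distribution updated with the
    group-level counts observed for that period."""
    return [(c + alpha) / (n + alpha + beta) for c, n in zip(changes, totals)]

# state-change counts per candidate period, observed across the device group
changes = [2, 30, 5, 4, 3]
totals = [100, 100, 100, 100, 100]
probs = posterior_change_probs(changes, totals)
best_period = max(range(len(probs)), key=probs.__getitem__)  # when to transmit
```

The communication would then be scheduled for `best_period`, the future period with the highest posterior probability of a state change.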

IPC Classes

27.

WAVELET-DRIVEN IMAGE SYNTHESIS WITH DIFFUSION MODELS

      
Application number 18056405
Status Pending
Filing date 2022-11-17
First publication date 2024-05-23
Owner ADOBE INC. (USA)
Inventor(s)
  • Liu, Nan
  • Li, Yijun
  • Gharbi, Michaël Yanis
  • Lu, Jingwan

Abstract

Systems and methods for synthesizing images with increased high-frequency detail are described. Embodiments are configured to identify an input image including a noise level and encode the input image to obtain image features. A diffusion model reduces a resolution of the image features at an intermediate stage of the model using a wavelet transform to obtain reduced image features at a reduced resolution, and generates an output image based on the reduced image features using the diffusion model. In some cases, the output image comprises a version of the input image that has a reduced noise level compared to the noise level of the input image.
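The in-model resolution reduction can be illustrated with a single-level 2D Haar transform, which halves resolution while keeping high-frequency detail in separate bands rather than discarding it as plain pooling would; the orthonormal Haar kernels here are a standard choice, not necessarily the patent's wavelet:

```python
import numpy as np

def haar_downsample(x):
    """Single-level 2D Haar transform of an (H, W) feature map.

    Returns the half-resolution low-frequency band plus the three
    high-frequency bands; the normalization preserves total energy.
    """
    a, b = x[0::2, 0::2], x[0::2, 1::2]
    c, d = x[1::2, 0::2], x[1::2, 1::2]
    ll = (a + b + c + d) / 2.0          # approximation (low-frequency) band
    lh = (a - b + c - d) / 2.0          # horizontal detail
    hl = (a + b - c - d) / 2.0          # vertical detail
    hh = (a - b - c + d) / 2.0          # diagonal detail
    return ll, lh, hl, hh

feat = np.arange(16.0).reshape(4, 4)
ll, lh, hl, hh = haar_downsample(feat)
```

Because the transform is invertible, the detail bands can be reattached later in the model, which is how high-frequency content survives the reduced-resolution stage.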

IPC Classes

  • G06T 5/00 - Image enhancement or restoration
  • G06T 3/40 - Scaling of whole images or parts thereof

28.

AMODAL INSTANCE SEGMENTATION USING DIFFUSION MODELS

      
Application number 18056987
Status Pending
Filing date 2022-11-18
First publication date 2024-05-23
Owner ADOBE INC. (USA)
Inventor(s)
  • Zhang, Jianming
  • Liu, Qing
  • Wang, Yilin
  • Lin, Zhe
  • Zhang, Bowen

Abstract

Systems and methods for instance segmentation are described. Embodiments include identifying an input image comprising an object that includes a visible region and an occluded region that is concealed in the input image. A mask network generates an instance mask for the input image that indicates the visible region of the object. A diffusion model then generates a segmentation mask for the input image based on the instance mask. The segmentation mask indicates a completed region of the object that includes the visible region and the occluded region.

IPC Classes

  • G06T 7/10 - Segmentation; Edge detection

29.

MODELING SECONDARY MOTION BASED ON THREE-DIMENSIONAL MODELS

      
Application number 18057436
Status Pending
Filing date 2022-11-21
First publication date 2024-05-23
Owner Adobe Inc. (USA)
Inventor(s)
  • Yoon, Jae Shin
  • Shu, Zhixin
  • Wang, Yangtuanfeng
  • Lu, Jingwan
  • Yang, Jimei
  • Aksit, Duygu Ceylan

Abstract

Techniques for modeling secondary motion based on three-dimensional models are described as implemented by a secondary motion modeling system, which is configured to receive a plurality of three-dimensional object models representing an object. Based on the three-dimensional object models, the secondary motion modeling system determines three-dimensional motion descriptors of a particular three-dimensional object model using one or more machine learning models. Based on the three-dimensional motion descriptors, the secondary motion modeling system models at least one feature subjected to secondary motion using the one or more machine learning models. The particular three-dimensional object model having the at least one feature is rendered by the secondary motion modeling system.

IPC Classes

  • G06T 7/20 - Analysis of motion
  • G06T 13/40 - 3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings
  • G06T 15/04 - Texture mapping
  • G06T 17/00 - 3D modelling for computer graphics

30.

TEXT AND COLOR-GUIDED LAYOUT CONTROL WITH A DIFFUSION MODEL

      
Application number 18057453
Status Pending
Filing date 2022-11-21
First publication date 2024-05-23
Owner ADOBE INC. (USA)
Inventor(s)
  • Gandelsman, Yosef
  • Park, Taesung
  • Zhang, Richard
  • Shechtman, Elya
  • Efros, Alexei A.

Abstract

Systems and methods for image generation are described. Embodiments of the present disclosure obtain user input that indicates a target color and a semantic label for a region of an image to be generated. The system also generates or obtains a noise map including noise biased towards the target color in the region indicated by the user input. A diffusion model generates the image based on the noise map and the semantic label for the region. The image can include an object in the designated region that is described by the semantic label and that has the target color.
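One way to read "noise biased towards the target color in the region" is a convex blend of Gaussian noise with the target color inside the user-indicated mask; the `bias` weight is an illustrative assumption:

```python
import numpy as np

def color_biased_noise(h, w, mask, target_rgb, bias=0.8, seed=0):
    """Gaussian noise map whose masked region is pulled toward a target
    colour, steering a diffusion model to synthesise that colour there."""
    rng = np.random.default_rng(seed)
    noise = rng.normal(size=(h, w, 3))
    target = np.asarray(target_rgb, dtype=float)
    biased = noise.copy()
    biased[mask] = (1 - bias) * noise[mask] + bias * target
    return biased

mask = np.zeros((16, 16), dtype=bool)
mask[4:12, 4:12] = True                       # region carrying the semantic label
nmap = color_biased_noise(16, 16, mask, target_rgb=(1.0, -1.0, -1.0))
```

Starting the reverse diffusion from such a map leaves the unmasked noise statistics untouched while nudging the masked region toward the requested colour.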

IPC Classes

  • G06T 11/00 - 2D [Two Dimensional] image generation
  • G06F 3/04842 - Selection of displayed objects or displayed text elements
  • G06F 3/04845 - Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range, for image manipulation, e.g. dragging, rotation, expansion or change of colour
  • G06T 11/20 - Drawing from basic elements, e.g. lines or circles

31.

STYLIZING DIGITAL CONTENT

      
Application number 18057834
Status Pending
Filing date 2022-11-22
First publication date 2024-05-23
Owner Adobe Inc. (USA)
Inventor(s)
  • Jain, Sanyam
  • Agarwal, Rishav
  • Purwar, Rishabh
  • Gaurav, Prateek
  • Agrawal, Palak
  • Kedia, Nikhil
  • Kumar, Ankit

Abstract

In implementations of systems for stylizing digital content, a computing device implements a style system to receive input data describing digital content to be stylized based on visual styles of example content included in a digital template. The style system generates embeddings for content entities included in the digital content using a machine learning model. Classified content entities are determined based on the embeddings using the machine learning model. The style system generates an output digital template that includes portions of the digital content having the visual styles of example content included in the digital template based on the classified content entities.

IPC Classes

  • G06F 40/186 - Templates
  • G06F 40/109 - Font handling; Temporal or kinetic typography
  • G06T 11/60 - Editing figures and text; Combining figures or text
  • G06V 30/19 - Recognition using electronic means
  • G06V 30/413 - Classification of content, e.g. text, photographs or tables

32.

MULTI-MODAL IMAGE EDITING

      
Application number 18057851
Status Pending
Filing date 2022-11-22
First publication date 2024-05-23
Owner ADOBE INC. (USA)
Inventor(s)
  • Xie, Shaoan
  • Zhang, Zhifei
  • Lin, Zhe
  • Hinz, Tobias

Abstract

Systems and methods for multi-modal image editing are provided. In one aspect, a system and method for multi-modal image editing includes identifying an image, a prompt identifying an element to be added to the image, and a mask indicating a first region of the image for depicting the element. The system then generates a partially noisy image map that includes noise in the first region and image features from the image in a second region outside the first region. A diffusion model generates a composite image map based on the partially noisy image map and the prompt. In some cases, the composite image map includes the target element in the first region that corresponds to the mask.
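The partially noisy image map can be sketched as a mask-gated composite of noise and image features; the array names and shapes here are illustrative:

```python
import numpy as np

def partially_noisy_map(image, mask, seed=0):
    """Compose a partially noisy image map: pure noise inside the masked
    region to be edited, original image features everywhere else."""
    rng = np.random.default_rng(seed)
    noise = rng.normal(size=image.shape)
    m = mask[..., None].astype(float)        # broadcast mask over channels
    return m * noise + (1.0 - m) * image

image = np.ones((8, 8, 3)) * 0.5             # stand-in for encoded image features
mask = np.zeros((8, 8), dtype=bool)
mask[2:6, 2:6] = True                        # first region: where the element goes
x0 = partially_noisy_map(image, mask)
```

The diffusion model then only has freedom inside the noisy first region, so the prompt's element is synthesised there while the second region keeps the original content.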

IPC Classes

  • G06T 11/60 - Editing figures and text; Combining figures or text
  • G06T 7/11 - Region-based segmentation
  • G06T 11/00 - 2D [Two Dimensional] image generation

33.

REPAIRING IRREGULARITIES IN COMPUTER-GENERATED IMAGES

      
Application number 18057930
Status Pending
Filing date 2022-11-22
First publication date 2024-05-23
Owner ADOBE INC. (USA)
Inventor(s)
  • Agarwal, Anjali
  • Khodadadeh, Siavash
  • Kalarot, Ratheesh
  • Qu, Hui
  • Olsen, Sven C.
  • Ghadar, Shabnam

Abstract

Systems and methods for image processing are provided. Embodiments include identifying an image of a face that includes an artifact in a part of the face. A machine learning model generates an intermediate image based on the original image. The intermediate image depicts the part of the face in a closed position. Then the model generates a corrected image based on the intermediate image. The corrected image depicts the face with the part of the face in an open position and without the artifact.

IPC Classes

  • G06T 5/00 - Image enhancement or restoration
  • G06N 3/04 - Architecture, e.g. interconnection topology
  • G06T 3/40 - Scaling of whole images or parts thereof

34.

IMAGE AND OBJECT INPAINTING WITH DIFFUSION MODELS

      
Application number 18058027
Status Pending
Filing date 2022-11-22
First publication date 2024-05-23
Owner ADOBE INC. (USA)
Inventor(s)
  • Zheng, Haitian
  • Lin, Zhe
  • Zhang, Jianming
  • Barnes, Connelly Stuart
  • Shechtman, Elya
  • Lu, Jingwan
  • Liu, Qing
  • Amirghodsi, Sohrab
  • Zhou, Yuqian
  • Cohen, Scott

Abstract

Systems and methods for image processing are described. Embodiments of the present disclosure receive an image comprising a first region that includes content and a second region to be inpainted. Noise is then added to the image to obtain a noisy image, and a plurality of intermediate output images are generated based on the noisy image using a diffusion model trained using a perceptual loss. The intermediate output images predict a final output image based on a corresponding intermediate noise level of the diffusion model. The diffusion model then generates the final output image based on the intermediate output images. The final output image includes inpainted content in the second region that is consistent with the content in the first region.
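A perceptual loss compares images in a feature space instead of pixel space. The sketch below substitutes a fixed random filter bank for the pretrained network a real perceptual loss would use, so it only illustrates the mechanism:

```python
import numpy as np

def perceptual_loss(pred, target, filters):
    """Mean squared difference between filter responses of two images.

    'filters' is a stand-in bank of fixed 3x3 kernels; the training
    described in the abstract would use a pretrained network's activations.
    """
    def feats(img):
        h, w = img.shape
        # gather all 3x3 patches as a (h-2, w-2, 9) tensor
        patches = np.stack(
            [img[i:h - 2 + i, j:w - 2 + j] for i in range(3) for j in range(3)],
            axis=-1,
        )
        return patches @ filters.reshape(9, -1)   # (h-2, w-2, n_filters)

    d = feats(pred) - feats(target)
    return float((d ** 2).mean())

rng = np.random.default_rng(0)
bank = rng.normal(size=(3, 3, 4))            # 4 fixed random kernels
img = rng.normal(size=(16, 16))
loss_same = perceptual_loss(img, img, bank)
loss_diff = perceptual_loss(img, img + 1.0, bank)
```

Training a diffusion model against such a feature-space distance penalises structural mismatches more than per-pixel ones.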

IPC Classes

  • G06T 5/00 - Image enhancement or restoration

35.

MODIFYING DIGITAL IMAGES VIA SCENE-BASED EDITING USING IMAGE UNDERSTANDING FACILITATED BY ARTIFICIAL INTELLIGENCE

      
Application number 18058538
Status Pending
Filing date 2022-11-23
First publication date 2024-05-23
Owner Adobe Inc. (USA)
Inventor(s)
  • Brandt, Jonathan
  • Cohen, Scott
  • Lin, Zhe
  • Ding, Zhihong
  • Prasad, Darshan
  • Joss, Matthew
  • Gomes, Celso
  • Zhang, Jianming
  • Soroka, Olena
  • Stoeckmann, Klaas
  • Zimmermann, Michael
  • Muehrke, Thomas

Abstract

The present disclosure relates to systems, methods, and non-transitory computer-readable media that modify digital images via scene-based editing using image understanding facilitated by artificial intelligence. For instance, in one or more embodiments, the disclosed systems generate, utilizing a segmentation neural network, an object mask for each object of a plurality of objects of a digital image. The disclosed systems detect a first user interaction with an object in the digital image displayed via a graphical user interface. The disclosed systems surface, via the graphical user interface, the object mask for the object in response to the first user interaction. The disclosed systems perform an object-aware modification of the digital image in response to a second user interaction with the object mask for the object.

IPC Classes

  • G06T 11/60 - Editing figures and text; Combining figures or text
  • G06F 3/04842 - Selection of displayed objects or displayed text elements
  • G06F 3/04845 - Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range, for image manipulation, e.g. dragging, rotation, expansion or change of colour
  • G06T 11/40 - Filling planar surfaces by adding surface attributes, e.g. colour or texture

36.

DETECTING SHADOWS AND CORRESPONDING OBJECTS IN DIGITAL IMAGES

      
Application number 18058575
Status Pending
Filing date 2022-11-23
First publication date 2024-05-23
Owner Adobe Inc. (USA)
Inventor(s)
  • Figueroa, Luis
  • Lin, Zhe
  • Ding, Zhihong
  • Cohen, Scott

Abstract

The present disclosure relates to systems, methods, and non-transitory computer-readable media that modify digital images via scene-based editing using image understanding facilitated by artificial intelligence. For instance, in one or more embodiments, the disclosed systems receive a digital image from a client device. The disclosed systems detect, utilizing a shadow detection neural network, an object portrayed in the digital image. The disclosed systems detect, utilizing the shadow detection neural network, a shadow portrayed in the digital image. The disclosed systems generate, utilizing the shadow detection neural network, an object-shadow pair prediction that associates the shadow with the object.

IPC Classes

  • G06V 10/20 - Image preprocessing
  • G06F 3/04842 - Selection of displayed objects or displayed text elements
  • G06F 3/04845 - Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range, for image manipulation, e.g. dragging, rotation, expansion or change of colour
  • G06T 11/60 - Editing figures and text; Combining figures or text
  • G06V 10/82 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks

37.

DILATING OBJECT MASKS TO REDUCE ARTIFACTS DURING INPAINTING

      
Application number 18058601
Status Pending
Filing date 2022-11-23
First publication date 2024-05-23
Owner Adobe Inc. (USA)
Inventor(s)
  • Liu, Qing
  • Lin, Zhe
  • Figueroa, Luis
  • Cohen, Scott

Abstract

The present disclosure relates to systems, methods, and non-transitory computer-readable media that modify digital images via scene-based editing using image understanding facilitated by artificial intelligence. For instance, in one or more embodiments, the disclosed systems generate, utilizing a segmentation neural network and without user input, object masks for objects in a digital image. The disclosed systems determine foreground and background abutting an object mask. The disclosed systems generate an expanded object mask by expanding the object mask into the foreground abutting the object mask by a first amount and expanding the object mask into the background abutting the object mask by a second amount that differs from the first amount. The disclosed systems inpaint a hole corresponding to the expanded object mask utilizing an inpainting neural network.
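The asymmetric expansion can be sketched with plain binary dilation, growing the object mask one step into abutting foreground but several steps into abutting background; the step counts and 4-connectivity are illustrative choices, not the patented amounts:

```python
import numpy as np

def dilate(mask, steps):
    """4-connected binary dilation by 'steps' pixels (no SciPy needed)."""
    out = mask.copy()
    for _ in range(steps):
        grown = out.copy()
        grown[1:, :] |= out[:-1, :]
        grown[:-1, :] |= out[1:, :]
        grown[:, 1:] |= out[:, :-1]
        grown[:, :-1] |= out[:, 1:]
        out = grown
    return out

def asymmetric_expand(obj_mask, fg_mask, fg_steps=1, bg_steps=3):
    """Grow the object mask further into background than into foreground,
    so inpainting does not eat into neighbouring foreground objects."""
    ring = dilate(obj_mask, max(fg_steps, bg_steps)) & ~obj_mask
    keep_fg = dilate(obj_mask, fg_steps) & fg_mask   # small growth into foreground
    keep_bg = ring & ~fg_mask                        # full growth into background
    return obj_mask | keep_fg | keep_bg

obj = np.zeros((12, 12), dtype=bool); obj[5:7, 5:7] = True
fg = np.zeros((12, 12), dtype=bool); fg[:, :5] = True   # foreground to the left
expanded = asymmetric_expand(obj, fg)
```

The expanded mask then defines the hole handed to the inpainting network.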

IPC Classes

  • G06T 5/00 - Image enhancement or restoration
  • G06T 7/11 - Region-based segmentation
  • G06T 7/194 - Segmentation; Edge detection involving foreground-background segmentation

38.

DETECTING AND MODIFYING OBJECT ATTRIBUTES

      
Application number 18058622
Status Pending
Filing date 2022-11-23
First publication date 2024-05-23
Owner Adobe Inc. (USA)
Inventor(s)
  • Lin, Zhe
  • Cohen, Scott
  • Kafle, Kushal

Abstract

The present disclosure relates to systems, methods, and non-transitory computer-readable media that modify digital images via scene-based editing using image understanding facilitated by artificial intelligence. For instance, in one or more embodiments, the disclosed systems detect a selection of an object portrayed in a digital image displayed within a graphical user interface of a client device. The disclosed systems provide, for display within the graphical user interface in response to detecting the selection of the object, an interactive window displaying one or more attributes of the object. The disclosed systems receive, via the interactive window, a user interaction to change an attribute from the one or more attributes. The disclosed systems modify the digital image by changing the attribute of the object in accordance with the user interaction.

IPC Classes

  • G06F 3/04845 - Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range, for image manipulation, e.g. dragging, rotation, expansion or change of colour
  • G06F 3/0482 - Interaction with lists of selectable items, e.g. menus
  • G06F 3/04847 - Interaction techniques to control parameter settings, e.g. interaction with sliders or dials
  • G06F 3/04886 - Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser, using a touch-screen or digitising tablet, e.g. input of commands through traced gestures, by partitioning the display area of the touch-screen or the digitising tablet into independently controllable areas, e.g. virtual keyboards or menus
  • G06F 40/166 - Text processing; Editing, e.g. inserting or deleting
  • G06V 10/26 - Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
  • G06V 20/70 - Scenes; Scene-specific elements; Labelling scene content, e.g. deriving syntactic or semantic representations

39.

SIMULATED HANDWRITING IMAGE GENERATOR

      
Application number 18420444
Status Pending
Filing date 2024-01-23
First publication date 2024-05-23
Owner Adobe Inc. (USA)
Inventor(s)
  • Tensmeyer, Christopher Alan
  • Jain, Rajiv
  • Wigington, Curtis Michael
  • Price, Brian Lynn
  • Davis, Brian Lafayette

Abstract

Techniques are provided for generating a digital image of simulated handwriting using an encoder-decoder neural network trained on images of natural handwriting samples. The simulated handwriting image can be generated based on a style of a handwriting sample and a variable length coded text input. The style represents visually distinctive characteristics of the handwriting sample, such as the shape, size, slope, and spacing of the letters, characters, or other markings in the handwriting sample. The resulting simulated handwriting image can include the text input rendered in the style of the handwriting sample. The distinctive visual appearance of the letters or words in the simulated handwriting image mimics the visual appearance of the letters or words in the handwriting sample image, whether the letters or words in the simulated handwriting image are the same as in the handwriting sample image or different from those in the handwriting sample image.

IPC Classes

  • G06F 3/04883 - Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser, using a touch-screen or digitising tablet, e.g. input of commands through traced gestures, for inputting data by handwriting, e.g. gestures or text
  • G06N 3/045 - Combinations of networks
  • G06N 3/08 - Learning methods
  • G06V 10/44 - Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
  • G06V 10/82 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
  • G06V 30/226 - Character recognition characterised by the type of writing; of cursive writing
  • G06V 30/228 - Character recognition characterised by the type of writing; of three-dimensional handwriting, e.g. writing in the air
  • G06V 30/32 - Digital ink

40.

SYNTHESIZING SHADOWS IN DIGITAL IMAGES UTILIZING DIFFUSION MODELS

      
Application number 18532457
Status Pending
Filing date 2023-12-07
First publication date 2024-05-23
Owner Adobe Inc. (USA)
Inventor(s)
  • Kim, Soo Ye
  • Lin, Zhe
  • Cohen, Scott
  • Zhang, Jianming
  • Figueroa, Luis

Abstract

The present disclosure relates to systems, methods, and non-transitory computer-readable media that modify digital images via scene-based editing to synthesize shadows for object(s). For instance, in one or more embodiments, the disclosed systems receive a digital image depicting a scene. The disclosed systems access an object mask of the object depicted in the digital image. The disclosed systems further combine the object mask, the digital image, and a noise representation to generate a combined representation. Moreover, the disclosed systems generate a shadow for the object from the combined representation and further generate the modified digital image by combining the shadow with the digital image.

IPC Classes

  • G06T 11/60 - Editing figures and text; Combining figures or text
  • G06F 3/0481 - Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
  • G06F 3/04845 - Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range for image manipulation, e.g. dragging, rotation, gamma adjustment or colour filtering
  • G06F 3/0486 - Drag-and-drop
  • G06T 5/00 - Image enhancement or restoration
  • G06T 11/00 - 2D [Two Dimensional] image generation

41.

TEXTURE-PRESERVING SHADOW REMOVAL IN DIGITAL IMAGES UTILIZING GENERATIVE INPAINTING MODELS

      
Application number 18532485
Status Pending
Filing date 2023-12-07
First publication date 2024-05-23
Owner Adobe Inc. (USA)
Inventor(s)
  • Kim, Soo Ye
  • Lin, Zhe
  • Cohen, Scott
  • Zhang, Jianming
  • Figueroa, Luis
  • Ding, Zhihong

Abstract

The present disclosure relates to systems, methods, and non-transitory computer-readable media that modify digital images via scene-based editing to remove a shadow for an object. For instance, in one or more embodiments, the disclosed systems receive a digital image depicting a scene. The disclosed systems access a shadow mask of the shadow in a first location. Further, the disclosed systems generate the modified digital image without the shadow by generating a fill for the first location that preserves the visible texture of the first location. Moreover, the disclosed systems generate the digital image without the shadow for the object by combining the fill with the digital image.

IPC Classes

  • G06T 11/60 - Editing figures and text; Combining figures or text
  • G06F 3/0481 - Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
  • G06F 3/04845 - Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range for image manipulation, e.g. dragging, rotation, gamma adjustment or colour filtering
  • G06F 3/0486 - Drag-and-drop
  • G06T 5/00 - Image enhancement or restoration
  • G06T 11/00 - 2D [Two Dimensional] image generation

42.

CURVE OFFSET OPERATIONS

      
Application number 17984967
Status Pending
Filing date 2022-11-10
First publication date 2024-05-16
Owner Adobe Inc. (USA)
Inventor(s)
  • Jain, Arushi
  • Dhanuka, Praveen Kumar

Abstract

Curve offset operations as implemented by a digital image editing system are described. Input points are received via a user interface and a determination is made that a first set of the input points satisfies a condition for use as part of a curve offset operation with respect to a curve. A first segment is added to a path using the curve offset operation. The first segment is generated by aligning the first set of input points using an offset value based on the curve. A determination is then made that a second set of the input points does not satisfy the condition for use as part of the curve offset operation, and a second segment is added to the first segment of the path. The second segment is generated using the second set of points, and the path is displayed.
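
The core offset idea, placing points at a fixed distance from a curve, can be illustrated with a simple polyline offset along per-point normals. `offset_polyline` is a hypothetical name; the patented condition test and Bézier handling are not reproduced here.

```python
import numpy as np

def offset_polyline(points, offset):
    """Shift each point of a 2D polyline along its local normal by `offset`.

    Tangents are estimated with finite differences and rotated 90 degrees
    to obtain normals; a simplified stand-in for a curve offset operation.
    """
    pts = np.asarray(points, dtype=float)
    tangents = np.gradient(pts, axis=0)          # per-point tangent estimate
    norms = np.linalg.norm(tangents, axis=1, keepdims=True)
    tangents = tangents / np.where(norms == 0, 1, norms)
    normals = np.stack([-tangents[:, 1], tangents[:, 0]], axis=1)
    return pts + offset * normals
```

Offsetting a horizontal segment by 1.0 shifts every point straight up, as expected for a constant normal.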

IPC Classes

  • G06T 11/20 - Drawing from basic elements, e.g. lines or circles

43.

GLYPH EDIT WITH ADORNMENT OBJECT

      
Application number 17985431
Status Pending
Filing date 2022-11-11
First publication date 2024-05-16
Owner Adobe Inc. (USA)
Inventor(s)
  • Dhanuka, Praveen Kumar
  • Pal, Shivi
  • Jain, Arushi

Abstract

Glyph editing techniques through use of an adornment object are described. In one example, an input is received identifying a glyph and an adornment object in digital content displayed in a user interface. Glyph anchor points are obtained based on the glyph and adornment anchor points based on the adornment object. A link is generated between at least one said glyph anchor point and at least one said adornment anchor point. An edit input is received specifying an edit to a spatial property of the glyph. The spatial property of the edit is propagated to a spatial property of the adornment object based on the link.

IPC Classes

  • G06T 11/60 - Editing figures and text; Combining figures or text
  • G06T 3/40 - Scaling of whole images or parts thereof
  • G06T 11/20 - Drawing from basic elements, e.g. lines or circles

44.

DIFFUSION MODELS HAVING CONTINUOUS SCALING THROUGH PATCH-WISE IMAGE GENERATION

      
Application number 18052658
Status Pending
Filing date 2022-11-04
First publication date 2024-05-16
Owner ADOBE INC. (USA)
Inventor(s)
  • Chen, Yinbo
  • Gharbi, Michaël
  • Wang, Oliver
  • Zhang, Richard
  • Shechtman, Elya

Abstract

Aspects of the methods, apparatus, non-transitory computer readable medium, and systems include obtaining a noise map and a global image code encoded from an original image and representing semantic content of the original image; generating a plurality of image patches based on the noise map and the global image code using a diffusion model; and combining the plurality of image patches to produce an output image including the semantic content.

IPC Classes

  • G06T 7/70 - Determining position or orientation of objects or cameras
  • G06T 3/40 - Scaling of whole images or parts thereof
  • G06T 5/00 - Image enhancement or restoration
  • G06T 7/10 - Segmentation; Edge detection

45.

HARMONIZING COMPOSITE IMAGES UTILIZING A SEMANTIC-GUIDED TRANSFORMER NEURAL NETWORK

      
Application number 18053027
Status Pending
Filing date 2022-11-07
First publication date 2024-05-16
Owner Adobe Inc. (USA)
Inventor(s)
  • Zhang, He
  • Jung, Hyun Joon

Abstract

The present disclosure relates to systems, non-transitory computer-readable media, and methods that implement a multi-branch harmonization neural network architecture to harmonize composite images. For example, in one or more implementations, the semantic-guided transformer-based harmonization system uses a convolutional branch, a transformer branch, and a semantic branch to generate a harmonized composite image based on an input composite image and a corresponding segmentation mask. More particularly, the convolutional branch comprises a series of convolutional neural network layers followed by a style normalization layer to extract localized information from the input composite image. Further, the transformer branch comprises a series of transformer neural network layers to extract global information based on different resolutions of the input composite image. The semantic branch includes a visual neural network that generates semantic features that inform the harmonization of the composite images.

IPC Classes

  • G06T 5/50 - Image enhancement or restoration using two or more images, e.g. averaging or subtraction
  • G06T 7/11 - Region-based segmentation
  • G06T 7/194 - Segmentation; Edge detection involving foreground-background segmentation
  • G06V 10/26 - Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
  • G06V 10/42 - Global feature extraction by analysis of the whole pattern, e.g. using frequency domain transformations or autocorrelation
  • G06V 10/44 - Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
  • G06V 10/82 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks

46.

GENERATING IMAGE MATTES WITHOUT TRIMAP SEGMENTATIONS VIA A MULTI-BRANCH NEURAL NETWORK

      
Application number 18053646
Status Pending
Filing date 2022-11-08
First publication date 2024-05-16
Owner Adobe Inc. (USA)
Inventor(s)
  • Liu, Zichuan
  • Lu, Xin
  • Wang, Ke

Abstract

Methods, systems, and non-transitory computer readable storage media are disclosed for generating image mattes for detected objects in digital images without trimap segmentation via a multi-branch neural network. The disclosed system utilizes a first neural network branch of a generative neural network to extract a coarse semantic mask from a digital image. The disclosed system utilizes a second neural network branch of the generative neural network to extract a detail mask based on the coarse semantic mask. Additionally, the disclosed system utilizes a third neural network branch of the generative neural network to fuse the coarse semantic mask and the detail mask to generate an image matte. In one or more embodiments, the disclosed system also utilizes a refinement neural network to generate a final image matte by refining selected portions of the image matte generated by the generative neural network.

IPC Classes

  • G06T 11/60 - Editing figures and text; Combining figures or text
  • G06T 7/13 - Edge detection
  • G06V 10/44 - Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components

47.

APPLYING VECTOR-BASED DECALS ON THREE-DIMENSIONAL OBJECTS

      
Application number 18054248
Status Pending
Filing date 2022-11-10
First publication date 2024-05-16
Owner Adobe Inc. (USA)
Inventor(s)
  • Dhingra, Sumit
  • Chaudhuri, Siddhartha
  • Batra, Vineet

Abstract

This disclosure describes one or more implementations of systems, non-transitory computer-readable media, and methods that apply a resolution independent, vector-based decal on a 3D object. In one or more implementations, the disclosed systems apply piecewise non-linear transformation on an input decal vector geometry to align the decal with a surface of an underlying 3D object. To apply a vector-based decal on a 3D object, in certain embodiments, the disclosed systems parameterize a 3D mesh of the 3D object to create a mesh map. Moreover, in some instances, the disclosed systems determine intersections between edges of a decal geometry and edges of the mesh map to add vertices to the decal geometry at the intersections. Additionally, in some implementations, the disclosed systems lift and project vertices of the decal geometry into three dimensions to align the vertices with faces of the 3D mesh of the 3D object.
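
The step of splitting decal edges where they cross mesh-map edges reduces to 2D segment-segment intersection, which can be sketched with the standard parametric formulation (illustrative code, not the patented implementation):

```python
def seg_intersect(p1, p2, q1, q2):
    """Intersection point of segments p1-p2 and q1-q2, or None if they miss.

    Solves p1 + t*(p2 - p1) = q1 + u*(q2 - q1) and checks 0 <= t, u <= 1.
    """
    (x1, y1), (x2, y2) = p1, p2
    (x3, y3), (x4, y4) = q1, q2
    d = (x2 - x1) * (y4 - y3) - (y2 - y1) * (x4 - x3)
    if d == 0:
        return None  # parallel or degenerate
    t = ((x3 - x1) * (y4 - y3) - (y3 - y1) * (x4 - x3)) / d
    u = ((x3 - x1) * (y2 - y1) - (y3 - y1) * (x2 - x1)) / d
    if 0 <= t <= 1 and 0 <= u <= 1:
        return (x1 + t * (x2 - x1), y1 + t * (y2 - y1))
    return None
```

Each returned point would become a new vertex inserted into the decal geometry before lifting it onto the 3D mesh.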

IPC Classes

  • G06T 19/20 - Transforming three-dimensional [3D] models or images for computer graphics; Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
  • G06T 7/13 - Edge detection

48.

MODIFYING TWO-DIMENSIONAL IMAGES UTILIZING THREE-DIMENSIONAL MESHES OF THE TWO-DIMENSIONAL IMAGES

      
Application number 18055584
Status Pending
Filing date 2022-11-15
First publication date 2024-05-16
Owner Adobe Inc. (USA)
Inventor(s)
  • Mech, Radomir
  • Carr, Nathan
  • Gadelha, Matheus

Abstract

Methods, systems, and non-transitory computer readable storage media are disclosed for generating three-dimensional meshes representing two-dimensional images for editing the two-dimensional images. The disclosed system utilizes a first neural network to determine density values of pixels of a two-dimensional image based on estimated disparity. The disclosed system samples points in the two-dimensional image according to the density values and generates a tessellation based on the sampled points. The disclosed system utilizes a second neural network to estimate camera parameters and modify the three-dimensional mesh based on the estimated camera parameters of the pixels of the two-dimensional image. In one or more additional embodiments, the disclosed system generates a three-dimensional mesh to modify a two-dimensional image according to a displacement input. Specifically, the disclosed system maps the three-dimensional mesh to the two-dimensional image, modifies the three-dimensional mesh in response to a displacement input, and updates the two-dimensional image.
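
The "samples points in the two-dimensional image according to the density values" step can be sketched as multinomial sampling over a density map. The neural network that predicts densities is omitted, and the function name is an assumption.

```python
import numpy as np

def sample_points(density, n, seed=0):
    """Draw n distinct pixel coordinates with probability proportional
    to a per-pixel density map; returns an (n, 2) array of (row, col)."""
    rng = np.random.default_rng(seed)
    p = density.ravel() / density.sum()
    idx = rng.choice(density.size, size=n, replace=False, p=p)
    return np.stack(np.unravel_index(idx, density.shape), axis=1)
```

Sampling more points where disparity changes quickly would give the tessellation finer triangles exactly where the geometry needs them.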

IPC Classes

  • G06T 11/60 - Editing figures and text; Combining figures or text
  • G06T 7/70 - Determining position or orientation of objects or cameras
  • G06T 17/20 - Wire-frame description, e.g. polygonalisation or tessellation
  • G06T 19/20 - Transforming three-dimensional [3D] models or images for computer graphics; Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts

49.

MODIFYING TWO-DIMENSIONAL IMAGES UTILIZING ITERATIVE THREE-DIMENSIONAL MESHES OF THE TWO-DIMENSIONAL IMAGES

      
Application number 18055590
Status Pending
Filing date 2022-11-15
First publication date 2024-05-16
Owner Adobe Inc. (USA)
Inventor(s)
  • Mech, Radomir
  • Carr, Nathan
  • Gadelha, Matheus

Abstract

Methods, systems, and non-transitory computer readable storage media are disclosed for generating three-dimensional meshes representing two-dimensional images for editing the two-dimensional images. The disclosed system utilizes a first neural network to determine density values of pixels of a two-dimensional image based on estimated disparity. The disclosed system samples points in the two-dimensional image according to the density values and generates a tessellation based on the sampled points. The disclosed system utilizes a second neural network to estimate camera parameters and modify the three-dimensional mesh based on the estimated camera parameters of the pixels of the two-dimensional image. In one or more additional embodiments, the disclosed system generates a three-dimensional mesh to modify a two-dimensional image according to a displacement input. Specifically, the disclosed system maps the three-dimensional mesh to the two-dimensional image, modifies the three-dimensional mesh in response to a displacement input, and updates the two-dimensional image.

IPC Classes

  • G06T 17/20 - Wire-frame description, e.g. polygonalisation or tessellation
  • G06T 5/00 - Image enhancement or restoration

50.

EXTRACTING DOCUMENT HIERARCHY USING A MULTIMODAL, LAYER-WISE LINK PREDICTION NEURAL NETWORK

      
Application number 18055752
Status Pending
Filing date 2022-11-15
First publication date 2024-05-16
Owner Adobe Inc. (USA)
Inventor(s)
  • Morariu, Vlad
  • Mathur, Puneet
  • Jain, Rajiv
  • Mehra, Ashutosh
  • Gu, Jiuxiang
  • Dernoncourt, Franck
  • N, Anandhavelu
  • Tran, Quan
  • Kaynig-Fittkau, Verena
  • Lipka, Nedim
  • Nenkova, Ani

Abstract

The present disclosure relates to systems, methods, and non-transitory computer-readable media that generate a digital document hierarchy comprising layers of parent-child element relationships from the visual elements. For example, for a layer of the layers, the disclosed systems determine, from the visual elements, candidate parent visual elements and child visual elements. In addition, for the layer of the layers, the disclosed systems generate, from the feature embeddings utilizing a neural network, element classifications for the candidate parent visual elements and parent-child element link probabilities for the candidate parent visual elements and the child visual elements. Moreover, for the layer, the disclosed systems select parent visual elements from the candidate parent visual elements based on the parent-child element link probabilities. Further, the disclosed systems utilize the digital document hierarchy to generate an interactive digital document from the digital document image.

IPC Classes

  • G06V 30/413 - Classification of content, e.g. text, photographs or tables
  • G06V 10/82 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks

51.

TARGET-AUGMENTED MATERIAL MAPS

      
Application number 17985579
Status Pending
Filing date 2022-11-11
First publication date 2024-05-16
Owner Adobe Inc. (USA)
Inventor(s)
  • Deschaintre, Valentin
  • Hu, Yiwei
  • Guerrero, Paul
  • Hasan, Milos

Abstract

Certain aspects and features of this disclosure relate to rendering images using target-augmented material maps. In one example, a graphics imaging application is loaded with a scene and an input material map, as well as a file for a target image. A stored material generation prior is accessed by the graphics imaging application. This prior, as an example, is based on a pre-trained generative adversarial network (GAN). An input material appearance from the input material map is encoded to produce a projected latent vector. The value for the projected latent vector is optimized to produce the material map that is used to render the scene, producing a material map augmented by a realistic target material appearance.

IPC Classes

  • G06T 11/60 - Editing figures and text; Combining figures or text
  • G06T 3/40 - Scaling of whole images or parts thereof
  • G06T 9/00 - Image coding
  • G06V 10/74 - Image or video pattern matching; Proximity measures in feature spaces

52.

LOCALIZED SEAM CARVING AND EXPANSION WITH CONFIGURABLE LOCALIZATION THRESHOLD

      
Application number 17986156
Status Pending
Filing date 2022-11-14
First publication date 2024-05-16
Owner Adobe Inc. (USA)
Inventor(s) Gilra, Anant

Abstract

Embodiments are disclosed for performing localized seam carving. The method includes obtaining an image, the image including multiple low activity regions. A selection of a point of interest and a requested adjustment is received. A set of seams that are present in the low activity region is selected, with each seam including a connected path of pixels and each pixel having low activity. An adjusted set of seams is generated by duplicating or removing one or more seams from the set of seams using the adjustment. The set of seams in the low activity region is replaced by the adjusted set of seams. An updated image including the low activity region with the adjusted set of seams is output.
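
The seam primitive underlying this method is the classic dynamic-programming search for a minimum-energy connected pixel path. A minimal sketch of that search (not the patented localization or threshold logic):

```python
import numpy as np

def min_vertical_seam(energy):
    """Return the column index of the cheapest vertical seam in each row.

    Builds cumulative costs top-down, allowing each row's pixel to connect
    to its three upper neighbours, then backtracks from the cheapest
    bottom cell.
    """
    cost = energy.astype(float).copy()
    h, w = cost.shape
    for i in range(1, h):
        left = np.r_[np.inf, cost[i - 1, :-1]]
        right = np.r_[cost[i - 1, 1:], np.inf]
        cost[i] += np.minimum(np.minimum(left, cost[i - 1]), right)
    seam = [int(np.argmin(cost[-1]))]
    for i in range(h - 2, -1, -1):
        j = seam[-1]
        lo, hi = max(0, j - 1), min(w, j + 2)
        seam.append(lo + int(np.argmin(cost[i, lo:hi])))
    return seam[::-1]  # one column index per row, top to bottom
```

Removing (or duplicating) the returned path narrows (or widens) the image by one column while leaving high-activity pixels untouched.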

IPC Classes

53.

Image-Based Searches for Templates

      
Application number 17988377
Status Pending
Filing date 2022-11-16
First publication date 2024-05-16
Owner Adobe Inc. (USA)
Inventor(s)
  • Eriksson, Brian
  • Hsu, Wei-Ting
  • Pombo, Santiago
  • Bhamidipati, Sandilya
  • Khan, Rida
  • Devarapalli, Ravali
  • Davis, Maya Christmas
  • Chan, Lam Wing
  • Blank, Konstantin
  • Kafil, Jason Omid
  • Ni, Di

Abstract

In implementations of image-based searches for templates, a computing device implements a search system to generate an embedding vector that represents an input digital image using a machine learning model. The search system identifies templates that include a candidate digital image to be replaced by the input digital image based on distances between embedding vector representations of the templates and the embedding vector that represents the input digital image. A template of the templates is determined based on a distance between an embedding vector representation of the candidate digital image included in the template and the embedding vector that represents the input digital image. The search system generates an output digital image for display in a user interface that depicts the template with the candidate digital image replaced by the input digital image.
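
The distance-based ranking described above can be sketched directly; the embedding model itself is omitted and `rank_templates` is a hypothetical name.

```python
import numpy as np

def rank_templates(query_vec, template_vecs):
    """Rank templates by Euclidean distance between each template's
    embedding and the input image's embedding (nearest first)."""
    q = np.asarray(query_vec, dtype=float)
    t = np.asarray(template_vecs, dtype=float)
    dists = np.linalg.norm(t - q, axis=1)
    return np.argsort(dists)
```

The top-ranked template is the one whose candidate image is most visually similar to the input, and therefore the most natural one to swap.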

IPC Classes

  • G06T 11/00 - 2D [Two Dimensional] image generation
  • G06V 10/70 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning

54.

SYSTEMS AND METHODS FOR CONTRASTIVE GRAPHING

      
Application number 18052463
Status Pending
Filing date 2022-11-03
First publication date 2024-05-16
Owner ADOBE INC. (USA)
Inventor(s)
  • Park, Namyong
  • Rossi, Ryan A.
  • Koh, Eunyee
  • Burhanuddin, Iftikhar Ahamath
  • Kim, Sungchul
  • Du, Fan

Abstract

Systems and methods for contrastive graphing are provided. One aspect of the systems and methods includes receiving a graph including a node; generating a node embedding for the node based on the graph using a graph neural network (GNN); computing a contrastive learning loss based on the node embedding; and updating parameters of the GNN based on the contrastive learning loss.
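
One common instantiation of the contrastive learning loss named in the abstract is an InfoNCE-style objective over node embeddings. This sketch assumes cosine similarity and a temperature `tau`, neither of which the abstract specifies.

```python
import numpy as np

def contrastive_loss(anchor, positive, negatives, tau=0.5):
    """InfoNCE-style loss: pull the anchor embedding toward the positive
    and push it away from negatives, with temperature `tau`."""
    def sim(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    pos = np.exp(sim(anchor, positive) / tau)
    neg = sum(np.exp(sim(anchor, n) / tau) for n in negatives)
    return -np.log(pos / (pos + neg))
```

Back-propagating this loss through the GNN updates its parameters so that linked or augmented views of a node embed close together.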

IPC Classes

  • G06N 3/04 - Architecture, e.g. interconnection topology
  • G06N 3/08 - Learning methods

55.

RECOVERING GAMUT COLOR LOSS UTILIZING LIGHTWEIGHT NEURAL NETWORKS

      
Application number 18053111
Status Pending
Filing date 2022-11-07
First publication date 2024-05-16
Owner
  • Adobe Inc. (USA)
  • York University (Canada)
Inventor(s)
  • Le, Hoang M.
  • Brown, Michael S.
  • Price, Brian
  • Cohen, Scott

Abstract

Systems, methods, and non-transitory computer-readable media embed a trained neural network within a digital image. For instance, in one or more embodiments, the systems identify out-of-gamut pixel values of a digital image in a first gamut, where the digital image is converted to the first gamut from a second gamut. Furthermore, the systems determine target pixel values of a target version of the digital image in the first gamut that correspond to the out-of-gamut pixel values. The systems train a neural network to predict the target pixel values in the first gamut based on the out-of-gamut pixel values. The systems embed the neural network within the digital image in the second gamut to allow for extraction of the embedded neural network from the digital image to restore the digital image to a larger gamut digital image.
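
Identifying out-of-gamut pixel values, the first step in the abstract, can be sketched as a simple range check after conversion to the narrower gamut (assuming channel values normalized to [0, 1]; the lightweight restoration network is not shown).

```python
import numpy as np

def out_of_gamut_mask(rgb):
    """Flag pixels with any channel outside [0, 1] after a wide-to-narrow
    gamut conversion; these are the values the network learns to restore."""
    return np.any((rgb < 0.0) | (rgb > 1.0), axis=-1)
```

Only the flagged pixels need the learned correction, which keeps the embedded network small.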

IPC Classes

  • G06T 7/90 - Determination of colour characteristics
  • G06T 11/00 - 2D [Two Dimensional] image generation

56.

EMBEDDING AN INPUT IMAGE TO A DIFFUSION MODEL

      
Application number 18053556
Status Pending
Filing date 2022-11-08
First publication date 2024-05-16
Owner ADOBE INC. (USA)
Inventor(s)
  • Gandelsman, Yosef
  • Park, Taesung
  • Zhang, Richard
  • Shechtman, Elya

Abstract

Systems and methods for image editing are described. Embodiments of the present disclosure include obtaining an image and a prompt for editing the image. A diffusion model is tuned based on the image to generate different versions of the image. The prompt is then encoded to obtain a guidance vector, and the diffusion model generates a modified image based on the image and the encoded text prompt.

IPC Classes

  • G06V 10/774 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using data integration or data reduction, e.g. principal component analysis [PCA], independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation; Bootstrap methods, e.g. bagging or boosting
  • G06T 5/00 - Image enhancement or restoration
  • G06T 11/00 - 2D [Two Dimensional] image generation
  • G06V 10/776 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using data integration or data reduction, e.g. principal component analysis [PCA], independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation; Performance evaluation
  • G06V 10/82 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
  • G06V 10/94 - Hardware or software architectures specially adapted for image or video understanding

57.

GENERATING EDITABLE EMAIL COMPONENTS UTILIZING A CONSTRAINT-BASED KNOWLEDGE REPRESENTATION

      
Application number 18055238
Status Pending
Filing date 2022-11-14
First publication date 2024-05-16
Owner Adobe Inc. (USA)
Inventor(s)
  • Chan, Yeuk-Yin
  • Thomson, Andrew
  • Kim, Caroline
  • Connelly, Cole
  • Koh, Eunyee
  • Lee, Michelle
  • Guo, Shunan

Abstract

The present disclosure relates to systems, methods, and non-transitory computer-readable media that generate editable email components by utilizing an Answer Set Programming (ASP) model with hard and soft constraints. For instance, in one or more embodiments, the disclosed systems generate editable email components from email fragments of an email file utilizing an Answer Set Programming (ASP) model. In particular, the disclosed systems extract facts for the ASP model from the email file. In addition, the disclosed systems determine rows or columns defining cells of the email file utilizing ASP hard constraints defined by a first set of ASP atoms corresponding to the facts. Moreover, the disclosed systems determine editable email component classes for the email fragments utilizing ASP soft constraints defined by ASP classification weights and a second set of ASP atoms corresponding to the facts.

IPC Classes

  • H04L 51/07 - User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail, characterised by the inclusion of specific contents
  • G06F 3/04842 - Selection of displayed objects or displayed text elements

58.

PREDICTING VIDEO EDITS FROM TEXT-BASED CONVERSATIONS USING NEURAL NETWORKS

      
Application number 18055301
Status Pending
Filing date 2022-11-14
First publication date 2024-05-16
Owner Adobe Inc. (USA)
Inventor(s)
  • Bhattacharya, Uttaran
  • Wu, Gang
  • Swaminathan, Viswanathan
  • Petrangeli, Stefano

Abstract

Embodiments are disclosed for predicting, using neural networks, editing operations for application to a video sequence based on processing conversational messages by a video editing system. In particular, in one or more embodiments, the disclosed systems and methods comprise receiving an input including a video sequence and text sentences, the text sentences describing a modification to the video sequence, mapping, by a first neural network, content of the text sentences describing the modification to the video sequence to a candidate editing operation, processing, by a second neural network, the video sequence to predict parameter values for the candidate editing operation, and generating a modified video sequence by applying the candidate editing operation with the predicted parameter values to the video sequence.

IPC Classes

  • H04N 7/00 - Television systems
  • G06T 11/60 - Editing figures and text; Combining figures or text

59.

GENERATING GESTURE REENACTMENT VIDEO FROM VIDEO MOTION GRAPHS USING MACHINE LEARNING

      
Application number 18055310
Status Pending
Filing date 2022-11-14
First publication date 2024-05-16
Owner Adobe Inc. (USA)
Inventor(s)
  • Zhou, Yang
  • Yang, Jimei
  • Saito, Jun
  • Li, Dingzeyu
  • Aneja, Deepali

Abstract

Embodiments are disclosed for generating a gesture reenactment video sequence corresponding to a target audio sequence using a trained network based on a video motion graph generated from a reference speech video. In particular, in one or more embodiments, the disclosed systems and methods comprise receiving a first input including a reference speech video and generating a video motion graph representing the reference speech video, where each node is associated with a frame of the reference video sequence and reference audio features of the reference audio sequence. The disclosed systems and methods further comprise receiving a second input including a target audio sequence, generating target audio features, identifying a node path through the video motion graph based on the target audio features and the reference audio features, and generating an output media sequence based on the identified node path through the video motion graph paired with the target audio sequence.

IPC Classes

  • G06T 7/73 - Determining position or orientation of objects or cameras using feature-based methods
  • G06F 16/683 - Retrieval of data characterised by the use of metadata, e.g. metadata not derived from the content or metadata generated manually, using metadata automatically derived from the content
  • G06F 40/242 - Dictionaries
  • G06T 7/207 - Motion analysis for motion estimation over a hierarchy of resolutions

60.

MODIFYING TWO-DIMENSIONAL IMAGES UTILIZING SEGMENTED THREE-DIMENSIONAL OBJECT MESHES OF THE TWO-DIMENSIONAL IMAGES

      
Application number 18055585
Status Pending
Filing date 2022-11-15
First publication date 2024-05-16
Owner Adobe Inc. (USA)
Inventor(s)
  • Mech, Radomir
  • Carr, Nathan
  • Gadelha, Matheus

Abstract

Methods, systems, and non-transitory computer readable storage media are disclosed for generating three-dimensional meshes representing two-dimensional images for editing the two-dimensional images. The disclosed system utilizes a first neural network to determine density values of pixels of a two-dimensional image based on estimated disparity. The disclosed system samples points in the two-dimensional image according to the density values and generates a tessellation based on the sampled points. The disclosed system utilizes a second neural network to estimate camera parameters and modify the three-dimensional mesh based on the estimated camera parameters of the pixels of the two-dimensional image. In one or more additional embodiments, the disclosed system generates a three-dimensional mesh to modify a two-dimensional image according to a displacement input. Specifically, the disclosed system maps the three-dimensional mesh to the two-dimensional image, modifies the three-dimensional mesh in response to a displacement input, and updates the two-dimensional image.

IPC Classes

  • G06T 17/20 - Wire-frame description, e.g. polygonalisation or tessellation
  • G06T 7/11 - Region-based segmentation
  • G06T 7/50 - Depth or shape recovery
  • G06T 7/70 - Determining position or orientation of objects or cameras
  • G06T 11/60 - Editing figures and text; Combining figures or text
  • G06V 20/70 - Image or video recognition or understanding; Scene-specific elements; Labelling scene content, e.g. deriving syntactic or semantic representations

61.

GENERATING ADAPTIVE THREE-DIMENSIONAL MESHES OF TWO-DIMENSIONAL IMAGES

      
Application number 18055594
Status Pending
Filing date 2022-11-15
First publication date 2024-05-16
Owner Adobe Inc. (USA)
Inventor(s)
  • Gadelha, Matheus
  • Mech, Radomir

Abstract

Methods, systems, and non-transitory computer readable storage media are disclosed for generating three-dimensional meshes representing two-dimensional images for editing the two-dimensional images. The disclosed system utilizes a first neural network to determine density values of pixels of a two-dimensional image based on estimated disparity. The disclosed system samples points in the two-dimensional image according to the density values and generates a tessellation based on the sampled points. The disclosed system utilizes a second neural network to estimate camera parameters and modify the three-dimensional mesh based on the estimated camera parameters of the pixels of the two-dimensional image. In one or more additional embodiments, the disclosed system generates a three-dimensional mesh to modify a two-dimensional image according to a displacement input. Specifically, the disclosed system maps the three-dimensional mesh to the two-dimensional image, modifies the three-dimensional mesh in response to a displacement input, and updates the two-dimensional image.

IPC classes

  • G06T 7/55 - Depth or shape recovery from multiple images

62.

DOCUMENT SIGNING AND STORAGE USING DATA MODELS AND DISTRIBUTED LEDGERS

      
Application number 18055673
Status Pending
Filing date 2022-11-15
First publication date 2024-05-16
Owner ADOBE INC. (USA)
Inventor(s)
  • He, Songlin
  • Sun, Tong
  • Jain, Rajiv
  • Lipka, Nedim
  • Wigington, Curtis
  • Roy, Anindo

Abstract

A method includes populating a template database with templates associated with template identifiers (IDs) identifying the templates. The method also includes generating a data model that references a template within the template database, where the data model includes a template ID referencing the template in the template database, and where the template includes a parameter field. The data model further includes a template parameter to apply to the parameter field and a digital signature for at least the template ID and the template parameter. The method also includes deploying the data model within a distributed ledger.
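The data model described in the abstract above can be sketched as follows. This is a minimal illustration, not the patented method: HMAC-SHA256 stands in for the unspecified digital-signature scheme, and the field names (`template_id`, `params`, `signature`) are hypothetical.

```python
import hashlib
import hmac
import json

def sign_data_model(template_id, params, key):
    """Build a minimal data model: a template reference, template
    parameters, and a signature over both (HMAC-SHA256 is an
    illustrative stand-in for a real digital-signature scheme)."""
    payload = json.dumps({"template_id": template_id, "params": params},
                         sort_keys=True).encode("utf-8")
    sig = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return {"template_id": template_id, "params": params, "signature": sig}

def verify_data_model(model, key):
    """Recompute the signature over the template ID and parameters and
    compare it against the stored one in constant time."""
    payload = json.dumps({"template_id": model["template_id"],
                          "params": model["params"]},
                         sort_keys=True).encode("utf-8")
    expected = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, model["signature"])
```

Any tampering with the template ID or a parameter after signing makes verification fail, which is what allows such a record to be safely deployed on a distributed ledger.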

IPC classes

  • G06F 21/64 - Protecting data integrity, e.g. using checksums, certificates or signatures

63.

DETECTING AND CLASSIFYING FILLER WORDS IN AUDIO USING NEURAL NETWORKS

      
Application number 18055739
Status Pending
Filing date 2022-11-15
First publication date 2024-05-16
Owner Adobe Inc. (USA)
Inventor(s)
  • Salamon, Justin
  • Caceres Chomali, Juan-Pablo
  • Zhu, Ge
  • Bryan, Nicholas J.

Abstract

Embodiments are disclosed for performing a filler word detection process on input audio by a media editing system using trained neural networks. In particular, in one or more embodiments, the disclosed systems and methods comprise receiving an input including an audio sequence, analyzing the audio sequence to determine filler word candidates, classifying, by a filler word classification model, each filler word candidate of the filler word candidates into one of a set of categories, and generating an output audio sequence, the output audio sequence including an identification of a subset of the filler word candidates in a filler words category of the set of categories as identified filler words.

IPC classes

  • G10L 15/16 - Speech classification or search using artificial neural networks
  • G10L 15/22 - Procedures used during a speech recognition process, e.g. man-machine dialogue
  • G10L 25/78 - Detection of presence or absence of voice signals

64.

GENERATION OF STYLIZED DRAWING OF THREE-DIMENSIONAL SHAPES USING NEURAL NETWORKS

      
Application number 18419287
Status Pending
Filing date 2024-01-22
First publication date 2024-05-16
Owner
  • Adobe Inc. (USA)
  • University of Massachusetts (USA)
Inventor(s)
  • Hertzmann, Aaron
  • Fisher, Matthew
  • Liu, Difan
  • Kalogerakis, Evangelos

Abstract

Techniques for generating a stylized drawing of three-dimensional (3D) shapes using neural networks are disclosed. A processing device generates a set of vector curve paths from a viewpoint of a 3D shape; extracts, using a first neural network of a plurality of neural networks of a machine learning model, surface geometry features of the 3D shape based on geometric properties of surface points of the 3D shape; determines, using a second neural network of the plurality of neural networks of the machine learning model, a set of at least one predicted stroke attribute based on the surface geometry features and a predetermined drawing style; generates, based on the at least one predicted stroke attribute, a set of vector stroke paths corresponding to the set of vector curve paths; and outputs a two-dimensional (2D) stylized stroke drawing of the 3D shape based at least on the set of vector stroke paths.

IPC classes

  • G06T 11/00 - 2D [Two Dimensional] image generation
  • G06N 3/045 - Combinations of networks
  • G06T 11/20 - Drawing from basic elements, e.g. lines or circles

65.

MODIFYING DIGITAL IMAGES UTILIZING INTENT DETERMINISTIC USER INTERFACE TOOLS

      
Application number 18404648
Status Pending
Filing date 2024-01-04
First publication date 2024-05-09
Owner Adobe Inc. (USA)
Inventor(s)
  • Smith, Kevin Gary
  • Joss, Matthew
  • Cohen, Scott

Abstract

The present disclosure relates to systems, methods, and non-transitory computer-readable media that modify digital images using an intelligent user interface tool that determines the intent of a user interaction. For instance, in some embodiments, the disclosed systems receive, via a graphical user interface of a client device, a user interaction with a set of pixels within a digital image. The disclosed systems determine, based on the user interaction, a user intent for targeting one or more portions of the digital image for deletion, the one or more portions including an additional set of pixels that differs from the set of pixels. Based on the user intent, the disclosed systems modify the digital image to delete the one or more portions from the digital image.

IPC classes

  • G06T 5/77 - Retouching; Inpainting; Scratch removal
  • G06F 3/04845 - Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or transforming a displayed object, image or text element, setting a parameter value or selecting a range, for image transformation, e.g. dragging, rotation, magnification or colour change
  • G06T 5/70 - Denoising; Smoothing
  • G06T 7/11 - Region-based segmentation
  • G06T 7/194 - Segmentation; Edge detection involving foreground-background segmentation
  • G06T 7/70 - Determining position or orientation of objects or cameras

66.

TABULAR DATA MACHINE-LEARNING MODELS

      
Application number 17979843
Status Pending
Filing date 2022-11-03
First publication date 2024-05-09
Owner Adobe Inc. (USA)
Inventor(s)
  • Qin, Can
  • Kim, Sungchul
  • Yu, Tong
  • Rossi, Ryan A.
  • Zhao, Handong

Abstract

Tabular data machine-learning model techniques and systems are described. In one example, common-sense knowledge is infused into training data through use of a knowledge graph to provide external knowledge to supplement a tabular data corpus. In another example, a dual-path architecture is employed to configure an adapter module. In an implementation, the adapter module is added as part of a pre-trained machine-learning model for general purpose tabular models. Specifically, dual-path adapters are trained using the knowledge graphs and semantically augmented training data. A path-wise attention layer is applied to fuse a cross-modality representation of the two paths for a final result.

IPC classes

  • G06N 5/02 - Knowledge representation; Symbolic representation

67.

CHANGING COORDINATE SYSTEMS FOR DATA BOUND OBJECTS

      
Application number 17980476
Status Pending
Filing date 2022-11-03
First publication date 2024-05-09
Owner Adobe Inc. (USA)
Inventor(s)
  • Kerr, Bernard
  • Baranovskiy, Dmytro

Abstract

Embodiments are disclosed for changing coordinate systems for data bound objects. In some embodiments, a method of changing coordinate systems for data bound objects includes receiving a selection of at least one graphic object associated with a data visualization on a canvas of a graphic design application, wherein the data visualization includes a plurality of graphic objects. Additionally, a request is received to convert the data visualization from a first coordinate space to a second coordinate space. A subset of the plurality of graphic objects to convert to the second coordinate space is identified, the subset of the plurality of graphic objects having a same object type. A view of the plurality of graphic objects is generated by converting the subset of the plurality of graphic objects to the second coordinate space.

IPC classes

  • G06T 11/20 - Drawing from basic elements, e.g. lines or circles
  • G06T 11/60 - Editing figures and text; Combining figures or text

68.

MANAGING MULTIPLE DATASETS FOR DATA BOUND OBJECTS

      
Application number 17980479
Status Pending
Filing date 2022-11-03
First publication date 2024-05-09
Owner Adobe Inc. (USA)
Inventor(s)
  • Kerr, Bernard
  • Baranovskiy, Dmytro
  • Farrell, Benjamin

Abstract

Embodiments are disclosed for managing multiple data visualizations on a digital canvas. In some embodiments, a method of managing multiple data visualizations includes generating a first graphic object on a digital canvas. A first dataset is received and used to generate a first chart based on the first dataset and a visual property of the first graphic object. The first chart comprises a first plurality of graphic objects including the first graphic object. A second dataset is then received and used to generate a second chart on the digital canvas based on the second dataset. The second chart includes a second plurality of graphic objects. An axis of the first chart and an axis of the second chart are merged such that the axis of the first chart and the axis of the second chart share a scale attribute.

IPC classes

  • G06T 11/20 - Drawing from basic elements, e.g. lines or circles
  • G06T 3/40 - Scaling of a whole image or part of an image

69.

VECTOR OBJECT BLENDING

      
Application number 17980881
Status Pending
Filing date 2022-11-04
First publication date 2024-05-09
Owner Adobe Inc. (USA)
Inventor(s)
  • Beri, Tarun
  • Fisher, Matthew David

Abstract

Techniques for vector object blending are described to generate a transformed vector object based on a first vector object and a second vector object. A transformation module, for instance, receives a first vector object that includes a plurality of first paths and a second vector object that includes a plurality of second paths. The transformation module computes morphing costs based on a correspondence within candidate path pairs that include one of the first paths and one of the second paths. Based on the morphing costs, the transformation module generates a low-cost mapping of paths between the first paths and the second paths. To generate the transformed vector object, the transformation module adjusts one or more properties of at least one of the first paths based on the mapping, such as geometry, appearance, and z-order.

IPC classes

  • G06T 11/60 - Editing figures and text; Combining figures or text

70.

BOT ACTIVITY DETECTION FOR EMAIL TRACKING

      
Application number 17983687
Status Pending
Filing date 2022-11-09
First publication date 2024-05-09
Owner Adobe Inc. (USA)
Inventor(s)
  • Chen, Xiang
  • Zheng, Yifu
  • Swaminathan, Viswanathan
  • Reddy, Sreekanth
  • Mitra, Saayan
  • Sinha, Ritwik
  • Kumbi, Niranjan
  • Lai, Alan

Abstract

In some embodiments, techniques for identifying email events generated by bot activity are provided. For example, a process may involve applying bot detection patterns to identify bot activity among email response events.

IPC classes

  • G06F 21/55 - Detecting local intrusion or implementing counter-measures

71.

AUTOMATIC TEMPLATE RECOMMENDATION

      
Application number 17984058
Status Pending
Filing date 2022-11-09
First publication date 2024-05-09
Owner Adobe Inc. (USA)
Inventor(s)
  • Mondal, Prasenjit
  • Soni, Sachin
  • Malik, Anshul

Abstract

Embodiments are disclosed for providing customizable, visually aesthetic color diverse template recommendations derived from a source image. A method may include receiving a source image and determining a source image background by separating a foreground of the source image from a background of the source image. The method separates a foreground from the background by identifying portions of the image that belong to the background and stripping out the rest of the image. The method includes identifying a text region of the source image using a machine learning model and identifying font type using the identified text region. The method includes generating an editable template image using the source image background, the text region, and the font type.

IPC classes

  • G06V 30/19 - Recognition using electronic means
  • G06T 7/11 - Region-based segmentation
  • G06T 7/194 - Segmentation; Edge detection involving foreground-background segmentation
  • G06V 10/60 - Extraction of image or video features relating to illumination properties, e.g. using a reflectance or lighting model
  • G06V 20/62 - Text, e.g. license plates, overlay texts or captions on TV images
  • G06V 30/244 - Division of character sequences into groups prior to recognition; Selection of dictionaries using graphical properties, e.g. alphabet type or font

72.

DYNAMIC COPYFITTING PARAMETER ESTIMATION

      
Application number 17984143
Status Pending
Filing date 2022-11-09
First publication date 2024-05-09
Owner Adobe Inc. (USA)
Inventor(s)
  • Agarwal, Rishav
  • Hegde, Vidisha Rama
  • Gupta, Vasu
  • Jain, Sanyam

Abstract

Embodiments are disclosed for real-time copyfitting using a shape of a content area and input text. A content area and an input text for performing copyfitting using a trained classifier are received. A number of remaining characters in the content area is computed in real-time using the input text, the computing performed in response to receiving additional input text. Computing the number of remaining characters includes generating, by the trained classifier, a set of weights including a first set of one or more weights for the input text and a second set of one or more weights for the content area. Based on the first set of one or more weights, the second set of one or more weights, the input text, and the additional input text, a copyfitting parameter indicating a number of additional characters to be fitted into the content area is determined. The copyfitting parameter and the number of remaining characters are presented in real-time.

IPC classes

  • G06F 40/103 - Formatting, i.e. changing the appearance of documents

73.

AUTOMATIC FORECASTING USING META-LEARNING

      
Application number 18050607
Status Pending
Filing date 2022-10-28
First publication date 2024-05-09
Owner ADOBE INC. (USA)
Inventor(s)
  • Rossi, Ryan A.
  • Mahadik, Kanak
  • Abdallah, Mustafa Abdallah Elhosiny
  • Kim, Sungchul
  • Zhao, Handong

Abstract

Systems and methods for automatic forecasting are described. Embodiments of the present disclosure receive a time-series dataset; compute a time-series meta-feature vector based on the time-series dataset; generate a performance score for a forecasting model using a meta-learner machine learning model that takes the time-series meta-feature vector as input; select the forecasting model from a plurality of forecasting models based on the performance score; and generate predicted time-series data based on the time-series dataset using the selected forecasting model.
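The select-then-forecast flow in the abstract above can be sketched as follows. Everything concrete here is an illustrative assumption, not the patented system: the meta-features (mean, standard deviation, lag-1 autocorrelation), the three candidate forecasters, and the fixed linear scoring weights that stand in for a trained meta-learner.

```python
import numpy as np

def meta_features(y):
    """A tiny time-series meta-feature vector: mean, standard deviation,
    and lag-1 autocorrelation."""
    y = np.asarray(y, dtype=float)
    r1 = np.corrcoef(y[:-1], y[1:])[0, 1]
    return np.array([y.mean(), y.std(), r1])

# Candidate forecasters: each maps a history to a constant next value.
FORECASTERS = {
    "naive": lambda y: y[-1],                                   # repeat last value
    "mean": lambda y: float(np.mean(y)),                        # historical mean
    "drift": lambda y: y[-1] + (y[-1] - y[0]) / (len(y) - 1),  # linear drift
}

# Stand-in meta-learner: one fixed linear scorer per candidate model
# (illustrative weights; a real system would learn these from many series).
META_WEIGHTS = {
    "naive": np.array([0.0, 0.0, 1.0]),   # favored when autocorrelation is high
    "mean":  np.array([0.0, -1.0, 0.0]),  # favored for low-variance series
    "drift": np.array([0.0, 0.5, 0.5]),
}

def select_and_forecast(y, horizon=3):
    """Score each candidate from the meta-features, pick the best,
    and forecast `horizon` steps with the selected model."""
    phi = meta_features(y)
    scores = {name: float(w @ phi) for name, w in META_WEIGHTS.items()}
    best = max(scores, key=scores.get)
    step = FORECASTERS[best](np.asarray(y, dtype=float))
    return best, [step] * horizon
```

On a strongly trending series such as `[1, 2, 3, 4, 5, 6]`, this toy scorer selects the drift forecaster.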

IPC classes

  • G06N 3/0985 - Hyperparameter optimisation; Meta-learning; Learning-to-learn
  • G06Q 10/04 - Forecasting or optimisation specially adapted for administrative or management purposes, e.g. linear programming or the "cutting stock problem"

74.

GENERATIVE GRAPH MODELING FRAMEWORK

      
Application number 18051364
Status Pending
Filing date 2022-10-31
First publication date 2024-05-09
Owner ADOBE INC. (USA)
Inventor(s)
  • Chanpuriya, Sudhanshu
  • Rossi, Ryan A.
  • Lipka, Nedim
  • Rao, Anup Bandigadi
  • Mai, Tung
  • Song, Zhao

Abstract

Systems and methods for data augmentation are described. Embodiments of the present disclosure receive a dataset that includes a plurality of nodes and a plurality of edges, wherein each of the plurality of edges connects two of the plurality of nodes; compute a first nonnegative matrix representing a homophilous cluster affinity; compute a second nonnegative matrix representing a heterophilous cluster affinity; compute a probability of an additional edge based on the dataset using a machine learning model that represents a homophilous cluster and a heterophilous cluster based on the first nonnegative matrix and the second nonnegative matrix; and generate an augmented dataset including the plurality of nodes, the plurality of edges, and the additional edge.

IPC classes

  • G06N 20/00 - Machine learning
  • G06F 7/78 - Arrangements for rearranging, permuting or selecting data according to predetermined rules, independently of the content of the data, for changing the order of data flow, e.g. matrix transposition or LIFO buffers; Overflow or underflow handling therefor

75.

SYSTEMS AND METHODS FOR CONTENT CUSTOMIZATION

      
Application number 18051736
Status Pending
Filing date 2022-11-01
First publication date 2024-05-09
Owner ADOBE INC. (USA)
Inventor(s)
  • Addanki, Raghavendra Kiran
  • Arbour, David
  • Mai, Tung
  • Rao, Anup Bandigadi
  • Musco, Cameron N.

Abstract

Systems and methods for content customization are described. According to one aspect, a content customization apparatus is provided. The apparatus includes a processor; a memory storing instructions executable by the processor; a user feature component configured to generate user feature vectors representing user features for a plurality of users, respectively; a group selection component configured to select a treatment group and a control group based on the user feature vectors; a machine learning model configured to train a treatment effect estimator based on the user feature vectors and outcome data for the treatment group and the control group; and a content component configured to provide customized content based on the treatment effect estimator.

IPC classes

  • G16H 10/20 - ICT specially adapted for the handling or processing of patient-related medical or healthcare data for electronic clinical trials or questionnaires

76.

CONSISTENT DOCUMENT MODIFICATION

      
Application number 18054028
Status Pending
Filing date 2022-11-09
First publication date 2024-05-09
Owner ADOBE INC. (USA)
Inventor(s)
  • Jain, Sanyam
  • Jain, Ashish

Abstract

Systems and methods for consistent document modification are provided. Embodiments include accessing a first document that comprises a first document object, where the first document object has a first document object style. The embodiments further comprise accessing a second document that comprises a second document object, where the second document object has a second document object style. The first document object style is to be modified based on the second document object style. The embodiments include hashing the first document object style to generate a first document object style hash, and hashing the second document object style to generate a second document object style hash. Based on determining the first document object style hash is different from the second document object style hash, the first document object is modified, within the first document, to comprise a modified first document object style that corresponds to the second document object style.
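The hash-and-compare step described above can be sketched as follows, assuming object styles are plain dictionaries and using SHA-256 over a canonical JSON serialization (both illustrative choices; the abstract does not specify a hash function or serialization).

```python
import hashlib
import json

def style_hash(style):
    """Hash an object style by serializing it canonically (sorted keys,
    no insignificant whitespace) so equal styles always hash equally."""
    blob = json.dumps(style, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(blob.encode("utf-8")).hexdigest()

def sync_style(target_obj, source_obj):
    """If the two style hashes differ, overwrite the target object's
    style with the source's; otherwise leave the target untouched.
    Returns True when a modification was made."""
    if style_hash(target_obj["style"]) != style_hash(source_obj["style"]):
        target_obj["style"] = dict(source_obj["style"])
        return True
    return False
```

Because the serialization is canonical, two styles that differ only in key order hash identically and trigger no modification.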

IPC classes

  • H04L 9/06 - Arrangements for secret or secure communications; Network security protocols; the encryption apparatus using shift registers or memories for block-wise coding, e.g. DES systems

77.

USING INTRINSIC MULTIMODAL FEATURES OF IMAGE FOR DOMAIN GENERALIZED

      
Application number 17976541
Status Pending
Filing date 2022-10-28
First publication date 2024-05-09
Owner ADOBE INC. (USA)
Inventor(s)
  • Mangla, Puneet
  • Aggarwal, Milan
  • Krishnamurthy, Balaji

Abstract

Various embodiments classify one or more portions of an image based on deriving an “intrinsic” modality. Such intrinsic modality acts as a substitute for a “text” modality in a multi-modal network. A text modality in image processing is typically a natural language text that describes one or more portions of an image. However, explicit natural language text may not be available across one or more domains for training a multi-modal network. Accordingly, various embodiments described herein generate an intrinsic modality, which is also a description of one or more portions of an image, except that such description is not an explicit natural language description, but rather a machine learning model representation. Some embodiments additionally leverage a visual modality obtained from a vision-only model or branch, which may learn domain characteristics that are not present in the multi-modal network. Some embodiments additionally fuse or integrate the intrinsic modality with the visual modality for better generalization.

IPC classes

  • G06V 10/80 - Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
  • G06F 40/40 - Processing or translation of natural language
  • G06V 10/764 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning, using classification, e.g. of video objects
  • G06V 10/77 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning, using data integration or reduction, e.g. principal component analysis [PCA], independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
  • G06V 10/774 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning, using data integration or reduction; Bootstrap methods, e.g. "bagging" or "boosting"
  • G06V 10/82 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning, using neural networks
  • G06V 10/86 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning, using graph matching

78.

BINDING DATA TO GRAPHIC OBJECTS USING A VISUAL INDICATOR

      
Application number 17980480
Status Pending
Filing date 2022-11-03
First publication date 2024-05-09
Owner Adobe Inc. (USA)
Inventor(s)
  • Kerr, Bernard
  • Baranovskiy, Dmytro
  • Farrell, Benjamin

Abstract

Embodiments are disclosed for generating a data visualization. In some embodiments, a method of generating a data visualization includes generating a first graphic object on a digital canvas. A data set including data associated with a plurality of variables is added to a data panel of the digital canvas. A selection of a variable from the plurality of variables on the data panel is received and a second graphic object connecting the variable and a cursor position on the digital canvas is generated. A selection of a visual property of the first graphic object is received using the cursor. Upon selection of the visual property, the first graphic object is linked to the data panel via the second graphic object. A chart is then generated comprising the first graphic object and one or more new graphic objects, based on the variable and the visual property of the first graphic object.

IPC classes

  • G06T 11/20 - Drawing from basic elements, e.g. lines or circles
  • G06T 11/60 - Editing figures and text; Combining figures or text

79.

MAPPING COLOR TO DATA FOR DATA BOUND OBJECTS

      
Application number 17980481
Status Pending
Filing date 2022-11-03
First publication date 2024-05-09
Owner Adobe Inc. (USA)
Inventor(s)
  • Kerr, Bernard
  • Baranovskiy, Dmytro
  • Farrell, Benjamin

Abstract

Embodiments are disclosed for binding colors to data visualizations on a digital canvas. In some embodiments, a method of binding colors to data visualizations includes receiving a data set including data associated with a variable. A chart, including a plurality of graphic objects, is generated based on the variable of the data set and a visual property of the plurality of graphic objects. A data type associated with the variable is determined, and first colors are assigned to the plurality of graphic objects based on the data type using a color binding. A selection of second colors to be assigned to the plurality of graphic objects is received, and the chart is updated using the second colors.

IPC classes

  • G06T 11/00 - 2D [Two Dimensional] image generation
  • G06T 11/20 - Drawing from basic elements, e.g. lines or circles

80.

AUTOMATICALLY GENERATING AXES FOR DATA VISUALIZATIONS INCLUDING DATA BOUND OBJECTS

      
Application number 17980485
Status Pending
Filing date 2022-11-03
First publication date 2024-05-09
Owner Adobe Inc. (USA)
Inventor(s)
  • Kerr, Bernard
  • Baranovskiy, Dmytro
  • Lucier, Corey

Abstract

Embodiments are disclosed for generating a data-bound axis for a data visualization. In some embodiments, a method of generating a data-bound axis for a data visualization includes receiving a data set and generating a chart including a plurality of graphic objects based on the data set and a visual property of the plurality of graphic objects. A scale associated with the chart is determined based on the data set and the plurality of graphic objects. At least one axis graphic object is generated based on the scale. The at least one axis graphic object is added to the plurality of graphic objects of the chart.

IPC classes

  • G06T 11/20 - Drawing from basic elements, e.g. lines or circles
  • G06T 3/40 - Scaling of a whole image or part of an image

81.

TRACKING UNIQUE FACE IDENTITIES IN VIDEOS

      
Application number 17981985
Status Pending
Filing date 2022-11-07
First publication date 2024-05-09
Owner ADOBE INC. (USA)
Inventor(s)
  • Aminian, Ali
  • Misraa, Aashish Kumar
  • Garg, Kshitiz
  • Agarwala, Aseem

Abstract

Some aspects of the technology described herein perform identity identification on faces in a video. Object tracking is performed on detected faces in frames of a video to generate tracklets. Each tracklet comprises a sequence of consecutive frames in which each frame includes a detected face for a person. The tracklets are clustered using face feature vectors for detected faces of each tracklet to generate a plurality of clusters. Information is stored in an identity datastore, including a first identifier for a first identity in association with an indication of frames from tracklets in a first cluster from the plurality of clusters.
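The clustering step described above can be sketched with a simple greedy scheme: each tracklet's face feature vector joins the first cluster whose centroid it matches above a cosine-similarity threshold, or starts a new cluster. This is an illustrative stand-in only; the abstract does not specify the clustering algorithm, feature dimensionality, or threshold.

```python
import numpy as np

def cluster_tracklets(features, threshold=0.8):
    """Greedily cluster per-tracklet face feature vectors by cosine
    similarity to running cluster centroids. Returns a list of clusters,
    each a list of tracklet indices."""
    clusters = []  # each: {"sum": centroid sum, "n": count, "members": [...]}
    for idx, f in enumerate(features):
        f = f / np.linalg.norm(f)  # unit-normalize the feature vector
        for c in clusters:
            centroid = c["sum"] / c["n"]
            centroid = centroid / np.linalg.norm(centroid)
            if float(f @ centroid) >= threshold:
                c["sum"] += f
                c["n"] += 1
                c["members"].append(idx)
                break
        else:  # no cluster matched: this tracklet starts a new identity
            clusters.append({"sum": f.copy(), "n": 1, "members": [idx]})
    return [c["members"] for c in clusters]
```

Each resulting cluster then maps to one identity identifier, with its member tracklets' frame ranges stored in the identity datastore.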

IPC classes

  • G06V 40/16 - Human faces, e.g. facial parts, sketches or expressions
  • G06T 7/20 - Analysis of motion
  • G06V 10/30 - Noise filtering
  • G06V 10/762 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning, using clustering, e.g. of similar faces in social networks
  • G06V 20/40 - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING; Scene-specific elements in video content

82.

RECONSTRUCTING GENERAL RADIAL GRADIENTS

      
Application number 18051648
Status Pending
Filing date 2022-11-01
First publication date 2024-05-09
Owner ADOBE INC. (USA)
Inventor(s)
  • Lukac, Michal
  • Chakraborty, Souymodip
  • Fisher, Matthew David
  • Batra, Vineet
  • Phogat, Ankit

Abstract

Systems and methods for image processing are described. Embodiments of the present disclosure include receiving a raster image depicting a radial color gradient; computing a radial disk model for the radial color gradient, wherein the radial disk model defines a plurality of disks with centers aligned in a same direction; constructing a vector graphics representation of the radial color gradient based on the radial disk model; and generating a vector graphics image depicting the radial color gradient based on the vector graphics representation.

IPC classes

  • G06T 11/00 - 2D [Two Dimensional] image generation
  • G06T 7/90 - Determination of colour characteristics
  • G06V 10/56 - Extraction of image or video features relating to colour
  • G06V 10/74 - Image or video pattern matching; Proximity measures in feature spaces

83.

AUTOMATICALLY GENERATING GRAPHIC DESIGN VARIANTS FROM INPUT TEXT

      
Application number 18052693
Status Pending
Filing date 2022-11-04
First publication date 2024-05-09
Owner ADOBE INC. (USA)
Inventor(s)
  • Shukla, Tripti
  • Vagolu, Khyathi
  • Rout, Sarthak
  • Neeraje, Nakula
  • Amarnath, Akhash Nakkonda
  • Srinivasan, Balaji Vasan

Abstract

Systems and methods for automatically generating graphic design documents are described. Embodiments include identifying an input text that includes a plurality of phrases; obtaining one or more images based on the input text; encoding an image of the one or more images in a vector space using a multimodal encoder to obtain a vector image representation; encoding a phrase from the plurality of phrases in the vector space using the multimodal encoder to obtain a vector text representation; selecting an image text combination including the image and the phrase by comparing the vector image representation and the vector text representation; selecting a design template from a plurality of candidate design templates based on the image text combination; and generating a document based on the design template, wherein the document includes the at least one image and the at least one phrase.
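The image-phrase matching step described above can be sketched as an exhaustive cosine-similarity search over embedding pairs. The vectors here stand in for outputs of an unspecified multimodal encoder (a joint image-text embedding such as CLIP would be one possibility); the 2-dimensional toy vectors are purely illustrative.

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def best_pair(image_vecs, phrase_vecs):
    """Return the (image index, phrase index) pair whose embeddings are
    most similar, along with the similarity score."""
    best, best_score = None, -2.0  # cosine similarity is always >= -1
    for i, iv in enumerate(image_vecs):
        for j, pv in enumerate(phrase_vecs):
            s = cosine(iv, pv)
            if s > best_score:
                best, best_score = (i, j), s
    return best, best_score
```

The selected image-phrase combination would then drive template selection and document generation as in the abstract.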

IPC Classes

  • G06F 40/186 - Templates
  • G06F 16/56 - Information retrieval; Database structures therefor; File system structures therefor of still image data in vector format
  • G06F 40/295 - Named entity recognition
  • G06F 40/56 - Natural language generation
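
The selection step above scores every image/phrase pair in a shared vector space. The multimodal encoder itself (a CLIP-style model, by this reading) is out of scope, but the comparison can be sketched with cosine similarity over pre-computed embeddings; all function names here are hypothetical:

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def best_pair(image_vecs, phrase_vecs):
    """Return (image_index, phrase_index) of the most similar
    image/phrase combination in the shared vector space -- an assumed
    reading of 'selecting an image-text combination by comparing'
    the two representations."""
    return max(
        ((i, j) for i in range(len(image_vecs)) for j in range(len(phrase_vecs))),
        key=lambda ij: cosine(image_vecs[ij[0]], phrase_vecs[ij[1]]),
    )
```

Exhaustive scoring is quadratic in the number of candidates, which is typically acceptable at the scale of a single design document.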

84.

SINGLE IMAGE CONCEPT ENCODER FOR PERSONALIZATION USING A PRETRAINED DIFFUSION MODEL

      
Application number 18053450
Status Pending
Filing date 2022-11-08
Date of first publication 2024-05-09
Owner ADOBE INC. (USA)
Inventor(s)
  • Motiian, Saeid
  • Ghadar, Shabnam

Abstract

Systems and methods for image processing are provided. One aspect of the systems and methods includes identifying a style image including a target style. A style encoder network generates a style vector representing the target style based on the style image. The style encoder can be trained based on a style loss that encourages the network to match a desired style. A diffusion model generates a synthetic image that includes the target style based on the style vector. The diffusion model is trained independently of the style encoder network.

IPC Classes

  • G06V 10/82 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
  • G06V 10/75 - Image or video pattern matching; Proximity measures in feature spaces using context analysis; Selection of dictionaries
  • G06V 10/771 - Feature selection, e.g. selecting representative features from a multi-dimensional feature space

85.

GUIDED COMODGAN OPTIMIZATION

      
Application number 18053641
Status Pending
Filing date 2022-11-08
Date of first publication 2024-05-09
Owner ADOBE INC. (USA)
Inventor(s)
  • Azizi, Zohreh
  • Sinha, Surabhi
  • Khodadadeh, Siavash

Abstract

Methods for image processing are described. Embodiments of the present disclosure identify an image generation network that includes an encoder and a decoder; prune channels of a block of the encoder; prune channels of a block of the decoder that is connected to the block of the encoder by a skip connection, wherein the channels of the block of the decoder are pruned based on the pruned channels of the block of the encoder; and generate an image using the image generation network based on the pruned channels of the block of the encoder and the pruned channels of the block of the decoder.

IPC Classes

  • G06N 3/08 - Learning methods
  • G06N 3/04 - Architecture, e.g. interconnection topology
  • G06T 5/00 - Image enhancement or restoration
  • G06V 40/16 - Human faces, e.g. facial parts, sketches or expressions
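
The key constraint in the abstract is that a skip connection forces the encoder and decoder blocks to drop the *same* channels. A minimal sketch of that coupling (the magnitude-based selection criterion and all names are assumptions, not the patent's method):

```python
import numpy as np

def keep_by_magnitude(dec_w, k):
    """Assumed pruning criterion: keep the k input channels whose
    decoder weights have the largest L1 norm."""
    scores = np.abs(dec_w).sum(axis=1)
    return np.sort(np.argsort(scores)[-k:])

def prune_skip_pair(enc_out, dec_w, keep_idx):
    """Given an encoder block output of shape (N, C) and a decoder
    weight matrix of shape (C, C_out) fed through a skip connection,
    drop the same channel indices on both sides so the shapes stay
    consistent after pruning."""
    pruned_act = enc_out[:, keep_idx]   # prune encoder output channels
    pruned_w = dec_w[keep_idx, :]       # prune the matching decoder inputs
    return pruned_act, pruned_w
```

Pruning the two sides independently would break the skip connection's shape contract, which is why the decoder's kept indices are derived from the encoder's.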

86.

GENERATING THREE-DIMENSIONAL HUMAN MODELS REPRESENTING TWO-DIMENSIONAL HUMANS IN TWO-DIMENSIONAL IMAGES

      
Application number 18304144
Status Pending
Filing date 2023-04-20
Date of first publication 2024-05-02
Owner Adobe Inc. (USA)
Inventor(s)
  • Gori, Giorgio
  • Zhou, Yi
  • Wang, Yangtuanfeng
  • Zhou, Yang
  • Singh, Krishna Kumar
  • Yoon, Jae Shin
  • Aksit, Duygu Ceylan

Abstract

The present disclosure relates to systems, methods, and non-transitory computer-readable media that modify two-dimensional images via scene-based editing using three-dimensional representations of the two-dimensional images. For instance, in one or more embodiments, the disclosed systems utilize three-dimensional representations of two-dimensional images to generate and modify shadows in the two-dimensional images according to various shadow maps. Additionally, the disclosed systems utilize three-dimensional representations of two-dimensional images to modify humans in the two-dimensional images. The disclosed systems also utilize three-dimensional representations of two-dimensional images to provide scene scale estimation via scale fields of the two-dimensional images. In some embodiments, the disclosed systems utilize three-dimensional representations of two-dimensional images to generate and visualize 3D planar surfaces for modifying objects in two-dimensional images. The disclosed systems further use three-dimensional representations of two-dimensional images to customize focal points for the two-dimensional images.

IPC Classes

  • G06T 7/73 - Determining position or orientation of objects or cameras using feature-based methods

87.

MODIFYING POSES OF TWO-DIMENSIONAL HUMANS IN TWO-DIMENSIONAL IMAGES BY REPOSING THREE-DIMENSIONAL HUMAN MODELS REPRESENTING THE TWO-DIMENSIONAL HUMANS

      
Application number 18304147
Status Pending
Filing date 2023-04-20
Date of first publication 2024-05-02
Owner Adobe Inc. (USA)
Inventor(s)
  • Gori, Giorgio
  • Zhou, Yi
  • Wang, Yangtuanfeng
  • Zhou, Yang
  • Singh, Krishna Kumar
  • Yoon, Jae Shin
  • Aksit, Duygu Ceylan

Abstract

The present disclosure relates to systems, methods, and non-transitory computer-readable media that modify two-dimensional images via scene-based editing using three-dimensional representations of the two-dimensional images. For instance, in one or more embodiments, the disclosed systems utilize three-dimensional representations of two-dimensional images to generate and modify shadows in the two-dimensional images according to various shadow maps. Additionally, the disclosed systems utilize three-dimensional representations of two-dimensional images to modify humans in the two-dimensional images. The disclosed systems also utilize three-dimensional representations of two-dimensional images to provide scene scale estimation via scale fields of the two-dimensional images. In some embodiments, the disclosed systems utilize three-dimensional representations of two-dimensional images to generate and visualize 3D planar surfaces for modifying objects in two-dimensional images. The disclosed systems further use three-dimensional representations of two-dimensional images to customize focal points for the two-dimensional images.

IPC Classes

  • G06T 19/20 - Transformation of 3D [3D] models or images for computer graphics; Editing of 3D [3D] images, e.g. changing shapes or colours, aligning objects or positioning parts
  • G06T 7/70 - Determining position or orientation of objects or cameras
  • G06T 15/00 - 3D [3D] image rendering
  • G06T 17/00 - 3D [3D] modelling for computer graphics

88.

UPDATING ZOOM PROPERTIES OF CORRESPONDING SALIENT OBJECTS

      
Application number 17974041
Status Pending
Filing date 2022-10-26
Date of first publication 2024-05-02
Owner Adobe Inc. (USA)
Inventor(s) Murarka, Ankur

Abstract

Techniques for updating zoom properties of corresponding salient objects are described that support parallel zooming for image comparison. In an implementation, a zoom input is received involving a salient object in a digital image in a user interface. An identification module identifies the salient object in the digital image and zoom properties for the salient object. A detection module identifies a corresponding salient object in at least one additional digital image and zoom properties for the corresponding salient object in the at least one additional digital image. An adjustment module then updates the zoom properties for the corresponding salient object in the at least one additional digital image based on the zoom properties for the salient object in the digital image.

IPC Classes

  • H04N 5/232 - Devices for controlling television cameras, e.g. remote control
  • G06V 10/22 - Image preprocessing by selection of a specific region containing or referencing a pattern; Locating or processing of specific regions to guide the detection or recognition
  • G06V 10/75 - Image or video pattern matching; Proximity measures in feature spaces using context analysis; Selection of dictionaries
  • G06V 20/40 - Scene-specific elements in video content
  • G06V 40/16 - Human faces, e.g. facial parts, sketches or expressions
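
The adjustment step above carries a zoom from one salient object to its counterpart in another image. One plausible reading (names and the size-normalization choice are assumptions) is to zoom about the corresponding object's center and scale the factor by the relative object size, so both objects end up the same on-screen size:

```python
def sync_zoom(source_box, source_zoom, target_box):
    """Map a zoom applied to a salient object in one image onto the
    corresponding salient object in another image. Boxes are
    (x, y, w, h); the returned zoom properties are (center, factor).
    The sqrt-of-area-ratio normalization is an assumption, not the
    patent's formula."""
    sx, sy, sw, sh = source_box
    tx, ty, tw, th = target_box
    center = (tx + tw / 2, ty + th / 2)                   # zoom about the object's center
    factor = source_zoom * (sw * sh / (tw * th)) ** 0.5   # size-normalized factor
    return center, factor
```

With this normalization, zooming 2x into a 10x10 object makes a 20x20 counterpart appear at the same apparent size with a 1x factor.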

89.

OFFLINE EVALUATION OF RANKED LISTS USING PARAMETRIC ESTIMATION OF PROPENSITIES

      
Application number 17978477
Status Pending
Filing date 2022-11-01
Date of first publication 2024-05-02
Owner ADOBE INC. (USA)
Inventor(s)
  • Vinay, Vishwa
  • Kilaru, Manoj
  • Arbour, David Thomas

Abstract

In various examples, an offline evaluation system obtains log data from a recommendation system and trains an imitation ranker using the log data. The imitation ranker generates a first result including a set of scores associated with document and rank pairs based on a query. The offline evaluation system may then determine a rank distribution indicating propensities associated with the document and rank pairs for a set of impressions, which can be used to determine a value associated with the performance of a new recommendation system.

IPC Classes

  • G06F 16/2457 - Query processing with adaptation to user needs
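
Once propensities for document/rank pairs are estimated, the usual way to turn logged interactions into a value for a new ranker is inverse-propensity scoring. The sketch below shows that standard IPS estimator as an assumed reading of the abstract's "value" computation; the patent's parametric propensity model itself is not reproduced here:

```python
def ips_value(logs, propensity_logged, propensity_new):
    """Inverse-propensity estimate of a new ranker's value from logged
    interactions. Each log entry is (doc, rank, reward); the two
    propensity functions return P(doc shown at rank) under the logging
    policy and under the new policy, respectively."""
    total = 0.0
    for doc, rank, reward in logs:
        # Reweight each logged reward by how much more (or less) likely
        # the new policy is to produce this document/rank pair.
        weight = propensity_new(doc, rank) / propensity_logged(doc, rank)
        total += reward * weight
    return total / len(logs)
```

IPS is unbiased when the logged propensities are correct and nonzero wherever the new policy has support, which is exactly why estimating those propensities well (the abstract's imitation ranker) matters.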

90.

GENERATING SUBJECT LINES FROM KEYWORDS UTILIZING A MACHINE-LEARNING MODEL

      
Application number 18050285
Status Pending
Filing date 2022-10-27
Date of first publication 2024-05-02
Owner Adobe Inc. (USA)
Inventor(s)
  • Wu, Suofei
  • He, Jun
  • Yan, Zhenyu

Abstract

The present disclosure relates to systems, methods, and non-transitory computer readable media that utilize machine learning to generate subject lines from subject line keywords. In one or more embodiments, the disclosed systems receive, from a client device, one or more subject line keywords. Additionally, the disclosed systems generate, utilizing a subject generation machine-learning model having learned parameters, a subject line by selecting one or more words for the subject line from a word distribution based on the one or more subject line keywords. The disclosed systems further provide, for display on the client device, the subject line.

IPC Classes

91.

AUTOMATIC DEFERRED EDGE AUTHENTICATION FOR PROTECTED MULTI-TENANT RESOURCE MANAGEMENT SYSTEMS

      
Application number 18051424
Status Pending
Filing date 2022-10-31
Date of first publication 2024-05-02
Owner Adobe Inc. (USA)
Inventor(s)
  • Alvarez, Tobias Bocanegra
  • Nuescheler, David

Abstract

The present disclosure relates to systems, non-transitory computer-readable media, and methods for utilizing deferred edge authentication to validate requests for resources of a content delivery network. In one or more embodiments, the disclosed systems receive, at an edge server from a client device, a request for a content item. In some embodiments, in response to receiving the request, the disclosed systems determine that the content item is stored at the edge server with a corresponding response header received from an origin server and validate the request, at the edge server, utilizing security information from the response header. In some embodiments, in response to receiving the request, the disclosed systems determine that the content item is not available at the edge server, request the content item from the origin server, and receive the content item with the corresponding response header from the origin server.

IPC Classes

  • H04N 21/258 - Client or end-user data management, e.g. managing client capabilities, user preferences or demographics, or processing of multiple end-user preferences to derive collaborative data
  • H04N 21/442 - Monitoring of processes or resources, e.g. detecting the failure of a recording device, monitoring the downstream bandwidth, the number of times a movie is viewed by a client, or the storage space available
  • H04N 21/835 - Generation of protective data, e.g. certificates

92.

SEGMENT SIZE ESTIMATION

      
Application number 18047421
Status Pending
Filing date 2022-10-18
Date of first publication 2024-05-02
Owner ADOBE INC. (USA)
Inventor(s)
  • Mai, Tung
  • Sinha, Ritwik
  • Paulsen, Trevor Hyrum
  • Chen, Xiang
  • George, William Brandon
  • Purser, Nate
  • Song, Zhao

Abstract

One aspect of systems and methods for segment size estimation includes identifying a segment of users for a first time period based on time series data, wherein the time series data includes a series of interactions between users and a content channel and wherein the segment includes a portion of the users interacting with the content channel during the first time period; computing a segment return value for a second time period based on the time series data by computing a first subset and a second subset of the segment, wherein the first subset includes users that interact with the content channel greater than a threshold number of times during a range of the time series data and the second subset comprises a complement of the first subset with respect to the segment; and providing customized content to a user in the segment based on the segment return value.

IPC Classes

  • G06Q 30/02 - Marketing; Price estimation or determination; Fundraising
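
The split described in the abstract partitions a segment into users who interacted more than a threshold number of times over the time-series range and the complement of that subset. A minimal sketch of just that partition step (names are hypothetical; the return-value computation built on the two subsets is not reproduced):

```python
def split_segment(segment, interactions, threshold):
    """Split a user segment into frequent users (strictly more than
    `threshold` interactions over the time-series range) and the
    complement with respect to the segment. `segment` is a set of
    user ids; `interactions` maps user id -> interaction count."""
    frequent = {u for u in segment if interactions.get(u, 0) > threshold}
    return frequent, segment - frequent
```

The complement is taken with respect to the segment itself, matching the abstract's "second subset comprises a complement of the first subset with respect to the segment."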

93.

GENERATING CONSOLIDATED VISUAL REPRESENTATIONS FOR USER JOURNEYS VIA PROFILE TRACING

      
Application number 18049883
Status Pending
Filing date 2022-10-26
Date of first publication 2024-05-02
Owner Adobe Inc. (USA)
Inventor(s)
  • Singh, Mandeep
  • Bose, Shiladitya
  • Garg, Saurabh
  • Lamba, Mukul
  • Mishra, Kaushal

Abstract

The present disclosure relates to systems, methods, and non-transitory computer readable media that utilize a consolidated graphical user interface for visually presenting the state of a user profile with respect to a workflow journey. For instance, in one or more embodiments, the disclosed systems provide, for display within a graphical user interface of a client device, a visual representation of a workflow journey comprising a plurality of nodes and one or more edges connecting the plurality of nodes. Additionally, the disclosed systems receive, via the graphical user interface of the client device, an identifier associated with a user profile. The disclosed systems further modify, within the graphical user interface of the client device, the visual representation of the workflow journey to reflect a state of the user profile with respect to the workflow journey.

IPC Classes

  • G06F 16/2457 - Query processing with adaptation to user needs
  • G06F 16/22 - Indexing; Data structures therefor; Storage structures
  • G06F 16/23 - Updating

94.

GENERATING A MULTI-MODAL VECTOR REPRESENTING A SOURCE FONT AND IDENTIFYING A RECOMMENDED FONT UTILIZING A MULTI-MODAL FONT MACHINE-LEARNING MODEL

      
Application number 18051720
Status Pending
Filing date 2022-11-01
Date of first publication 2024-05-02
Owner Adobe Inc. (USA)
Inventor(s)
  • Kumar, Pranay
  • Jindal, Nipun

Abstract

The present disclosure relates to systems, methods, and non-transitory computer-readable media that generate a multi-modal vector and identify a recommended font corresponding to a source font based on the multi-modal vector. For instance, in one or more embodiments, the disclosed systems receive an indication of a source font and determine font embeddings and a glyph metrics embedding. Furthermore, the disclosed systems generate, utilizing a multi-modal font machine-learning model, a multi-modal vector representing the source font based on the font embeddings and the glyph metrics embedding.

IPC Classes

  • G06F 40/109 - Font handling; Kinetic or temporal typography

95.

ANONYMIZING DIGITAL IMAGES UTILIZING A GENERATIVE ADVERSARIAL NEURAL NETWORK

      
Application number 18052121
Status Pending
Filing date 2022-11-02
Date of first publication 2024-05-02
Owner Adobe Inc. (USA)
Inventor(s)
  • Khodadadeh, Siavash
  • Kalarot, Ratheesh
  • Ghadar, Shabnam
  • Hold-Geoffroy, Yannick

Abstract

The present disclosure relates to systems, methods, and non-transitory computer readable media for generating anonymized digital images utilizing a face anonymization neural network. In some embodiments, the disclosed systems utilize a face anonymization neural network to extract or encode a face anonymization guide that encodes face attribute features, such as gender, ethnicity, age, and expression. In some cases, the disclosed systems utilize the face anonymization guide to inform the face anonymization neural network in generating synthetic face pixels for anonymizing a digital image while retaining attributes, such as gender, ethnicity, age, and expression. The disclosed systems learn parameters for a face anonymization neural network for preserving face attributes, accounting for multiple faces in digital images, and generating synthetic face pixels for faces in profile poses.

IPC Classes

  • G06F 21/62 - Protecting access to data via a platform, e.g. using keys or access control rules
  • G06N 3/0455 - Auto-encoder networks; Encoder-decoder networks
  • G06N 3/0475 - Generative networks

96.

GENERATING SHADOWS FOR OBJECTS IN TWO-DIMENSIONAL IMAGES UTILIZING A PLURALITY OF SHADOW MAPS

      
Application number 18304179
Status Pending
Filing date 2023-04-20
Date of first publication 2024-05-02
Owner Adobe Inc. (USA)
Inventor(s)
  • Hold-Geoffroy, Yannick
  • Krs, Vojtech
  • Mech, Radomir
  • Carr, Nathan
  • Gadelha, Matheus

Abstract

The present disclosure relates to systems, methods, and non-transitory computer-readable media that modify two-dimensional images via scene-based editing using three-dimensional representations of the two-dimensional images. For instance, in one or more embodiments, the disclosed systems utilize three-dimensional representations of two-dimensional images to generate and modify shadows in the two-dimensional images according to various shadow maps. Additionally, the disclosed systems utilize three-dimensional representations of two-dimensional images to modify humans in the two-dimensional images. The disclosed systems also utilize three-dimensional representations of two-dimensional images to provide scene scale estimation via scale fields of the two-dimensional images. In some embodiments, the disclosed systems utilize three-dimensional representations of two-dimensional images to generate and visualize 3D planar surfaces for modifying objects in two-dimensional images. The disclosed systems further use three-dimensional representations of two-dimensional images to customize focal points for the two-dimensional images.

IPC Classes

97.

Digital Object Animation Using Control Points

      
Application number 18397413
Status Pending
Filing date 2023-12-27
Date of first publication 2024-05-02
Owner Adobe Inc. (USA)
Inventor(s)
  • Saito, Jun
  • Yang, Jimei
  • Aksit, Duygu Ceylan

Abstract

Digital object animation techniques are described. In a first example, translation-based animation of the digital object operates using control points of the digital object. In another example, the animation system is configured to minimize the number of feature positions used to generate the animation. In a further example, an input pose is normalized through use of a global scale factor to address changes in the z-position of a subject across different digital images. Further, a body tracking module is used to compute initial feature positions, which are then used to initialize a face tracker module that generates feature positions of the face. The animation system also supports a plurality of modes used to generate the digital object, techniques to define a base of the digital object, and a friction term limiting movement of feature positions based on contact with a ground plane.

IPC Classes

  • G06T 13/80 - 2D [Two Dimensional] animation, e.g. using sprites
  • G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
  • G06T 7/33 - Determination of transform parameters for the alignment of images, i.e. image registration, using feature-based methods

98.

FACILITATING EFFICIENT IDENTIFICATION OF RELEVANT DATA

      
Application number 18406426
Status Pending
Filing date 2024-01-08
Date of first publication 2024-05-02
Owner Adobe Inc. (USA)
Inventor(s)
  • Zhang, Wei
  • Challis, Christopher

Abstract

The present technology provides for facilitating efficient identification of relevant metrics. In one embodiment, a set of candidate metrics for which to determine relevance to a user is identified. For each candidate metric, a set of distribution parameters is determined, including a first distribution parameter based on implicit positive feedback associated with the metric and usage data associated with the metric and a second distribution parameter based on the usage data associated with the metric. Such usage data can efficiently facilitate identifying relevance even with an absence of negative feedback. Using the set of distribution parameters, a corresponding distribution is generated. Each distribution can then be sampled to identify a relevance score for each candidate metric indicating an extent of relevance of the corresponding metric. Based on the relevance scores for each candidate metric, a candidate metric is designated as relevant to the user.

IPC Classes

  • G06F 16/2457 - Query processing with adaptation to user needs
  • G06F 17/18 - Complex mathematical operations for evaluating statistical data
  • H04L 67/306 - User profiles
  • H04L 67/50 - Network services
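
The two-parameter distribution per metric, built from implicit positive feedback and usage data and then sampled for a relevance score, reads like Thompson sampling with a Beta distribution. The sketch below shows that reading as one plausible instantiation (the Beta choice, the +1 prior, and all names are assumptions, not the patent's parameterization):

```python
import random

def relevance_scores(metrics, seed=0):
    """For each candidate metric, build a Beta(alpha, beta) distribution:
    alpha from implicit positive feedback, beta from usage without
    positive feedback, so heavy use with no positives pulls relevance
    down even in the absence of explicit negative feedback.
    `metrics` maps name -> (positive_feedback_count, usage_count)."""
    rng = random.Random(seed)
    scores = {}
    for name, (positives, usage) in metrics.items():
        alpha = 1 + positives                    # +1 keeps the prior proper
        beta = 1 + max(usage - positives, 0)
        scores[name] = rng.betavariate(alpha, beta)
    return scores

def most_relevant(metrics, seed=0):
    """Designate the highest-sampled candidate metric as relevant."""
    scores = relevance_scores(metrics, seed)
    return max(scores, key=scores.get)
```

Sampling rather than ranking by the posterior mean keeps some exploration across candidate metrics, which is the usual motivation for this family of methods.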

99.

GENERATING SHADOWS FOR PLACED OBJECTS IN DEPTH ESTIMATED SCENES OF TWO-DIMENSIONAL IMAGES

      
Application number 18304113
Status Pending
Filing date 2023-04-20
Date of first publication 2024-04-25
Owner Adobe Inc. (USA)
Inventor(s)
  • Hold-Geoffroy, Yannick
  • Krs, Vojtech
  • Mech, Radomir
  • Carr, Nathan
  • Gadelha, Matheus

Abstract

The present disclosure relates to systems, methods, and non-transitory computer-readable media that modify two-dimensional images via scene-based editing using three-dimensional representations of the two-dimensional images. For instance, in one or more embodiments, the disclosed systems utilize three-dimensional representations of two-dimensional images to generate and modify shadows in the two-dimensional images according to various shadow maps. Additionally, the disclosed systems utilize three-dimensional representations of two-dimensional images to modify humans in the two-dimensional images. The disclosed systems also utilize three-dimensional representations of two-dimensional images to provide scene scale estimation via scale fields of the two-dimensional images. In some embodiments, the disclosed systems utilize three-dimensional representations of two-dimensional images to generate and visualize 3D planar surfaces for modifying objects in two-dimensional images. The disclosed systems further use three-dimensional representations of two-dimensional images to customize focal points for the two-dimensional images.

IPC Classes

  • G06T 11/60 - Editing figures and text; Combining figures or text
  • G06T 7/194 - Segmentation; Edge detection involving foreground-background segmentation
  • G06T 7/50 - Depth or shape recovery
  • G06T 7/68 - Analysis of geometric attributes of symmetry
  • G06T 15/60 - Shadow generation

100.

MODIFYING DIGITAL IMAGES VIA DEPTH-AWARE OBJECT MOVE

      
Application number 18320714
Status Pending
Filing date 2023-05-19
Date of first publication 2024-04-25
Owner Adobe Inc. (USA)
Inventor(s)
  • Ding, Zhihong
  • Cohen, Scott
  • Joss, Matthew
  • Zhang, Jianming
  • Prasad, Darshan
  • Gomes, Celso
  • Brandt, Jonathan

Abstract

The present disclosure relates to systems, methods, and non-transitory computer-readable media that implement depth-aware object move operations for digital image editing. For instance, in some embodiments, the disclosed systems determine a first object depth for a first object portrayed within a digital image and a second object depth for a second object portrayed within the digital image. Additionally, the disclosed systems move the first object to create an overlap area between the first object and the second object within the digital image. Based on the first object depth and the second object depth, the disclosed systems modify the digital image to occlude the first object or the second object within the overlap area.

IPC Classes

  • G06T 7/50 - Depth or shape recovery
  • G06T 5/00 - Image enhancement or restoration
  • G06V 10/26 - Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering techniques; Detection of occlusion
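
The occlusion decision in the abstract reduces to a depth test inside the overlap area: wherever two objects overlap, the one with the smaller depth (closer to the camera) wins. A toy grid-based stand-in for that decision (axis-aligned boxes and all names are illustrative assumptions; the patent operates on real image segments and estimated depths):

```python
def render_overlap(canvas_size, objects):
    """Paint rectangular objects onto a small grid, letting the object
    with the smallest depth win wherever two objects overlap.
    Each object is (label, depth, (x, y, w, h))."""
    w, h = canvas_size
    canvas = [[None] * w for _ in range(h)]
    depth = [[float("inf")] * w for _ in range(h)]
    for label, d, (x, y, bw, bh) in objects:
        for yy in range(y, min(y + bh, h)):
            for xx in range(x, min(x + bw, w)):
                if d < depth[yy][xx]:       # closer object occludes
                    depth[yy][xx] = d
                    canvas[yy][xx] = label
    return canvas
```

Moving an object then amounts to re-running this test over the new overlap area with the objects' previously determined depths.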