Techniques for securing displayed data on computing devices are disclosed. One example technique includes, upon determining that the computing device is unlocked, capturing and analyzing an image in a field of view of the camera of the computing device to determine whether the image includes a human face. In response to determining that the image includes a human face, the technique includes determining facial attributes of the human face in the image via facial recognition and determining whether the human face is that of an authorized user of the computing device. In response to determining that the human face is not that of an authorized user of the computing device, the technique includes converting user data on the computing device from an original language to a new language to output on a display of the computing device, thereby securing the displayed user data even when the computing device is unlocked.
Generally discussed herein are devices, systems, and methods for secure cryptographic masking. A method can include generating a first random number, determining a result of the first random number modulo a prime number resulting in a second random number, subtracting the second random number from the prime number resulting in a first subtraction result, adding a private key value to the first subtraction result resulting in a first split, and responsive to determining the private key value is less than the random number, providing the first split and the second random number as splits of the private key.
H04L 9/30 - Public key, i.e. the encryption algorithm being computationally infeasible to invert and the users' encryption keys not requiring secrecy
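As a rough illustration of the masking steps in the abstract above, the following Python sketch (function and variable names are illustrative, not from the source) splits a private key into two additive shares that recombine modulo the prime:

```python
import secrets

def split_private_key(key: int, prime: int) -> tuple[int, int]:
    """Split a private key per the described steps: generate a first
    random number, reduce it modulo the prime to obtain a second
    random number, subtract that from the prime, and add the private
    key value to the subtraction result."""
    first_random = secrets.randbits(256)        # first random number
    second_random = first_random % prime        # first random number mod prime
    subtraction_result = prime - second_random  # prime minus second random number
    first_split = key + subtraction_result      # private key plus subtraction result
    return first_split, second_random

# Recombination check: (first_split + second_random) % prime
#   = (key + prime - r + r) % prime = key, for 0 <= key < prime.
```

Because the shares sum to the key plus one multiple of the prime, each share on its own reveals nothing about the key beyond its residue class.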
Methods, systems and computer program products are provided for improving performance (e.g., reducing power consumption) of a hardware accelerator (e.g., neural processor) comprising hybrid or analog multiply and accumulate (MAC) processing elements (PEs). Selective variation of the precision of an array of MAC PEs may reduce power consumption of a neural processor. Power may be conserved by dynamically controlling the precision of analog-to-digital converter (ADC) output bits for one or more MAC PEs. Dynamic control of ADC output bit precision may be based on precision information determined during training and/or post-training (e.g., quantization) of an artificial intelligence (AI) neural network (NN) model implemented by the neural processor. Precision information may include a range of dynamic precision for each of a plurality of nodes of a computation graph for the AI NN model.
G06F 1/3234 - Power management, i.e. event-based initiation of a power-save mode; Power saving characterised by the action undertaken
G06F 7/544 - Methods or arrangements for performing computations using exclusively denominational number representation, e.g. using binary, ternary, decimal representation, using unspecified devices for evaluation of functions by calculation
G06F 13/10 - Program control for peripheral devices
A function processing service may receive a request to execute source code. The source code may include instructions to perform a function. The function processing service may determine whether at least one hardware acceleration condition has been satisfied for the function. If at least one hardware acceleration condition has been satisfied, the instructions in the source code may be translated into hardware-specific code corresponding to a hardware circuit. The hardware circuit may be configured based on the hardware-specific code, and the hardware circuit may perform the function. The function processing service may then provide the result obtained from the hardware circuit to the requesting entity.
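A minimal sketch of the dispatch decision described above, assuming a hypothetical call-count threshold as the hardware acceleration condition (all names here are illustrative, and the hardware path is simulated in software so the sketch stays runnable):

```python
call_counts: dict[str, int] = {}

def offload_to_hardware(fn, args):
    """Stand-in for translating the source into hardware-specific code
    and configuring a hardware circuit; executes the function in
    software here for illustration."""
    return fn(*args)

def run_function(name: str, fn, *args, min_calls: int = 3):
    """Run a function in software until the acceleration condition
    (here: it has been requested at least `min_calls` times) is
    satisfied, then route it to the hardware path and return the
    result to the requesting entity."""
    call_counts[name] = call_counts.get(name, 0) + 1
    if call_counts[name] >= min_calls:
        return offload_to_hardware(fn, args)
    return fn(*args)
```

A real service would of course cache the configured circuit rather than re-deciding per call; the threshold here stands in for whatever acceleration conditions the service evaluates.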
A data processing system implements receiving an access request from the client device of a content requestor to access a content item for which access is managed by a content access management platform, and obtaining access control information. The access control information comprises information associated with a content owner associated with the content item, information associated with the content requestor, and information associated with the content item. The system further implements analyzing the access control information using a first machine learning model trained to analyze the access control information and to output an access determination score, the access determination score representing a level of certainty that the content requestor should be granted access to the content item, determining an automatic access decision to grant or deny the access request based on the access determination score, and notifying the content requestor whether the access request has been granted or denied based on the automatic access decision.
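The final thresholding and notification steps might look like the following sketch; the cutoff value is an assumption rather than anything specified in the abstract:

```python
def automatic_access_decision(score: float, threshold: float = 0.5) -> str:
    """Map the model's access determination score (a level of
    certainty in [0, 1]) to an automatic grant/deny decision."""
    return "grant" if score >= threshold else "deny"

def notify(requestor: str, decision: str) -> str:
    """Compose the notification returned to the content requestor."""
    verdict = "granted" if decision == "grant" else "denied"
    return f"{requestor}: access {verdict}"
```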
Systems and methods are provided for accessing a factorized neural transducer comprising a first set of layers for predicting blank tokens and a second set of layers for predicting vocabulary tokens, the second set of layers comprising a language model that includes a vocabulary predictor which is a separate predictor from the blank predictor, wherein a vocabulary predictor output from the vocabulary predictor and the encoder output are used for predicting a vocabulary token. The second set of layers is selectively modified to facilitate an improvement in an accuracy of the factorized neural transducer in performing automatic speech recognition, the selectively modifying comprising applying a particular modification to the second set of layers while refraining from applying the particular modification to the first set of layers.
G10L 15/16 - Speech classification or search using artificial neural networks
G10L 15/06 - Creation of reference templates; Training of speech recognition systems, e.g. adaptation to the characteristics of the speaker's voice
G10L 15/197 - Probabilistic grammars, e.g. word n-grams
G10L 15/22 - Procedures used during a speech recognition process, e.g. man-machine dialogue
7.
DEVELOPMENT-TIME CONFIGURATION CHANGE RECOMMENDATION USING DEPLOYMENT TEMPLATES
Techniques are described herein that are capable of providing a development-time configuration change recommendation using deployment templates. During development of a software program, a proposed configuration of the software program is identified. A reference configuration defined by a reference template is determined based on a similarity between the proposed configuration and the reference configuration. A determination is made that the proposed configuration has an attribute having a first value corresponding to a first cost. A determination is made that the reference configuration has the attribute having a second value corresponding to a second cost. During the development of the software program, an action is performed, including causing a recommendation to be provided via a user interface, based at least on the second cost being less than the first cost. The recommendation recommends changing the attribute of the proposed configuration to have the second value in lieu of the first value.
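A toy sketch of the cost comparison, assuming configurations are flat attribute dictionaries and per-value costs are looked up in a table (both assumptions made for illustration):

```python
def recommend_changes(proposed: dict, reference: dict, cost: dict) -> dict:
    """For each attribute present in both the proposed and reference
    configurations, recommend the reference value whenever its cost
    is lower than the cost of the proposed value."""
    recommendations = {}
    for attr, value in proposed.items():
        ref_value = reference.get(attr)
        if ref_value is None or ref_value == value:
            continue  # no differing reference value to compare against
        if cost[(attr, ref_value)] < cost[(attr, value)]:
            recommendations[attr] = ref_value
    return recommendations
```

In the described system the reference configuration would first be selected by similarity to the proposed one; here it is simply passed in.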
Embodiments of the disclosed technologies include generating a reward score for an entity. A rate distribution is determined using the reward score and a number of times the entity has been selected for ranking. A sampled rate value is generated by sampling the rate distribution. A probability score is generated for a pair of the entity and a user based on the sampled rate value. A probability distribution is determined using the probability score. A sampled probability value is generated by sampling the probability distribution. A machine learning model is trained using the sampled probability value.
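One plausible reading of this pipeline resembles Thompson sampling; the Beta and Bernoulli parameterisations below are assumptions made only to keep the sketch concrete, not the abstract's specified distributions:

```python
import random

def sample_rate(reward_score: float, times_selected: int) -> float:
    """Determine a rate distribution from the reward score and the
    number of times the entity was selected for ranking, then draw a
    sampled rate value (here: from a Beta distribution)."""
    alpha = 1.0 + reward_score
    beta = 1.0 + max(times_selected - reward_score, 0.0)
    return random.betavariate(alpha, beta)

def sample_probability(sampled_rate: float, user_affinity: float) -> float:
    """Combine the sampled rate with an entity-user affinity into a
    probability score, then sample the induced Bernoulli distribution;
    the returned 0/1 value can serve as a training signal."""
    p = min(max(sampled_rate * user_affinity, 0.0), 1.0)
    return 1.0 if random.random() < p else 0.0
```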
A method and a network for routing data packets in a unified wide area network (WAN) are provided. The method includes encapsulating a data packet by an ingress aggregation router and forwarding the encapsulated data packet to an ingress backbone router. The encapsulated data packet includes a first label. The ingress backbone router selects an optimized traffic engineered tunnel, replaces the first label with a label for the optimized traffic engineered tunnel, and forwards the encapsulated data packet along the optimized traffic engineered tunnel.
H04L 47/125 - Avoiding congestion; Recovering from congestion by balancing the load, e.g. traffic engineering
H04L 45/00 - Routing or path finding of packets in data switching networks
H04L 45/50 - Routing or path finding of packets in data switching networks using label swapping, e.g. multi-protocol label switch [MPLS]
A computing device configured to removably attach a component comprises a housing comprising first and second device electromagnets. A wireless charging transmitting antenna is between the electromagnets. Instructions are executable by a processor to synchronize a first device current through the first device electromagnet with a first component current through a first component electromagnet of the component to attract the electromagnets, and to synchronize a second device current through the second device electromagnet with a second component current through a second component electromagnet of the component to attract the electromagnets.
H01F 7/06 - Electromagnets; Actuators including electromagnets
H02J 7/00 - Circuit arrangements for charging or depolarising batteries or for supplying loads from batteries
H02J 50/10 - Circuit arrangements or systems for wireless supply or distribution of electric power using inductive coupling
H02J 50/20 - Circuit arrangements or systems for wireless supply or distribution of electric power using microwaves or radio frequency waves
H02J 50/90 - Circuit arrangements or systems for wireless supply or distribution of electric power involving detection or optimisation of position, e.g. alignment
11.
DATA VISIBILITY FOR NESTED TRANSACTIONS IN DISTRIBUTED SYSTEMS
Methods for data visibility in nested transactions in distributed systems are performed by systems and devices. Distributed executions of queries are performed in processing systems according to isolation level protocols with unique nested transaction identifiers for data management and versioning across one or more data sets, one or more compute pools, etc., within a logical server via a single transaction manager that oversees the isolation semantics and data versioning. A distributed query processor of the systems and devices performs nested transaction versioning for distributed tasks by generating nested transaction identifiers, encoded in data rows, which are used to enforce correct data visibility. Data visibility is restricted to previously committed data from distributed transactions and tasks, and is blocked for distributed transactions and tasks that run concurrently. Local commits for completed transactions and tasks are used to minimize transaction manager interactions, and instant rollbacks are enabled for aborted transactions and tasks.
G06F 16/27 - Replication, distribution or synchronisation of data between databases or within a distributed database system; Distributed database system architectures therefor
G06F 9/46 - Multiprogramming arrangements
G06F 16/901 - Indexing; Data structures therefor; Storage structures
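The visibility rule for the nested-transaction entry above, reduced to its simplest form (the encoding of nested transaction identifiers in data rows is elided; this only sketches the check, not the full versioning protocol):

```python
def is_visible(row_txn_id: int, reader_txn_id: int, committed: set[int]) -> bool:
    """A row version is visible to a reader only if it was written by
    the reader's own transaction or by a previously committed one;
    rows written by concurrently running transactions are blocked."""
    return row_txn_id == reader_txn_id or row_txn_id in committed
```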
A computer implemented method includes loading a first kernel layer having a first privilege level onto a hosting environment. A second kernel layer having a second privilege level different from the first privilege level is also loaded onto the hosting environment. The first kernel layer is isolated from the second kernel layer and access to a hosting environment memory protection table is controlled via the first kernel layer.
G06F 9/455 - Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
According to implementations of the subject matter described herein, a solution is proposed for three-dimensional (3D) object detection. In this solution, feature representations of a plurality of points are extracted from point cloud data related to a 3D object. Initial feature representations of a set of candidate 3D objects are determined based on the feature representations of the plurality of points. Based on the feature representations of the plurality of points and the initial feature representations of the set of candidate 3D objects, a detection result for the 3D object is generated by determining self-correlations between the set of candidate 3D objects and cross-correlations between the plurality of points and the set of candidate 3D objects. In this way, without grouping points into candidate 3D objects, the 3D object in a 3D scene can be localized and recognized based on the self-correlations and cross-correlations.
G06T 7/73 - Determining position or orientation of objects or cameras using feature-based methods
G06T 7/62 - Analysis of geometric attributes of area, perimeter, diameter or volume
G06T 7/90 - Determination of colour characteristics
G06T 17/00 - Three dimensional [3D] modelling for computer graphics
G06V 10/44 - Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
A generation identifier is employed with various systems and methods in order to identify situations where a workload has been reassigned to a new node and where a workload is still being processed by an old node during a failure between nodes. A master node may assign a workload to a worker node. The worker node sends a request to access target data. The request may be associated with a generation identifier and workload identifier that identifies the node and workload. At some point, a failure occurs between the master node and worker node. The master node reassigns the workload to another worker node. The new worker node accesses the target data with a different generation identifier, indicating to the storage system that the workload has been reassigned. The old worker node receives an indication from the storage system that the workload has been reassigned and stops processing the workload.
H04L 43/00 - Arrangements for monitoring or testing data switching networks
G06F 3/06 - Digital input from, or digital output to, record carriers
G06F 9/50 - Allocation of resources, e.g. of the central processing unit [CPU]
G06F 9/52 - Program synchronisation; Mutual exclusion, e.g. by means of semaphores
G06F 11/20 - Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements
G06F 16/176 - Support for shared access to files; File sharing support
H04L 67/1001 - Protocols in which an application is distributed across nodes in the network for accessing one among a plurality of replicated servers
H04L 67/61 - Scheduling or organising the servicing of application requests, e.g. requests for application data transmissions using the analysis and optimisation of the required network resources, taking into account quality of service [QoS] or priority requirements
H04L 69/40 - Network arrangements, protocols or services independent of the application payload and not provided for in the other groups of this subclass for recovering from a failure of a protocol instance or entity, e.g. service redundancy protocols, protocol state redundancy or protocol service redirection
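The generation-identifier fencing described in the entry above can be sketched as follows, assuming the storage system simply tracks the highest generation identifier seen per workload (an illustrative simplification):

```python
class StorageSystem:
    """Rejects accesses that carry a stale generation identifier,
    signalling an old worker that its workload was reassigned."""

    def __init__(self):
        self.latest_generation: dict[str, int] = {}

    def access(self, workload_id: str, generation: int) -> str:
        latest = self.latest_generation.get(workload_id, generation)
        if generation < latest:
            return "reassigned"   # old worker should stop processing
        self.latest_generation[workload_id] = generation
        return "ok"
```

When the master reassigns the workload, it issues a higher generation identifier to the new worker; the storage system then fences off any request still carrying the old one.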
15.
SYSTEMS AND METHODS FOR RETIRING IN MULTI-STREAM DATA MOVEMENT
A hardware retire circuit includes: one or more input queues, each queue corresponding to an input stream of tasks and being configured to store input task identifiers corresponding to tasks of the input stream; and processing logic configured to: receive a completed task event; determine whether a completed task queue identifier and a completed task identifier of the completed task event match an input task identifier of an input task at a head of an input queue having an input queue identifier corresponding to the completed task queue identifier; and in response to determining a match, pop the task at the head of the input queue and output a task retirement event corresponding to the input task.
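In software terms, the head-matching retire logic above might be modelled like this (a sketch only; the described circuit is hardware):

```python
from collections import deque

class RetireUnit:
    """Per-stream input queues plus the head-matching retire logic."""

    def __init__(self):
        self.input_queues: dict[int, deque] = {}

    def enqueue(self, queue_id: int, task_id: int) -> None:
        """Store an input task identifier on the queue for its stream."""
        self.input_queues.setdefault(queue_id, deque()).append(task_id)

    def on_completed(self, queue_id: int, task_id: int):
        """On a completed task event, pop and retire the task at the
        head of the matching input queue if the identifiers match;
        otherwise do nothing (an out-of-order completion waits)."""
        queue = self.input_queues.get(queue_id)
        if queue and queue[0] == task_id:
            queue.popleft()
            return ("retired", queue_id, task_id)
        return None
```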
A collection of data that is extremely large can be difficult to search and/or analyze. Relevance may be dramatically improved by automatically classifying queries and web pages in useful categories, and using these classification scores as relevance features. A thorough approach may require building a large number of classifiers, corresponding to the various types of information, activities, and products. Creation of classifiers and schematizers is provided on large data sets. Exercising the classifiers and schematizers on hundreds of millions of items may expose value that is inherent to the data by adding usable meta-data. Some aspects include active labeling exploration, automatic regularization and cold start, scaling with the number of items and the number of classifiers, active featuring, and segmentation and schematization.
Techniques are disclosed for capturing network traffic in a computing environment comprising a plurality of computing devices. Data packets to be captured are encapsulated within a Virtual Extensible Local Area Network (VXLAN) session. A reserved bit in a header of the encapsulated packet is set to indicate the encapsulated packet includes metadata pertaining to the data traffic to be captured.
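For illustration, the RFC 7348 VXLAN header is eight bytes beginning with an 8-bit flags field; the sketch below repurposes one reserved flags bit (which bit is used is an assumption, not stated in the abstract) to mark encapsulated packets that carry capture metadata:

```python
import struct

VXLAN_FLAG_I = 0x08      # standard "VNI present" flag (RFC 7348)
METADATA_FLAG = 0x01     # assumed reserved bit repurposed for capture metadata

def build_vxlan_header(vni: int, with_metadata: bool) -> bytes:
    """Build the 8-byte VXLAN header: 8 flag bits, 24 reserved bits,
    24-bit VXLAN Network Identifier (VNI), 8 reserved bits."""
    flags = VXLAN_FLAG_I | (METADATA_FLAG if with_metadata else 0)
    return struct.pack("!I", flags << 24) + struct.pack("!I", vni << 8)

def has_capture_metadata(header: bytes) -> bool:
    """Check the repurposed reserved bit in the flags byte."""
    return bool(header[0] & METADATA_FLAG)
```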
The disclosure herein describes a system for measuring probability of influence in digital communications to determine whether communication content originated in a person's own prior knowledge or in new information more recently obtained from interaction with communications of others. An estimated probability that a new communication by a first user comes from the same distribution as prior communications of the first user is generated using multidimensional statistics on embeddings representing the communications. A second estimated probability that the new communication comes from the same distribution as communication(s) of a second user that were accessible to the first user is generated. If the second probability is greater than the first probability, the new communication is more likely influenced by exposure of the first user to the second user's communications rather than the first user's own historical knowledge. An influence attribution recommendation is generated, including an influence attribution or other recommended action.
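The comparison of the two estimated probabilities could be sketched with a naive diagonal-Gaussian likelihood over embeddings; the distributional form is an assumption, since the abstract only specifies multidimensional statistics on embeddings:

```python
import math

def diag_gaussian_log_likelihood(embedding, mean, var):
    """Log-likelihood of an embedding under a diagonal Gaussian fitted
    to a user's prior communications (per-dimension mean and variance)."""
    return sum(
        -0.5 * (math.log(2 * math.pi * v) + (x - m) ** 2 / v)
        for x, m, v in zip(embedding, mean, var)
    )

def attribute_influence(new_embedding, own_stats, other_stats) -> str:
    """Compare how well the new communication fits the author's own
    prior distribution versus the accessible communications of a
    second user, and attribute influence accordingly."""
    own = diag_gaussian_log_likelihood(new_embedding, *own_stats)
    other = diag_gaussian_log_likelihood(new_embedding, *other_stats)
    return "influenced_by_other" if other > own else "own_prior_knowledge"
```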
Embodiments of the disclosed technologies receive a request including a user identifier and metadata associated with a slot available at a user system, remove the user identifier from the request to produce anonymized request data, receive, from a machine learning model, superposition data that correlates with the anonymized request data, send the superposition data for the anonymized request data to a real-time content-to-request matching process, receive, from the real-time content-to-request matching process, an identifier that identifies a content distribution selected based on the superposition data, and initiate the selected content distribution through the network to the slot in response to the request.
H04L 67/63 - Scheduling or organising the servicing of application requests, e.g. requests for application data transmissions using the analysis and optimisation of the required network resources, whereby a service request is routed depending on the request content or context
A thermal management system for cooling electronics includes an immersion tank, a working fluid in the immersion tank, a heat exchanger, a first fluid conduit, and a second fluid conduit. The heat exchanger is configured to transfer thermal energy from the working fluid to ambient air to cool the working fluid. The first fluid conduit provides fluid communication from the immersion tank to the heat exchanger, and the second fluid conduit provides fluid communication from the heat exchanger to a spray nozzle to spray working fluid into the immersion tank.
A query-processing technique includes an operation of matching the input query against a plurality of candidate target items, to produce a set of candidate query-item pairings. The matching is applicable to different classes of matching, but is performed by a computer processing architecture that uses a class-agnostic instance of query-processing logic and a class-agnostic target item index. After the matching operation, the technique assigns a matching class to each candidate query-item pairing in the set of candidate query-item pairings, to produce a set of classified pairings. The technique ultimately serves a particular output item to an end user, where the particular output item is chosen based on the results of the matching and assigning. Some implementations of the technique include a filtering operation whereby the candidate query-item pairings are filtered to conform to a specified selection strategy or strategies. This filtering operation supplements or replaces the assigning operation.
A computer device instantiates a first Transport Layer Security (TLS) endpoint having access to a trusted execution environment (TEE) of the processor; generates in the TEE an endpoint-specific public-private key pair bound to the first TLS endpoint; generates attestation data verifying that the endpoint-specific public-private key pair was generated in the TEE and is bound to the first TLS endpoint; and signs the attestation data in the TEE using a TEE private key securely embedded in the processor. The device generates a TEE signature using an endpoint-specific private key of the endpoint-specific public-private key pair, and indicates the attestation data, an endpoint-specific public key of the endpoint-specific public-private key pair, and the TEE signature to a second TLS endpoint within a TLS handshake message exchange between the first TLS endpoint and the second TLS endpoint.
H04L 9/32 - Arrangements for secret or secure communications; Network security protocols including means for verifying the identity or authority of a user of the system
23.
METADATA ENHANCEMENT FOR PACKET CAPTURE USING VXLAN ENCAPSULATION
Techniques are disclosed for capturing network traffic in a computing environment comprising a plurality of computing devices. Data packets to be captured are encapsulated within a Virtual Extensible Local Area Network (VXLAN) session. A reserved bit in a header of the encapsulated packet is set to indicate the encapsulated packet includes metadata pertaining to the data traffic to be captured.
H04L 43/022 - Capturing of monitoring data by sampling
H04L 43/20 - Arrangements for monitoring or testing data switching networks, the monitoring system or the monitored elements being virtualised, abstracted or software-defined entities, e.g. SDN or NFV
24.
SYSTEMS AND METHODS FOR RETIRING IN MULTI-STREAM DATA MOVEMENT
A hardware retire circuit includes: one or more input queues, each queue corresponding to an input stream of tasks and being configured to store input task identifiers corresponding to tasks of the input stream; and processing logic configured to: receive a completed task event; determine whether a completed task queue identifier and a completed task identifier of the completed task event match an input task identifier of an input task at a head of an input queue having an input queue identifier corresponding to the completed task queue identifier; and in response to determining a match, pop the task at the head of the input queue and output a task retirement event corresponding to the input task.
A computer device instantiates a first Transport Layer Security (TLS) endpoint having access to a trusted execution environment (TEE) of the processor; generates in the TEE an endpoint-specific public-private key pair bound to the first TLS endpoint; generates attestation data verifying that the endpoint-specific public-private key pair was generated in the TEE and is bound to the first TLS endpoint; and signs the attestation data in the TEE using a TEE private key securely embedded in the processor. The device generates a TEE signature using an endpoint-specific private key of the endpoint-specific public-private key pair, and indicates the attestation data, an endpoint-specific public key of the endpoint-specific public-private key pair, and the TEE signature to a second TLS endpoint within a TLS handshake message exchange between the first TLS endpoint and the second TLS endpoint.
H04L 9/32 - Arrangements for secret or secure communications; Network security protocols including means for verifying the identity or authority of a user of the system
A computing device configured to removably attach a component comprises a housing comprising first and second device electromagnets. A wireless charging transmitting antenna is between the electromagnets. Instructions are executable by a processor to synchronize a first device current through the first device electromagnet with a first component current through a first component electromagnet of the component to attract the electromagnets, and to synchronize a second device current through the second device electromagnet with a second component current through a second component electromagnet of the component to attract the electromagnets.
H02J 50/00 - Circuit arrangements or systems for wireless supply or distribution of electric power
H02J 50/90 - Circuit arrangements or systems for wireless supply or distribution of electric power involving detection or optimisation of position, e.g. alignment
27.
DYNAMICALLY UPDATING FIRMWARE PROFILE CONFIGURATIONS ON COMPUTING DEVICES
The present disclosure relates to utilizing a firmware configuration system to efficiently update a firmware profile configuration of computing devices (e.g., host devices in a datacenter). For example, the firmware configuration system facilitates updating the firmware profile configuration, such as for a Unified Extensible Firmware Interface (UEFI) profile and/or a Basic Input/Output System (BIOS), without needing to develop, deploy, and install a new BIOS. More specifically, the firmware configuration system updates (e.g., via a baseboard management controller) firmware profile configurations by modifying a profile configuration table in flash memory (i.e., on an SPI flash-based chip) of a BIOS with a firmware profile configuration update patch and without affecting other parts of the BIOS.
G06F 21/57 - Certifying or maintaining trusted computer platforms, e.g. secure boots or power-downs, version controls, system software checks, secure updates or assessing vulnerabilities
28.
SIMPLIFIED MASKING FOR SIGNED CRYPTOGRAPHY OPERATIONS
Generally discussed herein are devices, systems, and methods for secure cryptographic masking. A method can include generating a first random number, determining a result of the first random number modulo a prime number resulting in a second random number, subtracting the second random number from the prime number resulting in a first subtraction result, adding a private key value to the first subtraction result resulting in a first split, and responsive to determining the private key value is less than the random number, providing the first split and the second random number as splits of the private key.
H04L 9/00 - Arrangements for secret or secure communications; Network security protocols
H04L 9/06 - Arrangements for secret or secure communications; Network security protocols, the encryption apparatus using shift registers or memories for block-wise coding, e.g. DES systems
H04L 9/32 - Arrangements for secret or secure communications; Network security protocols including means for verifying the identity or authority of a user of the system
29.
ACCESS DECISION MANAGEMENT SYSTEM FOR DIGITAL RESOURCES
A data processing system implements receiving an access request from the client device to access a content item for which access is managed by a content access management platform, and obtaining access control information. The access control information comprises information associated with a content owner associated with the content item, information associated with the content requestor, and information associated with the content item. The system further implements analyzing the access control information using a machine learning model trained to analyze the access control information and to output an access determination score representing a level of certainty that the content requestor should be granted access to the content item, determining an automatic access decision to grant or deny the access request based on the access determination score, and notifying the content requestor whether the access request has been granted or denied based on the automatic access decision.
Systems and methods are provided for accessing a factorized neural transducer comprising a first set of layers for predicting blank tokens and a second set of layers for predicting vocabulary tokens, the second set of layers comprising a language model that includes a vocabulary predictor which is a separate predictor from the blank predictor, wherein a vocabulary predictor output from the vocabulary predictor and the encoder output are used for predicting a vocabulary token. The second set of layers is selectively modified to facilitate an improvement in an accuracy of the factorized neural transducer in performing automatic speech recognition, the selectively modifying comprising applying a particular modification to the second set of layers while refraining from applying the particular modification to the first set of layers.
A ray trace operation includes tracing a ray from an origin point in accordance with a ray path into a virtual environment (where the virtual environment comprises one or more virtual objects defined by one or more object components) and determining an intersected object component of the one or more object components that the ray intersects with. Determining the intersected object component comprises accessing (i) ray trace temporal coherence data based upon a preceding ray trace operation that temporally precedes the ray trace operation or (ii) ray trace spatial coherence data based upon a spatially proximate ray trace operation.
The indirect querying of models to determine capabilities possessed by the model is described. Such indirect queries take the form of model input that potentially includes natural language user data. Such model input is structured such that the output of the model is either not natural language at all, or else is natural language that is not semantically responsive to the natural language input. Nevertheless, the output is evaluated to estimate or determine the capability possessed by the model. Thus, models may be more fully utilized to their potential.
Indirect time-of-flight camera systems for operating in multiple optical channels using active modulated light and accompanying methods of operation are provided. In one aspect, the indirect time-of-flight camera system includes first and second modulatable laser sources outputting light of different wavelengths for illuminating a target environment. The camera system further includes a wavelength-selective reflective element designed to reflect the light of a first wavelength and to transmit the light of a second wavelength. The camera system further includes a controller comprising instructions executable to control the camera system to, in a first time period, activate the first modulatable laser source and deactivate the second modulatable laser source, and in a second time period, deactivate the first modulatable laser source and activate the second modulatable laser source. The camera system further includes a photosensor for receiving the light outputted by the first and second modulatable laser sources.
G01S 7/481 - Constructional features, e.g. arrangements of optical elements
G01S 17/894 - 3D imaging with simultaneous measurement of the time-of-flight at a 2D array of receiver pixels, e.g. time-of-flight cameras or flash lidar
G01S 17/36 - Systems determining position data of a target for measuring distance only using transmission of continuous waves, whether amplitude-, frequency-, or phase-modulated, or unmodulated, with phase comparison between the received signal and the contemporaneously transmitted signal
34.
SYSTEMS AND METHODS FOR ROUTING DATA PACKETS IN A UNIFIED WIDE AREA NETWORK
A method and a network for routing data packets in a unified wide area network (WAN) are provided. The method includes encapsulating a data packet by an ingress aggregation router and forwarding the encapsulated data packet to an ingress backbone router. The encapsulated data packet includes a first label. The ingress backbone router selects an optimized traffic engineered tunnel, replaces the first label with a label for the optimized traffic engineered tunnel, and forwards the encapsulated data packet along the optimized traffic engineered tunnel.
H04L 45/02 - Topology update or discovery
H04L 45/50 - Routing or path finding of packets in data switching networks using label swapping, e.g. multi-protocol label switch [MPLS]
H04L 45/645 - Splitting of the route computation layer and the forwarding layer, e.g. routing according to a path computational element [PCE] or based on OpenFlow functionality
H04L 45/80 - Ingress point selection by the source endpoint, e.g. selection of ISP or POP
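The encapsulate-then-swap flow above can be sketched in a few lines of Python. The `Packet` class, the tunnel table, and the label values are illustrative assumptions for demonstration, not the patent's actual implementation.

```python
# Hypothetical sketch of the label-swap forwarding described above.

class Packet:
    def __init__(self, payload):
        self.payload = payload
        self.label = None          # outermost MPLS-style label

def encapsulate(packet, first_label):
    """Ingress aggregation router: push the first label."""
    packet.label = first_label
    return packet

def select_tunnel(first_label, tunnel_table):
    """Ingress backbone router: pick an optimized traffic-engineered tunnel."""
    return tunnel_table[first_label]

def swap_label(packet, tunnel_label):
    """Replace the first label with the tunnel's label before forwarding."""
    packet.label = tunnel_label
    return packet

# Usage: first label 100 maps to traffic-engineered tunnel label 2001.
tunnel_table = {100: 2001}
pkt = encapsulate(Packet(b"data"), 100)
pkt = swap_label(pkt, select_tunnel(pkt.label, tunnel_table))
```

The payload is untouched throughout; only the outer label changes at the backbone router, which is what lets intermediate routers forward on labels alone.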
35.
JOINT ACOUSTIC ECHO CANCELLATION (AEC) AND PERSONALIZED NOISE SUPPRESSION (PNS)
A data processing system implements receiving a far-end signal associated with a first computing device participating in an online communication session and receiving a near-end signal associated with a second computing device participating in the online communication session. The near-end signal includes speech of a target speaker, speech of a first interfering speaker, and an echo signal. The system further implements providing the far-end signal, the near-end signal, and an indication of the target speaker as an input to a machine learning model. The machine learning model is trained to analyze the far-end signal and the near-end signal to perform personalized noise suppression (PNS) to remove speech from one or more interfering speakers and acoustic echo cancellation (AEC) to remove echoes. The model is trained to output an audio signal comprising speech of the target speaker. The system obtains the audio signal comprising the speech of the target speaker from the model.
The description relates to hinged devices, such as hinged computing devices. One example can include a first portion secured to a first hinge arm that is configured to rotate around a first hinge axis and a second portion secured to a second hinge arm that is configured to rotate around a second hinge axis. A timing shuttle can be positioned on a central shaft that is located between the first hinge axis and the second hinge axis and is configured to control a frictional torque experienced by the first and second hinge arms depending upon orientation of the first and second hinge arms and to synchronize rotation of the first and second hinge arms around the first and second hinge axes.
Techniques for (i) using contextual information associated with an exposed credential to identify a resource that could be accessed using the exposed credential, (ii) identifying a responsible entity of that resource, and (iii) alerting the responsible entity about the exposed credential are disclosed. A credential is determined to be in an exposed state. The exposed credential, if used, could potentially provide an actor access to a resource, despite the fact that the actor should not have access to the resource. The exposed credential is analyzed to determine a context. Based on that context, the resource is identified. A responsible entity associated with the resource is identified. An alert is then sent to that entity.
G06F 21/57 - Certifying or maintaining trusted computer platforms, e.g. secure boots or power-downs, version controls, system software checks, secure updates or assessing vulnerabilities
G06F 21/62 - Protecting access to data via a platform, e.g. using keys or access control rules
G06F 21/55 - Detecting local intrusion or implementing counter-measures
38.
TRANSPOSING MATRICES BASED ON A MULTI-LEVEL CROSSBAR
Embodiments of the present disclosure include systems and methods for transposing matrices based on a multi-level crossbar. A system may include a memory configured to store a matrix comprising a plurality of elements arranged in a set of rows and a set of columns. A system may include an input buffer configured to retrieve a subset of a plurality of elements from the memory. Each element in the subset of the plurality of elements is retrieved from a different column in the matrix. A system may include a multi-level crossbar configured to perform a transpose operation on the subset of the plurality of elements. A system may include an output buffer configured to receive the transposed subset of the plurality of elements and store, in the memory, each element in the transposed subset of the plurality of elements in a different column in the matrix.
G06F 7/78 - Arrangements for rearranging, permuting or selecting data according to predetermined rules, independently of the content of the data, for changing the order of data flow, e.g. matrix transposition or LIFO buffers; Overflow or underflow handling therefor
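The dataflow in the abstract above can be modelled functionally: an input buffer gathers one element per column, the crossbar (here just an index permutation) reroutes the subset, and an output buffer writes the results back. This is a minimal sketch of the data movement only; a real multi-level crossbar would use a conflict-free diagonal/skewed access scheme that this toy model does not attempt.

```python
# Functional model of the transpose flow: read a row-aligned subset
# (one element from each column), permute, write back.

def transpose_via_subsets(matrix):
    n = len(matrix)
    out = [[None] * n for _ in range(n)]
    for step in range(n):
        # Input buffer: retrieve one element from each column.
        subset = [matrix[step][col] for col in range(n)]
        # Crossbar, modelled as a permutation of element positions: the
        # element taken from column `col` is routed to row `col`.
        for col, value in enumerate(subset):
            out[col][step] = value
    return out

# Usage: a 2x2 example.
result = transpose_via_subsets([[1, 2], [3, 4]])  # [[1, 3], [2, 4]]
```

Each of the `n` steps moves `n` elements, so the full transpose takes `n` passes through the buffers, mirroring the subset-at-a-time description in the abstract.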
39.
TRANSPOSING MATRICES BASED ON A MULTI-LEVEL CROSSBAR
Embodiments of the present disclosure include systems and methods for transposing matrices based on a multi-level crossbar. A system may include a memory configured to store a matrix comprising a plurality of elements arranged in a set of rows and a set of columns. A system may include an input buffer configured to retrieve a subset of a plurality of elements from the memory. Each element in the subset of the plurality of elements is retrieved from a different column in the matrix. A system may include a multi-level crossbar configured to perform a transpose operation on the subset of the plurality of elements. A system may include an output buffer configured to receive the transposed subset of the plurality of elements and store, in the memory, each element in the transposed subset of the plurality of elements in a different column in the matrix.
The present disclosure relates to utilizing a firmware configuration system to efficiently update a firmware profile configuration of computing devices (e.g., host devices in a datacenter). For example, the firmware configuration system facilitates updating the firmware profile configuration, such as for a Unified Extensible Firmware Interface (UEFI) profile and/or a Basic Input/Output System (BIOS), without needing to develop, deploy, and install a new BIOS. More specifically, the firmware configuration system updates (e.g., via a baseboard management controller) firmware profile configurations by modifying a profile configuration table in flash memory (i.e., on an SPI flash-based chip) of a BIOS with a firmware profile configuration update patch and without affecting other parts of the BIOS.
Systems and methods are directed to building annotated models based on eyes-off data. Specifically, a synthetic data generation model is trained and used to further train a target model. The synthetic data generation model is trained within an eyes-off environment using an anonymity technique on confidential data. The synthetic data generation model is then used to create synthetic data that closely represents the confidential data but without any specific details that can be linked back to the confidential data. The synthetic data is then annotated and used to train the target model within an eyes-on environment. Subsequently, the target model is deployed back within the eyes-off environment to classify the confidential data.
A sample of data, including a risk factor, is selected by a machine learning (ML) model of an extreme value theory (EVT) mechanism. A threshold is determined by the ML model based on the risk factor, an outlier score is generated for the sample, and the outlier score is compared to the threshold. The sample is identified as anomalous based on the generated outlier score being greater than the threshold. A schema comprising results of an investigation into the sample and the risk factor is updated based on the received schema.
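The threshold-and-compare logic above can be sketched as follows. The specific threshold rule (a high quantile of historical outlier scores scaled by the risk factor) is an illustrative assumption; the abstract does not specify the ML model's internals.

```python
# Hedged sketch of the EVT-style anomaly check described above.

def evt_threshold(scores, risk_factor):
    """Peaks-over-threshold style cutoff: a high quantile of past
    outlier scores, tightened or loosened by the risk factor."""
    ranked = sorted(scores)
    idx = int(0.95 * (len(ranked) - 1))   # 95th-percentile index
    return ranked[idx] * risk_factor

def is_anomalous(outlier_score, threshold):
    # The sample is anomalous when its score exceeds the threshold.
    return outlier_score > threshold

# Usage with a toy score history.
history = [0.1, 0.2, 0.15, 0.3, 0.25, 0.2, 0.1, 0.35, 0.3, 0.4, 0.2]
t = evt_threshold(history, risk_factor=1.0)
flagged = is_anomalous(0.5, t)   # 0.5 exceeds the quantile threshold
```

A risk factor above 1.0 raises the threshold (fewer alerts); below 1.0 lowers it (more sensitive), which matches the abstract's coupling of threshold to risk factor.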
Embodiments of the disclosed technologies include generating a reward score for an entity. A rate distribution is determined using the reward score. A sampled rate value is generated by sampling the rate distribution. A probability score is generated for a pair of the entity and a user using the sampled rate value. A probability distribution is determined using the probability score. A sampled probability value is generated by sampling the probability distribution. A machine learning model is trained using the sampled probability value.
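The chain of steps above (reward score → rate distribution → sampled rate → probability score → sampled probability → training example) can be sketched as below. The distribution shapes (a uniform rate distribution and a Bernoulli probability draw) and every name here are assumptions for illustration only.

```python
import random

def sample_rate(reward_score, rng):
    # Rate distribution: assumed uniform around the reward score.
    return rng.uniform(0.5 * reward_score, 1.5 * reward_score)

def probability_score(rate, user_weight):
    # Probability score for an (entity, user) pair, clamped to [0, 1].
    p = rate * user_weight
    return min(max(p, 0.0), 1.0)

def sample_probability(p, rng):
    # Bernoulli draw from the probability distribution.
    return 1 if rng.random() < p else 0

# Usage (seeded for reproducibility): one training example.
rng = random.Random(0)
rate = sample_rate(0.4, rng)
p = probability_score(rate, user_weight=1.2)
label = sample_probability(p, rng)   # fed to the ML model as a sample
```

The two sampling stages inject stochasticity at both the entity level and the pair level, which is the essential structure the abstract describes.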
The techniques disclosed herein are directed to devices, circuits, systems, and techniques to mitigate the impact of side-channel attacks on a cryptography function in a target system. Razor flip-flops are inserted into critical paths of the cryptography function of the target system, including at-rest blocks such as key vaults and data vaults, and also registers and/or pipelines used for calculations within the cryptography functions. Errors detected by the Razor flip-flops are processed by error detection logic in the cryptographic function, which continues the calculations until completion. Key and data value pairs generated from calculations with detected errors are discarded and silently ignored, without disrupting the calculation process. The schemes disclosed herein mitigate the impact of side-channel attacks with a digital-logic-based implementation of reduced complexity and cost.
A source code patch generation system uses the context of a buggy source code snippet of a source code program and a hint to predict a source code segment that repairs the buggy source code snippet. The hint is a source code segment that is semantically-similar to the buggy source code snippet where the similarity is based on a context of the buggy source code snippet. An autoregressive deep learning model uses the context of the buggy source code snippet and the hint to predict the most likely source code segment to repair the buggy source code snippet.
The indirect querying of models to determine capabilities possessed by the model. Such indirect queries take the form of model input that potentially includes natural language user input data. Such model input is structured such that the output of the model is either not natural language at all, or else is natural language that is not semantically responsive to the natural language input. Nevertheless, the output is evaluated to estimate or determine the capability possessed by the model. Thus, models may be utilized more fully to their potential.
Systems and methods for protecting privacy-relevant data from unauthorized disclosure in source code of an application. For instance, the present disclosure provides a plurality of technical features including: a privacy-relevant data analyzer that analyzes source code, detects privacy-relevant data in the source code, and generates a report of instances of detected privacy-relevant data. In some examples, the privacy-relevant data analyzer scans through source code to detect annotations that denote whether fields, records, or combinations thereof include privacy-relevant data. The privacy-relevant data analyzer further generates and provides a report of detected privacy issues associated with sensitive data included in source code so that the issues can be resolved to ensure that privacy is not breached.
The present disclosure generally relates to systems, methods, and computer-readable media for managing the generation and processing of charging data records (CDRs) in a telecommunication environment (e.g., a fourth generation (4G), a fifth generation (5G), or future generation mobile network). The systems described herein involve predicting lengths of CDRs prior to encoding and providing the CDRs to a charging gateway function to ensure that the CDRs do not exceed a maximum allowable length that the charging gateway function is capable of processing while also reducing the total number of CDR packages that are encoded and transmitted. Indeed, the systems described herein can predict the length of the CDRs incrementally as charging containers are added, thus limiting the number of CDRs that are generated and processed.
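The incremental length prediction above can be sketched as a greedy packer: before each charging container is appended, its encoded size is estimated, and the current CDR is closed out once the next container would push it past the gateway's limit. The size model, overhead constant, and limit below are assumed values for illustration, not figures from the disclosure.

```python
CDR_HEADER_OVERHEAD = 40   # assumed fixed encoding overhead (bytes)
MAX_CDR_LENGTH = 200       # assumed charging-gateway limit (bytes)

def predict_container_length(container):
    """Estimate encoded size without actually encoding (assumed model)."""
    return 8 + 4 * len(container)   # assumed per-field cost

def pack_containers(containers):
    """Greedily fill CDRs so each stays within MAX_CDR_LENGTH."""
    cdrs, current, current_len = [], [], CDR_HEADER_OVERHEAD
    for c in containers:
        est = predict_container_length(c)
        if current and current_len + est > MAX_CDR_LENGTH:
            cdrs.append(current)                  # close the full CDR
            current, current_len = [], CDR_HEADER_OVERHEAD
        current.append(c)
        current_len += est
    if current:
        cdrs.append(current)
    return cdrs

# Usage: five 10-field containers fit into two CDRs under the limit.
cdrs = pack_containers([["field"] * 10 for _ in range(5)])
```

Because the length check happens before encoding, no CDR is ever encoded only to be rejected for exceeding the gateway's maximum, which is the efficiency gain the abstract claims.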
A ray trace operation includes tracing a ray from an origin point in accordance with a ray path into a virtual environment (where the virtual environment comprises one or more virtual objects defined by one or more object components) and determining an intersected object component of the one or more object components that the ray intersects with. Determining the intersected object component comprises accessing (i) ray trace temporal coherence data based upon a preceding ray trace operation that temporally precedes the ray trace operation or (ii) ray trace spatial coherence data based upon a spatially proximate ray trace operation.
Described are examples for monitoring performance metrics of one or more workloads in a cloud-computing environment and reallocating compute resources based on the monitoring. Reallocating compute resources can include migrating workloads among nodes or other resources in the cloud-computing environment, reallocating hardware accelerator resources, adjusting transmit power for virtual radio access network (vRAN) workloads, and/or the like.
H04W 28/16 - Central resource management; Negotiation of resources or communication parameters, e.g. negotiating bandwidth or QoS [Quality of Service]
G06F 11/32 - Monitoring with visual indication of the functioning of the machine
G06F 11/34 - Recording or statistical evaluation of computer activity, e.g. of interruptions or input/output operations
H04W 24/10 - Scheduling measurement reports
H04W 28/02 - Traffic management, e.g. flow control or congestion control
H04W 72/21 - Control channels or signalling for resource management in the uplink direction of a wireless link, i.e. towards the network
H04W 72/23 - Control channels or signalling for resource management in the downlink direction of a wireless link, i.e. towards the terminal
51.
Joint Acoustic Echo Cancellation (AEC) and Personalized Noise Suppression (PNS)
A data processing system implements receiving a far-end signal associated with a first computing device participating in an online communication session and receiving a near-end signal associated with a second computing device participating in the online communication session. The near-end signal includes speech of a target speaker, speech of a first interfering speaker, and an echo signal. The system further implements providing the far-end signal, the near-end signal, and an indication of the target speaker as an input to a machine learning model. The machine learning model is trained to analyze the far-end signal and the near-end signal to perform personalized noise suppression (PNS) to remove speech from one or more interfering speakers and acoustic echo cancellation (AEC) to remove echoes. The model is trained to output an audio signal comprising speech of the target speaker. The system obtains the audio signal comprising the speech of the target speaker from the model.
G10L 21/0232 - Processing in the frequency domain
G06N 3/0442 - Recurrent networks, e.g. Hopfield networks, characterised by memory or gating, e.g. long short-term memory [LSTM] or gated recurrent units [GRU]
G10L 17/02 - Preprocessing operations, e.g. segment selection; Pattern representation or modelling, e.g. based on linear discriminant analysis [LDA] or principal components; Feature selection or extraction
G10L 17/04 - Training, enrolment or model building
G10L 17/06 - Decision making techniques; Pattern matching strategies
Systems and methods for providing ticket support using a machine learning model trained using clusters of support tickets that are clustered based on similarity of resolution commands are provided. The system extracts commands used to resolve prior tickets and creates clusters of resolved tickets based on similarity of the commands. For each cluster, problem statements are extracted from the resolved tickets. The system trains a machine learning model with the extracted problem statements to identify a cluster number for each cluster. With a new support ticket, the system extracts a problem statement from the new ticket and identifies a predicted cluster number by applying the trained machine learning model to the problem statement from the new ticket. Based on the predicted cluster number, one or more commands used to resolve the prior tickets in the cluster corresponding to the predicted cluster number are accessed and provided to a requesting user.
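The clustering step above can be sketched with a simple command-similarity metric. Jaccard similarity over command tokens and the greedy cluster assignment are assumed stand-ins; the abstract does not name a metric or algorithm.

```python
# Hedged sketch: group resolved tickets by resolution-command similarity.
# Each resulting cluster index is the label a model would later learn to
# predict from the cluster's problem statements.

def jaccard(a, b):
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

def cluster_by_commands(tickets, threshold=0.5):
    """Greedy clustering: join the first cluster whose representative
    commands are similar enough, else start a new cluster."""
    clusters = []   # each: {"commands": [...], "tickets": [...]}
    for ticket in tickets:
        for cluster in clusters:
            if jaccard(ticket["commands"], cluster["commands"]) >= threshold:
                cluster["tickets"].append(ticket)
                break
        else:
            clusters.append({"commands": ticket["commands"], "tickets": [ticket]})
    return clusters

# Usage: two disk-space tickets share commands; the service ticket does not.
tickets = [
    {"problem": "disk full", "commands": ["df", "rm", "logrotate"]},
    {"problem": "no space left", "commands": ["df", "rm"]},
    {"problem": "service down", "commands": ["systemctl", "restart"]},
]
clusters = cluster_by_commands(tickets)
```

Clustering on the *resolution* side rather than the problem text is the key idea: tickets phrased very differently still land together when they were fixed the same way.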
The techniques disclosed herein prevent a rogue resource from being created within a cloud computing environment. For example, a rogue serverless function may be prevented from integrating with a cloud-based database, thereby preventing the serverless function from performing malicious operations such as low-rate data exfiltration. The rogue serverless function is detected before it is installed, heading off the attack completely. In some configurations, a key retrieval request is received. Parameters of the key retrieval request are analyzed for anomalies, and anomalous key retrieval requests are stored in a pool. Then, when a request to create a resource is received, the pool of anomalous key retrieval requests is searched for a match. When a match is found, the resource creation request may be suspended pending a further security review.
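The pool-and-match flow above can be sketched as follows. The matching key (requesting identity plus target resource) and all names are illustrative assumptions; a real system would match on richer request parameters.

```python
# Illustrative sketch: anomalous key retrievals are pooled, and a later
# resource-creation request matching a pooled entry is suspended.

anomalous_pool = []

def record_key_retrieval(request, is_anomalous):
    """Store only the retrievals whose parameters looked anomalous."""
    if is_anomalous:
        anomalous_pool.append((request["identity"], request["target"]))

def review_resource_creation(request):
    """Suspend creation requests that match a pooled anomalous
    retrieval, pending further security review; allow the rest."""
    if (request["identity"], request["target"]) in anomalous_pool:
        return "suspended"
    return "allowed"

# Usage: an anomalous retrieval followed by a matching creation request.
record_key_retrieval({"identity": "app-7", "target": "db-east"}, is_anomalous=True)
decision = review_resource_creation({"identity": "app-7", "target": "db-east"})
```

The point of the two-stage check is timing: the suspicious retrieval alone is not blocked, but it arms the defense so the rogue resource is stopped at creation, before it can integrate with the database.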
Various technologies relating to constructing an answer to a query are described herein, wherein the answer is in list form. The answer includes a header and a list element. A deep model receives content of a webpage that is deemed relevant to the query by a search engine and constructs the answer from the webpage content upon receipt of the query.
The present disclosure relates to systems, methods, and computer readable media for modeling thermal effects within a multi-laser device. For example, systems described herein may include a plurality of laser devices that output energy streams having corresponding operating windows. One or more systems described herein may include a set of accumulators for tracking quantities of energy samples within operating windows and populating a queue representative of the tracked quantities. One or more systems described herein may additionally include filters and a summing module for determining temperature values for operating windows and synchronizing the temperature values with one another to determine an accurate system temperature for the multi-laser device. The features described herein facilitate synchronization of data for corresponding operating windows to provide an accurate determination of system temperature based on a combination of self-heating and crosstalk effects between multiple laser devices.
Techniques for auto-starting a VPN in a MAM environment are disclosed. A MAM-controlled application is launched on a computer system. Policy is queried and a determination is made as to whether to auto-start a VPN application based on the policy. Based on the policy, the VPN application is auto-started, and the VPN application initiates a VPN tunnel that is usable by at least the MAM-controlled application. Network communications transmitted to or from the MAM-controlled application then pass through the VPN tunnel.
H04L 41/22 - Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks, comprising specially adapted graphical user interfaces [GUI]
Computing resources are managed in a computing network comprising a computing service provider and an edge computing network. The edge computing network receives an indication of a disconnection of communications between the computing service provider and the edge computing network. In response to the indication, the edge computing network initiates an autonomous mode at the edge computing network. The edge computing network is configured to continue providing computing and network services at the edge computing network while the edge computing network is operating in the autonomous mode.
Technologies are disclosed that enable a computing system to present a structured arrangement for tracking content items on a shared user interface (UI) during a communication session. The structured arrangement is a list that is displayed in a specific region of the shared UI. Inclusion of content items in the list makes it easier for users to locate and interact with those content items throughout the communication session. The ability to create and manipulate the list may be limited to only certain users such as a moderator. Use of this list can promote inclusivity and fairness. For instance, inclusion in the list may prevent content items from being forgotten or ignored. Additionally, the names of users who contributed the content items may be shown in the list thereby providing recognition for those users.
A computing system is provided, which receives a molecular graph at a message passing graph neural network (MPGNN), and produces scalar embeddings representing features of nodes and edges of the graph and vector embeddings representing geometric relationships of the graph. The system processes the scalar embeddings via a vector scalar interactive message passing mechanism of a message passing sub-block of the MPGNN to generate and pass scalar information from the scalar embeddings to an embedding space containing the vector embeddings. The system updates the vector embeddings based on the embedding space containing the scalar information and the vector embeddings. The system updates the scalar embeddings based on run-time geometry calculations of the geometric relationships encoded in the vector embeddings. The system computes an updated molecular graph based on the updated scalar and vector embeddings and outputs a target molecular property value based on the updated molecular graph.
Systems and methods are provided for accessing a factorized neural transducer comprising a first set of layers for predicting blank tokens and a second set of layers for predicting vocabulary tokens. The first set of layers comprises a blank predictor, an encoder, and a joint network, and the second set of layers comprises a vocabulary predictor, which is a separate predictor from the blank predictor. A context encoder is added to the factorized neural transducer which encodes long-form transcription history for generating a long-form context embedding, such that the factorized neural transducer is further configured to perform long-form automatic speech recognition, at least in part, by using the long-form context embedding to augment a prediction of vocabulary tokens.
Generally discussed herein are devices, systems, and methods for generating synthetic datasets. A method includes obtaining a first training labelled dataset, obtaining a second training labelled dataset, determining an optimal transport (OT) map from a target labelled dataset to the first training labelled dataset, determining an OT map from the target labelled dataset to the second training labelled dataset, identifying, based on the OT maps, a point proximate the target dataset in a generalized geodesic hull formed by the first and second training labelled datasets in a distribution space, and producing a synthetic labelled ML dataset by combining the first and second labelled training datasets based on the distances between their probability distribution representations in the distribution space and the identified point.
The description relates to resource aware object detection for encoded video streams that can identify frames of the video stream that include an object of interest, such as a human, without decoding the frames.
H04N 19/105 - Selection of the reference unit for prediction within a chosen coding or prediction mode, e.g. adaptive choice of position and number of pixels used for prediction
H04N 19/172 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding, the unit being an area of the picture, the area being a picture, frame or field
H04N 19/177 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding, the unit being a group of pictures [GOP]
H04N 19/46 - Embedding additional information in the video signal during the compression process
H04N 21/234 - Processing of video elementary streams, e.g. splicing of video streams or manipulating MPEG-4 scene graphs
H04N 21/266 - Channel or content management, e.g. generation and management of keys and entitlement messages in a conditional access system, merging a VOD unicast channel into a multicast channel
H04N 21/83 - Generation or processing of protective or descriptive data associated with content; Content structuring
H04N 21/845 - Structuring of content, e.g. decomposing content into time segments
63.
DEVICES, SYSTEMS, AND METHODS FOR A COOLING SYSTEM
A cooling system may include a tank filled with a first cooling fluid. The cooling system may include a container including a chamber, the chamber receiving a heat-generating component, the container being sealed, the container being at least partially submerged in the first cooling fluid in the tank, the container including a second cooling fluid.
Techniques for identifying an exposed credential that, if used, would provide access to a resource are disclosed. The techniques enable the resource to remain online while (i) a new credential is allocated for the resource, (ii) the resource is transitioned to using the new credential instead of the exposed credential, and (iii) the exposed credential is attempted to be invalidated. A credential is accessed. This credential is suspected of being in an exposed state. The credential is accessible from within an artifact and is determined to be in the exposed state. A new credential is generated. This new credential is designed to replace the exposed credential. An instruction is transmitted to the resource to cause it to transition from using the exposed credential to using the new credential. The exposed credential is then invalidated.
G06F 21/45 - Structures or tools for the administration of authentication
G06F 21/57 - Certifying or maintaining trusted computer platforms, e.g. secure boots or power-downs, version controls, system software checks, secure updates or assessing vulnerabilities
65.
GENERATING AND PROCESSING CHARGING DATA RECORDS BASED ON PREDICTED RECORD LENGTH
The present disclosure generally relates to systems, methods, and computer-readable media for managing the generation and processing of charging data records (CDRs) in a telecommunication environment (e.g., a fourth generation (4G), a fifth generation (5G), or future generation mobile network). The systems described herein involve predicting lengths of CDRs prior to encoding and providing the CDRs to a charging gateway function to ensure that the CDRs do not exceed a maximum allowable length that the charging gateway function is capable of processing while also reducing the total number of CDR packages that are encoded and transmitted. Indeed, the systems described herein can predict the length of the CDRs incrementally as charging containers are added, thus limiting the number of CDRs that are generated and processed.
Example implementations include a method, apparatus, and computer-readable medium configured for generating a network configuration using a large language model (LLM). The apparatus receives, at an interface between a user and the LLM, a natural language intent for a network configuration. The apparatus requests the LLM to update the network configuration to an updated network configuration that satisfies the natural language intent in a declarative network configuration language. The apparatus verifies whether the updated network configuration satisfies the configuration syntax of the declarative network configuration language to detect an error. The apparatus requests the LLM to update the updated network configuration to correct the error. The apparatus deploys the updated network configuration to a user network.
G06F 40/40 - Processing or translation of natural language
H04L 41/0823 - Configuration settings characterised by the purposes of a change of settings, e.g. optimising configuration for enhancing reliability
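The generate-verify-correct loop described above can be sketched as below. The LLM call is stubbed, and `syntax_errors` is a toy stand-in for a real declarative-language validator; all names and the semicolon rule are illustrative assumptions.

```python
# Hedged sketch of the verify-and-retry loop for LLM-generated configs.

def syntax_errors(config):
    """Toy validator: every non-empty statement must end with ';'."""
    return [line for line in config.splitlines() if line and not line.endswith(";")]

def generate_config(intent, llm, max_rounds=3):
    config = llm(intent, feedback=None)
    for _ in range(max_rounds):
        errors = syntax_errors(config)
        if not errors:
            return config                       # verified; ready to deploy
        config = llm(intent, feedback=errors)   # ask the LLM to correct
    raise ValueError("could not produce a valid configuration")

# Stub LLM: the first draft misses a semicolon, fixed when given feedback.
def stub_llm(intent, feedback):
    if feedback is None:
        return "vlan 10\nallow 10.0.0.0/8;"
    return "vlan 10;\nallow 10.0.0.0/8;"

result = generate_config("isolate guest traffic", stub_llm)
```

Feeding the concrete error list back into the next LLM request, rather than re-prompting from scratch, is the design choice that makes the loop converge in few rounds.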
67.
Leveraging affinity between content creator and viewer to improve creator retention
Methods, systems, and computer programs are presented for selecting notifications based on an affinity score between a content generator and a viewer of the content. One method includes capturing interactions of content generators with notifications, received by the content generators, associated with viewer responses to creator-generated content items. The method further includes training a machine-learning model based on the interactions, and detecting a first set of notifications, for a first content generator, associated with interactions of a set of viewers with first-content-generator content. The ML model calculates an affinity score between the first content generator and each viewer, and the first set of notifications is ranked based on the affinity score of the first content generator and the viewer associated with each notification. A set of second notifications is selected based on the ranked first notifications, and notifications are generated, for the first content generator, for the selected set of second notifications.
G06Q 50/00 - Systems or methods specially adapted for a specific business sector, e.g. utilities or tourism
H04L 51/52 - User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail, for supporting social networking services
68.
LOCAL PAGE WRITES VIA PRE-STAGING BUFFERS FOR RESILIENT BUFFER POOL EXTENSIONS
Methods for local page writes via pre-staging buffers for resilient buffer pool extensions are performed by computing systems. Compute nodes in database systems insert, update, and query data pages maintained in storage nodes. Data pages cached locally by compute node buffer pools are provided to buffer pool extensions on local disks as pre-copies via staging buffers that store data pages prior to local disk storage. Encryption of data pages occurs at the staging buffers, which allows less restrictive update latching during the copy process, with page metadata being updated in buffer pool extension page tables with in-progress states indicating a page is not yet written to local disk. When staging buffers are filled, data pages are written to buffer pool extensions and metadata is updated in page tables to indicate available/valid states. Data pages in staging buffers can be read and updated prior to writing to the local disk.
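The staging flow above can be sketched minimally: pages copied from the buffer pool are encrypted into a staging buffer and tracked as in-progress in the extension's page table until the buffer fills and is flushed to local disk. The XOR "encryption", the two-page capacity, and all names are illustrative assumptions, not the described system's implementation.

```python
STAGE_CAPACITY = 2   # assumed staging-buffer size, in pages

def encrypt(page):   # stand-in for real page encryption at the staging buffer
    return bytes(b ^ 0xAB for b in page)

class BufferPoolExtension:
    def __init__(self):
        self.staging = []     # (page_id, encrypted bytes)
        self.page_table = {}  # page_id -> state
        self.local_disk = {}

    def pre_stage(self, page_id, page):
        """Copy a page into the staging buffer; mark it in-progress."""
        self.staging.append((page_id, encrypt(page)))
        self.page_table[page_id] = "IN_PROGRESS"   # not yet on local disk
        if len(self.staging) >= STAGE_CAPACITY:
            self.flush()

    def flush(self):
        """Write all staged pages to local disk and mark them valid."""
        for page_id, data in self.staging:
            self.local_disk[page_id] = data
            self.page_table[page_id] = "VALID"
        self.staging.clear()

# Usage: the first page stays in-progress until the buffer fills.
ext = BufferPoolExtension()
ext.pre_stage(1, b"\x00\x01")
state_before = ext.page_table[1]   # "IN_PROGRESS"
ext.pre_stage(2, b"\x02\x03")      # fills the staging buffer -> flush
```

Doing the encryption in the staging buffer rather than on the write path is what allows the lighter update latching the abstract mentions: the source page is released as soon as the copy into staging completes.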
Examples are disclosed that relate to fairly ordering financial market trades received from different market participant computers via a cloud computing network. In one example, a plurality of trades generated by a plurality of market participant computers are received. The trades are generated based at least on a financial market data point received by the plurality of market participant computers. Each trade is tagged with a delivery clock time stamp that tracks time in relation to financial market events that occur at a corresponding market participant computer. The trades are ordered based on the delivery clock time stamps and sent to a central exchange server computer. The central exchange server computer processes the trades.
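The ordering step described above reduces to sorting the received trades by their delivery clock timestamps before forwarding them to the central exchange. A minimal sketch, with hypothetical field names, could look like:

```python
from dataclasses import dataclass

@dataclass
class Trade:
    participant: str
    order_id: int
    delivery_clock_ts: float  # time tracked relative to market events at the participant

def order_trades(trades):
    """Order trades by delivery clock timestamp before sending to the exchange."""
    return sorted(trades, key=lambda t: t.delivery_clock_ts)

trades = [
    Trade("A", 1, 12.7),
    Trade("B", 2, 12.1),
    Trade("C", 3, 12.4),
]
ordered = order_trades(trades)
```

Because the timestamps track time relative to when each participant observed the market data point, the sort approximates fairness across participants with different network latencies.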
The description relates to resource aware object detection for encoded video streams that can identify frames of the video stream that include an object of interest, such as a human, without decoding the frames.
H04N 19/177 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding, the unit being a group of pictures [GOP]
H04N 19/169 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
Techniques for identifying an exposed credential that, if used, would provide access to a resource are disclosed. The techniques enable the resource to remain online while (i) a new credential is allocated for the resource, (ii) the resource is transitioned to using the new credential instead of the exposed credential, and (iii) an attempt is made to invalidate the exposed credential. A credential is accessed. This credential is suspected of being in an exposed state. The credential is accessible from within an artifact and is determined to be in the exposed state. A new credential is generated. This new credential is designed to replace the exposed credential. An instruction is transmitted to the resource to cause it to transition from using the exposed credential to using the new credential. The exposed credential is then invalidated.
G06F 21/62 - Protecting access to data via a platform, e.g. using keys or access control rules
G06F 21/57 - Certifying or maintaining trusted computer platforms, e.g. secure boots or power-downs, version controls, system software checks, secure updates or assessing vulnerabilities
G06F 21/64 - Protecting data integrity, e.g. using checksums, certificates or signatures
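The rotation flow described above — allocate a new credential, transition the resource while it stays online, then attempt to invalidate the exposed one — can be sketched as follows; the `Resource` class and the set of valid credentials are hypothetical stand-ins, not part of the disclosure:

```python
import secrets

class Resource:
    """Hypothetical stand-in for a resource that stays online during rotation."""
    def __init__(self, credential):
        self.credential = credential

    def rotate(self, new_credential):
        # The resource switches credentials without going offline.
        self.credential = new_credential

def rotate_exposed_credential(resource, valid_credentials, exposed):
    # (i) Allocate a new credential for the resource.
    new_credential = secrets.token_hex(16)
    valid_credentials.add(new_credential)
    # (ii) Transition the resource to the new credential while it remains online.
    resource.rotate(new_credential)
    # (iii) Attempt to invalidate the exposed credential.
    valid_credentials.discard(exposed)
    return new_credential

exposed = "leaked-token"
valid = {exposed}
res = Resource(exposed)
new = rotate_exposed_credential(res, valid, exposed)
```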
Using a language model attention matrix to facilitate a “code radar” source code navigation experience that highlights related source code locations. A computer system identifies a first source code location within source code that is displayed at a code editor user interface (UI). From a set of mappings generated based on a language model attention matrix, the computer system identifies a second source code location as being related to the first source code location. Concurrent with presenting the first source code location in the code editor UI, the computer system presents a related source code navigation experience, which includes both (i) presenting the second source code location in the code editor UI, and (ii) presenting a visual indication that the second source code location is related to the first source code location. Some embodiments include generating the set of mappings based on a language model attention matrix.
Systems and methods are used for facilitating identity anonymization by using controlled masking and encryption of user identifiers, such as UUIDs. A system that manages a UUID converts the UUID into a set of one or more different unique versions of the UUID for one or more corresponding different partner system(s) by removing and replacing masked portions of the UUID and by selectively encrypting the non-masked portions of the UUID. New masked portions added to the new version(s) of the UUID identify different corresponding partner(s) and/or rules to be applied by the different partner(s) when handling the different unique version(s) of the UUID(s). Partner systems that receive the new versions of the UUID identify and utilize the new masked portions to deterministically control decrypting and/or other processing of the new version of the UUID.
Embodiments detect cyberattack campaigns against multiple cloud tenants by analyzing activity data to find sharing anomalies. Data that appears benign in a single tenant's activities may indicate an attack when the same or similar data is also found for additional tenants. Attack detection may depend on activity time frames, on how similar certain activities of different tenants are to one another, on how unusual it is for different tenants to share an activity, and on other factors. Sharing anomaly analysis may utilize hypergeometric probabilities or other statistical measures. Detection avoidance attempts using digital entity randomization are revealed and thwarted. Authorized vendors may be recognized, mooting anomalousness. Although data from multiple tenants is analyzed together for sharing anomalies while monitoring for attacks, tenant confidentiality and privacy are respected through technical and legal mechanisms. Mitigation is performed in response to an attack indication.
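The hypergeometric measure mentioned above can be computed with standard-library combinatorics. A minimal sketch, with hypothetical tenant counts, estimates how surprising it is that k of n monitored tenants share an activity seen in only K of N tenants overall — a small tail probability flags a sharing anomaly:

```python
from math import comb

def hypergeom_pmf(k, N, K, n):
    """P[X = k] for a hypergeometric draw: n tenants sampled from N,
    K of which exhibit the activity."""
    return comb(K, k) * comb(N - K, n - k) / comb(N, n)

def sharing_anomaly_score(k, N, K, n):
    """Upper-tail probability that k or more of the n sampled tenants share
    the activity by chance; small values indicate a sharing anomaly."""
    return sum(hypergeom_pmf(i, N, K, n) for i in range(k, min(K, n) + 1))

# Hypothetical numbers: 5 of 10 monitored tenants share an activity that
# appears in only 5 of 1000 tenants overall.
score = sharing_anomaly_score(5, 1000, 5, 10)
```

Under these numbers the chance of such overlap is vanishingly small, which is exactly the kind of signal the disclosure uses to distinguish a coordinated campaign from benign per-tenant activity.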
Examples are disclosed that relate to generating expressive avatars using multi-modal three-dimensional face modeling and tracking. One example includes a computer system comprising a processor coupled to a storage system that stores instructions. Upon execution by the processor, the instructions cause the processor to receive initialization data describing an initial state of a facial model. The instructions further cause the processor to receive a plurality of multi-modal data signals. The instructions further cause the processor to perform a fitting process using the initialization data and the plurality of multi-modal data signals. The instructions further cause the processor to determine a set of parameters based on the fitting process, wherein the determined set of parameters describes an updated state of the facial model.
Systems and methods for generating a shared collaborative channel for collaboration are provided. In particular, a computing device may receive a request, from an originating member of an organization, to create the shared collaborative channel, the request including an invitee to be added to the shared collaborative channel. In response to receipt of the request, the computing device may provision a substrate group by creating a container associated with the shared collaborative channel, including a substrate database associated with the shared collaborative channel, generate an invitation including a custom link to the shared collaborative channel for the invitee, and determine whether the invitee belongs to an originating collaboration team associated with the originating member based on the substrate database. If the invitee belongs to the originating collaboration team, the computing device may further update the substrate group to add the invitee as a new member of the shared collaborative channel.
Systems, methods, and devices are described for enabling a user to import a library into a computer program under development. The library includes a data storage interface, one or more semantic objects, and one or more data manipulation or data analysis operations. A user is able to reference code of the library within the computer program under development to generate a dataset from data obtained via the data storage interface and associate the one or more semantic objects with the dataset to generate a semantically-annotated dataset. Systems, methods, and devices enable, based on the importing: the user to invoke a semantic-guided operation of the library that utilizes the semantically-annotated dataset to infer an aspect of a data manipulation or data analysis operation to be performed on the semantically-annotated dataset; or the suggestion of a data manipulation or data analysis operation to the user based on the semantically-annotated dataset.
A click-to-script service enables developers of big-data job scripts to quickly see the underlying script operations from optimized execution plans. Once a big-data job is received, the disclosed examples compile it and generate tokens that are associated with each operation of the big-data job. These tokens may include the file name of the job, the line number of the operation, and/or an Abstract Syntax Tree (AST) node for the given operations. An original execution plan is optimized into an optimized execution plan, and the tokens for the original operations of the job script are assigned to the optimized operations of the optimized execution plan. The optimized execution plan is graphically displayed in an interactive manner such that users may view the optimized execution plan and click on its optimized operations to find the original operations of the job script.
Techniques for separating an image into a forward sweeping image and a backward sweeping image are disclosed. A lookup table maps MEMS projection positions on a display with corresponding pixel positions in an image generated by a camera facing the display. The lookup table is used to associate a first set of pixel positions in the image with a forward scanning sweep of the MEMS system. The lookup table is also used to associate a second set of pixel positions in the image with a backward scanning sweep of the MEMS system. The first and second sets of pixel positions are used to generate the forward sweeping image and the backward sweeping image, respectively. These images can then be used to calibrate the MEMS system to compensate for bi-phase.
H04N 9/31 - Projection devices for colour picture display
G02B 26/08 - Optical devices or arrangements for controlling light using movable or deformable optical elements for controlling the direction of light
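The lookup-table separation described above can be sketched as follows; the table contents, pixel coordinates, and intensity values are hypothetical placeholders for what a calibrated camera-and-MEMS pairing would supply:

```python
# Hypothetical lookup table: MEMS projection position index ->
# (camera pixel position, sweep direction).
lookup_table = {
    0: ((0, 0), "forward"),
    1: ((0, 1), "forward"),
    2: ((1, 1), "backward"),
    3: ((1, 0), "backward"),
}

def split_image(image, lookup_table):
    """Separate one captured image into forward- and backward-sweep images
    using the position-to-pixel lookup table."""
    forward, backward = {}, {}
    for _, (pixel, direction) in lookup_table.items():
        target = forward if direction == "forward" else backward
        target[pixel] = image[pixel]
    return forward, backward

# Hypothetical captured image as a pixel -> intensity mapping.
image = {(0, 0): 10, (0, 1): 20, (1, 1): 30, (1, 0): 40}
fwd, bwd = split_image(image, lookup_table)
```

The two resulting images can then be compared pixel-by-pixel to estimate and compensate for the bi-phase offset between the forward and backward scanning sweeps.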
A virtual network provider system supports a virtual network including virtual machines that are each assigned to an underlay address of an underlay addressing scheme. The virtual network provider system further includes multiple routing domains each defined to include a different subset of the virtual machines. Each of the routing domains is assigned to a range of overlay addresses of an overlay addressing scheme. For each routing domain, the assigned range of overlay addresses is allocated among the subset of the virtual machines in the routing domain. The system further includes a virtual network host configured to use addresses of the overlay addressing scheme to selectively route messages between endpoints on select pairs of the virtual machines assigned to a same routing domain of the plurality of routing domains.
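Allocating each routing domain's overlay range among its virtual machines can be sketched with the standard `ipaddress` module; the domain names, CIDR ranges, and VM identifiers are hypothetical, and the example illustrates that the same overlay range can be reused by different routing domains because routing is scoped per domain:

```python
import ipaddress

def allocate_overlay_addresses(routing_domains):
    """Allocate each routing domain's overlay range among its VMs.
    routing_domains maps a domain name to (overlay CIDR, list of VM ids)."""
    allocation = {}
    for domain, (cidr, vms) in routing_domains.items():
        hosts = ipaddress.ip_network(cidr).hosts()  # usable host addresses
        allocation[domain] = {vm: str(next(hosts)) for vm in vms}
    return allocation

domains = {
    "domain-a": ("10.0.0.0/29", ["vm1", "vm2"]),
    "domain-b": ("10.0.0.0/29", ["vm3"]),  # overlay ranges may overlap across domains
}
alloc = allocate_overlay_addresses(domains)
```

Each VM also keeps its underlay address; the virtual network host translates between the two schemes when routing messages between endpoints in the same routing domain.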
This document relates to training and employing a quality estimation model. One example includes a method or technique that can be performed on a computing device. The method or technique can include providing degraded audio signals to one or more packet loss concealment models, and obtaining enhanced audio signals output by the one or more packet loss concealment models. The method or technique can also include obtaining quality labels for the enhanced audio signals and training a quality estimation model to estimate audio signal quality based at least on the enhanced audio signals and the quality labels.
G10L 25/60 - Speech or voice analysis techniques not restricted to a single one of the groups, specially adapted for a particular use, for comparison or discrimination, for measuring the quality of voice signals
G10L 19/005 - Correction of errors induced by the transmission channel, if related to the coding algorithm
G10L 25/30 - Speech or voice analysis techniques not restricted to a single one of the groups, characterised by the analysis technique, using neural networks
G10L 25/69 - Speech or voice analysis techniques not restricted to a single one of the groups, specially adapted for a particular use, for evaluating synthetic or decoded voice signals
82.
IDENTIFYING AND CONSENTING TO PERMISSIONS FOR WORKFLOW AND CODE EXECUTION
The present invention extends to methods, systems, and computer program products for identifying and consenting to permissions for workflow and code execution. Aspects of the invention can be used to automatically scan a workflow or code definition to identify (potentially all) the actions/triggers a workflow or program intends to perform on behalf of a user. The user is shown the actions/triggers the workflow or program intends to perform (e.g., at a user interface) before consent to perform the actions/triggers is granted. As such, a user is aware of intended actions/triggers of a workflow or program before granting consent. Further, since actions/triggers are identified from the workflow or code definition (and not formulated by an author), permission requests better align with permissions that workflow or program functionality actually uses during execution.
G06F 12/0891 - Addressing a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches, using clearing, invalidating or resetting means
G06F 21/51 - Monitoring users, programs or devices to maintain the integrity of platforms, e.g. of processors, firmware or operating systems, at the stage of application loading, e.g. accepting, rejecting, starting or inhibiting executable software based on the integrity or reliability of the source
G06F 21/53 - Monitoring users, programs or devices to maintain the integrity of platforms, e.g. of processors, firmware or operating systems, at the stage of program execution, e.g. stack integrity, buffer overflow or preventing unwanted data erasure, by executing in a restricted environment, e.g. sandbox or secure virtual machine
The present disclosure relates to systems, methods, and computer-readable media for identifying a variety of events that occur within a gaming session and generating event reports based on the identified events. For example, a gaming service (e.g., a cloud gaming server) can leverage content analysis and event recognizer services on a cloud computing system to detect one or more in-game events based on gaming content (e.g., video content, audio content, controller inputs) that is delivered to a client system. Systems described herein can train and implement event recognizers trained to track various in-game events across multiple gaming applications. Based on the tracked events, the systems described herein can generate event reports for events, individual users, and groups of users of the cloud computing system.
A63F 13/77 - Game security or game management aspects involving data related to game devices or game servers, e.g. configuration data, software version or amount of memory
A63F 13/75 - Enforcing rules, e.g. detecting foul play or generating lists of cheating players
A foldable computing device comprises a first frame rotatably coupled to a second frame. The second frame comprises a push-to-open mechanism comprising an actuator and a power switch located for actuation by the actuator. A detection mechanism detects a displaced position of the actuator that corresponds to releasing the foldable computing device from a closed configuration. Actuation of the power switch is detected and used with detection of the displaced position of the actuator to control an operating state of the computing device.
Embodiments of the present disclosure include techniques for machine learning processing. In one embodiment, the present disclosure includes configuring functional modules on a machine learning processor to execute a plurality of machine learning (ML) operations during a plurality of time segments. During the time segments, a first portion of the ML operations execute serially and at least one other ML operation executes during at least a majority of the time of each of the time segments. Serial ML operations may be processed simultaneously with the at least one other ML operation.
Indirect time-of-flight camera systems for operating in multiple optical channels using active modulated light and accompanying methods of operation are provided. In one aspect, the indirect time-of-flight camera system includes first and second modulatable laser sources outputting light of different wavelengths for illuminating a target environment. The camera system further includes a wavelength-selective reflective element designed to reflect the light of a first wavelength and to transmit the light of a second wavelength. The camera system further includes a controller comprising instructions executable to control the camera system to, in a first time period, activate the first modulatable laser source and deactivate the second modulatable laser source, and in a second time period, deactivate the first modulatable laser source and activate the second modulatable laser source. The camera system further includes a photosensor for receiving the light outputted by the first and second modulatable laser sources.
Methods and systems cause display of email messages of a user on a screen of a computing system based on scores associated with the email messages. An email ranking system may have assigned the scores to the email messages. The scores are based on actions that other recipients of the email messages have taken with respect to the email messages. In calculating the scores, the actions of the other recipients may receive different weights based on how closely connected a recipient is to the user and a type of connection the recipient has to the user. A network graph may indicate how closely connected the recipient is to the user and the type of connection the recipient has to the user.
H04L 51/212 - Monitoring or handling of messages using filtering or selective blocking
H04L 51/216 - Handling conversation history, e.g. grouping of messages in sessions or threads
H04L 51/224 - Monitoring or handling of messages providing notification on incoming messages, e.g. push notifications of received messages
H04L 51/42 - Mailbox-related aspects, e.g. synchronisation of mailboxes
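The weighted scoring described above can be sketched as follows; the connection types, weights, and action values are hypothetical placeholders for what a network graph would supply:

```python
# Hypothetical connection weights: closer connections to the user count more.
CONNECTION_WEIGHTS = {"direct": 1.0, "second-degree": 0.5, "none": 0.1}

def score_email(actions):
    """Score an email from (action_value, connection_type) pairs describing
    what other recipients did and how they are connected to the user."""
    return sum(value * CONNECTION_WEIGHTS[conn] for value, conn in actions)

def rank_emails(emails):
    """Return email ids ordered by descending score for display."""
    scores = {eid: score_email(actions) for eid, actions in emails.items()}
    return sorted(scores, key=scores.get, reverse=True)

emails = {
    "e1": [(1.0, "direct"), (1.0, "none")],
    "e2": [(1.0, "second-degree"), (1.0, "second-degree")],
    "e3": [(2.0, "direct")],
}
ranked = rank_emails(emails)
```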
Examples are disclosed that relate to applying haptic output to a touch-sensitive input device. One example provides a touch-sensitive input device comprising a body, a haptic feedback mechanism within the body, a sensor subsystem, a logic processor, and a memory. The memory stores instructions executable by the processor to receive from the sensor subsystem sensor data indicating locations along the body of a plurality of contact points between a user hand and the body, based at least in part on the sensor data, determine a touch profile of the user hand applied to the body, based at least in part on the touch profile of the user hand, determine a selected haptic output to be applied to the body, and cause a drive signal to be transmitted to the haptic feedback mechanism to apply the selected haptic output to the body.
G06F 3/038 - Control and interface arrangements therefor, e.g. drivers or control circuitry incorporated in the device
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
G06F 3/0354 - Pointing devices displaced or positioned by the user; Accessories therefor, with detection of relative movements in two dimensions [2D] between the pointing device, or an operating part thereof, and a plane or surface, e.g. 2D mice, trackballs, pens or pucks
89.
TRANSFORMER-BASED TEXT ENCODER FOR PASSAGE RETRIEVAL
A computing system includes a logic subsystem and a storage subsystem holding instructions executable by the logic subsystem to implement a transformer-based text encoder. The transformer-based text encoder includes a plurality of transformer blocks previously-trained to apply encoding operations to computer-readable text representations of input text strings, the computer-readable text representations including computer-readable question representations of input text questions, and computer-readable passage representations of input text passages. The plurality of transformer blocks include a shared transformer block trained for both the computer-readable question representations and the computer-readable passage representations and a specialized transformer block including two or more input-specific subnetworks, and a routing function to select an input-specific subnetwork of the two or more input-specific subnetworks for each of the computer-readable text representations.
A computer-implemented method includes receiving first firmware information at a hosting environment identifying that a user has selected user-controlled firmware for user virtual machines to be hosted on the hosting environment. A copy of the user-controlled firmware is obtained and a user virtual machine is deployed that includes the user-controlled firmware. The user-controlled firmware is locked against changes by the hosting environment absent receiving permission from the user.
G06F 9/455 - Arrangements for executing specific programs Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
A system provisioning resources of a processing unit. The system predicts a performance impact on a workload attributable to a performance constraint of the processing unit for the workload according to a resource model, wherein the workload includes a query and the resource model characterizes attainable compute bandwidth, attainable memory bandwidth, and arithmetic intensity based on peak compute bandwidth and peak memory bandwidth of the processing unit. The system determines a resource allocation of the processing unit, based on the predicted performance impact and instructs the processing unit to allocate the resources for processing the workload based on the determined resource allocation.
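The resource model described — attainable compute bandwidth bounded by peak compute and by peak memory bandwidth times arithmetic intensity — has the familiar roofline form. A minimal sketch with hypothetical hardware numbers:

```python
def attainable_bandwidth(peak_compute, peak_mem_bw, arithmetic_intensity):
    """Roofline-style bound: attainable compute (FLOP/s) is the lesser of the
    peak compute rate and what the memory system can feed at the given
    arithmetic intensity (FLOPs per byte)."""
    return min(peak_compute, peak_mem_bw * arithmetic_intensity)

def predict_performance_impact(peak_compute, peak_mem_bw, arithmetic_intensity):
    """Fractional slowdown attributable to the performance constraint,
    relative to the processing unit's peak compute rate."""
    attainable = attainable_bandwidth(peak_compute, peak_mem_bw, arithmetic_intensity)
    return 1.0 - attainable / peak_compute

# Hypothetical processing unit: 100 TFLOP/s peak compute, 2 TB/s memory bandwidth.
# A query workload at 10 FLOPs/byte is memory-bound on this unit.
impact = predict_performance_impact(100e12, 2e12, 10.0)
```

A provisioning system could then size the resource allocation for the workload from this predicted impact rather than from peak ratings alone.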
Examples are disclosed that relate to generating expressive avatars using multi-modal three-dimensional face modeling and tracking. One example includes a computer system comprising a processor coupled to a storage system that stores instructions. Upon execution by the processor, the instructions cause the processor to receive initialization data describing an initial state of a facial model. The instructions further cause the processor to receive a plurality of multi-modal data signals. The instructions further cause the processor to perform a fitting process using the initialization data and the plurality of multi-modal data signals. The instructions further cause the processor to determine a set of parameters based on the fitting process, wherein the determined set of parameters describes an updated state of the facial model.
G06T 19/20 - Manipulating 3D [three-dimensional] models or images for computer graphics Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
93.
BLINKLESS AND MARKERLESS BI-PHASE DISPLAY CALIBRATION
Techniques for separating an image into a forward sweeping image and a backward sweeping image are disclosed. A lookup table maps MEMS projection positions on a display with corresponding pixel positions in an image generated by a camera facing the display. The lookup table is used to associate a first set of pixel positions in the image with a forward scanning sweep of the MEMS system. The lookup table is also used to associate a second set of pixel positions in the image with a backward scanning sweep of the MEMS system. The first and second sets of pixel positions are used to generate the forward sweeping image and the backward sweeping image, respectively. These images can then be used to calibrate the MEMS system to compensate for bi-phase.
G02B 26/08 - Optical devices or arrangements for controlling light using movable or deformable optical elements for controlling the direction of light
The present disclosure relates to systems, methods, and computer-readable media for extending functionality of unstructured data storage function (UDSF) nodes in supporting features and functionality of services and applications that are accessible via a core network. The systems described herein include a UDSF node having a UDSF data management system that enables network functions to interact with and modify data resources separately maintained by the UDSF node(s). A network function may selectively target discrete sets of blocks of data on records to access without accessing entire records and without issuing redundant application programming interface (API) calls to the UDSF node(s). The UDSF node may be implemented in a core network to enhance network functions in fifth generation (5G) and beyond communication environments.
A system provisioning resources of a processing unit. The system predicts a performance impact on a workload attributable to a performance constraint of the processing unit for the workload according to a resource model, wherein the workload includes a query and the resource model characterizes attainable compute bandwidth, attainable memory bandwidth, and arithmetic intensity based on peak compute bandwidth and peak memory bandwidth of the processing unit. The system determines a resource allocation of the processing unit, based on the predicted performance impact and instructs the processing unit to allocate the resources for processing the workload based on the determined resource allocation.
Transparently providing a virtualization feature to an unenlightened guest operating system (OS). A guest partition, corresponding to a virtual machine, is divided into a first guest privilege context and a second guest privilege context. A compatibility component executes within the first guest privilege context, while a guest OS executes within the second guest privilege context. The compatibility component is configured to intercept input/output (I/O) operations associated with the guest OS. Based on the compatibility component intercepting an I/O operation associated with the guest OS, the compatibility component processes the I/O operation using a virtualization feature that is unsupported by the guest OS. Examples of the virtualization feature include accelerated access to a hardware device and virtual machine guest confidentiality.
G06F 9/455 - Arrangements for executing specific programs Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
A systematic mechanism for visualizing functions or capabilities that an application has. One or more user experience objects are generated corresponding to an application. An application definition is obtained for that application, and then multiple user experience templates are identified based on that application definition. Information from the application definition is then used to populate at least one of the user experience templates to generate at least one user experience object. The user may then review visualizations of the user experience objects to determine the general capabilities of the application, and thereby determine whether to install or open the application, and how best to use the application.
A source code patch generation system uses the context of a buggy source code snippet of a source code program and a hint to predict a source code segment that repairs the buggy source code snippet. The hint is a source code segment that is semantically-similar to the buggy source code snippet where the similarity is based on a context of the buggy source code snippet. An autoregressive deep learning model uses the context of the buggy source code snippet and the hint to predict the most likely source code segment to repair the buggy source code snippet.
Authentication request notifications are selectively suppressed, reducing notification fatigue and susceptibility to social engineering attacks. Authentication request notifications may be suppressed by not presenting a push notification on the user's phone. The authentication request may still be accessed and approved by manually opening the authenticator app. Notifications may be suppressed based on an estimation that the person attempting to login is not who they say they are. This estimation may be based on applying heuristics and/or machine learning models to the context of the login attempt, such as the IP address that originated the login request, time of day, recent user actions, patterns of previous logins, etc. One heuristic determines that the user has repeatedly ignored notifications caused by a particular IP address. Machine learning models generate a risk score from the login context, and notifications may be suppressed if the risk score exceeds a threshold.
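The heuristic-plus-threshold suppression described above can be sketched as follows; the specific heuristics, weights, and threshold values are hypothetical stand-ins for the heuristics and machine learning models the disclosure contemplates:

```python
def risk_score(login_context, ignored_ip_counts, ignore_threshold=3):
    """Hypothetical risk score for a login attempt, combining simple
    heuristics over the login context."""
    score = 0.0
    # Heuristic: the user has repeatedly ignored notifications from this IP.
    if ignored_ip_counts.get(login_context["ip"], 0) >= ignore_threshold:
        score += 0.6
    # Heuristic: the attempt arrives at an unusual hour for this user.
    if login_context["hour"] not in login_context["usual_hours"]:
        score += 0.3
    return score

def should_suppress(login_context, ignored_ip_counts, threshold=0.5):
    """Suppress the push notification when the risk score exceeds a threshold.
    The request remains approvable by manually opening the authenticator app."""
    return risk_score(login_context, ignored_ip_counts) > threshold

ctx = {"ip": "203.0.113.7", "hour": 3, "usual_hours": {9, 10, 17}}
suppress = should_suppress(ctx, {"203.0.113.7": 5})
```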
Systems and methods are used for facilitating identity anonymization by using controlled masking and encryption of user identifiers, such as UUIDs. A system that manages a UUID converts the UUID into a set of one or more different unique versions of the UUID for one or more corresponding different partner system(s) by removing and replacing masked portions of the UUID and by selectively encrypting the non-masked portions of the UUID. New masked portions added to the new version(s) of the UUID identify different corresponding partner(s) and/or rules to be applied by the different partner(s) when handling the different unique version(s) of the UUID(s). Partner systems that receive the new versions of the UUID identify and utilize the new masked portions to deterministically control decrypting and/or other processing of the new version of the UUID.