Disclosed is a system for positioning fiber and electronics cables within a server room, which includes a wire-pulley system comprising a wire operably coupled to a first pulley wheel and a second pulley wheel. The wire is looped around the first and second pulley wheels such that a point in the wire is laterally movable between the first and second pulley wheels when the wheels are rotated. The system for positioning cables includes a cable carrier which is removably coupled to the point in the wire. The cable carrier includes a first panel, and a second panel hingedly coupled to the first panel. The first panel and second panel each include a plurality of receiving slots, where the receiving slots are configured to removably receive distal ends of various cables.
H02G 1/04 - Methods or apparatus specially adapted for installing, maintaining, repairing or dismantling electric cables or lines for overhead lines or cables, for mounting or tensioning them
2.
LAYER-2 NETWORKING USING ACCESS CONTROL LISTS IN A VIRTUALIZED CLOUD ENVIRONMENT
Techniques are described for communications in an L2 virtual network. In an example, the L2 virtual network includes a plurality of L2 compute instances hosted on a set of host machines and a plurality of L2 virtual network interfaces and L2 virtual switches hosted on a set of network virtualization devices. An L2 virtual network interface emulates an L2 port of the L2 virtual network. Access control list (ACL) information applicable to the L2 port is sent to a network virtualization device that hosts the L2 virtual network interface.
H04L 45/586 - Association of routers of virtual routers
G06F 9/455 - Arrangements for executing specific programs Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
G06F 9/50 - Allocation of resources, e.g. of the central processing unit [CPU]
H04L 47/12 - Avoiding congestion; Recovering from congestion
H04L 47/2483 - Traffic characterised by specific attributes, e.g. priority or QoS, involving identification of individual flows
H04L 49/00 - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION; Packet switching elements
H04L 61/103 - Mapping addresses of different types across network layers, e.g. resolution of network layer into physical layer addresses or address resolution protocol [ARP]
H04L 61/2517 - Translation of Internet protocol [IP] addresses using port numbers
H04L 69/324 - Intra-layer communication protocols among peer entities or protocol data unit [PDU] definitions in the data link layer [OSI layer 2], e.g. HDLC
3.
METHODS, SYSTEMS, AND COMPUTER READABLE MEDIA FOR ADJUSTING AND USING PRIORITIES OF SERVICE/NOTIFICATION REQUEST MESSAGES AT NETWORK FUNCTIONS WITH MULTIPLE SLICE SUPPORT
A method for adjusting priorities of messages at a network function (NF) with multiple network slice support includes, at a first NF that supports multiple network slices, storing a database of rules specifying network-slice-based priority adjustment parameters. The method further includes receiving a message from a second NF. The method further includes determining that a congestion or overload condition exists, and, in response, determining network slice information associated with the message, determining, using the network slice information and the database of network-slice-based priority adjustment parameters, a network-slice-adjusted priority value for the message, and discarding or processing the message based on the network-slice-adjusted priority value for the message.
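The slice-based adjustment described above can be sketched in Python; the rule table, message fields, and discard threshold below are illustrative assumptions, not details taken from the disclosure.

```python
# Hypothetical sketch of network-slice-based priority adjustment during overload.
# Rule deltas and the threshold are illustrative.
RULES = {
    "slice-emergency": -5,   # negative delta = raise priority
    "slice-iot":       +10,  # deprioritize bulk IoT traffic
}
DISCARD_THRESHOLD = 25       # adjusted values above this are discarded

def adjust_priority(message, overloaded):
    """Return (action, adjusted_priority) for a message dict."""
    base = message["priority"]           # lower value = higher priority
    if not overloaded:
        return ("process", base)
    delta = RULES.get(message.get("slice"), 0)
    adjusted = max(0, base + delta)
    action = "discard" if adjusted > DISCARD_THRESHOLD else "process"
    return (action, adjusted)
```

For example, under overload a message with base priority 20 on `slice-iot` would be adjusted to 30 and discarded, while the same message on `slice-emergency` would be adjusted to 15 and processed.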
A computer analyzes a relational schema of a database to generate a data entry schema and encodes the data entry schema as JSON. The data entry schema is sent to a database client so that the client can validate entered data before the entered data is sent for storage. From the client, entered data is received that conforms to the data entry schema because the client used the data entry schema to validate the entered data before sending the data. Into the database, the entered data is stored that conforms to the data entry schema. The data entry schema and the relational schema have corresponding constraints on a datum to be stored, such as a range limit for a database column or an express set of distinct valid values. A constraint may specify a format mask or regular expression that values in the column should conform to, or a correlation between values of multiple columns.
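Client-side validation against a JSON-encoded data entry schema, as described above, might look like the following sketch. The per-column schema layout (`range`, `values`, `pattern` keys) is an assumed encoding; the disclosure does not fix a concrete format.

```python
# Illustrative client-side validation of entered data against a JSON data
# entry schema derived from a relational schema. Schema layout is assumed.
import json
import re

schema_json = json.dumps({
    "salary": {"range": [0, 500000]},                 # range limit
    "status": {"values": ["active", "inactive"]},     # distinct valid values
    "zip":    {"pattern": r"\d{5}"},                  # format mask / regex
})

def validate(row, schema_json):
    """Return a list of (column, error) pairs; empty means the row conforms."""
    schema, errors = json.loads(schema_json), []
    for col, rule in schema.items():
        v = row.get(col)
        if "range" in rule and not (rule["range"][0] <= v <= rule["range"][1]):
            errors.append((col, "out of range"))
        if "values" in rule and v not in rule["values"]:
            errors.append((col, "not a valid value"))
        if "pattern" in rule and not re.fullmatch(rule["pattern"], str(v)):
            errors.append((col, "bad format"))
    return errors
```

A conforming row yields an empty error list and may be sent for storage; a non-conforming row is rejected at the client before any round trip to the database.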
Disclosed is a system for positioning cables within a server room, which includes a wire-pulley system comprising a wire operably coupled to a first pulley wheel and a second pulley wheel. The wire is looped around the first and second pulley wheels such that a point in the wire is laterally movable between the first and second pulley wheels when the wheels are rotated. The system for positioning cables includes a cable carrier which is removably coupled to the point in the wire. The cable carrier includes a central body defining an elongated vertical structure. The cable carrier includes a plurality of posts extending laterally from the central body, where gaps defined between adjacent posts define receiving slots, each of which is configured to removably receive a segment of an electronics cable.
H02G 3/00 - Installations of electric cables or lines or protective tubing therefor in or on buildings, equivalent structures or vehicles
H02G 1/00 - Methods or apparatus specially adapted for installing, maintaining, repairing or dismantling electric cables or lines
6.
DUAL PERSONALITY MEMORY FOR AUTONOMOUS MULTI-TENANT CLOUD ENVIRONMENT
A computing device is configured to allocate memory for exclusive use of an execution entity from both a shared memory area and a private memory area of the device. Specifically, the shared memory area is configured with a united memory pool (UMP) component. The UMP component is configured to provide portions of huge page-based memory to execution entities for exclusive use of the execution entities. Memory granules that are allocated to the UMP component are divided into smaller memory chunks (which are smaller than a huge page), each of which can be allocated for exclusive use of an execution entity. These memory chunks are mapped to virtual address spaces of the assigned execution entities. Because memory granules can be allocated to, and deallocated from, the UMP component at run-time, the amount of memory that is available for private data generated by execution entities is able to be dynamically adjusted.
Techniques for performing analytics using automatically generated labels for time series data and numerical lists are disclosed. In some embodiments, a system loads a set of one or more time series datasets. A respective time series dataset may include a set of data points based on varying values of a metric of one or more computing resources over a window of time. The system assigns labels to a subset of the data points in the time series datasets. The label assigned to a given data point may be descriptive of a pattern reflected by the data point relative to other data points in the time series. The system further identifies a pattern of automatically assigned labels that is indicative of an event affecting the one or more computing resources. Responsive to identifying the pattern of labels, the system may trigger a responsive action.
Systems, devices, and methods of the present invention involve discourse trees. In an example, a method involves generating a discourse tree. The method includes identifying, from the discourse tree, a central entity that is associated with a rhetorical relation of type elaboration and corresponds to a topic node that identifies a central entity of the text. The method includes determining a subset of elementary discourse units of the discourse tree that are associated with the central entity. The method includes forming generalized phrases from the subset of elementary discourse units. The method includes forming tuples from the generalized phrases, where a tuple is an ordered set of words in normal form. The method involves responsive to successfully converting an elementary discourse unit associated with an identified tuple into a logical representation, updating the ontology with an entity from the identified tuple.
The present embodiments relate to systems and methods for automatic sign in upon account signup. Particularly, the present embodiments can utilize a federated login approach for automatic sign in upon account signup for a cloud infrastructure. Specifically, the signup and sign in service (also known as SOUP) and an identity provider portal can be configured such that the nodes are aware of each other as Security Assertion Markup Language (SAML) partners. After new account registration, the signup service can redirect the user browser to a cloud infrastructure console to start a federated login flow, where a sign in service can issue a SAML authentication request and redirect it to the signup service. Responsive to validating the browser using a SAML authentication process, the browser can be automatically signed into the new account and allowed to access the account relating to the cloud infrastructure service.
A computer program product, system, and computer implemented method for scalable specification and self-governance for autonomous databases, cluster databases, and multi-tenant databases in cloud and on-prem environments. The approach disclosed herein enables management of consolidated databases using a template-based process that allows consolidated databases (CDBs) and pluggable databases (PDBs) to be reconfigured automatically. In some embodiments, the approach instantiates one or more monitoring modules and one or more CDB/PDB configuration managers. These elements can detect relevant changes in the conditions in which CDB instances and open PDBs operate and adjust their configurations in response. The configurations are specified in, and adjusted using, one or more corresponding templates, where each template comprises a set of rules that may have various interdependencies and that specify how to determine the value a particular configuration setting should take in order to automatically configure and reconfigure CDB instances and open PDBs.
A blockchain object stores multiple user blockchains, each blockchain comprising an ordered set of records in the blockchain object. The records of a user blockchain have the same blockchain key value. Users can create multiple blockchains by establishing respective blockchain key values for the blockchains. Like blocks in a blockchain, the records in a user blockchain are ordered by a sequence of numbers that is specific to the user blockchain; each user blockchain has its own sequence of numbers. Each record in a user blockchain holds a sequence number in a field of the blockchain object. An efficient mechanism maintains and assigns a sequence number to a record when appended to a user blockchain.
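The per-blockchain sequencing described above can be sketched as follows; the class and field names are illustrative, and real records would of course also carry cryptographic linkage, which this sketch omits.

```python
# Minimal sketch of a blockchain object that stores multiple user blockchains.
# Records sharing a blockchain key value form one user blockchain, and each
# user blockchain carries its own sequence of numbers.
class BlockchainObject:
    def __init__(self):
        self.records = []            # all user blockchains share one object
        self._next_seq = {}          # per-blockchain-key sequence counters

    def append(self, key, payload):
        """Append a record to the user blockchain identified by `key`."""
        seq = self._next_seq.get(key, 1)
        self._next_seq[key] = seq + 1
        record = {"key": key, "seq": seq, "payload": payload}
        self.records.append(record)
        return record

    def chain(self, key):
        """Records of one user blockchain, ordered by its own sequence."""
        return sorted((r for r in self.records if r["key"] == key),
                      key=lambda r: r["seq"])
```

Interleaved appends to different keys do not disturb each chain's ordering, since every chain numbers its records independently.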
According to certain implementations, a motherboard is provided that enables operation as either multiple independent single-processor systems or a single multiple-processor system. In response to a request to configure the motherboard as multiple independent single-processor systems, a control block is implemented for each processor attached to the motherboard, where the control blocks configure the processors to boot and operate independently of each other, and the processors utilize separate motherboard resources. In response to a request to configure the motherboard as a single multiple-processor system, a single control block is implemented for all processors attached to the motherboard, where the single control block configures all processors to boot and operate in a connected state, and the processors share all motherboard resources.
G06F 13/12 - Program control for peripheral devices using hardware independent of the central processor, e.g. channel or peripheral processor
Embodiments include systems and methods for generating a data throughput estimation model. A system may be monitored to measure both (a) data throughput and (b) computing statistics of one or more computing resources to generate an initial data set. The relationship between the data throughput and the computing statistics, in the initial data set, is used to generate a data throughput estimation model. The data throughput estimation model may be generated using a machine learning model, a neural network algorithm, a boosting decision tree algorithm, and/or a random forest decision tree algorithm. Additional measurements of the computing resource statistics may be applied to the data throughput estimation model to estimate data throughput.
Techniques are described for identifying root cause anomalies in time series. Information to be used for root cause analysis (RCA) is obtained from a graph neural network (GNN) and is used to construct a dependency graph having nodes corresponding to each time series and directed edges corresponding to dependencies between the time series. Nodes corresponding to time series that do not contain anomalies may be removed from this dependency graph, as well as edges connected to these nodes. This edge and node removal may result in the creation of one or more sub-graphs from the dependency graph. A root cause analysis algorithm may be run on these one or more sub-graphs to create a root cause graph for each sub-graph. These root cause graphs may then be used to identify root cause anomalies within the multiple time series, as well as sequences of anomalies within the multiple time series.
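The pruning step described above, removing non-anomalous nodes and their edges and splitting the remainder into sub-graphs, can be sketched as follows. The GNN-derived dependency graph is represented here as a plain edge list; all names are illustrative.

```python
# Sketch: drop non-anomalous time series (and attached edges) from the
# dependency graph, then split what remains into connected sub-graphs on
# which a root cause analysis algorithm could be run per sub-graph.
def prune_and_split(edges, nodes, anomalous):
    keep = {n for n in nodes if n in anomalous}
    kept_edges = [(u, v) for u, v in edges if u in keep and v in keep]
    # Undirected connectivity is enough to separate the sub-graphs.
    adj = {n: set() for n in keep}
    for u, v in kept_edges:
        adj[u].add(v); adj[v].add(u)
    subgraphs, seen = [], set()
    for n in keep:
        if n in seen:
            continue
        comp, stack = set(), [n]        # depth-first traversal
        while stack:
            x = stack.pop()
            if x not in comp:
                comp.add(x)
                stack.extend(adj[x] - comp)
        seen |= comp
        subgraphs.append(comp)
    return subgraphs
```

Removing a single non-anomalous node in the middle of a dependency chain is exactly what creates multiple sub-graphs from one dependency graph.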
A computer-implemented method includes receiving a text input string defining components of a select query in a first representation and parsing the text input string to identify a set of key-value pairs that define portions of a where clause. The method also includes identifying an operator for the first key-value pair, determining a data type of the first operand value, comparing the operator with a predefined set of eligible operators, and comparing the data type of the first operand value with the data type of the first property. The method further includes transmitting a rejection message without submitting a query command to the query processor of the database when, based on the comparisons, one or more of (i) the operator is not within the predefined set of eligible operators and (ii) the data type of the first operand value does not match the data type of the first property.
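The two eligibility checks above, operator membership and operand/property type agreement, can be sketched as follows. The `field:op:value` input format and the property type table are assumptions made for illustration.

```python
# Hedged sketch of rejecting a where-clause term before it reaches the query
# processor: the operator must be in a predefined eligible set, and the
# operand's type must match the property's declared type.
ELIGIBLE_OPS = {"eq", "lt", "gt", "contains"}
PROPERTY_TYPES = {"age": int, "name": str}   # illustrative schema

def check_clause(text):
    """Parse 'field:op:value' and return 'ok' or a rejection message."""
    field, op, raw = text.split(":", 2)
    value = int(raw) if raw.lstrip("-").isdigit() else raw
    if op not in ELIGIBLE_OPS:
        return f"rejected: operator {op!r} not eligible"
    expected = PROPERTY_TYPES[field]
    if not isinstance(value, expected):
        return f"rejected: {field} expects {expected.__name__}"
    return "ok"
```

Only clauses that pass both comparisons would be translated into a query command; everything else is answered with a rejection message and never reaches the database.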
Embodiments relate to improving the efficiency of data analytics performed on sets of entity data in which different entity properties have very different update frequencies. Time-based analytical queries track the entity states at each moment within a given time window. Analytical queries are executed over a massive number of entity states while using a reasonable memory footprint. The technique partitions the entity properties into partial historical snapshots of data and combines the partial snapshots on demand only as needed to execute analytical queries over business entities. A complete entity state having values for all entity properties is not required to execute most queries. Only partial snapshots including values referenced by the query need to be combined to satisfy the query. Using partial snapshots minimizes data replication, and the snapshots can be efficiently combined into entity states sufficient for query execution.
Techniques described herein relate to authorization between integrated cloud products. An example includes receiving, by a computing device and from a first resource, a first request for permission to access a certificate to verify a requestor's identity. The computing device can transmit a second request to a second resource to authorize permitting access to the certificate. The computing device can receive a response from the second resource comprising an authorization to permit access to the certificate. The computing device can grant permission to the first resource to access the certificate, wherein the first resource is configured to verify the requestor's identity based on accessing the certificate. The computing device can receive a third request from the first resource to generate an association object between the first resource and the certificate. The computing device can generate the association object, wherein the association object associates the first resource and the certificate.
G06F 21/33 - User authentication using certificates
18.
METHODS, SYSTEMS, AND COMPUTER READABLE MEDIA FOR AUTOMATICALLY TRIGGERING NETWORK SLICE SELECTION ASSISTANCE INFORMATION (NSSAI) AVAILABILITY INFORMATION UPDATES WITH NETWORK SLICE SELECTION FUNCTION (NSSF)
A method for automatically triggering network slice selection assistance information (NSSAI) availability information updates with a network slice selection function (NSSF) includes receiving, from an access and mobility management function (AMF), a service request message. The method further includes determining that the NSSAI availability information for the AMF is not present in an NSSAI availability information database maintained by the NSSF. The method further includes sending, to the AMF, a message for triggering the AMF to update its NSSAI availability information with the NSSF, receiving an NSSAI Availability PUT request including NSSAI availability information for the AMF, and updating the NSSAI availability information database to include the NSSAI availability information for the AMF.
H04L 41/12 - Discovery or management of network topologies
H04L 41/40 - Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks, using virtualisation of network functions or resources, e.g. SDN or NFV entities
19.
TECHNIQUES FOR ADAPTIVE INDEPENDENT COMPRESSION OF KEY AND NON-KEY PORTIONS OF DATABASE ROWS IN INDEX ORGANIZED TABLES (IOTS)
Techniques for adaptive, independent compression of key and non-key sections of rows in index-organized tables (IOTs) are provided. In one technique, an IOT is stored that comprises a plurality of rows, each of which comprises a key section and a non-key section. After storing the IOT, a compression technique is performed on the non-key section of each row in the plurality of rows to generate a plurality of compressed non-key sections. However, none of the key sections of the plurality of rows is compressed. In a related technique, instead of compressing the non-key section of each row, the key section of each row is compressed. In a related technique, both sections are compressed, but using different compression techniques. The compression techniques may be determined based on data access history of the different sections of the rows.
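The variant in which only the non-key section of each row is compressed can be sketched with a generic codec; `zlib` stands in here for whatever compression technique the access history would select, and the `(key, non_key)` row layout is an illustrative simplification.

```python
# Sketch: compress only the non-key section of each IOT row, leaving key
# sections uncompressed so they stay cheap to compare during index navigation.
import zlib

def compress_non_key(rows):
    """rows: list of (key_bytes, non_key_bytes); keys are kept as-is."""
    return [(k, zlib.compress(nk)) for k, nk in rows]

def decompress_row(row):
    """Recover the original (key, non_key) pair for one stored row."""
    k, nk = row
    return (k, zlib.decompress(nk))
```

The complementary variants (compressing only keys, or compressing both sections with different codecs) follow the same shape with the roles of the two tuple positions changed.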
A computer analyzes a relational schema of a database to generate a data entry schema encoded as JSON. The data entry schema is sent to a database client so that the client can validate entered data before the entered data is sent for storage. From the client, entered data is received that conforms to the data entry schema because the client used the data entry schema to validate the entered data before sending the data. Into the database, the entered data is stored that conforms to the data entry schema. The data entry schema and the relational schema have corresponding constraints on a datum to be stored, such as a range limit for a database column or an express set of distinct valid values. A constraint may specify a format mask or regular expression that values in the column should conform to, or a correlation between values of multiple columns.
Techniques are provided for determining an optimal configuration for an in-memory store based on both benefits and overhead that would result from having database elements populated in the in-memory store. The techniques include determining an overhead-adjusted benefit score for each element based, at least in part, on (a) a scan-benefit value, (b) a scan-overhead value, and (c) a DML-overhead value. Based on the plurality of overhead-adjusted benefit scores, the database determines an optimal configuration of the in-memory store, and then evicts in-memory copies of elements and/or loads in-memory copies of elements based on the optimal configuration.
G06F 16/22 - Indexing; Data structures therefor; Storage structures
G06F 16/27 - Replication, distribution or synchronisation of data between databases or within a distributed database system; Distributed database system architectures therefor
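The overhead-adjusted scoring and configuration choice described in the preceding abstract can be sketched as follows; the simple subtraction formula and the greedy budget-constrained selection are illustrative assumptions, since the disclosure only names the three inputs.

```python
# Illustrative overhead-adjusted benefit scores for in-memory store candidates,
# followed by a greedy selection under a memory budget. Weights are assumed.
def adjusted_benefit(scan_benefit, scan_overhead, dml_overhead):
    return scan_benefit - scan_overhead - dml_overhead

def choose_population(elements, budget):
    """elements: {name: (size, scan_benefit, scan_overhead, dml_overhead)}"""
    scored = sorted(elements.items(),
                    key=lambda kv: adjusted_benefit(*kv[1][1:]), reverse=True)
    chosen, used = [], 0
    for name, (size, *costs) in scored:
        # only keep elements whose benefit survives the overhead, within budget
        if adjusted_benefit(*costs) > 0 and used + size <= budget:
            chosen.append(name)
            used += size
    return chosen
```

Elements outside the chosen configuration would be evicted from the in-memory store, and newly chosen elements loaded into it.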
Techniques are disclosed herein for efficiently enforcing a “WITHOUT OVERLAP” range constraint by confirming primary-key integrity for a new or modified row (the “target row”) through checking just two neighboring index entries, using a new “two-sided halted range scan” of the primary key index on entities that have range-endpoint data as part of their primary key. Techniques are described for reducing search time and resources in situations where a query specifies an entity and a point within a non-overlapping range. Techniques are also described for optimized handling of queries that do not specify a primary key but have both a range condition and a filter on a non-key column.
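The two-neighbor check can be sketched over a sorted in-memory list standing in for the index; half-open `[start, end)` ranges and the list representation are assumptions of this sketch, not of the disclosure.

```python
# Sketch of a "two-sided halted range scan": with an entity's ranges kept
# sorted by start, a new range is overlap-checked against only its two
# neighbors instead of the whole index.
import bisect

def violates_without_overlap(sorted_ranges, new_start, new_end):
    """sorted_ranges: non-overlapping (start, end) tuples sorted by start."""
    i = bisect.bisect_left(sorted_ranges, (new_start, new_end))
    # left neighbor: its end must not extend past our start
    if i > 0 and sorted_ranges[i - 1][1] > new_start:
        return True
    # right neighbor: its start must not fall before our end
    if i < len(sorted_ranges) and sorted_ranges[i][0] < new_end:
        return True
    return False
```

Because the scan halts after inspecting one entry on each side of the insertion point, the cost of the constraint check is independent of how many ranges the entity already has.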
Techniques are provided for optimizing storage of database records in segments using sub-segments. A base segment is a container used for storing records that belong to a database object. A database management system receives a request to load, into the database object, a first set of records that are in a first state. In response to receiving the request, the system generates a new sub-segment, which is a container that is separate from the base segment. The system stores the first set of records, in their first state, within the sub-segment. The system then monitors one or more characteristics of the database system. In response to the one or more characteristics satisfying criteria, the system performs a migration of one or more records of the first set of records from the sub-segment to the base segment. During the migration, the system converts the one or more records from the first state to a second state and stores the one or more records, in their second state, in the base segment.
Techniques for cache invalidation across distributed microservices are disclosed, including: monitoring, by a resource manager, a resource that is available for obtaining by a set of one or more resource utilizers, wherein a resource utilizer in the set of one or more resource utilizers obtains a version of the resource; publishing, by the resource manager, a notification stream including notifications associated with the resource, wherein the resource utilizer subscribes to the notification stream including the notifications associated with the resource; detecting, by the resource manager, a modification of the resource; responsive to detecting the modification of the resource: publishing a notification to the notification stream that indicates the modification to the resource.
G06F 12/0817 - Cache coherence protocols using directory methods
G06F 12/0846 - Cache with multiple tag or data arrays being simultaneously accessible
G06F 12/0891 - Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches, using clearing, invalidating or resetting means
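The resource-manager/resource-utilizer flow in the cache invalidation abstract above can be reduced to a small in-process sketch; real microservices would use a network message stream, and all class and method names here are illustrative.

```python
# Minimal sketch: utilizers subscribe to a per-resource notification stream;
# when the manager detects a modification, it publishes a notification and
# each subscriber invalidates its cached version of the resource.
class ResourceManager:
    def __init__(self):
        self.versions = {}       # authoritative resource versions
        self.subscribers = {}    # resource -> list of subscribed utilizers

    def subscribe(self, resource, utilizer):
        self.subscribers.setdefault(resource, []).append(utilizer)

    def modify(self, resource, new_version):
        self.versions[resource] = new_version
        for u in self.subscribers.get(resource, []):   # publish notification
            u.invalidate(resource)

class ResourceUtilizer:
    def __init__(self, manager):
        self.manager, self.cache = manager, {}

    def get(self, resource):
        if resource not in self.cache:                 # fetch on cache miss
            self.cache[resource] = self.manager.versions[resource]
        return self.cache[resource]

    def invalidate(self, resource):
        self.cache.pop(resource, None)
```

After a modification, the next `get` on any subscriber refetches the current version instead of serving the stale cached copy.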
25.
SYSTEMS AND METHODS FOR COMPILE-TIME DEPENDENCY INJECTION AND LAZY SERVICE ACTIVATION FRAMEWORK
In accordance with an embodiment, described herein are systems and methods for providing a compile-time dependency injection and lazy service activation framework including generation of source code reflecting the dependencies, and which enables an application developer using the system to build microservice applications or cloud-native services. The framework includes the use of a service registry that provides lazy service activation and meta-information associated with one or more services, in terms of interfaces or APIs describing the functionality of each service and their dependencies on other services. An application's use of particular services can be intercepted and accommodated during code generation at compile-time, avoiding the need to use reflection. Extensibility features allow application developers to provide their own templates for code generation, or provide alternative service implementations for use with the application, other than a reference implementation provided by the framework.
A multiple-tier operation evaluates a query across storage tiers in columnar format. A database server receives from a client a query for reading values from a set of columns of a database table. The multiple-tier operation comprises accessing a first subset of rows for the set of columns in columnar format in a first tier to generate a first set of results and accessing a second subset of rows for the set of columns in columnar format in a second tier to generate a second subset of results. The multiple-tier operation further comprises aggregating the first set of results and the second set of results to form a query result set and returning the query result set to the client.
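The multiple-tier evaluation above can be sketched with two column-oriented dictionaries standing in for the storage tiers; the tier layout and predicate are illustrative assumptions.

```python
# Sketch of a multiple-tier query: scan the requested columns in each tier's
# columnar representation, then aggregate the partial results into one set.
def scan_tier(tier, columns, predicate):
    """tier: {column_name: list_of_values} in columnar format."""
    n = len(next(iter(tier.values())))
    rows = [{c: tier[c][i] for c in columns} for i in range(n)]
    return [r for r in rows if predicate(r)]

def evaluate(tiers, columns, predicate):
    """Aggregate per-tier result subsets into the query result set."""
    results = []
    for tier in tiers:          # first tier, second tier, ...
        results.extend(scan_tier(tier, columns, predicate))
    return results
```

The result set returned to the client is simply the union of the per-tier subsets, so adding a tier never changes the scan logic, only the aggregation input.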
A hardware-assisted Distributed Memory System may include software configurable shared memory regions in the local memory of each of multiple processor cores. Accesses to these shared memory regions may be made through a network of on-chip atomic transaction engine (ATE) instances, one per core, over a private interconnect matrix that connects them together. For example, each ATE instance may issue Remote Procedure Calls (RPCs), with or without responses, to an ATE instance associated with a remote processor core in order to perform operations that target memory locations controlled by the remote processor core. Each ATE instance may process RPCs (atomically) that are received from other ATE instances or that are generated locally. For some operation types, an ATE instance may execute the operations identified in the RPCs itself using dedicated hardware. For other operation types, the ATE instance may interrupt its local processor core to perform the operations.
Techniques discussed herein relate to provisioning one or more virtual cloud-computing edge devices at a physical cloud-computing edge device. A manifest may be generated/utilized to specify various attributes of the virtual cloud-computing edge devices to be executed at a physical cloud-computing edge device. A first set of resources corresponding to a first virtual cloud-computing edge device may be obtained from memory of a centralized cloud-environment and provisioned at the first virtual cloud-computing edge device. Similar operations may be performed with respect to a second virtual cloud-computing edge device. The techniques described herein split the physical edge device into multiple virtual device resources that can be utilized in combination or separately to extend the functionality and versatility of the physical edge device.
The present disclosure generally relates to systems and methods for operation research optimization. The systems and methods include receiving, at a data processing system, a payload including a request for optimizing a service and processing the payload using a meta learning classifier. The processing includes extracting a problem and use case characteristics from the payload, predicting at least one machine learning model capable of solving the problem having the use case characteristics, and executing the at least one machine learning model to solve the problem. The systems and methods also include outputting a solution to the problem for optimizing the service from the at least one machine learning model, and providing the solution to a computing device.
Techniques are provided for implementing an in-memory columnar data store that is configured to either grow or shrink in response to performance prediction data generated from database workload information. A system maintains allocations of volatile memory from a given memory area for a plurality of memory-consuming components in a database system. The system receives for each memory-consuming component, performance prediction data that contains performance predictions for a plurality of memory allocation sizes for the memory-consuming components. The system determines a target memory allocation for an in-memory columnar data store based on the performance predictions. The system determines an incrementally adjusted amount of memory for the in-memory columnar data store and causes the incrementally adjusted amount to be allocated to the in-memory columnar data store.
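The target-selection and incremental-adjustment steps above can be sketched as follows; the prediction table and the bounded step size are illustrative assumptions.

```python
# Illustrative selection of a target allocation for the in-memory columnar
# store from performance predictions, then a bounded incremental step toward
# that target rather than a one-shot reallocation.
def target_allocation(predictions):
    """predictions: {size_mb: predicted_performance}; higher is better."""
    return max(predictions, key=predictions.get)

def next_allocation(current, target, max_step):
    """Move toward `target` by at most `max_step` MB per adjustment cycle."""
    step = max(-max_step, min(max_step, target - current))
    return current + step
```

Growing and shrinking use the same rule: the clamp makes each adjustment incremental in either direction until the allocation reaches the predicted-best size.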
Disclosed is an approach to implement a multi-tenant DNS resolver for secure communications for a virtual cloud environment. The approach can perform split-horizon DNS forwarding via an intermediate customized DNS server.
Techniques are provided for optimizing workload performance by automatically discovering and implementing performance optimizations for in-memory units (IMUs). A system maintains a set of IMUs for processing database operations in a database. The system obtains a database workload information for the database system and filters the database workload information to identify database operations in the database workload information that may benefit from performance optimizations. The system analyzes the database operations to identify a set of performance optimizations and ranks the performance optimizations based on their potential benefit. The system selects a subset of the performance optimizations, based on their ranking, and generates new versions of IMUs that reflect the performance optimizations. The system performs verification tests on the new versions of IMUs and analyzes the tests to determine whether the new versions of IMUs yield expected performance benefits. The system then categorizes the new set of IMUs into a first set of IMUs to be retained and a second set of IMUs to be discarded. The system then makes the first set of IMUs available to the current workload and discards the second set of IMUs.
In accordance with an embodiment, described herein are systems and methods for providing a compile-time dependency injection and lazy service activation framework including generation of source code reflecting the dependencies, and which enables an application developer using the system to build microservice applications or cloud-native services. The framework includes the use of a service registry that provides lazy service activation and meta-information associated with one or more services, in terms of interfaces or APIs describing the functionality of each service and their dependencies on other services. An application's use of particular services can be intercepted and accommodated during code generation at compile-time, avoiding the need to use reflection.
In a computer embodiment, in a polyglot database management system (DBMS) that contains a guest language runtime environment, a database buffer is configured that the guest language runtime environment does not manage. In the polyglot DBMS, logic defined in a guest language is invoked to retrieve, into the database buffer, a value stored in a database in the polyglot DBMS. Compiling the logic entails semantically analyzing the logic to detect that usage of the retrieved value cannot occur after the retrieved value is overwritten in the database buffer. When such usage of the retrieved value is detected to be impossible, the logic is executed without retaining, after the retrieved value is overwritten in the database buffer, a copy of the retrieved value in a memory region that the guest language runtime environment manages.
In some aspects, techniques may include monitoring a primary load of a datacenter and a reserve load of the datacenter. The primary load and reserve load can be monitored by a computing device. The primary load of the datacenter can be configured to be powered by one or more primary generator blocks having a primary capacity, and the reserve load of the datacenter can be configured to be powered by one or more reserve generator blocks having a reserve capacity. Also, the techniques may include detecting that the primary load of the datacenter exceeds the primary capacity. In addition, the techniques may include connecting the reserve generator blocks to at least one of the primary generator blocks and the primary load using a computing device switch.
Each of multiple anomaly detectors infers an anomaly score for each of many tuples. For each tuple, a synthetic label is generated that indicates for each anomaly detector: the anomaly detector, the anomaly score inferred by the anomaly detector for the tuple and, for each of multiple contamination factors, the contamination factor and, based on the contamination factor, a binary class of the anomaly score. For each particular anomaly detector excluding a best anomaly detector, a similarity score is measured for each contamination factor. The similarity score indicates how similar, between the particular anomaly detector and the best anomaly detector, are the binary classes of labels with that contamination factor. For each contamination factor, a combined similarity score is calculated based on the similarity scores for the contamination factor. Based on a contamination factor that has the highest combined similarity score, an additional anomaly detector is detected as inaccurate.
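The contamination-factor screening described above can be sketched as follows; the function names, the rank-based binarization, and plain agreement as the similarity measure are illustrative choices, not taken from the disclosure:

```python
def binary_classes(scores, contamination):
    # Mark the round(contamination * n) highest scores as anomalous (1).
    k = max(1, round(contamination * len(scores)))
    cutoff = sorted(scores, reverse=True)[k - 1]
    return [1 if s >= cutoff else 0 for s in scores]

def agreement(a, b):
    # Fraction of tuples on which two binary labelings agree.
    return sum(1 for x, y in zip(a, b) if x == y) / len(a)

def screen_detectors(scores_by_detector, best, contaminations):
    """For each contamination factor, sum the similarity of every non-best
    detector to the best detector; return the factor with the highest
    combined similarity and the detector least similar at that factor."""
    best_scores = scores_by_detector[best]
    combined, per_detector = {}, {}
    for c in contaminations:
        best_cls = binary_classes(best_scores, c)
        sims = {name: agreement(binary_classes(s, c), best_cls)
                for name, s in scores_by_detector.items() if name != best}
        per_detector[c] = sims
        combined[c] = sum(sims.values())
    c_star = max(combined, key=combined.get)
    least_similar = min(per_detector[c_star], key=per_detector[c_star].get)
    return c_star, least_similar
```

A detector whose binary classes diverge from the best detector at the selected contamination factor is the candidate to flag as inaccurate.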
Herein is a universal anomaly threshold based on several labeled datasets and transformation of anomaly scores from one or more anomaly detectors. In an embodiment, a computer meta-learns from each anomaly detection algorithm and each labeled dataset as follows. A respective anomaly detector based on the anomaly detection algorithm is trained based on the dataset. The anomaly detector infers respective anomaly scores for tuples in the dataset. The following are ensured in the anomaly scores from the anomaly detector: i) regularity that an anomaly score of zero cannot indicate an anomaly and ii) normality that an inclusive range of zero to one contains the anomaly scores from the anomaly detector. A respective anomaly threshold is calculated for the anomaly scores from the anomaly detector. After all meta-learning, a universal anomaly threshold is calculated as an average of the anomaly thresholds. An anomaly is detected based on the universal anomaly threshold.
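The meta-learning loop above can be sketched as follows; the min-max normalization (which guarantees the regularity and normality properties) and the midpoint per-dataset threshold are one concrete choice, since the abstract does not fix a method:

```python
def normalize_scores(raw):
    """Map raw detector scores into [0, 1] so that 0 cannot indicate an
    anomaly (regularity) and all scores fall in [0, 1] (normality)."""
    lo = min(raw)
    shifted = [s - lo for s in raw]
    hi = max(shifted) or 1.0  # avoid division by zero on constant scores
    return [s / hi for s in shifted]

def dataset_threshold(scores, labels):
    # Midpoint between the highest normal score and the lowest
    # labeled-anomalous score for this dataset.
    highest_normal = max(s for s, y in zip(scores, labels) if y == 0)
    lowest_anomaly = min(s for s, y in zip(scores, labels) if y == 1)
    return (highest_normal + lowest_anomaly) / 2.0

def universal_threshold(per_dataset):
    """Average the per-dataset thresholds (the meta-learning step).
    per_dataset: list of (raw_scores, binary_labels) pairs."""
    thresholds = [dataset_threshold(normalize_scores(raw), labels)
                  for raw, labels in per_dataset]
    return sum(thresholds) / len(thresholds)
```

At inference time, any normalized score above the universal threshold is reported as an anomaly.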
Techniques are described herein for running multiple logical secure elements (LSEs) on the same physical secure element (SE) hardware. For example, embodiments may include running multiple logical Subscriber Identification Module (SIM) cards on the same physical SIM card or universal integrated circuit card (UICC). Additionally or alternatively, embodiments may include running other secure element applications and services on the same SE hardware. The techniques allow mobile device users to access multiple security services, which may originate from different security service providers (SSPs), in a secure manner using the same SE hardware, without requiring the integration of multiple physical slots on a mobile device or the physical exchange of different cards within the same slot.
Disclosed techniques relate to managing power within a power distribution system. Power consumption corresponding to devices (e.g., servers) that receive power from an upstream device (e.g., a bus bar) may be monitored (e.g., by a service) to determine when power consumption corresponding to those devices breaches (or approaches) a budget threshold corresponding to an amount of power allocated to the upstream device. If the budget threshold is breached, or is likely to be breached, the service may initiate operations to distribute power caps for the devices and to initiate a timer. Although distributed, the power caps may be ignored by the devices until they are instructed to enforce the power caps (e.g., upon expiration of the timer). This allows the power consumption of the devices to exceed the budgeted power associated with the upstream device at least until expiration of the timer while avoiding power outage events.
G06F 1/26 - Alimentation en énergie électrique, p.ex. régulation à cet effet
G06F 1/30 - Moyens pour agir en cas de panne ou d'interruption d'alimentation
G06F 1/3206 - Surveillance d’événements, de dispositifs ou de paramètres initiant un changement de mode d’alimentation
G06F 1/324 - Gestion de l’alimentation, c. à d. passage en mode d’économie d’énergie amorcé par événements Économie d’énergie caractérisée par l'action entreprise par réduction de la fréquence d’horloge
G06F 1/329 - Gestion de l’alimentation, c. à d. passage en mode d’économie d’énergie amorcé par événements Économie d’énergie caractérisée par l'action entreprise par planification de tâches
G06F 1/3296 - Gestion de l’alimentation, c. à d. passage en mode d’économie d’énergie amorcé par événements Économie d’énergie caractérisée par l'action entreprise par diminution de la tension d’alimentation ou de la tension de fonctionnement
G06F 1/28 - Surveillance, p.ex. détection des pannes d'alimentation par franchissement de seuils
G06F 1/3203 - Gestion de l’alimentation, c. à d. passage en mode d’économie d’énergie amorcé par événements
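The breach-then-grace-timer scheme described in the abstract above can be sketched as follows; the class name, the proportional cap distribution, and the numeric units are assumptions for illustration only:

```python
class PowerBudgetService:
    """Caps are distributed as soon as the budget is breached, but the
    devices ignore them until the grace timer expires."""

    def __init__(self, budget_watts, grace_seconds):
        self.budget = budget_watts
        self.grace = grace_seconds
        self.caps = {}        # device -> cap in watts, once distributed
        self.deadline = None  # enforcement time; None while unarmed

    def observe(self, readings, now):
        """readings: device -> measured watts at time `now`.
        Returns whether the caps are currently being enforced."""
        total = sum(readings.values())
        if total > self.budget and self.deadline is None:
            # Breach: distribute caps proportionally and arm the timer.
            self.caps = {dev: watts * self.budget / total
                         for dev, watts in readings.items()}
            self.deadline = now + self.grace
        return self.enforcing(now)

    def enforcing(self, now):
        # Distributed caps are ignored until the timer expires.
        return self.deadline is not None and now >= self.deadline
```

Between the breach and the deadline, devices may keep drawing above their caps, which matches the abstract's allowance for temporarily exceeding the budgeted power.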
40.
NETWORK DEVICE LEVEL OPTIMIZATIONS FOR LATENCY SENSITIVE RDMA TRAFFIC
Discussed herein is a framework that provisions for customized processing for different classes of traffic. A network device in a communication path between a source host machine and a destination host machine extracts a tag from a packet received by the network device. The packet originates at a source executing on the source host machine and whose destination is the destination host machine. The tag is set by the source and is indicative of a first traffic class to be associated with the packet, the first traffic class being selected by the source from a plurality of traffic classes. The network device determines, based on the tag, that the first traffic class corresponds to latency sensitive traffic and processes the packet using one or more settings configured at the network device for processing packets associated with the first traffic class.
H04L 47/28 - Commande de flux; Commande de la congestion par rapport à des considérations temporelles
H04L 47/2441 - Trafic caractérisé par des attributs spécifiques, p.ex. la priorité ou QoS en s'appuyant sur la classification des flux, p.ex. en utilisant des services intégrés [IntServ]
H04L 47/26 - Commande de flux; Commande de la congestion utilisant un retour explicite à la source, p.ex. paquets de signalisation de congestion
41.
FRAMEWORK FOR EFFECTIVE STRESS TESTING AND APPLICATION PARAMETER PREDICTION
Techniques disclosed herein can include receiving an instruction to perform a stress test on one or more cloud computing resources of a cloud computing system. Worker nodes of the cloud computing system can be provisioned by a resource manager to perform the stress test on the cloud computing resources. The resource manager can instruct the one or more worker nodes of the cloud computing system to perform the stress test. Data generated by the worker nodes during the stress test can be received by the resource manager and used to train a projection framework comprising a trained machine learning model. The projection framework can generate a resource projection, and the projection can be used to provision cloud computing resources to host a cloud service.
Disclosed techniques relate to managing power within a power distribution system. Power consumption corresponding to devices (e.g., servers) that receive power from an upstream device (e.g., a bus bar) may be monitored (e.g., by a service) to determine when power consumption corresponding to those devices breaches (or approaches) a budget threshold corresponding to an amount of power allocated to the upstream device. If the budget threshold is breached, or is likely to be breached, the service may initiate operations to distribute power caps for the devices and to initiate a timer. Although distributed, the power caps may be ignored by the devices until they are instructed to enforce the power caps (e.g., upon expiration of the timer). This allows the power consumption of the devices to exceed the budgeted power associated with the upstream device at least until expiration of the timer while avoiding power outage events.
G05B 19/042 - Commande à programme autre que la commande numérique, c.à d. dans des automatismes à séquence ou dans des automates à logique utilisant des processeurs numériques
43.
REMOTE DATA PLANES FOR VIRTUAL PRIVATE LABEL CLOUDS
Novel techniques are disclosed for accessing resources in both CSP-provided infrastructure in a region and a remote infrastructure through various control planes associated with a virtual private label cloud (vPLC). In some embodiments, the CSP-provided infrastructure in a region and a remote infrastructure are connected through a communication channel. In some embodiments, a control plane associated with the CSP-provided infrastructure in a region can provide access to both infrastructures (i.e., the CSP-provided infrastructure in a region and the remote infrastructure). In some embodiments, a control plane associated with the vPLC in the CSP-provided infrastructure in a region can provide access to both infrastructures. Yet, in other embodiments, a control plane associated with the vPLC but located within the remote infrastructure can provide access to both infrastructures.
Aspects of the present disclosure include implementing fabric availability and synchronization (FAS) agents within a fabric network. In one example, a first FAS agent executing on a first network device may receive, from a second network device, a command to modify the configuration of the second network device. The first FAS agent may update the configuration of the first network device based on the command, from a current configuration to a new configuration. The first FAS agent may increment a state identifier associated with the configuration of the first network device to a new state identifier associated with the new configuration. The first FAS agent may then transmit a control packet that includes the new state identifier. A second FAS agent executing on the second network device may receive the control packet and execute the command to update the configuration of the second network device to the new configuration.
H04L 41/082 - Réglages de configuration caractérisés par les conditions déclenchant un changement de paramètres la condition étant des mises à jour ou des mises à niveau des fonctionnalités réseau
H04L 41/0659 - Gestion des fautes, des événements, des alarmes ou des notifications en utilisant la reprise sur incident de réseau en isolant ou en reconfigurant les entités défectueuses
H04L 41/08 - Gestion de la configuration des réseaux ou des éléments de réseau
H04L 41/084 - Configuration en utilisant des informations préexistantes, p.ex. en utilisant des gabarits ou en copiant à partir d’autres éléments
H04L 41/0853 - Récupération de la configuration du réseau; Suivi de l’historique de configuration du réseau en recueillant activement des informations de configuration ou en sauvegardant les informations de configuration
45.
TECHNIQUES FOR RESOLVING SNAPSHOT KEY INTER-DEPENDENCY DURING FILE SYSTEM CROSS-REGION REPLICATION
Techniques are described for snapshot key inter-dependency resolution during cross-region replications. Dependency between a first type of replication-related information (e.g., crypto keys associated with a parent directory iNode or a file iNode) and a second type of replication-related information (e.g., files, file data/FMAPs, or symbolic links) during a cross-region replication may be resolved to enable non-blocking delta application in a target file system. In some embodiments, temporary dummy entries for the first type of information may be created in the B-tree of the target file system for the out-of-order download (e.g., the second type being downloaded before the first type) of these two types of information. In some embodiments, a consolidation process may be performed between the dummy entries and the later-arriving first type of information.
In an embodiment, a database management system (DBMS) hosted by a computer receives a request to execute a database statement and responsively generates an interpretable execution plan that represents the database statement. The DBMS decides whether execution of the database statement will or will not entail interpreting the interpretable execution plan and, if not, the interpretable execution plan is compiled into object code based on partial evaluation. In that case, the database statement is executed by executing the object code of the compiled plan, which provides acceleration. In an embodiment, partial evaluation and Turing-complete template metaprogramming (TMP) are based on using the interpretable execution plan as a compile-time constant that is an argument for a parameter of an evaluation template.
A computer sorts empirical validation scores of validated training scenarios of an anomaly detector. Each training scenario has a dataset to train an instance of the anomaly detector that is configured with values for hyperparameters. Each dataset has values for metafeatures. For each predefined ranking percentage, a subset of best training scenarios is selected that consists of the ranking percentage of validated training scenarios having the highest empirical validation scores. Linear optimizers train to infer a value for a hyperparameter. Many distinct unvalidated training scenarios are generated, each having metafeature values and hyperparameter values that include the value inferred for that hyperparameter by a linear optimizer. For each unvalidated training scenario, a validation score is inferred. A best linear optimizer is selected as having the highest combined inferred validation score. For a new dataset, the best linear optimizer infers a value of that hyperparameter.
Techniques for presenting a graphical user interface (GUI) for configuring a cloud service workstation are disclosed. The system presents a GUI that presents a plurality of possible workstation configurations and the costs associated with each respective workstation configuration, prior to creation of a workstation. The GUI updates the cost associated with a workstation configuration responsive to receiving a selection to modify the workstation configuration from a user. The user may request a different configuration based on a single user input, without specifying which resources to modify. The GUI may recommend a workstation configuration based on one or more user inputs such as a budget, an application service domain, a duration, or a processing power requirement.
Techniques for implementing an orchestration service for data replication are provided. In one technique, a recipe is stored that comprises (1) a set of configuration parameters and (2) executable logic, for a data replication operation, that comprises multiple sub-steps. Each sub-step corresponds to one or more configuration parameters in the set of configuration parameters, which includes a first parameter that is associated with a default value and a second parameter that is not so associated. User input that specifies a value for the second parameter is received. The set of configuration parameters is updated to associate the value with the second parameter. The data replication operation is then initiated by processing the executable logic, which processing comprises, for each sub-step of one or more sub-steps, making an API call to a data replication service. In response to each API call, a response is received from the data replication service.
G06F 16/00 - Recherche d’informations; Structures de bases de données à cet effet; Structures de systèmes de fichiers à cet effet
G06F 3/0482 - Interaction avec des listes d’éléments sélectionnables, p.ex. des menus
G06F 16/21 - Conception, administration ou maintenance des bases de données
G06F 16/27 - Réplication, distribution ou synchronisation de données entre bases de données ou dans un système de bases de données distribuées; Architectures de systèmes de bases de données distribuées à cet effet
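The recipe flow in the orchestration abstract above can be sketched as follows; the recipe shape (`name`/`params` keys), the `run_recipe` helper, and `call_api` are illustrative stand-ins for the data replication service API, not names from the disclosure:

```python
def run_recipe(user_params, defaults, substeps, call_api):
    """Merge default-valued configuration parameters with user-supplied
    values, then process each sub-step as one API call to the data
    replication service, collecting the responses."""
    config = {**defaults, **user_params}
    missing = [p for step in substeps
               for p in step["params"] if p not in config]
    if missing:
        # A parameter with no default and no user-supplied value.
        raise ValueError(f"unset parameters: {missing}")
    return [call_api(step["name"], {p: config[p] for p in step["params"]})
            for step in substeps]
```

Here the first parameter carries a default while the second must be supplied by user input, mirroring the two parameter kinds in the abstract.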
50.
LEARNING HYPER-PARAMETER SCALING MODELS FOR UNSUPERVISED ANOMALY DETECTION
A computer sorts empirical validation scores of validated training scenarios of an anomaly detector. Each training scenario has a dataset to train an instance of the anomaly detector that is configured with values for hyperparameters. Each dataset has values for metafeatures. For each predefined ranking percentage, a subset of best training scenarios is selected that consists of the ranking percentage of validated training scenarios having the highest empirical validation scores. Linear optimizers train to infer a value for a hyperparameter. Many distinct unvalidated training scenarios are generated, each having metafeature values and hyperparameter values that include the value inferred for that hyperparameter by a linear optimizer. For each unvalidated training scenario, a validation score is inferred. A best linear optimizer is selected as having the highest combined inferred validation score. For a new dataset, the best linear optimizer infers a value of that hyperparameter.
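The selection loop can be sketched as below with one metafeature and ordinary least squares as the linear optimizer; `score_fn` is a stand-in for the model that infers validation scores for unvalidated scenarios, and all names are illustrative:

```python
def fit_line(xs, ys):
    # Ordinary least squares for y = a*x + b over one metafeature.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    a = sxy / sxx
    return a, my - a * mx

def predict(model, x):
    a, b = model
    return a * x + b

def best_linear_optimizer(scenarios, percentages, score_fn):
    """scenarios: (metafeature, hyperparameter, validation_score) triples.
    For each ranking percentage, fit a line on the top-scoring subset,
    then keep the line whose inferred hyperparameter values score best
    in aggregate under score_fn."""
    ranked = sorted(scenarios, key=lambda s: -s[2])
    candidates = []
    for p in percentages:
        k = max(2, round(p * len(ranked)))
        top = ranked[:k]
        model = fit_line([s[0] for s in top], [s[1] for s in top])
        combined = sum(score_fn(x, predict(model, x))
                       for x, _, _ in scenarios)
        candidates.append((combined, model))
    return max(candidates, key=lambda c: c[0])[1]
```

For a new dataset, evaluating `predict` on its metafeature value yields the inferred hyperparameter value.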
Techniques are disclosed herein for objective function optimization in target based hyperparameter tuning. In one aspect, a computer-implemented method is provided that includes initializing a machine learning algorithm with a set of hyperparameter values and obtaining a hyperparameter objective function that comprises a domain score for each domain that is calculated based on a number of instances within an evaluation dataset that are correctly or incorrectly predicted by the machine learning algorithm during a given trial. For each trial of a hyperparameter tuning process: training the machine learning algorithm to generate a machine learning model, running the machine learning model in different domains using the set of hyperparameter values, evaluating the machine learning model for each domain, and once the machine learning model has reached convergence, outputting at least one machine learning model.
Techniques for presenting a graphical user interface (GUI) for configuring a cloud service workstation are disclosed. The system presents a GUI that presents a plurality of possible workstation configurations and the costs associated with each respective workstation configuration, prior to creation of a workstation. The GUI updates the cost associated with a workstation configuration responsive to receiving a selection to modify the workstation configuration from a user. The user may request a different configuration based on a single user input, without specifying which resources to modify. The GUI may recommend a workstation configuration based on one or more user inputs such as a budget, an application service domain, a duration, or a processing power requirement.
Novel techniques are disclosed for enabling customizable consoles of different virtual private label clouds (vPLCs). In some embodiments, one console server may execute multiple consoles for multiple vPLCs and the CSP. In other embodiments, one console server may be dedicated to a vPLC-specific console. In certain embodiments, console customization, including a customized set of console user interfaces (UIs), may be performed for each vPLC-specific console.
Novel techniques for creating service endpoints associated with different virtual private label clouds (vPLCs) for accessing a cloud service are disclosed. In certain embodiments, an endpoint management service (EMS) uses a novel architecture that enables the concurrent use of multiple vPLC-specific service endpoints with one endpoint per cloud service per vPLC to access the same cloud service running on multiple vPLC-specific resources. In some embodiments, each vPLC-specific service endpoint may be associated with a fully qualified domain name (FQDN) and an IP address.
G06F 9/455 - Dispositions pour exécuter des programmes spécifiques Émulation; Interprétation; Simulation de logiciel, p.ex. virtualisation ou émulation des moteurs d’exécution d’applications ou de systèmes d’exploitation
Novel techniques are disclosed for virtualizing a cloud infrastructure in a region provided by a cloud service provider (CSP) to allow a reseller of the CSP to provide reseller-offered cloud services using a securely isolated portion of the CSP-provided infrastructure in the region and have a direct business relationship with the reseller's customers. In certain embodiments, the CSP-provided infrastructure in a region is organized into one or more data centers. In certain embodiments, the securely isolated portion of the CSP-provided infrastructure comprises at least one compute resource or a memory resource.
G06F 9/455 - Dispositions pour exécuter des programmes spécifiques Émulation; Interprétation; Simulation de logiciel, p.ex. virtualisation ou émulation des moteurs d’exécution d’applications ou de systèmes d’exploitation
56.
SECURE BI-DIRECTIONAL NETWORK CONNECTIVITY SYSTEM BETWEEN PRIVATE NETWORKS
A secure private network connectivity system (SNCS) within a cloud service provider infrastructure (CSPI) is described that provides secure private network connectivity between external resources residing in a customer's on-premise environment and the customer's resources residing in the cloud. The SNCS provides secure private bi-directional network connectivity between external resources residing in a customer's external site representation and resources and services residing in the customer's VCN in the cloud, without a user (e.g., an administrator) of the enterprise having to explicitly configure the external resources, advertise routes, or set up site-to-site network connectivity. The SNCS provides a high-performance, scalable, and highly available site-to-site network connection for processing network traffic between a customer's on-premise environment and the CSPI by implementing a robust infrastructure of network elements and computing nodes that are used to provide the secure site-to-site network connectivity.
Novel techniques are disclosed for providing vPLC-specific metadata service including customized vPLC-specific metadata. In certain embodiments, each vPLC may generate a customized metadata using its corresponding vPLC-specific customization instructions. In some embodiments, a vPLC-specific metadata service may be performed using pre-generated customized vPLC-specific metadata, on-the-fly customized metadata, pre-generated CSP-format metadata, or combinations thereof.
Novel techniques are disclosed that enable the creation of a two-tier marketplace comprising a CSP marketplace and one or more marketplaces for virtual private label clouds (vPLCs). Each marketplace can be created and operated independently. In some embodiments, a publisher may publish a solution offering directly on a vPLC marketplace without involving the CSP marketplace. In other embodiments, a solution offering published on a marketplace may be automatically republished on another marketplace. Yet, in another embodiment, a customer subscribing to a vPLC marketplace can see a composite view of a directly published solution listing and a republished solution listing.
Novel techniques for resource usage monitoring, billing, and enforcement for virtual private label clouds (vPLCs) are disclosed. In some embodiments, resource usage for a vPLC associated with a reseller is monitored at both the reseller level and the customer-of-reseller level using resource IDs, and stored as usage information in two levels, associated with a tenancy ID for the reseller (at the reseller level) and tenancy IDs for customers of the reseller (at the customer-of-reseller level). In some embodiments, a two-level billing process generates invoices using two-level pricing information and sends the generated invoices either to resellers or directly to customers of resellers. In some embodiments, usage enforcement can be performed per vPLC or per customer tenancy of a reseller's customer.
Techniques are disclosed for facilitating connectivity to vPLCs created in a CSP-provided infrastructure in a region. Within the CSP-provided infrastructure in a region, when the destination of a packet is determined to be an endpoint associated with a particular vPLC, the packet is tagged with information related to the particular vPLC. The vPLC-related information for the particular vPLC can include, for example, a vPLC identifier identifying the particular vPLC, an identifier identifying a customer associated with the endpoint, a virtual cloud network identifier identifying a virtual cloud network (VCN) belonging to the particular vPLC and where the endpoint is part of the VCN, and other vPLC-related information. The packet is then routed or communicated within the CSP-provided infrastructure in a region along with the tagged vPLC-related information. The vPLC-related information is used as part of the connectivity and for routing of packets within the CSP-provided infrastructure in a region.
Techniques are disclosed for deploying a computing resource (e.g., a service) in response to user input. A computer-implemented method can include operations of receiving (e.g., by a gateway computer of a cloud-computing environment) a request comprising an identifier for a computing component of the cloud-computing environment. The computing device receiving the request may determine whether the identifier exists in a routing table that is accessible to the computing device. If so, the request may be forwarded to the computing component. If not, the device may transmit an error code (e.g., to the user device that initiated the request) indicating the computing component is unavailable and a bootstrap request to a deployment orchestrator that is configured to deploy the requested computing component. Once deployed, the computing component may be added to a routing table such that subsequent requests can be properly routed to and processed by the computing component.
H04L 67/1031 - Commande du fonctionnement des serveurs par un répartiteur de charge, p.ex. en ajoutant ou en supprimant de serveurs qui servent des requêtes
H04L 67/51 - Découverte ou gestion de ceux-ci, p.ex. protocole de localisation de service [SLP] ou services du Web
H04L 67/63 - Ordonnancement ou organisation du service des demandes d'application, p.ex. demandes de transmission de données d'application en utilisant l'analyse et l'optimisation des ressources réseau requises en acheminant une demande de service en fonction du contenu ou du contexte de la demande
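The route-or-bootstrap dispatch in the abstract above can be sketched as follows; the class and callback names are illustrative, and the synchronous deployment is a simplification of what the abstract describes as a bootstrap request handled by a separate deployment orchestrator:

```python
class Gateway:
    """Routes requests by component identifier; unknown components get a
    503-style error plus a bootstrap request to the orchestrator."""

    def __init__(self, orchestrator):
        self.routes = {}              # routing table: component id -> handler
        self.orchestrator = orchestrator

    def handle(self, component_id, request):
        handler = self.routes.get(component_id)
        if handler is not None:
            # Known component: forward the request.
            return handler(request)
        # Unknown component: ask the orchestrator to deploy it and add it
        # to the routing table so subsequent requests can be routed, while
        # the current caller receives an "unavailable" error code.
        self.routes[component_id] = self.orchestrator(component_id)
        return {"status": 503, "error": f"{component_id} unavailable"}
```

The first request for a component fails fast while triggering deployment; the second request for the same component is served from the updated routing table.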
Techniques are disclosed for tuning external invocations utilizing weight-based parameter resampling. In one example, a computer system determines a plurality of samples, each sample being associated with a parameter value of a plurality of potential parameter values of a particular parameter. The computer system assigns weights to each of the parameter values, and then selects a first sample for processing via a first external invocation based on a weight of the parameter value of the first sample. The computer system then determines feedback data associated with a level of performance of the first external invocation. The computer system adjusts the weights of the parameter values of the particular parameter based on the feedback data. The computer system then selects a second sample of the plurality of samples to be processed via execution of a second external invocation based on the adjustment of weights of the parameter values.
G06F 16/215 - Amélioration de la qualité des données; Nettoyage des données, p.ex. déduplication, suppression des entrées non valides ou correction des erreurs typographiques
G06F 16/21 - Conception, administration ou maintenance des bases de données
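The weight-based resampling loop in the abstract above can be sketched as follows; the multiplicative update rule and the `performance` scale (0 to 1, with 0.5 neutral) are assumptions for illustration:

```python
import random

def select_sample(weights, rng):
    # Pick a parameter value with probability proportional to its weight.
    values = list(weights)
    return rng.choices(values, weights=[weights[v] for v in values])[0]

def adjust_weight(weights, value, performance, rate=0.5):
    """Multiplicative update from invocation feedback: performance above
    0.5 rewards the tried value, below 0.5 penalizes it."""
    weights[value] *= 1.0 + rate * (performance - 0.5) * 2
```

After a few invocations, parameter values whose invocations performed well dominate the sampling distribution, so later samples concentrate on the better-performing region of the parameter space.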
Disclosed are techniques for processing user profiles using data structures that are specialized for processing by a GPU. More particularly, the disclosed techniques relate to systems and methods for evaluating characteristics of user profiles to determine whether to offload certain user profiles to the GPU for processing or to process the user profiles locally by one or more central processing units (CPUs). Processing user profiles may include comparing the interest tags included in the user profiles with logic trees, for example, logic trees representing marketing campaigns, to identify user profiles that match the campaigns.
Aspects of the present disclosure include implementing fabric availability and synchronization (FAS) agents within a fabric network. In one example, a first FAS agent executing on a first network device may receive, from a second network device, a command to modify the configuration of the second network device. The first FAS agent may update the configuration of the first network device based on the command, from a current configuration to a new configuration. The first FAS agent may increment a state identifier associated with the configuration of the first network device to a new state identifier associated with the new configuration. The first FAS agent may then transmit a control packet that includes the new state identifier. A second FAS agent executing on the second network device may receive the control packet and execute the command to update the configuration of the second network device to the new configuration.
H04L 41/082 - Réglages de configuration caractérisés par les conditions déclenchant un changement de paramètres la condition étant des mises à jour ou des mises à niveau des fonctionnalités réseau
H04L 41/0659 - Gestion des fautes, des événements, des alarmes ou des notifications en utilisant la reprise sur incident de réseau en isolant ou en reconfigurant les entités défectueuses
H04L 41/08 - Gestion de la configuration des réseaux ou des éléments de réseau
H04L 41/084 - Configuration en utilisant des informations préexistantes, p.ex. en utilisant des gabarits ou en copiant à partir d’autres éléments
H04L 41/0853 - Récupération de la configuration du réseau; Suivi de l’historique de configuration du réseau en recueillant activement des informations de configuration ou en sauvegardant les informations de configuration
65.
UNIFY95: META-LEARNING CONTAMINATION THRESHOLDS FROM UNIFIED ANOMALY SCORES
Herein is a universal anomaly threshold based on several labeled datasets and transformation of anomaly scores from one or more anomaly detectors. In an embodiment, a computer meta-learns from each anomaly detection algorithm and each labeled dataset as follows. A respective anomaly detector based on the anomaly detection algorithm is trained based on the dataset. The anomaly detector infers respective anomaly scores for tuples in the dataset. The following are ensured in the anomaly scores from the anomaly detector: i) regularity that an anomaly score of zero cannot indicate an anomaly and ii) normality that an inclusive range of zero to one contains the anomaly scores from the anomaly detector. A respective anomaly threshold is calculated for the anomaly scores from the anomaly detector. After all meta-learning, a universal anomaly threshold is calculated as an average of the anomaly thresholds. An anomaly is detected based on the universal anomaly threshold.
Data can be received that includes information corresponding to a set of users. Privacy protection protocols that apply to the data can be identified. A subset of the data can be identified as being personally identifiable information (PII) data, where the subset includes a set of PII attributes. The PII attributes can be split into categories based on a format of a data field in the PII attributes. The processed PII data can be combined with non-PII data to create processed client data. It can be determined to add noise to part of the processed PII data. An amount of noise can be determined based on the privacy protection protocols. The amount of noise can be added to part of the processed PII data to produce protected data. A machine-learning model can be trained using the protected data.
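One concrete way to realize "an amount of noise determined based on the privacy protection protocols" is the Laplace mechanism from differential privacy, with noise scale `sensitivity / epsilon`; this choice and the function name are assumptions, not taken from the disclosure:

```python
import math
import random

def add_laplace_noise(values, epsilon, sensitivity=1.0, rng=None):
    """Perturb numeric values derived from PII with Laplace noise of
    scale sensitivity/epsilon (smaller epsilon means stricter privacy
    and therefore larger noise)."""
    rng = rng or random.Random()
    scale = sensitivity / epsilon
    noisy = []
    for v in values:
        u = rng.random() - 0.5            # uniform on (-0.5, 0.5)
        sign = 1.0 if u >= 0 else -1.0
        # Inverse-CDF sampling of the Laplace distribution.
        noisy.append(v - scale * sign * math.log(1.0 - 2.0 * abs(u)))
    return noisy
```

The protected values can then be combined with the non-PII data before model training, as the abstract describes.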
Techniques for predicting marketing outcomes using contrastive learning are disclosed, including: obtaining historical marketing messages; obtaining historical open rates associated respectively with the historical marketing messages; based on the historical marketing messages, generating latent space representations associated respectively with the historical marketing messages; based on the latent space representations and respective contents of the historical marketing messages, training a first machine learning model to map contents of marketing messages to corresponding latent space representations of the marketing messages; based at least on the latent space representations and the historical open rates, training a second machine learning model to map latent space representations of marketing messages to predicted open rates of the marketing messages.
Systems and methods for automatic network health check are disclosed herein. A method for performing an automatic health check includes determining to perform a health check on a portion of a communications network, the communications network including a plurality of hosts that each include a routing agent and an advertising agent. The method includes adding to a database a test route indicated as applicable to every host and pointing to an IP address, and receiving the test route from the database with the routing agents of at least some of the plurality of hosts. The method includes providing the test route from the routing agent to the advertising agent, advertising the test route with the at least some of the plurality of hosts to a plurality of switches within the communications network, and determining success of the health check based on information received from the plurality of switches.
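The final success-determination step could look like the following sketch, where the success criterion (every advertising host must be reported by at least one switch) and the data shapes are assumptions for illustration:

```python
def health_check_success(advertising_hosts, switch_reports, test_route):
    """The check succeeds when every host that advertised the test route
    is reported by at least one switch as a source of that route."""
    seen = set()
    for report in switch_reports:          # one {host: set_of_routes} dict per switch
        for host, routes in report.items():
            if test_route in routes:
                seen.add(host)
    return set(advertising_hosts) <= seen
```

A missing host in the switch reports would indicate a break somewhere along the routing-agent-to-switch path.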
Systems and methods for performing an automatic route flip are disclosed herein. The method can include receiving a request to flip a primary route and a secondary route in a communications network including at least a first host and a second host, each including a routing agent and an advertising agent. The method includes identifying the first host as having a dynamic path length and the second host as having a static path length, updating routing information in a database accessible by the first host to change the path length of the first host from a first path length to a second path length, receiving the updated routing information from the database with the routing agent of the first host, and advertising the updated routing information with the first host to at least one switch within the communications network.
H04L 45/122 - Routing or path finding of packets in data switching networks; Shortest path evaluation by minimising distances, e.g. by selecting a route with minimum number of hops
H04L 45/00 - Routing or path finding of packets in data switching networks
70.
ACCESS CONTROL SYSTEMS AND METHODS FOR LOGICAL SECURE ELEMENTS RUNNING ON THE SAME SECURE HARDWARE
Techniques are described herein for applying access controls to logical secure elements (LSEs) running on the same secure element hardware platform. Embodiments include a firmware component that determines whether a message targeting an LSE is authorized to trigger an operation. For example, the firmware component may verify a signature of the received message using a public key, shared secret, or other access control key. Additionally or alternatively, access control policies may be defined to constrain the load of the LSEs on the SE platform hardware and/or to prioritize LSE access. For example, the access control policies may define usage thresholds, such as maximum memory and/or processor utilization rates. As another example, the access controls may restrict the active time for an LSE to a threshold duration. If access constraints are violated or the message cannot be verified, then the firmware component may delay or deny the operation.
Techniques for predicting marketing outcomes using contrastive learning are disclosed, including: obtaining historical marketing messages; obtaining historical open rates associated respectively with the historical marketing messages; based on the historical marketing messages, generating latent space representations associated respectively with the historical marketing messages; based on the latent space representations and respective contents of the historical marketing messages, training a first machine learning model to map contents of marketing messages to corresponding latent space representations of the marketing messages; based at least on the latent space representations and the historical open rates, training a second machine learning model to map latent space representations of marketing messages to predicted open rates of the marketing messages.
Data can be received that includes information corresponding to a set of users. Privacy protection protocols that apply to the data can be identified. A subset of the data can be identified as being personally identifiable information (PII) data, where the subset includes a set of PII attributes. The PII attributes can be split into categories based on a format of a data field in the PII attributes. The processed PII data can be combined with non-PII data to create processed client data. It can be determined to add noise to part of the processed PII data. An amount of noise can be determined based on the privacy protection protocols. The amount of noise can be added to part of the processed PII data to produce protected data. A machine-learning model can be trained using the protected data.
Novel techniques are disclosed for virtualizing a cloud infrastructure in a region provided by a cloud service provider (CSP) to allow a reseller of the CSP to provide reseller-offered cloud services using a securely isolated portion of the CSP-provided infrastructure in the region and have a direct business relationship with the reseller's customers. In certain embodiments, the CSP-provided infrastructure in a region is organized into one or more data centers. In certain embodiments, the securely isolated portion of the CSP-provided infrastructure comprises at least one compute resource or a memory resource.
Novel techniques of resource allocation services for virtual private label cloud (vPLC) are disclosed. A vPLC is created for a reseller of a Cloud Services Provider (CSP) using CSP-provided infrastructure in a region such that the reseller can provide one or more reseller-offered cloud services to customers of the reseller. In certain embodiments, the resource allocation services check a first-level policy and a resource database to determine whether a requested resource is allowed and available to be allocated to a vPLC associated with a reseller. The resource allocation services may further check a second-level policy and the resource database to determine whether the requested resource is allowed and available to be allocated to a customer of the reseller. In some embodiments, the resource allocation services may allocate resources for a vPLC according to a partitioning requirement.
Novel techniques for creating service endpoints associated with different virtual private label clouds (vPLCs) for accessing a cloud service are disclosed. In certain embodiments, an endpoint management service (EMS) uses a novel architecture that enables the concurrent use of multiple vPLC-specific service endpoints with one endpoint per cloud service per vPLC to access the same cloud service running on multiple vPLC-specific resources. In some embodiments, each vPLC-specific service endpoint may be associated with a fully qualified domain name (FQDN) and an IP address.
Novel techniques are disclosed for providing vPLC-specific metadata service including customized vPLC-specific metadata. In certain embodiments, each vPLC may generate a customized metadata using its corresponding vPLC-specific customization instructions. In some embodiments, a vPLC-specific metadata service may be performed using pre-generated customized vPLC-specific metadata, on-the-fly customized metadata, pre-generated CSP-format metadata, or combinations thereof.
Novel techniques are disclosed for accessing resources in both CSP-provided infrastructure in a region and a remote infrastructure through various control planes associated with a virtual private label cloud (vPLC). In some embodiments, the CSP-provided infrastructure in a region and a remote infrastructure are connected through a communication channel. In some embodiments, a control plane associated with the CSP-provided infrastructure in a region can provide access to both infrastructures (i.e., the CSP-provided infrastructure in a region and the remote infrastructure). In some embodiments, a control plane associated with the vPLC in the CSP-provided infrastructure in a region can provide access to both infrastructures. Yet, in other embodiments, a control plane associated with the vPLC but located within the remote infrastructure can provide access to both infrastructures.
G06F 9/50 - Allocation of resources, e.g. of the central processing unit [CPU]
H04L 47/70 - Admission control; Resource allocation
H04L 47/78 - Architectures of resource allocation
H04L 41/5041 - Network service management, e.g. ensuring proper service fulfilment according to agreements characterised by the time relationship between creation and deployment of a service
H04L 67/10 - Protocols in which an application is distributed across nodes in the network
Systems and methods for route mismatch identification are disclosed herein. A method of route mismatch identification can create in cache an expected routing table based on expected routing information received by a routing agent of a host from a database accessible by each of a plurality of hosts. The method can include creating in cache an actual routing table based on actual routing information received by the routing agent of the host from an advertising agent of the host, comparing the actual routing table and the expected routing table, and taking an action based on the comparison of the actual routing table and the expected routing table.
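The table comparison at the heart of this method can be sketched as a simple dictionary diff; representing each routing table as a mapping from route prefix to next hop is an assumption for illustration:

```python
def compare_routing_tables(expected, actual):
    """Compare the cached expected and actual routing tables and report
    mismatches; each table maps a route prefix to its next hop."""
    missing = {r: nh for r, nh in expected.items() if r not in actual}
    unexpected = {r: nh for r, nh in actual.items() if r not in expected}
    changed = {r: (expected[r], actual[r])
               for r in expected.keys() & actual.keys()
               if expected[r] != actual[r]}
    return {"missing": missing, "unexpected": unexpected, "changed": changed}
```

A non-empty diff would be the trigger for the corrective action mentioned in the abstract.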
In an embodiment, a database management system (DBMS) hosted by a computer receives a request to execute a database statement and responsively generates an interpretable execution plan that represents the database statement. The DBMS decides whether execution of the database statement will or will not entail interpreting the interpretable execution plan and, if not, the interpretable execution plan is compiled into object code based on partial evaluation. In that case, the database statement is executed by executing the object code of the compiled plan, which provides acceleration. In an embodiment, partial evaluation and Turing-complete template metaprogramming (TMP) are based on using the interpretable execution plan as a compile-time constant that is an argument for a parameter of an evaluation template.
Techniques are described herein for running multiple logical secure elements (LSEs) on the same physical secure element (SE) hardware. For example, embodiments may include running multiple logical Subscriber Identification Modules (SIM) cards on the same physical SIM card or universal integrated circuit card (UICC). Additionally or alternatively, embodiments may include running other secure element applications and services on the same SE hardware. The techniques allow for mobile devices users to access multiple security services, which may originate from different security service providers (SSPs), in a secure manner using the same SE hardware without requiring the integration of multiple physical slots on a mobile device or the physical exchange of different cards within the same slot.
G06F 21/34 - User authentication involving the use of external additional devices, e.g. dongles or smart cards
G06F 21/72 - Protecting specific internal or peripheral components, in which the protection of a component leads to protection of the entire computer to assure secure computing or processing of information in cryptographic circuits
G06F 21/74 - Protecting specific internal or peripheral components, in which the protection of a component leads to protection of the entire computer operating in dual or compartmented mode, i.e. at least one secure mode
G06F 21/78 - Protecting specific internal or peripheral components, in which the protection of a component leads to protection of the entire computer to assure secure storage of data
81.
EXPERT-OPTIMAL CORRELATION: CONTAMINATION FACTOR IDENTIFICATION FOR UNSUPERVISED ANOMALY DETECTION
In a computer, each of multiple anomaly detectors infers an anomaly score for each of many tuples. For each tuple, a synthetic label is generated that indicates for each anomaly detector: the anomaly detector, the anomaly score inferred by the anomaly detector for the tuple and, for each of multiple contamination factors, the contamination factor and, based on the contamination factor, a binary class of the anomaly score. For each particular anomaly detector excluding a best anomaly detector, a similarity score is measured for each contamination factor. The similarity score indicates how similar, between the particular anomaly detector and the best anomaly detector, are the binary classes of labels with that contamination factor. For each contamination factor, a combined similarity score is calculated based on the similarity scores for the contamination factor. Based on a contamination factor that has the highest combined similarity score, the computer detects that an additional anomaly detector is inaccurate.
Systems and techniques for budget-based management of a cloud infrastructure are disclosed. A system monitors a cloud infrastructure for one or more trigger-action conditions associated with the cloud infrastructure. When a trigger-action condition is detected, the system determines a cloud infrastructure modification action that corresponds to the detected trigger-action condition. The system may apply the cloud infrastructure modification action to the cloud infrastructure. A cloud infrastructure modification action may modify one or more of the workstation resources such that a rate of budget consumption is changed, for example, by pausing a resource, deleting a resource, resuming a paused resource, or changing from one resource to a different resource.
Novel techniques of resource allocation services for virtual private label cloud (vPLC) are disclosed. A vPLC is created for a reseller of a Cloud Services Provider (CSP) using CSP-provided infrastructure in a region such that the reseller can provide one or more reseller-offered cloud services to customers of the reseller. In certain embodiments, the resource allocation services check a first-level policy and a resource database to determine whether a requested resource is allowed and available to be allocated to a vPLC associated with a reseller. The resource allocation services may further check a second-level policy and the resource database to determine whether the requested resource is allowed and available to be allocated to a customer of the reseller. In some embodiments, the resource allocation services may allocate resources for a vPLC according to a partitioning requirement.
Novel techniques are disclosed for enabling identity cloud service for virtual private label clouds (vPLCs). A vPLC is created for a reseller of a Cloud Services Provider (CSP) using CSP-provided infrastructure in a region such that the reseller can provide one or more reseller-offered cloud services to customers of the reseller. In some embodiments, the identity management may be configured with either a shared identity cloud service (IDCS) stack model or an independent IDCS stack model. In certain embodiments, two-tier vPLC-aware identity management functions are performed for resellers of the CSP and customers of the resellers.
G06Q 20/40 - Authorisation, e.g. identification of payer or payee, verification of customer or shop credentials; Review and approval of payers, e.g. check of credit lines or negative lists
85.
LOAD-BASED MANAGEMENT FOR NVME OVER TCP CONNECTIONS
The disclosed systems, methods and computer readable media relate to managing Non-Volatile Memory Express (NVMe) over Transmission Control Protocol (TCP) (NVMeOTCP) connections between a smart network interface card (smartNIC) and a block storage data plane (BSDP) of a cloud computing environment. A software agent ("agent") executing at the smartNIC may manage a number of network paths (active and, in some cases, passive network paths). The agent may monitor the network traffic (e.g., input/output operations (IOPS)) through the paths (e.g., using established NVMeOTCP connections corresponding to the paths). If a condition is met relating to a performance threshold associated with the monitored paths, the agent may increase or decrease the number of established NVMeOTCP connections to match real-time network conditions.
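The agent's scaling decision can be sketched as a threshold check; the specific IOPS thresholds and connection bounds below are hypothetical values, not figures from the disclosure:

```python
def adjust_connections(current, iops,
                       scale_up_iops=50_000, scale_down_iops=10_000,
                       min_conns=1, max_conns=8):
    """Grow or shrink the NVMe/TCP connection count when monitored IOPS
    crosses a performance threshold (all thresholds are assumptions)."""
    if iops > scale_up_iops and current < max_conns:
        return current + 1      # add a connection under heavy load
    if iops < scale_down_iops and current > min_conns:
        return current - 1      # release a connection when load drops
    return current              # within bounds: no change
```

In practice the agent would run this check periodically against a sliding window of IOPS measurements.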
Techniques are described for enabling concurrent and non-blocking replication object deletion during cross-region replications. In some embodiments, in a target file system, a target replication pipeline that is part of a cross-region replication and a deletion pipeline operate in parallel. The deletion pipeline deletes processed objects reaching the last pipeline stage of the target replication pipeline after each checkpoint in the target replication pipeline. In some embodiments, after a non-recoverable failure during the cross-region replication, the cross-region replication can be restarted from the beginning (i.e., a fresh restart) without waiting for its unused objects in the Object Store to be deleted, by utilizing a generation number associated with each object to delete the unused objects in a background process while allowing processed objects to be deleted as normal for the freshly restarted cross-region replication.
Techniques are provided for using context tags in named-entity recognition (NER) models. In one particular aspect, a method is provided that includes receiving an utterance, generating embeddings for words of the utterance, generating a regular expression and gazetteer feature vector for the utterance, generating a context tag distribution feature vector for the utterance, concatenating or interpolating the embeddings with the regular expression and gazetteer feature vector and the context tag distribution feature vector to generate a set of feature vectors, generating an encoded form of the utterance based on the set of feature vectors, generating log-probabilities based on the encoded form of the utterance, and identifying one or more constraints for the utterance.
Techniques are disclosed herein for objective function optimization in target based hyperparameter tuning. In one aspect, a computer-implemented method is provided that includes initializing a machine learning algorithm with a set of hyperparameter values and obtaining a hyperparameter objective function that comprises a domain score for each domain that is calculated based on a number of instances within an evaluation dataset that are correctly or incorrectly predicted by the machine learning algorithm during a given trial. For each trial of a hyperparameter tuning process: training the machine learning algorithm to generate a machine learning model, running the machine learning model in different domains using the set of hyperparameter values, evaluating the machine learning model for each domain, and once the machine learning model has reached convergence, outputting at least one machine learning model.
G06F 40/40 - Processing or translation of natural language
H04L 51/02 - User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail, using automatic reactions or user delegation, e.g. automatic replies or chatbot-generated messages
89.
CONSOLE CUSTOMIZATION FOR VIRTUAL PRIVATE LABEL CLOUDS
Novel techniques are disclosed for enabling customizable consoles of different virtual private label clouds (vPLCs). In some embodiments, one console server may execute multiple consoles for multiple vPLCs and the CSP. In other embodiments, one console server may be dedicated to a vPLC-specific console. In certain embodiments, console customization, including a customized set of console user interfaces (UIs), may be performed for each vPLC-specific console.
Novel techniques are disclosed for enabling identity cloud service for virtual private label clouds (vPLCs). A vPLC is created for a reseller of a Cloud Services Provider (CSP) using CSP-provided infrastructure in a region such that the reseller can provide one or more reseller-offered cloud services to customers of the reseller. In some embodiments, the identity management may be configured with either a shared identity cloud service (IDCS) stack model or an independent IDCS stack model. In certain embodiments, two-tier vPLC-aware identity management functions are performed for resellers of the CSP and customers of the resellers.
Techniques are disclosed for facilitating connectivity to vPLCs created in a CSP-provided infrastructure in a region. Within the CSP-provided infrastructure in a region, when the destination of a packet is determined to be an endpoint associated with a particular vPLC, the packet is tagged with information related to the particular vPLC. The vPLC-related information for the particular vPLC can include, for example, a vPLC identifier identifying the particular vPLC, an identifier identifying a customer associated with the endpoint, a virtual cloud network identifier identifying a virtual cloud network (VCN) belonging to the particular vPLC and where the endpoint is part of the VCN, and other vPLC-related information. The packet is then routed or communicated within the CSP-provided infrastructure in a region along with the tagged vPLC-related information. The vPLC-related information is used as part of the connectivity and for routing of packets within the CSP-provided infrastructure in a region.
Methods and systems are disclosed for automatic generation of content distribution images that include receiving user input corresponding to a content-distribution operation. The user input may be parsed to identify keywords. Image data corresponding to the keywords can be identified. Image-processing operations may be executed on the image data. Executing a generative adversarial network on the processed image data, which includes: executing a first neural network on the processed-image data to generate first images that correspond to the keywords, the first images generated based on a likelihood that each image of the first images would not be detected as having been generated by the first neural network. A user interface can display the first images with second images that include images that were previously part of content-distribution operations or images that were designated by an entity as being available for content-distribution operations.
Systems, computer-implemented methods, and computer-readable media for facilitating resource balancing based on resource capacities and resource assignments are disclosed. Electronic communications, received via interfaces, from monitoring devices to identify resource descriptions of resources may be monitored. A resource descriptions data store may be updated to associate each entity of the entities and resource capacities of each resource type of resource types. A first electronic communication, from resource-controlling systems, may be detected. Model data from a model data store may be accessed based on the identified resource descriptions. A first model may be identified based on the model data. A resources assessment corresponding may be generated based on whether a threshold is satisfied based on the first model, a first resource capacity of a first resource type, and the first electronic communication. An electronic notification may be transmitted to the client devices to identify the resources assessment.
H04L 47/76 - Admission control; Resource allocation using dynamic resource allocation, e.g. in-call renegotiation requested by the user or requested by the network in response to changing network conditions
H04L 47/70 - Admission control; Resource allocation
94.
ADAPTIVE SAMPLING TO COMPUTE GLOBAL FEATURE EXPLANATIONS WITH SHAPLEY VALUES
Techniques for computing global feature explanations using adaptive sampling are provided. In one technique, first and second samples from a dataset are identified. A first set of feature importance values (FIVs) is generated based on the first sample and a machine-learned model. A second set of FIVs is generated based on the second sample and the model. If a result of a comparison between the first and second FIV sets does not satisfy criteria, then: (i) an aggregated set is generated based on the last two FIV sets; (ii) a new sample that is double the size of a previous sample is identified from the dataset; (iii) a current FIV set is generated based on the new sample and the model; (iv) it is determined whether a result of a comparison between the current and aggregated FIV sets satisfies the criteria; steps (i)-(iv) are repeated until the result of the last comparison satisfies the criteria.
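The doubling loop can be sketched as follows, with the FIV computation abstracted behind a caller-supplied function. Using the maximum absolute difference as the convergence criterion and the mean as the aggregation are assumptions for illustration:

```python
import random

def adaptive_global_fiv(dataset, fiv_fn, tol=0.05, start=4, seed=0):
    """Grow the sample until two successive feature-importance vectors
    agree within `tol`; `fiv_fn` maps a sample to a list of FIVs."""
    rng = random.Random(seed)
    prev = fiv_fn(rng.sample(dataset, min(start, len(dataset))))
    size = min(start * 2, len(dataset))
    curr = fiv_fn(rng.sample(dataset, size))
    while max(abs(a - b) for a, b in zip(prev, curr)) > tol:
        prev = [(a + b) / 2 for a, b in zip(prev, curr)]  # (i) aggregate
        size = min(size * 2, len(dataset))                # (ii) double sample
        curr = fiv_fn(rng.sample(dataset, size))          # (iii) recompute
    return curr                                           # (iv) converged
```

In a real deployment `fiv_fn` would compute per-feature Shapley values over the sample with the machine-learned model.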
In some aspects, techniques may include monitoring a primary load of a datacenter and a reserve load of the datacenter. The primary load and reserve load can be monitored by a computing device. The primary load of the datacenter can be configured to be powered by one or more primary generator blocks having a primary capacity, and the reserve load of the datacenter can be configured to be powered by one or more reserve generator blocks having a reserve capacity. Also, the techniques may include detecting that the primary load of the datacenter exceeds the primary capacity. In addition, the techniques may include connecting the reserve generator blocks to at least one of the primary generator blocks and the primary load using a computing device switch.
H02J 9/06 - Circuit arrangements for emergency or stand-by power supply, e.g. for emergency lighting, in which the distribution system is disconnected from the normal source and connected to a standby source with automatic change-over
G06F 1/26 - Power supply means, e.g. regulation thereof
Techniques for business-to-business (B2B) chat routing are disclosed, including: receiving, by a B2B chatbot during a chat session with a user, user input including a user-supplied business name; performing a business lookup based at least on the user-supplied business name, to obtain a canonical business name and a unique business identifier associated with the canonical business name; performing a customer relationship management (CRM) system lookup based at least on the unique business identifier, to identify a corresponding business account; routing the chat session from the B2B chatbot to a human chat agent assigned to the corresponding business account.
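The lookup chain in this abstract (user-supplied name, then canonical name and unique business identifier, then CRM account, then assigned agent) can be sketched with dictionaries standing in for the business lookup and CRM systems; the record fields are hypothetical:

```python
def route_chat(user_supplied_name, business_index, crm_accounts):
    """Resolve a user-supplied business name to the human chat agent
    assigned to the matching CRM business account."""
    record = business_index.get(user_supplied_name.strip().lower())
    if record is None:
        return None                      # no match: stay with the chatbot
    account = crm_accounts.get(record["business_id"])
    return account["agent"] if account else None
```

A production business lookup would typically be fuzzier than an exact normalized-key match.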
H04L 51/02 - User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail, using automatic reactions or user delegation, e.g. automatic replies or chatbot-generated messages
Techniques are described for transmitting metric data between tenancies. Metric data is gathered for resources within a customer tenancy of a multi-tenant environment. This metric data is sent to a service tenancy of the multi-tenant environment, where the service tenancy is separate from the customer tenancy. The metric data is validated and preprocessed within the service tenancy to make sure that all required fields (such as key-value pairs) are located within the metric data. The preprocessed metric data is then sent to a telemetry service for analysis.
G06F 9/455 - Arrangements for executing specific programs; Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
G06F 11/34 - Recording or statistical evaluation of computer activity, e.g. of interruptions or input/output operations
G06N 5/04 - Inference or reasoning models
Techniques for generating a schema transformation for application data to monitor and manage the application in a runtime environment are disclosed. A system runs an application plugin in a runtime environment to identify data generated by application modules in one or both of an application build process and an application execution process. The application plugin is a software program executed together with the application build process. The application plugin identifies a source schema associated with application data. The application plugin identifies a target schema associated with an analysis program or machine learning model. The application plugin generates a schema transformation to convert application runtime data into a target data set. The system applies the target data set to an analysis program, such as a machine learning model, to generate output analysis data associated with the application.
Techniques are disclosed for generating machine learning models that are insensitive to drift. A system trains a machine learning model using a divergent training dataset including synthesized data points simulating drift. The system can evaluate the machine learning models in terms of accuracy, latency, efficiency, and other metrics. Based on the evaluation, the system can select a machine learning model least susceptible to drift.
Fingerprint inference of software artifacts includes receiving a request including classes, generating request fingerprints from the classes, and querying at least one index with the request fingerprints to identify a matching set of artifact versions. Fingerprint inference further includes obtaining, for each matching artifact version in the matching set of artifact versions, a count of the request fingerprints matching an indexed fingerprint related, in the at least one index, to the artifact version, and selecting a subset of the matching set of artifact versions having a count that is maximal amongst the matching set of artifact versions. Fingerprint inference further includes returning the subset of the matching set of artifact versions.
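The count-and-select logic can be sketched against an inverted index mapping fingerprints to artifact versions; using the class name itself as a stand-in fingerprint is an assumption (a real system would hash class contents):

```python
def infer_artifact_versions(request_classes, index, fingerprint=None):
    """Count, per artifact version, how many request fingerprints match
    an indexed fingerprint, then return the versions whose count is
    maximal amongst the matching set."""
    fingerprint = fingerprint or (lambda cls: cls)  # stand-in fingerprint
    counts = {}
    for cls in request_classes:
        for version in index.get(fingerprint(cls), ()):
            counts[version] = counts.get(version, 0) + 1
    if not counts:
        return []
    best = max(counts.values())
    return sorted(v for v, c in counts.items() if c == best)
```

Ties in the maximal count yield multiple candidate versions, which matches the abstract's "subset" phrasing.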