Methods and apparatus to manage cloud computing resources are disclosed. An example apparatus includes network interface circuitry; computer readable instructions; and programmable circuitry to instantiate: group management circuitry to determine a group identifier associated with a resource to be provisioned; and allocation circuitry to: determine an intersection of placements for resources associated with the group identifier; validate a placement rule for the intersection; and cause provisioning of the resources when the placement rule passes.
H04L 47/70 - Admission control; Resource allocation
H04L 47/762 - Admission control; Resource allocation using dynamic resource allocation, e.g. in-call renegotiation requested by the user or requested by the network in response to changing network conditions, triggered by the network
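The group-aware placement described in the abstract above can be sketched in a few lines. This is an illustrative reading, not the patented implementation; the function names, rule shape, and host identifiers are all hypothetical.

```python
# Illustrative sketch (hypothetical names, not from the patent):
# resources sharing a group identifier are provisioned only onto hosts
# common to all of their candidate placements, and only if a placement
# rule accepts that intersection.

def intersect_placements(candidate_placements):
    """Intersect the per-resource sets of candidate hosts."""
    sets = [set(p) for p in candidate_placements]
    if not sets:
        return set()
    common = sets[0]
    for s in sets[1:]:
        common &= s
    return common

def provision_group(candidate_placements, rule):
    """Provision the group only when the rule passes for the intersection."""
    common = intersect_placements(candidate_placements)
    if rule(common):
        return sorted(common)  # hosts usable by every resource in the group
    raise ValueError("placement rule failed for the group")

# Example: three resources in one group; the rule requires at least one
# common host.
placements = [{"h1", "h2", "h3"}, {"h2", "h3"}, {"h2", "h4", "h3"}]
hosts = provision_group(placements, rule=lambda c: len(c) >= 1)
```

A stricter rule (for example, requiring two common hosts for redundancy) would simply reject the intersection and block provisioning.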
2.
OFFLOADING PACKET PROCESSING PROGRAMS FROM VIRTUAL MACHINES TO A HYPERVISOR AND EFFICIENTLY EXECUTING THE OFFLOADED PACKET PROCESSING PROGRAMS
In one set of embodiments, a hypervisor of a host system can receive a packet processing program from a virtual network interface controller (NIC) driver of a virtual machine (VM) running on the hypervisor. The hypervisor can then attach the packet processing program to a first execution point in a physical NIC driver of the hypervisor and to a second execution point in a virtual NIC backend of the hypervisor, where the virtual NIC backend corresponds to a virtual NIC of the VM that originated the packet processing program.
G06F 9/455 - Arrangements for executing specific programs Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
3.
METHODS AND APPARATUS FOR BLUEPRINT EXTENSIBILITY VIA PLUGINS
Methods, apparatus, systems, and articles of manufacture are disclosed. An example system comprises interface circuitry; programmable circuitry; and instructions to program the programmable circuitry to: retrieve metadata associated with a plugin for a cloud resource platform, the plugin to provide a capability to provision a cloud resource of the cloud resource platform; perform a first transformation of the metadata from a first format associated with the plugin to a second format associated with a blueprint service; register the capability into the blueprint service; generate a blueprint including instructions to provision the cloud resource; transform the blueprint from the second format to a third format, the third format at least partially defined by the first format; and provision the cloud resource based on the transformed blueprint.
Some embodiments of the invention provide a method of remediating anomalies in an SD-WAN implemented by multiple forwarding elements (FEs) located at multiple sites connected by the SD-WAN. The method determines that a particular anomaly detected in the SD-WAN requires remediation to improve performance for a set of one or more flows traversing through the SD-WAN. The method identifies a set of two or more remedial actions for remediating the particular anomaly in the SD-WAN. For each identified remedial action in the set, the method selectively implements the identified remedial action for a subset of the set of flows for a duration of time in order to collect a set of performance metrics associated with SD-WAN performance during the duration of time for which the identified remedial action is implemented. Based on the collected sets of performance metrics, the method uses a machine-trained process to select one of the identified remedial actions as an optimal remedial action in the set to implement for all of the flows in the set of flows. The method implements the selected remedial action for all of the flows in the set of flows.
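The final selection step above can be illustrated with a minimal sketch. The scoring rule here (lower mean latency wins) is an illustrative stand-in for the machine-trained process, and the action names and metric values are hypothetical.

```python
# Sketch of the selection step only: after each candidate remedial
# action has been trialed on a subset of flows, pick the action whose
# collected metrics score best. "Lower mean latency wins" stands in
# for the machine-trained selection process, which is not detailed
# in the abstract.

collected = {
    "reroute":   [48.0, 51.0, 50.0],   # latency samples during its trial window
    "dual-send": [30.0, 33.0, 31.0],
    "throttle":  [62.0, 60.0, 61.0],
}

def select_remedial_action(metrics_by_action):
    return min(
        metrics_by_action,
        key=lambda a: sum(metrics_by_action[a]) / len(metrics_by_action[a]),
    )

best = select_remedial_action(collected)   # then applied to all flows in the set
```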
The current document is directed to contention control for computational resources in distributed computer systems and, in particular, to contention control for memory in distributed metrics collection systems that collect and aggregate metric data in distributed computer systems. In one implementation, parallel metric-data collectors in a first distributed computer system collect metric data and one or more aggregators aggregate collected metric data and forward the aggregated metric data to a second distributed computer system, which uses the metric data for various monitoring, analysis, and management tasks. Each parallel data collector stores received metrics in a metrics container assigned to the parallel collector and a write/read-write lock provides contention control that allows multiple metric-data collectors to concurrently access metrics containers but only a single aggregator to access the metrics containers.
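The lock discipline described above, shared access for many collectors but exclusive access for a single aggregator, can be sketched with a small shared/exclusive lock. This is a minimal sketch under that reading of the abstract; the class and method names are hypothetical.

```python
import threading

# Minimal sketch (hypothetical names) of the described contention
# control: many collectors hold the lock in shared mode while writing
# to their own containers; an aggregator takes it exclusively.

class SharedExclusiveLock:
    def __init__(self):
        self._cond = threading.Condition()
        self._shared = 0         # number of collectors currently inside
        self._exclusive = False  # True while the aggregator is inside

    def acquire_shared(self):
        with self._cond:
            while self._exclusive:
                self._cond.wait()
            self._shared += 1

    def release_shared(self):
        with self._cond:
            self._shared -= 1
            if self._shared == 0:
                self._cond.notify_all()

    def acquire_exclusive(self):
        with self._cond:
            while self._exclusive or self._shared:
                self._cond.wait()
            self._exclusive = True

    def release_exclusive(self):
        with self._cond:
            self._exclusive = False
            self._cond.notify_all()

# Collectors append concurrently to their own containers; the
# aggregator later drains them under the exclusive lock.
lock = SharedExclusiveLock()
containers = {i: [] for i in range(4)}

def collect(i):
    for n in range(100):
        lock.acquire_shared()
        containers[i].append(n)
        lock.release_shared()

threads = [threading.Thread(target=collect, args=(i,)) for i in containers]
for t in threads:
    t.start()
for t in threads:
    t.join()

lock.acquire_exclusive()
total = sum(len(c) for c in containers.values())
lock.release_exclusive()
```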
Various shared infrastructure governance decision-making systems and methods are disclosed. One such method comprises receiving, by at least one blockchain node from a client device, a reconfiguration request for changing infrastructure of a blockchain service by performing a reconfiguration action; triggering, by the at least one blockchain node, an infrastructure governance approval process to approve or deny the reconfiguration request for changing the infrastructure of the blockchain service; and invoking, by the at least one blockchain node, initiation of the reconfiguration action by the blockchain service upon approval of the reconfiguration request. Other methods and systems are also disclosed.
Some embodiments of the invention provide a method for remediating anomalies in an SD-WAN implemented by multiple forwarding elements (FEs) located at multiple sites connected by the SD-WAN. The method is performed for each particular FE in a set of one or more FEs. The method identifies a set of metrics associated with each application of multiple applications for which the particular FE forwards traffic flows. For each particular application of the multiple applications, the method generates a distribution graph that shows the identified set of metrics associated with the particular application for the particular FE over a first duration of time. The method analyzes the generated distribution graphs using a machine-trained process to identify one or more per-application incidents by identifying that a threshold number of metrics associated with the particular application (1) are outliers with respect to the generated distribution graph for the particular application and (2) occurred within a second duration of time.
A protocol for federated decision tree learning is provided. In one set of embodiments, this protocol employs a cryptographic technique known as private set intersection (PSI) (and more precisely, a variant of PSI known as quorum private set intersection analytics (QPSIA)) to carry out federated learning of decision trees in an efficient and effective manner.
An example method of managing on-premises software executing in a data center includes: probing, by a connectivity agent executing in the data center, connectivity between a cloud service executing in a public cloud and the data center; storing, by the connectivity agent, probe results in a connectivity store of the data center; reading, by connectivity sensing logic in the on-premises software, a current probe result from the connectivity store; and providing, by the on-premises software to a user, functionality based on the current probe result.
Improved techniques for testing the effectiveness of signatures used by a signature-based intrusion detection system (IDS) are provided. In one set of embodiments, these techniques involve parsing each signature in the IDS's signature set (or a subset of the signature set) to understand the signature's content and creating a synthetic network traffic flow for the signature that mimics/simulates its corresponding attack. The synthetic network traffic flows can then be replayed against the IDS in order to verify that the correct alerts are generated by the IDS.
An example method of synchronizing a first inventory of a cross-cluster control plane (xCCP) with a second inventory of a cluster control plane (CCP) includes: receiving, at a replication engine of the xCCP from the CCP, a notification of a CCP operation that modified an object in the second inventory; determining, by the replication engine, a first operation to modify the first inventory with the object; identifying, in a buffer of the replication engine, a second operation to modify the first inventory with a related object associated with the object, the related object included in an earlier CCP notification, received at the xCCP before the notification, but not used to modify the first inventory due to an unresolved dependency; and calling, by the replication engine in response to satisfaction of the unresolved dependency, a service of the xCCP to modify the first inventory by performing the first and second operations.
G06F 9/455 - Arrangements for executing specific programs Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
12.
DYNAMIC BUFFER LIMIT CONFIGURATION OF MONITORING AGENTS
An example system may include a first endpoint executing a remote collector and a second endpoint in communication with the first endpoint. The remote collector may monitor the second endpoint. The remote collector may include an agent installation unit to install a monitoring agent with configuration data on the second endpoint. The configuration data may specify a configuration for the monitoring agent to monitor a first program executing in the second endpoint. Further, the second endpoint may include a buffer limit configuration unit to execute the monitoring agent in a test mode to determine a first number of metrics to be collected in one cycle based on the configuration data. Furthermore, the buffer limit configuration unit may configure a buffer limit of the monitoring agent based on the first number of metrics and, upon configuring the buffer limit, enable the monitoring agent to monitor the first program.
G06F 9/455 - Arrangements for executing specific programs Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
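The test-mode sizing step in the abstract above reduces to a simple calculation: run one collection cycle, count the metrics produced, and derive the buffer limit from that count. The sketch below is hypothetical; in particular, the headroom factor is an assumption, not something stated in the abstract.

```python
# Hypothetical sketch of the test-mode sizing step: run one collection
# cycle, count the metrics produced, and size the agent's buffer from
# that count. The 1.5x headroom factor is an assumption for illustration.

def metrics_for_one_cycle(configuration):
    # Stand-in for a real test-mode collection cycle: one metric per
    # configured counter of the monitored program.
    return [f"{configuration['program']}.{c}" for c in configuration["counters"]]

def configure_buffer_limit(configuration, headroom=1.5):
    first_cycle = metrics_for_one_cycle(configuration)
    return int(len(first_cycle) * headroom)

config = {"program": "webserver", "counters": ["cpu", "mem", "threads", "conns"]}
limit = configure_buffer_limit(config)  # 4 metrics per cycle -> limit of 6
```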
13.
DYNAMIC BUFFER LIMIT CONFIGURATION OF MONITORING AGENTS
An example system may include a first endpoint and a second endpoint executing a remote collector to monitor the first endpoint. The remote collector may include a buffer limit configuration unit to receive a request to install a monitoring agent on the first endpoint. The request may include an operating system type. Further, the buffer limit configuration unit may determine a first predefined buffer limit corresponding to the operating system type. Furthermore, the remote collector may include an installation unit to install the monitoring agent with configuration data on the first endpoint. The configuration data may specify a configuration for the monitoring agent to monitor an operating system executing in the first endpoint and the first predefined buffer limit as a buffer limit for the monitoring agent. Furthermore, the installation unit may enable the monitoring agent to monitor the operating system based on the configuration data with the buffer limit.
G06F 9/455 - Arrangements for executing specific programs Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
A log is received at a user space process of a host from a logical logging component of a virtual computing instance (VCI), the log generated by a container running on the VCI. The log is communicated from the user space process to a logical logging component of the host. The log is communicated from the logical logging component of the host to a logging process of the host. The log is configured and stored in host storage.
Some embodiments of the invention provide a method of performing layer 7 (L7) packet processing for a set of Pods executing on a host computer, the set of Pods managed by a container orchestration platform. The method is performed at the host computer. The method receives notification of a creation of a traffic control (TC) custom resource (CR) that is defined by reference to a TC custom resource definition (CRD). The method identifies a set of interfaces of a set of one or more managed forwarding elements (MFEs) executing on the host computer that are candidate interfaces for receiving flows that need to be directed based on the TC CR to a layer 7 packet processor. Based on the identified set of interfaces, the method provides a set of flow records to the set of MFEs to process in order to direct a subset of flows that the set of MFEs receive to the layer 7 packet processor.
H04L 45/00 - Routing or path finding of packets in data switching networks
H04L 69/329 - Intra-layer communication protocols among peer entities or protocol data unit [PDU] definitions in the application layer [OSI layer 7]
16.
MULTIPLE CONNECTIVITY MODES FOR CONTAINERIZED WORKLOADS IN A MULTI-TENANT NETWORK
The disclosure provides a method for isolated environments for containerized workloads within a virtual private cloud in a networking environment. The method generally includes defining, by a user, a subnet custom resource object for creating a subnet in the virtual private cloud, wherein defining the subnet custom resource object comprises defining a connectivity mode for the subnet; deploying the subnet custom resource object such that the subnet is created in the virtual private cloud with the connectivity mode specified for the subnet; defining, by the user, a subnet port custom resource object for assigning a node to the subnet, wherein one or more containerized workloads are running on the node; and deploying the subnet port custom resource object such that the node is assigned to the subnet.
Some embodiments provide a novel method of migrating a particular virtual machine (VM) from a first host computer to a second host computer. The first host computer of some embodiments has a physical network interface card (PNIC) that performs at least one of network forwarding operations and middlebox service operations for the particular VM. The first host computer sends, to the PNIC of the first host computer, a request for state information relating to at least one of network forwarding operations and middlebox service operations that the PNIC performs for the particular VM. The first host computer receives the state information from the PNIC. The first host computer provides the state information received from the PNIC to the second host computer as part of a data migration that is performed to migrate the particular VM from the first host computer to the second host computer.
G06F 9/455 - Arrangements for executing specific programs Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
H04L 41/0897 - Scalability by means of horizontal or vertical resources, or by migrating entities, e.g. by means of virtual resources or entities
Example methods and systems for multi-engine intrusion detection are described. In one example, a computer system may configure a set of multiple intrusion detection system (IDS) engines that include at least a first IDS engine and a second IDS engine. In response to detecting establishment of a first packet flow and a second packet flow, the computer system may assign the first packet flow to the first IDS engine and the second packet flow to the second IDS engine based on an assignment policy. This way, first packet flow inspection may be performed using the first IDS engine to determine whether first packet(s) associated with the first packet flow are potentially malicious. Second packet flow inspection may be performed using the second IDS engine to determine whether second packet(s) associated with the second packet flow are potentially malicious.
G06F 9/455 - Arrangements for executing specific programs Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
19.
MANAGING CONFIGURATION OF SUPERNETS FOR A ROUTE TABLE BASED ON AVAILABLE CAPACITY IN THE ROUTE TABLE
Described herein are systems, methods, and software to manage prefixes for a route table in a gateway according to an implementation. In one implementation, a management service monitors a quantity of prefix routes associated with a route table in a gateway and determines when the quantity satisfies one or more criteria. When the quantity satisfies the one or more criteria, the management service determines one or more supernets that each represent a subset of the prefix routes and adds the one or more supernets to the route table to replace the subset of the prefix routes.
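The supernet-replacement idea can be demonstrated with Python's standard `ipaddress` module, whose `collapse_addresses` merges adjacent or contained prefixes into the smallest covering set. This is only an illustration: the capacity criterion shown here (a simple route-count threshold) is an assumption, and the patent's actual supernet-selection logic is not disclosed in the abstract.

```python
import ipaddress

# Sketch of supernet replacement using the standard ipaddress module.
# The capacity criterion (a route-count threshold) is an illustrative
# assumption, not taken from the patent.

ROUTE_LIMIT = 3

def compact_route_table(prefixes):
    routes = [ipaddress.ip_network(p) for p in prefixes]
    if len(routes) <= ROUTE_LIMIT:
        return [str(r) for r in routes]  # capacity criterion not met; keep as-is
    # collapse_addresses merges adjacent/contained prefixes into the
    # smallest covering set of supernets.
    return [str(r) for r in ipaddress.collapse_addresses(routes)]

# Four adjacent /24s exceed the limit and collapse into one /22 supernet.
table = compact_route_table(
    ["10.0.0.0/24", "10.0.1.0/24", "10.0.2.0/24", "10.0.3.0/24"]
)
```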
Some embodiments provide a method for monitoring a multi-tenant system deployed in a cloud, at a monitoring service deployed in the cloud. The method deploys a first service instance in the cloud for a first tenant that is based on a monitoring service configuration defined by an administrator of the multi-tenant system. The method collects (i) a first set of metrics of the first service instance and (ii) a second set of metrics of a second, existing service instance deployed in the cloud for a second, existing tenant of the multi-tenant system. The method uses the second set of metrics to determine an effect on the second service instance of the deployment of the first service instance.
H04L 41/40 - Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks, using virtualisation of network functions or resources, e.g. SDN or NFV entities
21.
METHODS AND APPARATUS TO IMPROVE AVAILABILITY OF FAILED ENTITIES
An apparatus disclosed herein includes memory; computer readable instructions; and programmable circuitry to be programmed by the computer readable instructions to: generate a reclamation recommendation based on a subset of entities eligible for reclamation, the subset of the entities meeting a resource requirement of a failed entity; reconfigure the subset of the entities to reclaim resources of the subset of the entities based on the reclamation recommendation; and execute the failed entity using the reclaimed resources of the subset of the entities.
Described herein are systems, methods, and software to manage an active/standby gateway configuration using Duplicate Address Detection (DAD) packets. In one implementation, a first gateway determines that a heartbeat connection with a second gateway has failed. In response to the failed heartbeat connection, the first gateway implements a packet filter for the data plane that permits DAD packets but blocks one or more other protocols. The first gateway then determines whether a response is received to the DAD packets within a timeout period. If received, the first gateway will revert to a standby state. If not received, the first gateway will assume the active state in place of the second gateway.
H04L 43/106 - Active monitoring, e.g. heartbeat, ping or trace-route, using time-related information in packets, e.g. by adding timestamps
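The failover decision in the abstract above is a small state machine: after heartbeat loss, probe with DAD packets; a response within the timeout means the peer is still active (revert to standby), no response means take over. The sketch below stubs out packet I/O and shows only that decision logic; all names are hypothetical.

```python
# Sketch of the described failover decision. Packet I/O is stubbed
# out; only the state logic is shown (names are hypothetical).

def resolve_state_after_heartbeat_loss(send_dad_probe, timeout_s, clock):
    """Return 'standby' if a DAD response arrives in time, else 'active'."""
    deadline = clock() + timeout_s
    while clock() < deadline:
        if send_dad_probe():   # True if the peer answered the DAD packet
            return "standby"   # peer still active: revert to standby
    return "active"            # no answer: take over as the active gateway

# Simulated clocks/probes for illustration:
ticks = iter(range(10))
state_peer_alive = resolve_state_after_heartbeat_loss(
    send_dad_probe=lambda: True, timeout_s=5, clock=lambda: next(ticks))

ticks2 = iter(range(10))
state_peer_dead = resolve_state_after_heartbeat_loss(
    send_dad_probe=lambda: False, timeout_s=5, clock=lambda: next(ticks2))
```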
Methods, apparatus, systems, and articles of manufacture are disclosed to generate code as a plug-in in a cloud computing environment. An example system includes at least one memory, programmable circuitry, and machine readable instructions to program the programmable circuitry to introspect code in a library to obtain introspection data, the library corresponding to a resource that is to be deployed in a cloud infrastructure environment, generate a model based on the introspection data, the model to be a representation of the resource, cross-reference the model with a resource meta-model, the resource meta-model to map characteristics of the resource represented by the model to an actual state of the resource, and generate a plug-in based on the cross-referenced model.
Some embodiments provide a method for a monitoring service that monitors a multi-tenant system with multiple tenant-specific service instances executing in a cloud. For each tenant-specific service instance monitored by the monitoring service, the method collects values for metrics defined in a declarative configuration file for the tenant-specific service instance and compares the collected values to values specified in the declarative configuration file for the metrics to determine whether deployment of other service instances affects operation of the tenant-specific service instance. The metrics in the declarative configuration file are generated based on a service-level agreement for the tenant.
H04L 41/5009 - Determination of service-level performance parameters or violations of service-level contracts, e.g. violations of agreed response time or mean time between failures [MTBF]
H04L 43/55 - Testing of service level quality, e.g. simulating service usage
25.
CREDIT UNITS-BASED ACCESS CONTROL FOR DATA CENTER RESOURCES
An example method may include generating a credit unit defining a value indicating a number of times an operation can be performed on a resource type in a data center. Further, the method may include assigning credits, a credit limit, and the credit unit to a user account. The credit limit may indicate maximum credits that can be used to perform each operation. Furthermore, the method may include receiving a request to perform an operation on a data center resource from a user associated with the user account. Upon receiving the request, the method may include determining whether the user is permitted to perform the operation on the data center resource based on available credits of the assigned credits, the credit limit, and the credit unit. Further, the method may include executing or denying execution of the operation on the data center resource based on the determination.
G06F 9/455 - Arrangements for executing specific programs Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
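The credit check in the abstract above can be modeled compactly: a credit unit prices one operation on a resource type, and a request is allowed only if the user's remaining credits and per-operation credit limit both cover that price. The operation names, costs, and limits below are hypothetical illustrations.

```python
# Illustrative model (hypothetical names and values) of the credit
# check: a credit unit prices one operation on a resource type, and a
# request is allowed only if the user's remaining credits and the
# per-operation credit limit both cover that price.

CREDIT_UNITS = {"vm.create": 10, "vm.delete": 2}  # cost per operation type

class UserAccount:
    def __init__(self, credits, credit_limit):
        self.credits = credits            # spendable balance
        self.credit_limit = credit_limit  # max credits usable per operation

    def try_operation(self, operation):
        cost = CREDIT_UNITS[operation]
        if cost > self.credit_limit or cost > self.credits:
            return False                  # deny execution
        self.credits -= cost              # execute and charge the account
        return True

user = UserAccount(credits=15, credit_limit=10)
allowed_create = user.try_operation("vm.create")  # costs 10, allowed
allowed_again = user.try_operation("vm.create")   # only 5 credits left, denied
allowed_delete = user.try_operation("vm.delete")  # costs 2, allowed
```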
26.
METHODS AND APPARATUS TO ORCHESTRATE INTERNET PROTOCOL ADDRESS MANAGEMENT
An example system includes at least one memory; programmable circuitry; and machine-readable instructions to program the programmable circuitry to: select an orchestration integration based on capability tags of a plurality of orchestration integrations and based on constraints of an internet protocol address management (IPAM) integration; and cause execution of a workflow using the orchestration integration, the workflow to cause an IPAM system to allocate an internet protocol address for a resource of a cloud application.
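The selection step described above, matching capability tags against IPAM constraints, can be sketched as a subset check. The integration names, tags, and constraints here are hypothetical; the abstract does not disclose the matching algorithm.

```python
# Sketch of the selection step (hypothetical tags): pick the first
# orchestration integration whose capability tags satisfy every
# constraint declared by the IPAM integration.

orchestrators = [
    {"name": "runner-a", "tags": {"ipv4"}},
    {"name": "runner-b", "tags": {"ipv4", "ipv6", "async"}},
]
ipam_constraints = {"ipv6", "async"}

def select_integration(integrations, constraints):
    for integration in integrations:
        if constraints <= integration["tags"]:  # every constraint is met
            return integration["name"]
    return None  # no integration can run the IPAM workflow

chosen = select_integration(orchestrators, ipam_constraints)
```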
Improved techniques for compressing gradient information that is communicated between clients and a parameter server in a distributed or federated learning training procedure are disclosed. In certain embodiments these techniques enable bi-directional gradient compression, which refers to the compression of both (1) the gradients sent by the participating clients in a given round to the parameter server and (2) the global gradient returned by the parameter server to those clients. In further embodiments, the techniques of the present disclosure eliminate the need for the parameter server to decompress each received gradient as part of computing the global gradient, thereby improving training performance.
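To make "gradient compression" concrete, the sketch below shows generic top-k sparsification, a common compression scheme in this setting. This is purely illustrative: the abstract does not specify the disclosure's actual scheme, and the server-side decompression-free aggregation it mentions is not reproduced here.

```python
# Generic top-k sparsification, shown only to illustrate what gradient
# compression means in this setting; this is NOT the disclosure's
# scheme, which the abstract does not specify.

def compress_topk(gradient, k):
    """Keep the k largest-magnitude entries as (index, value) pairs."""
    idx = sorted(range(len(gradient)), key=lambda i: abs(gradient[i]),
                 reverse=True)[:k]
    return sorted((i, gradient[i]) for i in idx)

def decompress(pairs, length):
    """Rebuild a dense gradient, zero-filling the dropped entries."""
    dense = [0.0] * length
    for i, v in pairs:
        dense[i] = v
    return dense

client_grad = [0.1, -2.0, 0.05, 3.0, -0.2]
wire = compress_topk(client_grad, k=2)        # what actually gets sent
restored = decompress(wire, len(client_grad))
```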
Methods, apparatus, systems, and articles of manufacture are disclosed to dynamically monitor and control compute device identities during operations. Disclosed is an apparatus comprising interface circuitry, machine readable instructions, and processor circuitry to at least one of instantiate or execute the machine readable instructions to generate a unique label for a node from a data plane, the unique label to identify the node, perform an operation on the node, the operation to be performed on the node by identifying the node associated with the unique label, and maintain the unique label until the operation on the node is successful.
G06F 15/173 - Interprocessor communication using an interconnection network, e.g. matrix, shuffle, pyramid, star or tree
H04L 41/5003 - Managing service level agreements [SLA]; Interaction between the service level agreement and quality of service [QoS]
H04L 43/0817 - Monitoring or testing based on specific metrics, e.g. quality of service [QoS], energy consumption or environmental parameters, by checking availability by checking functioning
29.
DETECTING PORT SCANS IN A CONTAINER ORCHESTRATION SYSTEM CLUSTER
Some embodiments of the invention provide a method for detecting port scans in a container orchestration system cluster that includes at least a first machine executing on a host computer. The method identifies a packet stream between the first machine and a second machine operating outside of the host computer. The method determines that the packet stream is potentially part of a port scanning operation based on an assessment that the packet stream includes less than a threshold number of packets during a particular time period. Based on said determination, the method identifies an amount of payload data exchanged between the first and second machines in the packet stream during the particular time period. When the identified amount of payload data is less than or equal to a threshold amount of payload data, the method classifies the stream as a probable port-scanning stream.
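The two-step check described above translates directly into code: a stream is a port-scan candidate only if it carries fewer packets than a threshold, and is then classified as a probable scan only if its payload volume is also at or below a threshold. The threshold values below are illustrative, not taken from the patent.

```python
# Direct sketch of the two-step check described above (the threshold
# values are illustrative assumptions, not from the patent).

PACKET_THRESHOLD = 5    # "less than a threshold number of packets"
PAYLOAD_THRESHOLD = 64  # bytes of payload exchanged during the window

def classify_stream(packet_count, payload_bytes):
    if packet_count >= PACKET_THRESHOLD:
        return "normal"                # too many packets to be a probe
    if payload_bytes <= PAYLOAD_THRESHOLD:
        return "probable-port-scan"    # short and carrying almost no data
    return "normal"                    # short but data-bearing, e.g. DNS

short_empty = classify_stream(packet_count=3, payload_bytes=0)
short_data = classify_stream(packet_count=3, payload_bytes=900)
long_flow = classify_stream(packet_count=500, payload_bytes=0)
```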
Some embodiments provide a method for a health monitoring service that monitors a system with a set of services executing across a set of one or more datacenters. For each of multiple services monitored by the health monitoring service, the method (1) contacts an API exposed by the service to provide health monitoring data for the service and (2) receives health monitoring data for the service that provides, for each of multiple aspects of the service, (i) a status and (ii) an explanation for the status in a uniform format used by the APIs of each of the services. At least two different services provide health monitoring data in the uniform format for different groups of aspects of the services.
H04L 43/0817 - Monitoring or testing based on specific metrics, e.g. quality of service [QoS], energy consumption or environmental parameters, by checking availability by checking functioning
H04L 61/3015 - Name registration, generation or assignment
31.
RAN APPLICATIONS FOR INTER-CELL INTERFERENCE MITIGATION FOR MASSIVE MIMO IN A RAN
Some embodiments of the invention provide a method for operating a first base station of a radio access network (RAN). At the first base station, the method receives a set of allow and block policies for allocating carrier resources to carrier beams utilized by the first base station for mobile devices within a first region serviced by the first base station, said first region located near a second region serviced by a second base station. At the first base station, the method identifies a first mobile device operating in the first region. At the first base station, the method uses the set of allow and block policies to allocate carrier resources to a carrier beam used to communicate with the first mobile device in the first region.
H04W 72/0453 - Resources in the frequency domain, e.g. a carrier in FDMA
H04B 7/06 - Diversity systems; Multi-antenna systems, i.e. transmission or reception using multiple antennas using several independent spaced-apart antennas at the transmitting station
H04W 24/06 - Testing using simulated traffic
32.
RAN APPLICATIONS FOR INTER-CELL INTERFERENCE MITIGATION FOR MASSIVE MIMO IN A RAN
Some embodiments of the invention provide a method for mitigating inter-region interference for multiple regions serviced by multiple RAN (Radio Access Network) base stations. The method is performed for each region serviced by each particular RAN base station. The method identifies a set of one or more sub-regions receiving interfering signals from other RAN base stations. The method specifies, for each particular sub-region in the identified set of sub-regions that receives interfering signals from the particular RAN base station and another RAN base station, (1) an allow policy that identifies an allowed first set of carrier resources of the particular RAN base station that are to be allocated to a set of one or more user equipments operating in the particular sub-region, and (2) a block policy that identifies a blocked second set of carrier resources of the other RAN base station that correspond to the first set of resources and that cannot be allocated to the set of user equipments operating in the particular sub-region. The method distributes the specified allow and block policies to the RAN base stations.
Some embodiments of the invention provide a system for mitigating inter-region interference for multiple regions serviced by multiple RAN (Radio Access Network) base stations. The system includes a first RAN application for generating a map that identifies, for each particular region serviced by each particular RAN base station, a set of one or more sub-regions receiving interfering signals from other RAN base stations. The system includes a second RAN application for (1) using the generated map and a set of inputs received from the multiple RAN base stations to define, for each sub-region in the set of sub-regions, policies for allocating carrier resources of the particular RAN base station to carrier beams transmitted by the particular RAN base station to the sub-regions with the interfering signals, and (2) providing the defined policies to the RAN base stations for which the policies are defined.
H04W 72/541 - Allocation or scheduling criteria for wireless resources based on quality criteria using the level of interference
H04W 72/044 - Wireless resource allocation based on the type of the allocated resource
34.
USER INTERFACE FOR HEALTH MONITORING OF MULTI-SERVICE SYSTEM
Some embodiments provide a method for providing health status for a system implemented in a network. The method displays, in a graphical user interface (GUI), representations of health status for multiple different services of the system. Each representation for a respective service shows health status for the respective service over a first particular time period. Upon receiving selection of a particular service, the method displays representations of health status data for each of multiple different aspects of the particular service. Each representation for a respective aspect of the service shows operational status for the respective aspect of the service over a second particular time period.
G06F 9/455 - Arrangements for executing specific programs Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
35.
HEALTH MONITORING ARCHITECTURE FOR MULTI-TENANT SYSTEM
Some embodiments provide a method for monitoring a system deployed in a cloud. The method deploys a first health monitoring service that monitors a first set of common services of the system deployed in the cloud by directly communicating with each service in the first set of services to determine whether a respective set of aspects of each respective service is properly operational. The first set of common services are accessed by multiple tenants of the system. Within each respective tenant-specific service instance of multiple tenant-specific service instances deployed in the cloud for the tenants, the method deploys a respective health monitoring service that monitors a respective group of microservices of the service instance by directly communicating with each microservice of the tenant-specific service instance to determine whether a respective set of aspects of each respective microservice is properly operational.
G06F 9/455 - Dispositions pour exécuter des programmes spécifiques Émulation; Interprétation; Simulation de logiciel, p.ex. virtualisation ou émulation des moteurs d’exécution d’applications ou de systèmes d’exploitation
An example method may include obtaining, at a first instance, first compatibility metadata associated with a product from a webserver, wherein the compatibility metadata includes an indication of compatibility or incompatibility between a plurality of versions associated with the product in a first format. Further, the method may include transforming, using a data structure, the compatibility metadata from the first format to a second format and storing the transformed compatibility metadata on a local datastore. The second format may indicate a list of candidate upgrade versions that are compatible with a current version of the product. Furthermore, the method may include rendering the stored compatibility metadata including the list of candidate upgrade versions that are compatible with the current version of the product on a user interface of a client device in response to receiving an upgrade request.
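The transformation from pairwise compatibility records to a candidate-upgrade list can be illustrated with a minimal sketch (the record layout and function name are hypothetical, not taken from the abstract):

```python
def candidate_upgrades(compat_pairs, current):
    """First format: (from_version, to_version, compatible) records.
    Second format: sorted list of versions upgradable from `current`."""
    return sorted(to for frm, to, ok in compat_pairs if ok and frm == current)
```

A client-side renderer would then display the returned list when an upgrade request arrives.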
Described herein are systems, methods, and software to manage internet protocol (IP) address allocation for tenants in a computing environment. In one implementation, a logical router associated with a tenant in the computing environment requests a public IP address for a new segment instance from a controller. In response to the request, the controller may select a public IP address from a pool of available IP addresses and update network address translation (NAT) on the logical router to associate the public IP address with a private IP address allocated to the new segment instance.
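A minimal sketch of the controller-side allocation step (the class and field names are illustrative assumptions, not from the abstract):

```python
class PublicIpPool:
    """Hand out public IPs from a pool and record NAT bindings."""

    def __init__(self, addresses):
        self.free = list(addresses)   # available public IPs
        self.nat = {}                 # public IP -> private IP binding

    def allocate(self, private_ip):
        """Bind the next free public IP to the segment's private IP."""
        public_ip = self.free.pop(0)
        self.nat[public_ip] = private_ip
        return public_ip

    def release(self, public_ip):
        """Return a public IP to the pool and drop its NAT binding."""
        del self.nat[public_ip]
        self.free.append(public_ip)
```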
G06F 9/455 - Dispositions pour exécuter des programmes spécifiques Émulation; Interprétation; Simulation de logiciel, p.ex. virtualisation ou émulation des moteurs d’exécution d’applications ou de systèmes d’exploitation
Some embodiments of the invention provide a method of detecting and remediating anomalies in an SD-WAN implemented by multiple forwarding elements (FEs) located at multiple sites connected by the SD-WAN. The method receives, from the multiple FEs, multiple sets of flow data associated with application traffic that traverses the multiple FEs. The method uses a first set of machine-trained processes to analyze the multiple sets of flow data in order to identify at least one anomaly associated with at least one particular FE in the multiple FEs. The method uses a second set of machine-trained processes to identify at least one remedial action for remediating the identified anomaly. The method implements the identified remedial action by directing an SD-WAN controller deployed in the SD-WAN to implement the identified remedial action.
H04L 41/0604 - Gestion des fautes, des événements, des alarmes ou des notifications en utilisant du filtrage, p.ex. la réduction de l’information en utilisant la priorité, les types d’éléments, la position ou le temps
H04L 41/0654 - Gestion des fautes, des événements, des alarmes ou des notifications en utilisant la reprise sur incident de réseau
H04L 41/0816 - Réglages de configuration caractérisés par les conditions déclenchant un changement de paramètres la condition étant une adaptation, p.ex. en réponse aux événements dans le réseau
39.
PROVISIONING IMAGES TO DEPLOY CONTAINERIZED WORKLOADS IN A VIRTUALIZED ENVIRONMENT
A method for provisioning images to deploy containerized workloads in a virtualized environment can include bringing up a containerized workload in a virtualized computing environment responsive to receiving a request to run a containerized workload in the virtualized computing environment. Bringing up the containerized workload can include creating a virtual machine disk (VMDK) that includes a container image in shared storage of an image registry responsive to authenticating with the image registry, attaching the VMDK to a virtual computing instance (VCI), responsive to receiving a request, made by a container running in the VCI, for a file of the container image in the attached VMDK, retrieving the file from the shared storage, and bringing up the containerized workload using the file.
G06F 9/455 - Dispositions pour exécuter des programmes spécifiques Émulation; Interprétation; Simulation de logiciel, p.ex. virtualisation ou émulation des moteurs d’exécution d’applications ou de systèmes d’exploitation
40.
SYSTEM AND METHOD FOR MANAGING LIFECYCLES OF NETWORK FUNCTIONS IN MULTIPLE CLOUD ENVIRONMENTS USING DECLARATIVE REQUESTS
A system and computer-implemented method for managing lifecycles of network functions in multiple cloud environments uses declarative requests, which have been transformed at a declarative service from imperative requests, to execute lifecycle management operations for network functions running in the multiple cloud environments. Execution of the lifecycle management operations at the multiple cloud environments is managed from a central network function lifecycle orchestrator based on the declarative requests.
G06F 9/455 - Dispositions pour exécuter des programmes spécifiques Émulation; Interprétation; Simulation de logiciel, p.ex. virtualisation ou émulation des moteurs d’exécution d’applications ou de systèmes d’exploitation
Techniques for implementing a hardware-based cache controller in, e.g., a tiered memory computer system are provided. In one set of embodiments, the cache controller can flexibly operate in a number of different modes that aid the OS/hypervisor of the computer system in managing and optimizing its use of the system's memory tiers. In another set of embodiments, the cache controller can implement a hardware architecture that enables it to significantly reduce the probability of tag collisions, decouple cache capacity management from cache lookup and allocation, and handle multiple concurrent cache transactions.
G06F 12/0802 - Adressage d’un niveau de mémoire dans lequel l’accès aux données ou aux blocs de données désirés nécessite des moyens d’adressage associatif, p.ex. mémoires cache
A memory hierarchy includes a first memory and a second memory that is at a lower position in the memory hierarchy than the first memory. A method of managing the memory hierarchy includes: observing, over a first period of time, accesses to pages of the first memory; in response to determining that no page in a first group of pages was accessed during the first period of time, moving each page in the first group of pages from the first memory to the second memory; and in response to determining that the number of pages in other groups of pages of the first memory, which were accessed during the first period of time, is less than a threshold number of pages, moving each page in the other groups of pages that was not accessed during the first period of time from the first memory to the second memory.
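The two demotion rules can be sketched as follows (a simplified illustration; the group representation and the threshold value are assumptions, not from the abstract):

```python
THRESHOLD = 2  # minimum accessed pages for a group to be considered active

def pages_to_demote(groups, accessed):
    """Return the set of pages to move from the first (fast) memory
    to the second (slow) memory after one observation period.

    groups: list of page-id lists in the first memory.
    accessed: set of page ids touched during the period.
    Rule 1: a group with no accessed page is demoted entirely.
    Rule 2: in a group with fewer than THRESHOLD accessed pages,
            only its un-accessed pages are demoted.
    """
    demote = set()
    for group in groups:
        hot = [p for p in group if p in accessed]
        if not hot:                    # rule 1: fully idle group
            demote.update(group)
        elif len(hot) < THRESHOLD:     # rule 2: mostly idle group
            demote.update(p for p in group if p not in accessed)
    return demote
```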
G06F 12/0891 - Adressage d’un niveau de mémoire dans lequel l’accès aux données ou aux blocs de données désirés nécessite des moyens d’adressage associatif, p.ex. mémoires cache utilisant des moyens d’effacement, d’invalidation ou de réinitialisation
G06F 12/1009 - Traduction d'adresses avec tables de pages, p.ex. structures de table de page
43.
USING DIFFERENT EVENT-DISTRIBUTION POLICIES TO STREAM EVENT DATA TO DIFFERENT EVENT CONSUMERS
Some embodiments provide a novel policy-driven method for providing event data to several event consumers. An event server stores in a set of one or more data storages event data published by a set of one or more event data publishers. The event server receives first and second different event-distribution policies from first and second event consumers for first and second streams of event data tuples for which the first and second event consumers register with the event server to receive. Each event consumer has multiple consumer instances. The event server uses the first and second event-distribution policies to differently distribute the first and second streams of event data tuples to the consumer instances of the first and second event consumers.
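Two contrasting event-distribution policies of the kind described, for example round-robin versus key-affinity hashing, might look like this (a hedged sketch; the abstract does not name concrete policies, so these are illustrative):

```python
import hashlib
import itertools

def round_robin(instances):
    """Policy A: cycle event tuples evenly across consumer instances."""
    cyc = itertools.cycle(range(len(instances)))
    return lambda event: instances[next(cyc)]

def key_affinity(instances):
    """Policy B: pin events with the same key to the same instance."""
    def pick(event):
        digest = hashlib.sha256(event["key"].encode()).hexdigest()
        return instances[int(digest, 16) % len(instances)]
    return pick
```

An event server could hold one such dispatch function per registered consumer and route each tuple of that consumer's stream through it.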
G06F 16/27 - Réplication, distribution ou synchronisation de données entre bases de données ou dans un système de bases de données distribuées; Architectures de systèmes de bases de données distribuées à cet effet
44.
ENFORCING GOVERNANCE AND DATA SOVEREIGNTY POLICIES ON A COMPUTING DEVICE USING A DISTRIBUTED LEDGER
Disclosed are various embodiments for binding the configuration state of client devices to the blockchain and utilizing the binding for managing compliance. A management agent can send a request to a smart contract hosted by a blockchain network for a configuration state for a computing device, the state including data sovereignty and governance policies of the computing device. The management agent can update the configuration of the computing device based upon the configuration state obtained from the blockchain network.
H04L 9/32 - Dispositions pour les communications secrètes ou protégées; Protocoles réseaux de sécurité comprenant des moyens pour vérifier l'identité ou l'autorisation d'un utilisateur du système
45.
TRUSTED PLATFORM MODULE ATTESTATION FOR SOFT REBOOTS
TPM attestation for soft reboots is described herein. One embodiment includes instructions to receive a request to perform a soft reboot of a computing device executing an existing OS instance and having a TPM, and perform a soft reboot process on the computing device responsive to receiving the request. The soft reboot process can include loading a new kernel and boot modules associated with a new OS instance into a memory of the computing device, measuring the boot modules into PCRs of the TPM, generating entries in an event log of the TPM corresponding to the boot modules and the new kernel, exporting the event log and a metadata file associated with the existing OS instance to storage, importing the event log from storage to the new kernel, copying the metadata file from storage to a server, and storing a new metadata file created from manifests of the new OS instance at the server.
Systems and methods are included for causing a computing device to boot by retrieving hardware information from a device tree and further properties by utilizing a native access method call identified in the device tree. The access method can allow for getting a property, getting a property length, or setting a property. A table within firmware can identify the method, which then can retrieve the property information from memory. This Device Tree Runtime (“DTRT”) mechanism can allow the computing device to retrieve the hardware configuration and act as a power management interface for turning on the correct hardware and hardware properties on the computing device.
Methods, systems, and articles of manufacture are disclosed to provide high availability to a cluster of nodes. Example apparatus disclosed herein are to identify member nodes of a cluster, determine whether an instance of an infrastructure supervisor is operating on any of the nodes, when an infrastructure supervisor is determined not to be operating, instantiate an infrastructure supervisor, and broadcast a discovery message to other nodes.
The current document is directed to an improved communications protocol that encompasses XOR-based forward error correction and that uses dynamic check-packet graphs that provide for efficient recovery of packets for which transmission has failed. During the past 20 years, XOR-based forward-error-correction (“FEC”) communications protocols have been developed to provide reliable multi-packet message transmission with relatively low latencies and computational complexity. These XOR-based FEC communications protocols, however, are associated with a significant amount of redundant-data transmission to achieve reliable multi-packet message transmission. The currently disclosed XOR-based FEC communications protocol employs dynamic, sparse check-packet graphs that provide for receiver-side packet recovery with significantly less redundant-data transmission. Because less redundant data needs to be transmitted in order to guarantee reliable multi-packet message delivery, the currently disclosed XOR-based FEC communications protocols are associated with significantly smaller temporal latencies and provide for greater data-transmission bandwidth.
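The core XOR property that check packets rely on, namely that the XOR of all but one packet in a group recovers the missing one, can be shown in a few lines (a simplified single-loss illustration, not the disclosed sparse check-packet-graph protocol):

```python
from functools import reduce

def xor_check(packets):
    """Build one XOR check packet over equal-length data packets."""
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), packets)

def recover_lost(received, check):
    """Recover the single lost packet of the group: XOR of all
    received packets with the check packet yields the missing one."""
    return xor_check(received + [check])
```

The disclosed protocol generalizes this by arranging many such check packets in a dynamic sparse graph, so that fewer redundant bytes are needed per recoverable loss.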
System and method for backing up management components of a software-defined data center (SDDC) managed by a cloud-based service uses backup rules for the SDDC to configure a backup manager agent in the SDDC. The backup manager agent then uses the backup rules to determine whether at least one of the system logs generated by the management components in the SDDC, which the backup manager agent monitors, satisfies the backup rules, and if so initiates a backup operation for at least one of the management components of the SDDC.
G06F 16/14 - Systèmes de fichiers; Serveurs de fichiers - Détails de la recherche de fichiers basée sur les métadonnées des fichiers
G06F 11/14 - Détection ou correction d'erreur dans les données par redondance dans les opérations, p.ex. en utilisant différentes séquences d'opérations aboutissant au même résultat
50.
FULLY ASSOCIATIVE CACHE LOOKUP WITH MULTIPLE CHOICE HASHING
Techniques for implementing a hardware-based cache controller in, e.g., a tiered memory computer system are provided. In one set of embodiments, the cache controller can flexibly operate in a number of different modes that aid the OS/hypervisor of the computer system in managing and optimizing its use of the system's memory tiers. In another set of embodiments, the cache controller can implement a hardware architecture that enables it to significantly reduce the probability of tag collisions, decouple cache capacity management from cache lookup and allocation, and handle multiple concurrent cache transactions.
G06F 12/02 - Adressage ou affectation; Réadressage
G06F 12/0864 - Adressage d’un niveau de mémoire dans lequel l’accès aux données ou aux blocs de données désirés nécessite des moyens d’adressage associatif, p.ex. mémoires cache utilisant des moyens pseudo-associatifs, p.ex. associatifs d’ensemble ou de hachage
G06F 12/0895 - Mémoires cache caractérisées par leur organisation ou leur structure de parties de mémoires cache, p.ex. répertoire ou matrice d’étiquettes
51.
DECOUPLING CACHE CAPACITY MANAGEMENT FROM CACHE LOOKUP AND ALLOCATION
Techniques for implementing a hardware-based cache controller in, e.g., a tiered memory computer system are provided. In one set of embodiments, the cache controller can flexibly operate in a number of different modes that aid the OS/hypervisor of the computer system in managing and optimizing its use of the system's memory tiers. In another set of embodiments, the cache controller can implement a hardware architecture that enables it to significantly reduce the probability of tag collisions, decouple cache capacity management from cache lookup and allocation, and handle multiple concurrent cache transactions.
G06F 12/0846 - Mémoire cache avec matrices multiples d’étiquettes ou de données accessibles simultanément
G06F 12/0891 - Adressage d’un niveau de mémoire dans lequel l’accès aux données ou aux blocs de données désirés nécessite des moyens d’adressage associatif, p.ex. mémoires cache utilisant des moyens d’effacement, d’invalidation ou de réinitialisation
A method of reducing data transmission between neural networks in a distributed or federated learning environment, includes the steps of: training a quantization neural network by using a plurality of training vectors each having a dimension k, wherein the quantization neural network is configured to, based on said training, output quantization levels for approximating input vectors having the dimension k; after training the quantization neural network, randomly sampling coordinates of a vector having a dimension d, to provide a first set of k coordinates, wherein d is greater than k; inputting the first set of k coordinates to the quantization neural network to determine first quantization levels for approximating the first set of k coordinates; quantizing the vector having the dimension d based on the determined first quantization levels; and using the quantized vector in the distributed or federated learning environment.
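Leaving the neural network aside, the sampling and nearest-level quantization steps can be sketched as follows (an illustrative stand-in: fixed quantization levels replace what the trained network would output, and all names are hypothetical):

```python
import random

def sample_coordinates(vector, k, seed=0):
    """Randomly sample k coordinates of a d-dimensional vector (d > k).
    These would be fed to the quantization network to obtain levels."""
    return random.Random(seed).sample(vector, k)

def quantize(vector, levels):
    """Map every coordinate of the full vector to its nearest level."""
    return [min(levels, key=lambda q: abs(q - x)) for x in vector]
```

Transmitting level indices instead of raw floats is what reduces the data exchanged between learners.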
A distributed file system operating over a plurality of hosts is built on top of a tree structure having a root node, internal nodes, and leaf nodes. Each host maintains at least one node and non-leaf nodes are allocated buffers according to a workload of the distributed file system. A write operation is performed by inserting write data into one of the nodes of the tree structure having a buffer. A read operation is performed by traversing the tree structure down to a leaf node that stores read target data, collecting updates to the read target data, which are stored in buffers of the traversed nodes, applying the updates to the read target data, and returning the updated read target data as read data.
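The buffered write and the update-collecting read can be sketched on a two-level tree (a toy illustration; the node layout and names are assumptions, not the disclosed structure):

```python
class Node:
    def __init__(self, leaf=False):
        self.leaf = leaf
        self.buffer = {}    # pending updates held at a non-leaf node
        self.data = {}      # materialized key/value pairs at a leaf
        self.children = []  # (low_key, child) routing pairs, ascending

def write(root, key, value):
    """Insert the update into the root's buffer; it is flushed
    toward the leaves lazily, per the workload-sized buffers."""
    root.buffer[key] = value

def read(root, key):
    """Walk to the leaf, applying any buffered update seen on the way."""
    node, pending = root, None
    while not node.leaf:
        if key in node.buffer:
            pending = node.buffer[key]
        node = next(child for low, child in reversed(node.children)
                    if key >= low)
    return pending if pending is not None else node.data.get(key)
```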
A method of managing a network file copy (NFC) operation, includes the steps of: transmitting a request to execute a first NFC operation on at least a first data store, wherein the first NFC operation comprises creating a full copy of a file that is stored in the first data store; after transmitting the request to execute the first NFC operation, determining that the first NFC operation should be stopped; and based on determining that the first NFC operation should be stopped: transmitting a request to stop the first NFC operation, selecting a second data store, and transmitting a request to execute a second NFC operation on at least the second data store, wherein the second NFC operation comprises creating a copy of at least a portion of the file.
Some embodiments provide a novel method for providing data regarding events that is stored in a data store. An event server receives a registration to receive data for a first event from a particular consumer. The event server uses an identity associated with the particular consumer to identify a set of one or more partitions of the data store that store data for the first event. The event server provides, to the particular consumer, a stream of data regarding the first event that is stored in the identified partition set for the particular consumer to process.
The present disclosure relates to moving workloads between cloud providers. A traceability application can receive a request to register a workload from a first virtualization service associated with a first cloud computing environment. To register the workload, the traceability application can generate an identification token in a distributed data store and an asset record corresponding to the identification token. The identification token can uniquely identify the workload among a plurality of workloads associated with a plurality of cloud computing environments. The traceability application can detect a migration of the workload from the first virtualization service associated with the first cloud computing environment to a second virtualization service associated with the second cloud computing environment. The traceability application can cause a transfer of an ownership of the identification token from the first virtualization service to the second virtualization service. The traceability application can update the asset record to reflect the transfer of the ownership of the identification token from the first virtualization service to the second virtualization service.
The technology disclosed herein enables control over which network addresses a tenant gateway may advertise. In a particular example, a control plane for a software-defined data center performs a method including identifying a tenant network address space for use by a tenant of the software-defined data center. The method further includes generating a filter rule for a tenant gateway between the tenant network address space and a provider gateway outside of the tenant network address space. Also, the method includes implementing the filter rule in the tenant gateway, wherein the filter rule prevents the tenant gateway from advertising network addresses outside of the tenant network address space.
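The prefix-containment check behind such a filter rule can be sketched with the standard `ipaddress` module (an illustration of the check only, not the disclosed control-plane implementation):

```python
import ipaddress

def make_filter_rule(tenant_space):
    """Return a predicate that accepts only advertisements whose
    prefix falls inside the tenant's address space."""
    space = ipaddress.ip_network(tenant_space)

    def allowed(prefix):
        return ipaddress.ip_network(prefix).subnet_of(space)

    return allowed
```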
This disclosure is directed to automated computer-implemented methods and systems for prioritizing recommended suboptimal resources of a data center. Methods and system described herein save time and increase the accuracy of identifying actual suboptimal resources and executing remedial measures to correct the suboptimal resources.
G06F 9/48 - Lancement de programmes; Commutation de programmes, p.ex. par interruption
G06F 18/2413 - Techniques de classification relatives au modèle de classification, p.ex. approches paramétriques ou non paramétriques basées sur les distances des motifs d'entraînement ou de référence
Systems and methods for dynamic migration between Receive Side Scaling (RSS) engine states include monitoring a traffic load of a first shared RSS engine of a physical network interface card (PNIC) of a host machine, the first shared RSS engine being shared among a first plurality of virtual machines (VMs) running on the host machine, determining the traffic load of the first shared RSS engine exceeds a threshold, and, in response to determining that the traffic load of the first shared RSS engine exceeds the threshold, migrating a first VM of the first plurality of VMs to either a dedicated RSS engine of the PNIC or to a second shared RSS engine of the PNIC.
G06F 9/455 - Dispositions pour exécuter des programmes spécifiques Émulation; Interprétation; Simulation de logiciel, p.ex. virtualisation ou émulation des moteurs d’exécution d’applications ou de systèmes d’exploitation
60.
UNIFIED DEPLOYMENT OF CONTAINER INFRASTRUCTURE AND RESOURCES
Systems and methods for unified virtual infrastructure and containerized workload deployment via a deployment platform include receiving, at the deployment platform, a definition of the virtual infrastructure and the containerized workload, sending, by the deployment platform, first information comprising the definition of the virtual infrastructure to an infrastructure manager configured to deploy the virtual infrastructure including a container orchestrator, and sending, by the deployment platform, second information comprising the definition of the containerized workload to the container orchestrator configured to deploy the containerized workload on the deployed virtual infrastructure.
G06F 9/455 - Dispositions pour exécuter des programmes spécifiques Émulation; Interprétation; Simulation de logiciel, p.ex. virtualisation ou émulation des moteurs d’exécution d’applications ou de systèmes d’exploitation
61.
Tiered memory data structures and algorithms for dynamic searching via balanced binary search trees
In one set of embodiments, a computer system can receive a request to insert or delete a key into or from a plurality of keys maintained by a dynamic search data structure, where the dynamic search data structure is implemented using a balanced binary search tree (BBST) comprising a plurality of nodes corresponding to the plurality of keys, where a first subset of the plurality of nodes are stored in the first memory tier, and where a second subset of the plurality of nodes are stored in the second memory tier. The computer system can further execute the request to insert or delete the key, where the executing results in a change in height of at least one node in the plurality of nodes. In response to the executing, the computer system can move one or more nodes in the plurality of nodes between the first and second memory tiers, the moving causing a threshold number of nodes of highest height in the BBST to be stored in the first memory tier.
In one set of embodiments, a computer system can receive a request to insert or delete a key into or from a plurality of keys maintained by a dynamic search data structure, where the first memory tier is faster than the second memory tier, where the dynamic search data structure is implemented using a treap comprising a plurality of nodes corresponding to the plurality of keys, and where each node in the plurality of nodes is identified by a key in the plurality of keys and a random priority. The computer system can then execute the request in a manner that causes a threshold number of nodes of highest priority in the treap to be stored in the first memory tier.
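The tier-assignment idea, keeping the highest-priority treap nodes (which sit nearest the root in expectation) in the fast tier, can be sketched as follows (the capacity parameter and names are illustrative assumptions):

```python
import random

def treap_insert(nodes, key, seed=7):
    """Treap insert: each key receives a random priority on insertion."""
    nodes[key] = random.Random(hash((key, seed))).random()

def assign_tiers(nodes, fast_capacity):
    """Split node keys so the fast_capacity highest-priority nodes
    live in the fast memory tier and the rest in the slow tier."""
    ranked = sorted(nodes, key=nodes.get, reverse=True)
    return set(ranked[:fast_capacity]), set(ranked[fast_capacity:])
```

After each insert or delete, recomputing (or incrementally adjusting) this split keeps the hot upper levels of the treap in the faster memory.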
An example method of beacon probing in a computing system includes: sending, by cross-host beacon probing (CHBP) software executing in a first host of the computing system, a first beacon probe from a first network interface controller (NIC) of the first host to NICs on a same layer 2 domain as the first NIC, the NICs including a second NIC of the first host and cross-host NICs of at least one host other than the first host; receiving, at the CHBP software through the first NIC, acknowledgements (ACKs) to the first beacon probe from the cross-host NICs; and determining, in response to the first beacon probe, connectivity statuses of the first NIC and the second NIC by the CHBP software based on the ACKs and on whether the second NIC receives the first beacon probe.
Methods, apparatus, systems, and articles of manufacture are disclosed to predict power consumption in a server. An example apparatus includes interface circuitry to obtain a power prediction request corresponding to the server; range determiner circuitry to divide a training data set into a first sub-range of data and a second sub-range of the data, each data point in the training data set representative of resource utilization of a workload and a corresponding power consumption metric of the workload; model trainer circuitry to train first candidate models based on the first sub-range of the data and second candidate models based on the second sub-range of the data; and prediction selector circuitry to: select a first prediction model from the first candidate models; and select a second prediction model from the second candidate models, outputs of the first and the second prediction models to predict the power consumption of the server.
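Training per-sub-range models and routing predictions to the matching model can be sketched with simple least-squares lines (the linear model and the pivot-based split are illustrative assumptions; the candidate models are not specified in the abstract):

```python
def split_subranges(samples, pivot):
    """samples: list of (utilization, watts). Split at a utilization pivot."""
    low = [s for s in samples if s[0] < pivot]
    high = [s for s in samples if s[0] >= pivot]
    return low, high

def fit_linear(samples):
    """Least-squares line watts = a * util + b over one sub-range."""
    n = len(samples)
    sx = sum(u for u, _ in samples)
    sy = sum(w for _, w in samples)
    sxx = sum(u * u for u, _ in samples)
    sxy = sum(u * w for u, w in samples)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    return a, (sy - a * sx) / n

def predict(models, pivot, util):
    """Route the query to the model trained on its sub-range."""
    a, b = models[0] if util < pivot else models[1]
    return a * util + b
```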
G06F 30/27 - Optimisation, vérification ou simulation de l’objet conçu utilisant l’apprentissage automatique, p.ex. l’intelligence artificielle, les réseaux neuronaux, les machines à support de vecteur [MSV] ou l’apprentissage d’un modèle
65.
STUN FREE SNAPSHOTS IN VIRTUAL VOLUME DATASTORES USING DELTA STORAGE STRUCTURE
The disclosure provides a method for virtual volume snapshot creation by a storage array. The method generally includes receiving a request to generate a snapshot of a virtual volume associated with a virtual machine, in response to receiving the request, preparing a file system of the storage array to generate the snapshot, wherein preparing the file system comprises creating a delta storage structure to receive write input/output (I/O) requests directed to the virtual volume when generating the snapshot of the virtual volume, deactivating the virtual volume, activating the delta storage structure, generating the snapshot of the virtual volume, and during the generation of the snapshot of the virtual volume: receiving a write I/O directed to the virtual volume and committing the write I/O in the delta storage structure.
G06F 3/06 - Entrée numérique à partir de, ou sortie numérique vers des supports d'enregistrement
G06F 9/455 - Dispositions pour exécuter des programmes spécifiques Émulation; Interprétation; Simulation de logiciel, p.ex. virtualisation ou émulation des moteurs d’exécution d’applications ou de systèmes d’exploitation
66.
CONTINUOUS DATA PROTECTION AGAINST RANSOMWARE FOR VIRTUAL VOLUMES
The disclosure provides a method for virtual volume (vvol) recovery. The method generally includes determining to initiate recovery of a compromised vvol associated with a virtual machine (VM), transmitting a query requesting a list of snapshots previously captured for the compromised vvol, receiving the list of the snapshots previously captured for the compromised vvol and information about one or more snapshots in the list of snapshots, wherein for each of the snapshots, the information comprises an indication of at least one change between the snapshot and a previous snapshot, determining a recovery point snapshot among snapshots in the list of the snapshots based, at least in part, on the information about the one or more snapshots, creating a clone of the recovery point snapshot to generate a recovered virtual volume, creating a virtual disk from the recovered virtual volume, and attaching the virtual disk to the VM.
G06F 11/14 - Détection ou correction d'erreur dans les données par redondance dans les opérations, p.ex. en utilisant différentes séquences d'opérations aboutissant au même résultat
G06F 9/455 - Dispositions pour exécuter des programmes spécifiques Émulation; Interprétation; Simulation de logiciel, p.ex. virtualisation ou émulation des moteurs d’exécution d’applications ou de systèmes d’exploitation
G06F 21/78 - Protection de composants spécifiques internes ou périphériques, où la protection d'un composant mène à la protection de tout le calculateur pour assurer la sécurité du stockage de données
67.
ONLINE FORMAT CONVERSION OF VIRTUAL DISK FROM REDO-LOG SNAPSHOT FORMAT TO SINGLE-CONTAINER SNAPSHOT FORMAT
System and method for converting a storage object in a redo-log snapshot format to a single-container snapshot format in a distributed storage system uses a temporary snapshot object, which is created by taking a snapshot of the storage object, and an anchor object, which points to a root object of the storage object. For each object chain of the storage object, each selected object is processed for format conversion. For each selected object, difference data between the selected object and a parent object of the selected object is written to the anchor object, a child snapshot of the anchor object is created in the single-container snapshot format, and the anchor object is updated to point to the selected object. The data of the running point object of the storage object is then copied to the anchor object, and each processed object and the temporary snapshot object are removed.
Disclosed are various embodiments for determining whether to initiate a remote device wipe in a mobile device management context. In one example, a system comprises a computing device configured to identify a device wipe condition for a client device and determine a wipe policy associated with the device wipe condition. A timer for a time delay is initiated for a device wipe action of the client device. A wipe instruction is transmitted to execute the device wipe action based on an expiration of the time delay for the device wipe action.
G06F 21/50 - Contrôle des usagers, programmes ou dispositifs de préservation de l’intégrité des plates-formes, p.ex. des processeurs, des micrologiciels ou des systèmes d’exploitation
Solutions for secure metering of hyperconverged infrastructures are disclosed. Examples include: receiving a security token; accessing a secondary storage (e.g., cold storage, backups) using the security token; determining usage data for the secondary storage; generating a first message digest for a combination of the usage data and the security token; and transmitting, to a metering server, the usage data and the first message digest. In some examples, the combination of the usage data and the security token comprises a concatenation of the usage data and the security token. In some examples, the metering server requests verification usage data from the secondary storage, generates a second message digest for a combination of the verification usage data and the security token, and compares the first message digest with the second message digest. Examples do not persist the security token on customer premises. Examples leverage the usage data to optimize the secondary storage.
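The digest generation and server-side verification can be sketched as follows (a minimal illustration assuming SHA-256 over the byte concatenation of usage data and token):

```python
import hashlib

def usage_digest(usage_data, token):
    """Digest over the concatenation of usage data and security token."""
    return hashlib.sha256(usage_data + token).hexdigest()

def verify(reported_usage, reported_digest, verification_usage, token):
    """Metering-server check: fetch verification usage data from the
    secondary storage, recompute the digest, and compare both values."""
    return (reported_usage == verification_usage
            and usage_digest(verification_usage, token) == reported_digest)
```

Because the token never needs to be persisted on customer premises, only a holder of the token can produce a digest that the metering server will accept.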
H04L 9/32 - Dispositions pour les communications secrètes ou protégées; Protocoles réseaux de sécurité comprenant des moyens pour vérifier l'identité ou l'autorisation d'un utilisateur du système
G06F 9/455 - Dispositions pour exécuter des programmes spécifiques Émulation; Interprétation; Simulation de logiciel, p.ex. virtualisation ou émulation des moteurs d’exécution d’applications ou de systèmes d’exploitation
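The digest-and-verify exchange above can be sketched as follows. This is a minimal illustration that instantiates the "combination" as a plain concatenation, as one of the examples mentions; a production system would more likely use a keyed construction such as HMAC:

```python
import hashlib

def usage_digest(usage_data: bytes, security_token: bytes) -> str:
    """Message digest over the concatenation of usage data and token
    (the concatenation form is one example given in the abstract)."""
    return hashlib.sha256(usage_data + security_token).hexdigest()

def metering_server_verify(first_digest: str,
                           verification_usage: bytes,
                           security_token: bytes) -> bool:
    """Metering-server side: recompute a second digest from the
    verification usage data and compare it with the first."""
    return usage_digest(verification_usage, security_token) == first_digest
```

Since the token never leaves the digest computation, it need not be persisted on customer premises, matching the abstract's constraint.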
70.
NETWORK ADDRESS TRANSLATION IN ACTIVE-ACTIVE EDGE CLUSTER
Some embodiments provide a method for forwarding data messages at multiple edge gateways of a logical network that process data messages between the logical network and an external network. At a first edge gateway, the method receives a data message, having an external address as a destination address, from the logical network. Based on the destination address, the method applies a default route to the data message that routes the data message to a second edge gateway and specifies a first output interface of the first edge gateway for the data message. After routing the data message, the method applies a stored NAT entry that (i) modifies a source address of the data message to be a public NAT address associated with the first edge gateway and (ii) redirects the modified data message to a second output interface of the first edge gateway instead of the first output interface.
The present disclosure relates to workload placement responsive to fault. One embodiment includes instructions to remove a first host from a first cluster of a software-defined datacenter (SDDC) responsive to a determination of a fault in a hypervisor of the first host, place the first host into a second cluster of the SDDC, wherein the second cluster is designated to run stateless workloads, and add a second host to the first cluster.
G06F 9/455 - Dispositions pour exécuter des programmes spécifiques Émulation; Interprétation; Simulation de logiciel, p.ex. virtualisation ou émulation des moteurs d’exécution d’applications ou de systèmes d’exploitation
72.
SELECTIVELY PREVENTING RESOURCE OVERALLOCATION IN A VIRTUALIZED COMPUTING ENVIRONMENT
The present disclosure is related to devices, systems, and methods for selectively preventing resource overallocation in a virtualized computing environment. One example includes instructions to receive a request to prevent overallocation of a resource in a software-defined datacenter associated with a customer, determine an amount of the resource available to the customer, and assign a respective portion of the amount of the resource available to the customer to each of a plurality of virtual computing instances (VCIs) irrespective of a power state of each of the plurality of VCIs.
G06F 9/50 - Allocation de ressources, p.ex. de l'unité centrale de traitement [UCT]
G06F 9/455 - Dispositions pour exécuter des programmes spécifiques Émulation; Interprétation; Simulation de logiciel, p.ex. virtualisation ou émulation des moteurs d’exécution d’applications ou de systèmes d’exploitation
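The assignment step above — portioning the available resource across all VCIs regardless of power state — can be sketched with an even split, which is one simple policy (the abstract only requires "a respective portion", not necessarily equal shares):

```python
def assign_portions(available: int, vcis: list) -> dict:
    """Split the available amount of a resource across all VCIs,
    powered on or off alike, so the total assigned can never exceed
    what the customer actually has. An even split is assumed here."""
    if not vcis:
        return {}
    share = available // len(vcis)
    return {vci: share for vci in vcis}
```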
73.
SYSTEM AND METHOD FOR PROVIDING CONNECTIVITY BETWEEN A PROXY CLIENT AND TARGET RESOURCES USING A TRANSPORT SERVICE
System and computer-implemented method for connecting a proxy client to a transport client through a transport service with a plurality of stateless transport server nodes in a distributed computing system uses a command channel established from the transport client to a first transport server node in the transport service. A second transport server node in the transport service is selected for a connection request from the proxy client. When the second transport server node is not the first transport server node that holds the command channel, a connection is established from the second transport server node to the first transport server node, so that connectivity between the proxy client and the transport client is established through the first and second transport server nodes.
H04L 67/2871 - Architectures; Dispositions - Détails de mise en œuvre d'entités intermédiaires uniques
H04L 41/40 - Dispositions pour la maintenance, l’administration ou la gestion des réseaux de commutation de données, p.ex. des réseaux de commutation de paquets en utilisant la virtualisation des fonctions réseau ou ressources, p.ex. entités SDN ou NFV
H04L 67/10 - Protocoles dans lesquels une application est distribuée parmi les nœuds du réseau
74.
SELECTIVE CONFIGURATION IN A SOFTWARE-DEFINED DATA CENTER FOR APPLIANCE DESIRED STATE
An example method of managing a configuration of a virtualization management server in a software-defined data center (SDDC), the virtualization management server managing a cluster of hosts and a virtualization layer executing therein, includes: generating, by a service executing in the SDDC, a profile that includes a managed configuration exclusive of an unmanaged configuration, a union of the managed configuration and the unmanaged configuration being a configuration of the virtualization management server; validating, by the service, that the managed configuration in the profile does not include dependencies on the unmanaged configuration; and applying, by the service, the profile to the virtualization management server.
G06F 9/455 - Dispositions pour exécuter des programmes spécifiques Émulation; Interprétation; Simulation de logiciel, p.ex. virtualisation ou émulation des moteurs d’exécution d’applications ou de systèmes d’exploitation
Systems and methods for containerization of an application, by an application transformer running a first operating system, to run on a second operating system, include gathering, by the application transformer running the first operating system, process artifacts of the application running on a first machine running the second operating system, sending, to a builder machine running the second operating system, the process artifacts of the application, and building, by the builder machine, a container image corresponding to the application based on the process artifacts, the container image being configured to run on the second operating system.
G06F 9/455 - Dispositions pour exécuter des programmes spécifiques Émulation; Interprétation; Simulation de logiciel, p.ex. virtualisation ou émulation des moteurs d’exécution d’applications ou de systèmes d’exploitation
76.
AUTOMATE SUSPENSION AND REDEPLOYMENT OF CLOUD RESOURCES
Methods, apparatus, systems, and articles of manufacture are disclosed to automate suspension and redeployment of cloud resources. The example apparatus is to, based on network traffic associated with a compute cluster hosting a containerized application, determine whether to suspend the containerized application. Additionally, the example apparatus is to determine a port of a transient container that is available to be mapped to the containerized application and cause a request to access the containerized application to be forwarded to the port of the transient container instead of the containerized application. The example apparatus is also to deprovision one or more resources associated with the containerized application.
G06F 9/455 - Dispositions pour exécuter des programmes spécifiques Émulation; Interprétation; Simulation de logiciel, p.ex. virtualisation ou émulation des moteurs d’exécution d’applications ou de systèmes d’exploitation
G06F 9/50 - Allocation de ressources, p.ex. de l'unité centrale de traitement [UCT]
77.
BYZANTINE FAULT TOLERANT AGREEMENT ON CURRENT TIME
Disclosed are examples of systems and methods for determining a blockchain time. One such method comprises receiving, by a non-primary replica node, a pre-prepare message having a timestamp representing a local time of a primary replica node in a blockchain network; verifying, by the non-primary replica node, that a difference between the local time of the primary replica node and a local time of the non-primary replica node does not exceed a hard time limit established for the blockchain network; responding, to the primary replica node, that a time value of the timestamp is within acceptable bounds of the local time of the non-primary replica node; and after consensus is reached amongst the non-primary replica nodes regarding acceptance of the time value, saving a current blockchain time for the blockchain network based on the timestamp.
H04L 9/00 - Dispositions pour les communications secrètes ou protégées; Protocoles réseaux de sécurité
H04L 9/32 - Dispositions pour les communications secrètes ou protégées; Protocoles réseaux de sécurité comprenant des moyens pour vérifier l'identité ou l'autorisation d'un utilisateur du système
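The per-replica timestamp check and the subsequent consensus step above can be sketched as below. The quorum threshold is an assumption for illustration; the actual agreement protocol (e.g., the PBFT-style message rounds implied by "pre-prepare") is not shown:

```python
def within_hard_limit(primary_ts: float, local_ts: float, hard_limit: float) -> bool:
    """A non-primary replica accepts the pre-prepare timestamp only if
    the clock difference does not exceed the network's hard time limit."""
    return abs(primary_ts - local_ts) <= hard_limit

def agree_on_time(primary_ts, replica_clocks, hard_limit, quorum):
    """Save the primary's timestamp as the blockchain time only when at
    least `quorum` replicas find it within bounds of their local clocks."""
    votes = sum(1 for t in replica_clocks
                if within_hard_limit(primary_ts, t, hard_limit))
    return primary_ts if votes >= quorum else None
```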
78.
ALERTING AND REMEDIATING AGENTS AND MANAGED APPLIANCES IN A MULTI-CLOUD COMPUTING SYSTEM
An example method of alerting and remediation in a multi-cloud computing system having a public cloud in communication with a data center includes: receiving, at remediation and troubleshooting software executing in the public cloud, event and log information generated by endpoint software executing in the data center during operation thereof; generating, at the remediation and troubleshooting software, an incident in response to the event and log information; sending, by a remediation and troubleshooting service (RTS) of the remediation and troubleshooting software in response to the incident, a remediation task to a coordinator agent over a message fabric, the coordinator agent executing in an agent platform appliance of the data center; and executing, by the coordinator agent, remediation of the endpoint software according to the remediation task.
An example method of packet processing in a host cluster of a virtualized computing system includes: receiving traffic at packet processing software of a hypervisor executing on a host of the host cluster; processing the traffic using a network service of the packet processing software in the hypervisor; redirecting the traffic to a service virtual machine (VM) in the host cluster through a virtual network interface card (vNIC) of the service VM; sending metadata from the network service of the packet processing software to the service VM; processing the traffic and the metadata through at least one network service executing in the service VM; returning the traffic from the service VM to the packet processing software of the hypervisor; and forwarding, by the packet processing software, the traffic to a destination.
In one set of embodiments, a computer system comprising first and second memory tiers can receive a request to carry out union-find with respect to a set of n elements. The computer system can then initialize a disjoint-set forest comprising a plurality of trees, each tree including a node corresponding to an element in the set of n elements, and can execute one or more union-find operations on the disjoint-set forest, where the initializing and the executing comprises storing a threshold number of nodes of highest rank in the disjoint-set forest in the first memory tier.
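The structure above can be sketched with a standard disjoint-set forest (union by rank, path halving). The `fast_tier_candidates` helper, which picks the highest-rank nodes as the ones worth keeping in the faster memory tier, illustrates the placement idea — high-rank nodes sit near tree roots and are touched on nearly every `find` — but is not the patented mechanism itself:

```python
class DisjointSetForest:
    """Textbook union-find with union by rank and path halving."""

    def __init__(self, n: int):
        self.parent = list(range(n))
        self.rank = [0] * n

    def find(self, x: int) -> int:
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x

    def union(self, a: int, b: int) -> None:
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return
        if self.rank[ra] < self.rank[rb]:
            ra, rb = rb, ra
        self.parent[rb] = ra            # attach lower-rank root under higher
        if self.rank[ra] == self.rank[rb]:
            self.rank[ra] += 1

    def fast_tier_candidates(self, k: int) -> list:
        """The k highest-rank nodes: these are near the tops of trees and
        are dereferenced most often, so they benefit most from fast memory."""
        return sorted(range(len(self.rank)), key=lambda i: -self.rank[i])[:k]
```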
Methods, apparatus, systems, and articles of manufacture are disclosed including a system to manage a cloud deployment, the system comprising: at least one memory; programmable circuitry; and machine readable instructions to cause the programmable circuitry to: create a custom resource corresponding to the cloud deployment, the cloud deployment identifiable by cloud credentials of a cloud environment, the custom resource to include an action identifier; generate an infrastructure-as-data representation of the custom resource corresponding to the cloud deployment, the infrastructure-as-data representation to include the cloud credentials; and provide the infrastructure-as-data representation to an infrastructure adaptor, the infrastructure-as-data to cause performance of an operation corresponding to the action identifier using the cloud deployment.
Components of a distributed data object are synchronized using streamlined tracking metadata. A target component of the distributed data object is detected as it becomes available and stale. A source component that is up-to-date and that mirrors the address space of the detected target component is identified. A set of mapped address ranges and a set of unmapped address ranges of the identified source component are obtained. A mapped address range of the target component that corresponds with an unmapped address range of the source component is identified. The identified mapped address range of the target component is then synchronized with the corresponding unmapped address range of the source component. Thus, unmapped address ranges are synchronized without using tracking metadata of the source component.
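The key step above — finding target-mapped regions that the source leaves unmapped — is an interval-intersection problem. A minimal sketch, assuming half-open `(start, end)` address ranges:

```python
def ranges_to_sync(target_mapped, source_unmapped):
    """Intersect the target component's mapped ranges with the source
    component's unmapped ranges. The resulting regions are the ones that
    must be synchronized (e.g., unmapped or zeroed on the target) without
    consulting source-side tracking metadata."""
    out = []
    for ts, te in target_mapped:
        for ss, se in source_unmapped:
            lo, hi = max(ts, ss), min(te, se)
            if lo < hi:                 # non-empty overlap
                out.append((lo, hi))
    return sorted(out)
```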
References to changing data sets in distributed data lakes are optimized. As part of a transaction, a first message is received. The first message identifies a table and first data to be written to the table. Based on at least the table, the first message is routed to a first ingestion node of a plurality of ingestion nodes. The first data is persisted in temporary storage. Location information of the persisted first data is determined. A data available message comprising a self-describing reference to the first data is published, by the first ingestion node, to a first reader node of a plurality of reader nodes. The self-describing reference identifies the first ingestion node, the location information of the first data, and a range of the first data.
Storage file size in distributed data lakes is optimized. At a first ingestion node of a plurality of ingestion nodes, a merge advisory is received from a coordinator. The merge advisory indicates a transaction identifier (ID). Received data associated with the transaction ID is persisted, which includes: determining whether the received data, persisted together in a single file will exceed a maximum desired file size; based on determining that the maximum desired file size will not be exceeded, persisting the received data in a single file; and based on determining that the maximum desired file size will be exceeded, persisting the received data in a plurality of files that each does not exceed the maximum desired file size. A location of the persisted received data in the permanent storage is identified, by the first ingestion node, to the coordinator.
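The size-capped persistence decision above can be sketched as simple record grouping. The sketch assumes records are byte strings and that a single record larger than the cap still gets its own file (the abstract does not say how oversized single records are handled):

```python
def persist(records, max_file_size: int):
    """Group records into files such that each file's total byte size
    does not exceed max_file_size. Returns one list of records per file."""
    files, current, size = [], [], 0
    for rec in records:
        # start a new file if adding this record would exceed the cap
        if current and size + len(rec) > max_file_size:
            files.append(current)
            current, size = [], 0
        current.append(rec)
        size += len(rec)
    if current:
        files.append(current)
    return files
```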
An example method of distributed load balancing in a virtualized computing system includes: configuring, at a logical load balancer, a traffic detector to detect traffic to a virtual internet protocol address (VIP) of an application having a plurality of instances; detecting, at the traffic detector, a first request to the VIP from a client executing in a virtual machine (VM) supported by a hypervisor executing on a first host; sending, by a configuration distributor of the logical load balancer in response to the detecting, a load balancer configuration to a configuration receiver of a local load balancer executing in the hypervisor for configuring the local load balancer to perform load balancing for the VIP at the hypervisor using the load balancer configuration.
H04L 67/101 - Sélection du serveur pour la répartition de charge basée sur les conditions du réseau
G06F 9/455 - Dispositions pour exécuter des programmes spécifiques Émulation; Interprétation; Simulation de logiciel, p.ex. virtualisation ou émulation des moteurs d’exécution d’applications ou de systèmes d’exploitation
H04L 67/1008 - Sélection du serveur pour la répartition de charge basée sur les paramètres des serveurs, p.ex. la mémoire disponible ou la charge de travail
86.
AUTOMATED ENTERPRISE INFORMATION TECHNOLOGY ALERTING SYSTEM
Disclosed are various examples for automatically analyzing telemetry data from managed devices in one or more organizations and alerting information technology (IT) administrators as early as possible when widespread issues are detected. Telemetry data can be collected from managed devices across multiple organizations and/or enterprises. The collected data can be used to identify events (e.g., system crashes, application crashes, system boot times, system shutdown times, application hangs, application foreground/usage events, device central processing unit (CPU) and memory utilization, battery performance, etc.) that may indicate a potential issue in the IT infrastructure. Time-series data associated with the detected events can be generated and analyzed. Upon detection of a potential issue in view of an analysis of the time-series data, an alert can be generated and presented to an IT administrator or other entity who can further analyze and potentially remedy the issue.
Aspects of providing an excess capacity grid for artificial intelligence, machine learning, and lower-priority processes are described. A grid orchestration client is installed on a virtual machine or a physical device that performs a production workload for an enterprise. The grid orchestration client communicates with a grid orchestration server as part of an excess capacity grid that performs grid workloads. A request to execute a grid workload is received. The grid orchestration client causes the grid workload to be executed.
G06F 9/50 - Allocation de ressources, p.ex. de l'unité centrale de traitement [UCT]
G06F 9/455 - Dispositions pour exécuter des programmes spécifiques Émulation; Interprétation; Simulation de logiciel, p.ex. virtualisation ou émulation des moteurs d’exécution d’applications ou de systèmes d’exploitation
88.
UNIFIED RESOURCE MANAGEMENT ARCHITECTURE FOR WORKLOAD SCHEDULERS
Various aspects are disclosed for unified resource management for multiple workload schedulers. A resource manager receives a candidate host request from a workload scheduler. The resource manager transmits a set of candidate host snapshots for candidate hosts that match the workload resource requirements. The resource manager receives a workload allocation request for a host and reserves hardware resources on the host that match the workload resource requirements. The resource manager provides, to the workload scheduler, an indication that the hardware resources are successfully reserved for execution of the workload.
An example method of identifying an equal cost multipath (ECMP)-enabled route-based virtual private network (RBVPN) in a virtualized computing system comprises: obtaining, at a telemetry agent executing in an edge server of a data center, learned routes; identifying, by the telemetry agent from the routes, a destination network and a plurality of next hops associated therewith and a plurality of virtual tunnel interfaces (VTIs); identifying, by the telemetry agent for each of the plurality of VTIs, an associated VPN session; grouping, by the telemetry agent, the VPN sessions identified as associated with the plurality of VTIs into an ECMP-enabled RBVPN; adding, by the telemetry agent, a description of the ECMP-enabled RBVPN to telemetry data; and sending, by the telemetry agent, the telemetry data to a telemetry service.
Certain embodiments described herein are generally directed to techniques for determining items of inventory of a data center to which a user has access. Embodiments include receiving permission information indicating specific user permissions assigned to particular items of a plurality of items in an inventory of data center resources, wherein items of the plurality of items are organized in a hierarchical manner across nodes of a hierarchical tree. Embodiments include assigning categories to the plurality of items based on the permission information, wherein each of the particular items is assigned a unique category based on the specific user permissions and each of the plurality of items that is not in the particular items and that has a parent node in the hierarchical tree is assigned a category corresponding to the parent node. Embodiments include storing category information in a data store based on the assigning of the categories.
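The category-assignment rule above — explicit permission gets a unique category, everything else inherits from its parent node — can be sketched as a memoized walk up the hierarchy. The `"default"` category for an uncategorized root is an assumption added for completeness:

```python
def assign_categories(tree: dict, explicit: dict) -> dict:
    """tree maps each item to its parent (root's parent is None);
    explicit maps items with specific user permissions to their unique
    categories. Every other item inherits the category of its nearest
    explicitly categorized ancestor."""
    cats = {}

    def category(node):
        if node in cats:
            return cats[node]
        if node in explicit:
            cats[node] = explicit[node]
        elif tree[node] is None:
            cats[node] = "default"        # assumed fallback for the root
        else:
            cats[node] = category(tree[node])
        return cats[node]

    for node in tree:
        category(node)
    return cats
```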
Example methods and systems for elastic provisioning of container-based graphics processing unit (GPU) nodes are described. In one example, a computer system may monitor usage information associated with a pool of multiple container-based GPU nodes. Based on the usage information, the computer system may apply rule(s) to determine whether capacity adjustment is required. In response to determination that capacity expansion is required, the computer system may configure the pool to expand by adding (a) at least one container-based GPU node to the pool, or (b) at least one container pod to one of the multiple container-based GPU nodes. Otherwise, in response to determination that capacity shrinkage is required, the computer system may configure the pool to shrink by removing (a) at least one container-based GPU node, or (b) at least one container pod from the pool.
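The rule-application step above can be sketched with threshold rules over pool-wide average utilization. The thresholds are illustrative assumptions; the patent leaves the rules open-ended:

```python
def capacity_action(gpu_utilization, expand_above=0.8, shrink_below=0.3):
    """Decide whether the container-based GPU node pool should expand,
    shrink, or stay as-is, based on average utilization across nodes.
    Threshold values here are assumptions for illustration."""
    avg = sum(gpu_utilization) / len(gpu_utilization)
    if avg > expand_above:
        return "expand"   # add a GPU node, or a container pod to a node
    if avg < shrink_below:
        return "shrink"   # remove a GPU node or a container pod
    return "none"
```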
A method of correlating alerts that are generated by a plurality of endpoints includes the steps of: collecting alert data of alerts generated by the endpoints; for each endpoint, computing alert sequences based on the collected alert data; training a sequence-based model with the computed alert sequences, to generate a vector representation for each of the alerts; for each alert in a set of alerts generated during a first time period, acquiring a vector representation corresponding thereto, which has been generated by the sequence-based model; and applying a clustering algorithm to the vector representations of the alerts in the set of alerts to generate a plurality of clusters of correlated alerts.
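The final clustering step above can be sketched over already computed alert vectors. The patent does not name a clustering algorithm, so the greedy cosine-similarity grouping below is a stand-in, with an assumed similarity threshold:

```python
def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    num = sum(a * b for a, b in zip(u, v))
    den = (sum(a * a for a in u) ** 0.5) * (sum(b * b for b in v) ** 0.5)
    return num / den if den else 0.0

def cluster_alerts(vectors: dict, threshold: float = 0.9):
    """Greedy clustering: each alert joins the first cluster whose
    representative vector it is similar enough to, else starts its own."""
    clusters = []  # list of (representative_vector, [alert_ids])
    for alert_id, vec in vectors.items():
        for rep, members in clusters:
            if cosine(rep, vec) >= threshold:
                members.append(alert_id)
                break
        else:
            clusters.append((vec, [alert_id]))
    return [members for _, members in clusters]
```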
A method of registering and deploying an agent platform appliance in a hybrid environment includes the steps of: transmitting a first code to a cloud platform to create an authentication account for the agent platform appliance, wherein credentials for accessing the authentication account include the first code; transmitting a request for an access token that permits downloading images of agents from an agent repository of the cloud platform, wherein the request for the access token includes the first code for accessing the created authentication account; upon receiving the access token, transmitting a request to the agent repository, to download the images of the agents, wherein the request to download the images of the agents includes the received access token; and upon receiving the images of the agents from the agent repository, installing the agents on the agent platform appliance using the received images of the agents.
G06F 9/455 - Dispositions pour exécuter des programmes spécifiques Émulation; Interprétation; Simulation de logiciel, p.ex. virtualisation ou émulation des moteurs d’exécution d’applications ou de systèmes d’exploitation
94.
DISSEMINATING CONFIGURATION ACROSS DISTRIBUTED SYSTEMS USING DATABASE NODES
Certain embodiments described herein are generally directed to techniques for distributing configuration information in a network. Embodiments include receiving, by a database node running on a computing device, from a parent component, configuration information with respect to one or more logical entities and span information indicating one or more respective host computers related to each of the one or more logical entities. Embodiments include determining a first subset of the configuration information and a first subset of the span information to provide to a first child database node based on a first set of host computers associated with the first child database node. Embodiments include determining a second subset of the configuration information and a second subset of the span information to provide to a second child database node based on a second set of host computers associated with the second child database node.
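The subset-determination step above can be sketched as filtering configuration and span information by each child's host set. The data shapes (dicts keyed by logical entity) are assumptions for illustration:

```python
def subset_for_child(config: dict, span: dict, child_hosts):
    """Return the subsets of configuration and span information relevant
    to a child database node: only logical entities whose span intersects
    the child's host set, with span trimmed to those hosts."""
    child_hosts = set(child_hosts)
    entities = [e for e, hosts in span.items() if child_hosts & set(hosts)]
    return ({e: config[e] for e in entities},
            {e: [h for h in span[e] if h in child_hosts] for e in entities})
```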
The disclosure provides a method for tracking virtual machines (VMs) associated with a plurality of hosts in an inventory. The method generally includes determining to remove a first host of the plurality of hosts, the first host running a first VM, wherein: the first host and a second host are associated with a first host cluster in the inventory; the first host is the associated-host and the registered-host of the first VM in the inventory; determining the first VM is associated with the first host cluster based on the associated-host of the first VM being the first host and the first host being associated with the first host cluster; identifying the second host is associated with the first host cluster in the inventory; altering the associated-host of the first VM to the second host and unsetting the registered-host for the first VM in the inventory; and removing the first host.
G06F 9/455 - Dispositions pour exécuter des programmes spécifiques Émulation; Interprétation; Simulation de logiciel, p.ex. virtualisation ou émulation des moteurs d’exécution d’applications ou de systèmes d’exploitation
Disclosed are various examples of signaling host kernel crashes to a data processing unit (DPU) management operating system (OS). A host kernel crash handler is installed to a host device. A crash of a host kernel of the host device is detected. This triggers the host kernel crash handler to provide the signal to the DPU device, which executes a DPU side crash handling process based on the signal.
The disclosure provides an example method for connection health monitoring and troubleshooting. The method generally includes monitoring a plurality of connections established between a first application running on a first host and a second application running on a second host; based on the monitoring, detecting two or more connections of the plurality of connections have failed within a first time period; in response to detecting the two or more connections have failed within the first time period, determining to initiate a single health check between the first host and the second host; determining the queue comprises a queued active health check request or no previously-queued health check requests; enqueuing a single health check request in the queue to invoke performance of the single health check; and performing the single health check.
G06F 9/455 - Dispositions pour exécuter des programmes spécifiques Émulation; Interprétation; Simulation de logiciel, p.ex. virtualisation ou émulation des moteurs d’exécution d’applications ou de systèmes d’exploitation
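The coalescing behavior above — many connection failures collapse into one queued health check, while a check already in flight does not block a fresh request — can be sketched as a small queue wrapper. The class and method names are illustrative:

```python
class HealthCheckQueue:
    """Deduplicate health-check requests: enqueue only when the queue
    holds no pending (not-yet-started) request. An active, in-flight
    check does not block a new request, since failures may have occurred
    after that check began."""

    def __init__(self):
        self.pending = []    # requests waiting to run
        self.active = None   # request currently being processed

    def request_check(self) -> bool:
        """Return True if a new single health check was enqueued."""
        if self.pending:
            return False     # one is already waiting; coalesce
        self.pending.append("health-check")
        return True

    def start_next(self):
        """Move the next pending request (if any) to the active slot."""
        self.active = self.pending.pop(0) if self.pending else None
        return self.active
```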
98.
Protocol Switching and Secure Sockets Layer (SSL) Cross-Wiring to Enable Inter-Network Resource Connectivity
Aspects of protocol switching and cross-wiring to enable inter-network connectivity are described. For example, a transporter system including a transporter server and a transporter client can securely connect applications to resources in differing networked environments (e.g., clouds and/or data centers). The transporter client may establish data channels as secure socket layer (SSL) connections (e.g., Secure Websockets (WSS)) between a resource in one networked environment and a transporter server that is in communication via a proxy channel with an initiator device in another networked environment. Upon completing the build of a data path between the initiator device and the resource, the handling protocol of the data channels that are established as SSL connections can be modified to a basic socket-level channel (e.g., transmission control protocol, user datagram protocol, etc.) to permit socket-level data stream communications without restrictions.
A method for flow based breakout of firewall usage based on trust is provided. Some embodiments include receiving flow data for one or more flows associated with an endpoint external to a data center, the flow data indicating the one or more flows meet one or more good flow criteria, the one or more flows corresponding to flows of data communicated via a firewall, and determining, based on the flow data meeting one or more trusted endpoint criteria, that the endpoint is trusted. Some embodiments of the method include generating one or more policies indicating that flows associated with the endpoint can bypass the firewall, and configuring an edge services gateway with the one or more policies to cause the edge services gateway to apply the one or more policies without applying the firewall.
The disclosure provides approaches for managing multicast group membership at a node. An approach includes policing whether a pod can join a multicast group based on one or more rules. The approach further includes updating forwarding tables of a virtual switch based on whether the pod is allowed to join the multicast group.