Embodiments disclosed include a method and apparatus for global traffic control and optimization for software-defined networks. In an embodiment, data traffic is optimized by distributing predefined metrics (data traffic information) to all controllers in the network. The predefined metrics are specific to local network switches and controllers, but are distributed to all peers at configurable intervals. "Local" as used herein refers to a single point of presence (POP) and its associated switch and controller. The method of distribution of local POP metrics is strictly in band, using a packet as defined by the protocol used by the data network.
Techniques for reducing the startup latency of functions in a Functions-as-a-Service (FaaS) infrastructure are provided. In one set of embodiments, a function manager of the FaaS infrastructure can receive a request to invoke a function uploaded to the infrastructure and can retrieve information associated with the function. The retrieved information can include an indicator of whether instances of the function may be sticky (i.e., kept in host system primary memory after function execution is complete), and a list of zero or more host systems in the FaaS infrastructure that currently have an unused sticky instance of the function in their respective primary memories. If the indicator indicates that instances of the function may be sticky and if the list identifies at least one host system with an unused sticky instance of the function in its primary memory, the function manager can select the at least one host system for executing the function.
A system that uses machine learning (ML) models—and in particular, deep neural networks—with features extracted from memory snapshots of malware programs to automatically recognize the presence of malicious techniques in such programs is provided. In various embodiments, this system can recognize the presence of malicious techniques that are defined by the MITRE ATT&CK framework and/or other similar frameworks/taxonomies.
Some embodiments provide a method of load balancing data message flows across multiple secure connections. The method receives a data message having source and destination addresses formatted according to a first protocol. Based on the source and destination addresses, the method selects one of the multiple secure connections for the data message. Each of the secure connections handles a first set of connections formatted according to the first protocol and a second set of connections formatted according to a second protocol that is an alternative to the first protocol. The method securely encapsulates the data message and forwards the encapsulated data message onto a network. The encapsulation includes an identifier for the selected secure connection.
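The address-based selection step above can be sketched as follows. This is a minimal illustration, not the embodiment's implementation: the hashing scheme, the function names, and the identifier-prefix encapsulation format are all assumptions.

```python
import hashlib

def select_secure_connection(src_addr: str, dst_addr: str, connections: list) -> str:
    """Deterministically pick one of the secure connections from the
    message's source and destination addresses, so every message of a
    given flow is carried over the same connection."""
    digest = hashlib.sha256(f"{src_addr}|{dst_addr}".encode()).digest()
    return connections[int.from_bytes(digest[:4], "big") % len(connections)]

def encapsulate(payload: bytes, connection_id: str) -> bytes:
    """Prepend the selected connection's identifier so the receiving
    endpoint can associate the message with that secure connection."""
    return connection_id.encode() + b"\x00" + payload
```

Because the selection depends only on the address pair, flows formatted according to either protocol between the same endpoints land on the same secure connection.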
The rate of incoming data records in a data stream is dynamically limited based on stream delay. A current delay representing a latency between a beginning of the data stream and a currently processed data record is obtained. A maximum delay representing a maximum tolerated delay is determined. A threshold delay representing a delay value that triggers calculation of a new drop rate is determined. A drop rate is calculated based on the current delay, the maximum delay, and the threshold delay. The drop rate represents a percentage of the incoming data records. A drop strategy is selected. One or more data records are discarded from the incoming data stream based on the drop rate, according to the drop strategy.
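A minimal sketch of the drop-rate calculation described above, assuming (since the abstract does not fix the formula) a linear ramp between the threshold delay and the maximum tolerated delay:

```python
def compute_drop_rate(current_delay: float, threshold_delay: float, max_delay: float) -> float:
    """Return the fraction of incoming records to discard.

    Below the threshold delay no records are dropped; between the
    threshold and the maximum tolerated delay the drop rate ramps up
    linearly; at or beyond the maximum every record is dropped.
    """
    if current_delay <= threshold_delay:
        return 0.0
    if current_delay >= max_delay:
        return 1.0
    return (current_delay - threshold_delay) / (max_delay - threshold_delay)
```

A drop strategy (e.g., uniform sampling versus dropping the oldest records) would then decide *which* records make up that fraction.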
Techniques that leverage symbolic execution to automatically analyze and understand malicious XL4 macros are provided. Using symbolic execution, these techniques can automatically infer the “correct” values for environmental inputs that are employed by advanced XL4 malware to obfuscate its malicious payloads, thereby allowing for a complete analysis of such malware.
In one set of embodiments, a computer system can receive a plurality of requests for placing a plurality of clients on a plurality of graphics processing units (GPUs), where each request includes a profile specifying a number of GPU compute slices and a number of GPU memory slices requested by a corresponding client. The computer system can further formulate an integer linear programming (ILP) problem based on the requests and a maximum number of GPU compute and memory slices supported by each GPU. The computer system can then generate a solution for the ILP problem and place the plurality of clients on the plurality of GPUs in accordance with the solution.
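The placement step can be illustrated with a toy exhaustive search over client-to-GPU assignments subject to the per-GPU compute- and memory-slice capacities. A real embodiment would hand the ILP to a solver; the data shapes and function name here are assumptions for illustration.

```python
from itertools import product

def place_clients(requests, gpus):
    """Find an assignment of clients to GPUs respecting slice capacities.

    requests: list of (compute_slices, memory_slices) per client profile
    gpus:     list of (max_compute_slices, max_memory_slices) per GPU
    Returns a list mapping client index -> GPU index, or None if infeasible.
    """
    for assignment in product(range(len(gpus)), repeat=len(requests)):
        used = [[0, 0] for _ in gpus]
        feasible = True
        for client, gpu in enumerate(assignment):
            used[gpu][0] += requests[client][0]
            used[gpu][1] += requests[client][1]
            if used[gpu][0] > gpus[gpu][0] or used[gpu][1] > gpus[gpu][1]:
                feasible = False
                break
        if feasible:
            return list(assignment)
    return None
```

The ILP formulation expresses the same capacity constraints with binary assignment variables, which scales far beyond this brute-force sketch.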
Disclosed are approaches for providing per-application tunnel access, such as virtual private network (VPN) access, in LINUX based systems. In response to an application requesting a network connection, a process identifier of the application and an inode identifier representing a socket for the network connection are obtained. Then, a kernel space map is updated to include the process identifier of the application and the inode identifier. In response to the application making a network connection request, the inode identifier of the application is obtained based at least in part on a source network address, a source port number, a destination network address, and a destination port number. Then, the kernel space map is queried to obtain the process identifier of the application, wherein the inode identifier is a query parameter. Then, a routing policy is identified based at least in part on the process identifier.
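The two lookups the approach chains together can be modeled in user space as follows. This is a hypothetical sketch of the data flow only: the real mechanism lives in kernel space, and the table names, types, and default policy here are assumptions.

```python
socket_table = {}  # (src_ip, src_port, dst_ip, dst_port) -> socket inode
kernel_map = {}    # socket inode -> process identifier (pid)

def register_connection(pid: int, inode: int, four_tuple: tuple) -> None:
    """Record the socket inode for the connection's 4-tuple and map the
    inode to the requesting application's process identifier."""
    socket_table[four_tuple] = inode
    kernel_map[inode] = pid

def routing_policy_for(four_tuple: tuple, policies: dict) -> str:
    """Resolve 4-tuple -> inode -> pid, then select the per-application
    routing policy, falling back to a default route."""
    inode = socket_table[four_tuple]
    pid = kernel_map[inode]
    return policies.get(pid, "default")
```

The point of the indirection is that packets only carry addresses and ports, so the inode is the bridge from a connection back to the application that owns it.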
Distributed appending of transactions in data lakes is described. A first message is received, at a first ingestion node of a plurality of ingestion nodes, as part of a transaction. The first message identifies a transaction identifier (ID) and a portion of data for the transaction. The data of the first message is persisted in temporary storage. A count of messages for the transaction for the first ingestion node is determined. Based on at least the count of messages, it is determined that the first ingestion node has received a complete set of messages for the transaction for the first ingestion node. A metadata write request is transmitted, by the first ingestion node, to a coordinator. The metadata write request includes a self-describing reference to persisted data. The self-describing reference identifies the first ingestion node, location information of the persisted data, and a range of the data.
G06F 16/27 - Replication, distribution or synchronisation of data between databases or within a distributed database system; Distributed database system architectures therefor
10.
EXTENSIBILITY FOR CUSTOM DAY-2 OPERATIONS ON CLOUD RESOURCES
The present disclosure is related to devices, systems, and methods for extensibility for custom day-2 operations on cloud resources. One example includes receiving an indication of a resource type of a software-defined datacenter via an interface of a cloud automation platform, receiving an indication of an ABX action via the interface, associating the resource type with the ABX action to create a resource action responsive to an input via the interface, and deploying a blueprint containing a resource of the resource type, wherein the resource action is executable to modify an internal state of the resource.
Some embodiments provide a novel method for deploying cloud gateways between a set of cloud machines in a first network and a set of on-premises machines in an external network. The method collects a set of statistics for a first cloud gateway used to connect the set of cloud machines and the set of on-premises machines. The method analyzes the set of statistics to determine that a second cloud gateway is needed to connect the set of cloud machines and the set of on-premises machines. The method identifies a subset of the set of cloud machines. The method distributes a set of one or more forwarding rules to the subset of cloud machines to forward a set of data message flows from the subset of cloud machines to the set of on-premises machines through the second cloud gateway.
G06F 9/455 - Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
12.
DYNAMIC GROUPING OF NETWORK SEGMENTS FOR FORWARDING DATA MESSAGE FLOWS FROM MACHINES OF NETWORK SEGMENT GROUPS TO AN EXTERNAL NETWORK THROUGH DIFFERENT EDGE FORWARDING ELEMENTS
Some embodiments provide a novel method for dynamically deploying gateways for a first network connecting machines. The first network includes segments, routers, and a first gateway that connects to an external network. The method identifies a set of two or more segments that consumes more than a threshold amount of bandwidth of the first gateway. The identified set includes at least first and second segments. The method identifies one or more segment groups by aggregating two or more segments in the identified set. A first segment group includes the first and second segments and a third segment that is not in the identified set of two or more segments. The method configures a second gateway to process flows associated with each identified group including the first group. The method configures a set of routers to forward flows from machines of each segment of each identified group to the second gateway.
H04L 41/0816 - Configuration setting characterised by the conditions triggering a change of settings the condition being an adaptation, e.g. in response to network events
H04L 47/125 - Avoiding congestion; Recovering from congestion by balancing the load, e.g. traffic engineering
13.
ORGANIZATIONAL MACHINE LEARNING FOR ALERT PROCESSING
A computer system comprises a machine-learning (ML) platform at which prior alerts are received from endpoints during a training period and divided into a plurality of clusters, wherein each of the clusters has an associated cluster profile that specifies expected value constraints for attributes of new alerts that are determined to belong to the cluster, and wherein the ML platform is configured to: receive a first alert and then determine that the first alert belongs to a first cluster of the clusters; compare actual values of the attributes for the first alert to respective expected value constraints for the attributes specified in the cluster profile of the first cluster; determine any deviation between the actual values of the attributes and the respective expected value constraints for the attributes; and classify the first alert into one of a plurality of alert groups based on whether there is any deviation.
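The deviation check against a cluster profile can be sketched as follows, assuming, for illustration only, that each expected value constraint is a set of allowed values and that there are exactly two alert groups; the actual constraints and grouping in the embodiment may be richer.

```python
def classify_alert(alert: dict, cluster_profile: dict) -> str:
    """Compare the alert's actual attribute values against the cluster
    profile's expected value constraints; any deviation places the alert
    in the 'anomalous' group, otherwise it is 'expected'."""
    for attribute, allowed_values in cluster_profile.items():
        if alert.get(attribute) not in allowed_values:
            return "anomalous"
    return "expected"
```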
The disclosure provides an approach for scalable volumes of stateful containers in a virtual environment. A method includes detecting a size change of an existing storage volume for a container running on a host; checking a volume mapping table to determine a size of the existing storage volume; computing a difference between the changed size of the existing storage volume and the size of the existing storage volume in the volume mapping table; creating a storage volume for the container, wherein the size of the created storage volume is at least equal to the difference; and adding an identifier of the container, an identifier of the existing storage volume, an identifier of the created storage volume, and a size of the created storage volume, to an entry in the volume mapping table.
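A minimal sketch of the size-difference computation and mapping-table update; the table layout, the key structure, and the `-ext` naming scheme are illustrative assumptions, not details from the disclosure.

```python
def grow_container_volume(container_id, volume_id, changed_size, volume_table):
    """Compare the detected size change against the size recorded in the
    volume mapping table; if the volume grew, create a supplemental
    volume covering at least the difference and record it in the table."""
    recorded_size = volume_table[(container_id, volume_id)]
    difference = changed_size - recorded_size
    if difference <= 0:
        return None  # no additional capacity needed
    created_volume_id = f"{volume_id}-ext"  # hypothetical naming scheme
    volume_table[(container_id, created_volume_id)] = difference
    return created_volume_id, difference
```

Recording the created volume against the same container lets later size changes be measured against the sum of what has already been provisioned.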
Solutions for rapid ransomware detection and recovery include: receiving a first set of in-memory changed data blocks; identifying, within the first set of in-memory changed data blocks, a second set of in-memory changed data blocks addressed for storage within a file index for a virtual machine (VM) disk; determining, relative to a change history of the file index, an anomalous condition; based on at least determining the anomalous condition, identifying a third set of blocks within the file index that are changed between two versions of the VM disk; determining that changes in the third set of blocks indicate ransomware; and based on at least determining that changes in the third set of blocks indicate ransomware, generating an alert. Machine learning (ML) models may perform anomaly/ransomware detection. Remediation activities may include disk restoration storing the VM memory.
An example method of deploying a hypervisor to a host in a public cloud includes: obtaining, by a deployment service, a prototype hypervisor image from shared storage; obtaining, by the deployment service, configuration information for the host and a physical network of the public cloud to which the host is attached; customizing, by the deployment service, the prototype hypervisor image in response to the configuration information to generate a customized hypervisor image; storing, by the deployment service, the customized hypervisor image in the shared storage in a manner accessible by the public cloud; and invoking a deployment application programming interface (API) of the public cloud to retrieve and install the customized hypervisor image to the host.
An example method of enabling a virtual infrastructure management (VIM) appliance for lifecycle management includes: identifying, by a cloud platform executing in a public cloud, a manager VIM appliance for the VIM appliance, the manager VIM appliance and the VIM appliance executing in at least one on-premises data center of an on-premises environment; obtaining information related to a management cluster having the manager VIM appliance and a virtual machine (VM) executing the VIM appliance; creating and applying, by the cloud platform in response to the information, a desired state for both the manager VIM appliance and the VIM appliance; and updating the cloud platform with a topology of the manager VIM appliance and the VIM appliance in the management cluster.
Solutions for ARP-based annotations for virtual machines. In some solutions, a hypervisor implemented in a first host might determine that a first process is executing on the first host. The hypervisor can determine first context information for the first process, generate an Address Resolution Protocol (ARP) request, and/or transmit a first packet comprising the ARP request and the first context information to a central controller as an indication that the first process is executing on the first host.
G06F 9/455 - Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
H04L 61/103 - Mapping addresses of different types across network layers, e.g. resolution of network layer into physical layer addresses or address resolution protocol [ARP]
Some embodiments provide a novel method for reducing load on a first virtual private network (VPN) gateway of a first datacenter by using a second VPN gateway to perform data message encryption needed for VPN communication with a second datacenter. The second gateway performs encryption for machines executing on several host computers of the first datacenter. The first gateway establishes a VPN session with a third gateway of the second datacenter and establishes a tunnel. The first gateway provides, to the second gateway, state information specifying that the second gateway is to perform encryption for a set of data messages exchanged along the tunnel. The first gateway receives, from the second gateway, an encrypted data message to be sent to a destination machine in the second datacenter. The first gateway forwards the encrypted data message to the third gateway for the third gateway to forward to the destination machine.
Some embodiments provide a novel method for dynamically performing data message encryption for machines of a first network at several gateways. The encryption is needed for VPN communication with a second network. The method receives, through a user interface, a VPN policy associated with a first segment set of the first network. The method uses a first gateway to establish VPN sessions for a first machine set associated with the first segment set, uses a second gateway to perform encryption operations for the first machine set, and uses the first gateway to perform encryption operations for a second machine set associated with a second segment set of the first network. The method monitors load on the first or second gateways. Based on the monitored load, the method uses a third gateway to perform encryption operations for a third machine set associated with a third segment set of the first network.
H04L 41/22 - Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks comprising specially adapted graphical user interfaces [GUI]
Some embodiments of the invention provide a method for configuring multiple hardware offload units of a host computer to perform operations on packets associated with machines (e.g., virtual machines or containers) executing on the host computer and to pass the packets between each other efficiently. For instance, in some embodiments, the method configures a program executing on the host computer to identify a first hardware offload unit that has to perform a first operation on a packet associated with a particular machine and to provide the packet to the first hardware offload unit. The packet in some embodiments is a packet that the particular machine has sent to a destination machine on the network, or is a packet received from a source machine through a network and destined to the particular machine.
The present disclosure is related to methods, systems, and machine-readable media for force provisioning virtual objects in degraded stretched clusters. A request to provision a virtual object by a stretched cluster according to a storage policy specified as part of the request can be received by a software defined data center (SDDC). The cluster can include a plurality of sites. An insufficiency of storage policy resources to satisfy the storage policy specified for the virtual object can be determined. The virtual object can be force provisioned responsive to determining storage policy resources sufficient to satisfy the storage policy at one of the plurality of sites.
A method of managing configurations of SDDCs of a tenant includes the steps of: retrieving a base configuration document, a first supplemental configuration document of a first SDDC, and a second supplemental configuration document of a second SDDC; issuing, to the first SDDC, a first instruction to update a running configuration state thereof according to the base configuration document and the first supplemental configuration document; and issuing, to the second SDDC, a second instruction to update a running configuration state thereof according to the base configuration document and the second supplemental configuration document, wherein the base configuration document includes settings of first configuration properties common across all of the tenant's SDDCs, the first supplemental configuration document includes first settings of second configuration properties only applicable to the first SDDC, and the second supplemental configuration document includes second settings of the second configuration properties only applicable to the second SDDC.
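The effect of applying a base document plus one SDDC's supplemental document can be sketched as a simple merge; the flat key-value shape of the documents here is an assumption for illustration.

```python
def effective_configuration(base: dict, supplemental: dict) -> dict:
    """Combine the tenant-wide base configuration document with one
    SDDC's supplemental document: base settings are common to every
    SDDC of the tenant, supplemental settings apply only to this SDDC."""
    merged = dict(base)
    merged.update(supplemental)
    return merged
```

Each SDDC's running configuration state is thus the shared base plus its own supplement, so a change to a common property is made once, in the base document.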
Disclosed are various approaches for performing biometric authentication of users using an application running on a client device. A biometric model can be trained using biometric data from a population of users. The biometric model can be used by the client application to authenticate users and can be separate from system-level biometric authentication capabilities of the client device.
Container images are managed in a clustered container host system with a shared storage device. Hosts of the system each include a virtualization software layer that supports execution of virtual machines (VMs), one or more of which are pod VMs that have implemented therein a container engine that supports execution of containers within the respective pod VM. A method of deploying containers includes determining, from pod objects published by a master device of the system and accessible by all hosts of the system, that a new pod VM is to be created, creating the new pod VM, and spinning up one or more containers in the new pod VM using images of containers previously spun up in another pod VM, wherein the images of the containers previously spun up in the other pod VM are stored in the storage device.
The disclosure provides a method for preparing a simulation system to simulate upgrade operations for a distributed container orchestration system. The method generally includes monitoring, by a simulation operator of the simulation system, for new resources generated at a management cluster in the distributed container orchestration system, based on the monitoring, discovering, by the simulation operator, a new resource generated at the management cluster specifying a version of container orchestration software supported and made available by the management cluster, and triggering, by the simulation operator, a creation of a new mock virtual machine (VM) template in the simulation system specifying the version of the container orchestration software, wherein the simulation system is configured to use the new mock VM template for simulating mock VMs in the simulation system that are compatible with the version of the container orchestration software supported and made available by the management cluster.
The disclosure provides a method for diagnosing remote sites of a distributed container orchestration system. The method generally includes receiving a test suite custom resource defining an image to be used for a diagnosis of components of a workload cluster deployed at the remote sites, wherein the image comprises a diagnosis module and/or a user-provided plugin to be used for the diagnosis; identifying a failed component in the workload cluster; obtaining infrastructure information about the workload cluster; identifying the components of the workload cluster for diagnosis based on the failed component, the infrastructure information, and the test suite custom resource; identifying at least one diagnosis site of the remote sites where the components are running using the infrastructure information; and deploying a first pod at the at least one diagnosis site to execute the diagnosis of the one or more components.
Disclosed embodiments pertain to support input/output modules for container volumes. An input/output (I/O) request can be received from a containerized application. A container volume targeted by the I/O request can be identified. A determination is then made that the container volume is associated with one or more I/O modules based on a stored mapping of container volumes to I/O modules. Data associated with the I/O request is sent to the one or more I/O modules for processing. Processed data can be received from the one or more I/O modules, and the I/O request is fulfilled using the processed data. In certain embodiments, a write I/O request is fulfilled by writing the processed data to a virtual disk file for the container volume, and a read I/O request is fulfilled by reversing the processing and returning the original data to the containerized application.
A version control interface for data provides a layer of abstraction that permits multiple readers and writers to access data lakes concurrently. An overlay file system, based on a data structure such as a tree, is used on top of one or more underlying storage instances to implement the interface. Each tree node is identified and accessed by means of a universally unique identifier (UUID). Copy-on-write with the tree data structure implements snapshots of the overlay file system. The snapshots support a long-lived master branch, with point-in-time snapshots of its history, and one or more short-lived private branches. As data objects are written to the data lake, the private branch corresponding to a writer is updated. The private branches are merged back into the master branch using any merging logic, and conflict resolution policies are implemented. Readers read from the updated master branch or from any of the private branches.
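The copy-on-write tree underlying the snapshots can be sketched as follows: writing clones only the nodes along the modified path, so each snapshot root shares every untouched subtree with its predecessor. The node layout and function names are illustrative assumptions.

```python
import uuid

class Node:
    def __init__(self, children=None, value=None):
        self.id = uuid.uuid4()            # each node addressed by a UUID
        self.children = dict(children or {})
        self.value = value

def cow_set(root, path, value):
    """Copy-on-write insert: clone only the nodes along `path`, sharing
    every untouched subtree with the previous snapshot root. Returns the
    new root, which acts as a point-in-time snapshot."""
    if not path:
        return Node(root.children if root else None, value)
    head, rest = path[0], path[1:]
    new_root = Node(root.children if root else None,
                    root.value if root else None)
    child = root.children.get(head) if root else None
    new_root.children[head] = cow_set(child, rest, value)
    return new_root
```

A private branch is then just a chain of such roots diverging from a master-branch snapshot; merging re-parents its changed paths onto the master root under the chosen conflict policy.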
An example management node may include a processor and memory coupled to the processor. The memory may include a resource management module to determine a maintenance schedule of a resource in a data center. Prior to the resource entering the maintenance schedule, the resource management module may determine a set of resources having a dependency relationship with the resource based on a preselected category. During the maintenance schedule of the resource, the resource management module may mark that the resource and the set of resources having the dependency relationship with the resource are in a maintenance mode. Upon marking the resource and the set of resources, the resource management module may suspend monitoring of the resource and the set of resources.
The disclosure provides an approach for formally verifying a state machine replication protocol (SMRP) based on a model SMRP, and deploying a distributed system, such as a blockchain, that runs using the formally verified SMRP. The approach provides a verifier that models the SMRP within a model distributed system. Modeling includes modeling actions by model components of the model distributed system so as to transition state of the model SMRP, and then verifying that applicable invariants hold true after the state transition. As long as the model and actual SMRPs are logically equivalent, launching an actual SMRP based on the model SMRP should preserve formally verified Byzantine fault tolerance within the actual SMRP of the distributed system.
Methods and apparatus to manage cloud computing resources are disclosed. An example apparatus includes network interface circuitry; computer readable instructions; and programmable circuitry to instantiate: group management circuitry to determine a group identifier associated with a resource to be provisioned; allocation circuitry to: determine an intersection of placements for resources associated with the group identifier; validate a placement rule for the intersection; and cause provisioning of the resources when the placement rule passes.
H04L 47/762 - Admission control; Resource allocation using dynamic resource allocation, e.g. in-call renegotiation requested by the user or requested by the network in response to changing network conditions triggered by the network
33.
OFFLOADING PACKET PROCESSING PROGRAMS FROM VIRTUAL MACHINES TO A HYPERVISOR AND EFFICIENTLY EXECUTING THE OFFLOADED PACKET PROCESSING PROGRAMS
In one set of embodiments, a hypervisor of a host system can receive a packet processing program from a virtual network interface controller (NIC) driver of a virtual machine (VM) running on the hypervisor. The hypervisor can then attach the packet processing program to a first execution point in a physical NIC driver of the hypervisor and to a second execution point in a virtual NIC backend of the hypervisor, where the virtual NIC backend corresponds to a virtual NIC of the VM that originated the packet processing program.
A method of managing a desired state of a software-defined data center (SDDC) includes the steps of: receiving an original desired state document that includes configurations and associated criteria for applying the configurations; evaluating a first criteria to determine that a first configuration associated with the first criteria is applicable to components of the SDDC; evaluating a second criteria to determine that a second configuration associated with the second criteria is not applicable to any components of the SDDC; creating an updated desired state of the SDDC, as a result of the evaluating of the first and second criteria, the updated desired state including the first configuration and excluding the second configuration; and applying the updated desired state to the SDDC.
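The criteria evaluation that produces the updated desired state can be sketched by treating each configuration's criteria as a predicate over SDDC components; the document shape and predicate form are assumptions for illustration.

```python
def build_updated_desired_state(document: dict, components: list) -> dict:
    """Evaluate each configuration's criteria against the SDDC's
    components; keep configurations whose criteria match at least one
    component, and exclude those matching none."""
    updated = {}
    for name, (criteria, settings) in document.items():
        if any(criteria(component) for component in components):
            updated[name] = settings
    return updated
```

Only the resulting updated desired state is applied to the SDDC, so configurations irrelevant to its components never reach it.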
Methods, apparatus, systems, and articles of manufacture are disclosed. An example system comprises interface circuitry; programmable circuitry; and instructions to program the programmable circuitry to: retrieve metadata associated with a plugin for a cloud resource platform, the plugin to provide a capability to provision a cloud resource of the cloud resource platform; perform a first transformation of the metadata from a first format associated with the plugin to a second format associated with a blueprint service; register the capability into the blueprint service; generate a blueprint including instructions to provision the cloud resource; transform the blueprint from the second format to a third format, the third format at least partially defined by the first format; and provision the cloud resource based on the transformed blueprint.
Various shared infrastructure governance decision-making systems and methods are disclosed. One such method comprises receiving, by at least one blockchain node from a client device, a reconfiguration request for changing infrastructure of a blockchain service by performing a reconfiguration action; triggering, by the at least one blockchain node, an infrastructure governance approval process to approve or deny the reconfiguration request for changing the infrastructure of the blockchain service; and invoking, by the at least one blockchain node, initiation of the reconfiguration action by the blockchain service upon approval of the reconfiguration request. Other methods and system are also disclosed.
Some embodiments of the invention provide a method of remediating anomalies in an SD-WAN implemented by multiple forwarding elements (FEs) located at multiple sites connected by the SD-WAN. The method determines that a particular anomaly detected in the SD-WAN requires remediation to improve performance for a set of one or more flows traversing through the SD-WAN. The method identifies a set of two or more remedial actions for remediating the particular anomaly in the SD-WAN. For each identified remedial action in the set, the method selectively implements the identified remedial action for a subset of the set of flows for a duration of time in order to collect a set of performance metrics associated with SD-WAN performance during the duration of time for which the identified remedial action is implemented. Based on the collected sets of performance metrics, the method uses a machine-trained process to select one of the identified remedial actions as an optimal remedial action in the set to implement for all of the flows in the set of flows. The method implements the selected remedial action for all of the flows in the set of flows.
The current document is directed to contention control for computational resources in distributed computer systems and, in particular, to contention control for memory in distributed metrics collection systems that collect and aggregate metric data in distributed computer systems. In one implementation, parallel metric-data collectors in a first distributed computer system collect metric data and one or more aggregators aggregate collected metric data and forward the aggregated metric data to a second distributed computer system, which uses the metric data for various monitoring, analysis, and management tasks. Each parallel data collector stores received metrics in a metrics container assigned to the parallel collector, and a read/write lock provides contention control that allows multiple metric-data collectors to concurrently access the metrics containers but only a single aggregator to access the metrics containers at a time.
A protocol for federated decision tree learning is provided. In one set of embodiments, this protocol employs a cryptographic technique known as private set intersection (PSI) (and more precisely, a variant of PSI known as quorum private set intersection analytics (QPSIA)) to carry out federated learning of decision trees in an efficient and effective manner.
An example method of managing on-premises software executing in a data center includes: probing, by a connectivity agent executing in the data center, connectivity between a cloud service executing in a public cloud and the data center; storing, by the connectivity agent, probe results in a connectivity store of the data center; reading, by connectivity sensing logic in the on-premises software, a current probe result from the connectivity store; and providing, by the on-premises software to a user, functionality based on the current probe result.
Some embodiments of the invention provide a method for remediating anomalies in an SD-WAN implemented by multiple forwarding elements (FEs) located at multiple sites connected by the SD-WAN. The method is performed for each particular FE in a set of one or more FEs. The method identifies a set of metrics associated with each application of multiple applications for which the particular FE forwards traffic flows. For each particular application of the multiple applications, the method generates a distribution graph that shows the identified set of metrics associated with the particular application for the particular FE over a first duration of time. The method analyzes the generated distribution graphs using a machine-trained process to identify one or more per-application incidents by identifying that a threshold number of metrics associated with the particular application (1) are outliers with respect to the generated distribution graph for the particular application and (2) occurred within a second duration of time.
Improved techniques for testing the effectiveness of signatures used by a signature-based intrusion detection system (IDS) are provided. In one set of embodiments, these techniques involve parsing each signature in the IDS's signature set (or a subset of the signature set) to understand the signature's content and creating a synthetic network traffic flow for the signature that mimics/simulates its corresponding attack. The synthetic network traffic flows can then be replayed against the IDS in order to verify that the correct alerts are generated by the IDS.
An example method of synchronizing a first inventory of a cross-cluster control plane (xCCP) with a second inventory of a cluster control plane (CCP) includes: receiving, at a replication engine of the xCCP from the CCP, a notification of a CCP operation that modified an object in the second inventory; determining, by the replication engine, a first operation to modify the first inventory with the object; identifying, in a buffer of the replication engine, a second operation to modify the first inventory with a related object associated with the object, the related object included in an earlier CCP notification, received at the xCCP before the notification, but not used to modify the first inventory due to an unresolved dependency; and calling, by the replication engine in response to satisfaction of the unresolved dependency, a service of the xCCP to modify the first inventory by performing the first and second operations.
An example system may include a first endpoint executing a remote collector and a second endpoint in communication with the first endpoint. The remote collector may monitor the second endpoint. The remote collector may include an agent installation unit to install a monitoring agent with configuration data on the second endpoint. The configuration data may specify a configuration for the monitoring agent to monitor a first program executing in the second endpoint. Further, the second endpoint may include a buffer limit configuration unit to execute the monitoring agent in a test mode to determine a first number of metrics to be collected in one cycle based on the configuration data. Furthermore, the buffer limit configuration unit may configure a buffer limit of the monitoring agent based on the first number of metrics and, upon configuring the buffer limit, enable the monitoring agent to monitor the first program.
An example system may include a first endpoint and a second endpoint executing a remote collector to monitor the first endpoint. The remote collector may include a buffer limit configuration unit to receive a request to install a monitoring agent on the first endpoint. The request may include an operating system type. Further, the buffer limit configuration unit may determine a first predefined buffer limit corresponding to the operating system type. Furthermore, the remote collector may include an installation unit to install the monitoring agent with configuration data on the first endpoint. The configuration data may specify a configuration for the monitoring agent to monitor an operating system executing in the first endpoint and the first predefined buffer limit as a buffer limit for the monitoring agent. Furthermore, the installation unit may enable the monitoring agent to monitor the operating system based on the configuration data with the buffer limit.
A log is received at a user space process of a host from a logical logging component of a virtual computing instance (VCI), the log generated by a container running on the VCI. The log is communicated from the user space process to a logical logging component of the host. The log is communicated from the logical logging component of the host to a logging process of the host. The log is configured and stored in host storage.
Some embodiments of the invention provide a method of performing layer 7 (L7) packet processing for a set of Pods executing on a host computer, the set of Pods managed by a container orchestration platform. The method is performed at the host computer. The method receives notification of a creation of a traffic control (TC) custom resource (CR) that is defined by reference to a TC custom resource definition (CRD). The method identifies a set of interfaces of a set of one or more managed forwarding elements (MFEs) executing on the host computer that are candidate interfaces for receiving flows that need to be directed based on the TC CR to a layer 7 packet processor. Based on the identified set of interfaces, the method provides a set of flow records to the set of MFEs to process in order to direct a subset of flows that the set of MFEs receive to the layer 7 packet processor.
Example methods and systems for multi-engine intrusion detection are described. In one example, a computer system may configure a set of multiple intrusion detection system (IDS) engines that include at least a first IDS engine and a second IDS engine. In response to detecting establishment of a first packet flow and a second packet flow, the computer system may assign the first packet flow to the first IDS engine and the second packet flow to the second IDS engine based on an assignment policy. This way, first packet flow inspection may be performed using the first IDS engine to determine whether first packet(s) associated with the first packet flow are potentially malicious. Second packet flow inspection may be performed using the second IDS engine to determine whether second packet(s) associated with the second packet flow are potentially malicious.
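One possible assignment policy — not mandated by the abstract, which leaves the policy open (round-robin or load-based policies would also fit) — is to hash the flow's 5-tuple so that all packets of a flow consistently reach the same IDS engine. A sketch with hypothetical engine names:

```python
import hashlib

ENGINES = ["ids-engine-1", "ids-engine-2"]  # hypothetical engine names

def assign_flow(five_tuple, engines=ENGINES):
    """Deterministically map a flow's 5-tuple (src IP, src port, dst IP,
    dst port, protocol) to one of the configured IDS engines, so every
    packet of the same flow is inspected by the same engine."""
    key = "|".join(map(str, five_tuple)).encode()
    digest = int(hashlib.sha256(key).hexdigest(), 16)
    return engines[digest % len(engines)]
```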
The disclosure provides a method for isolated environments for containerized workloads within a virtual private cloud in a networking environment. The method generally includes defining, by a user, a subnet custom resource object for creating a subnet in the virtual private cloud, wherein defining the subnet custom resource object comprises defining a connectivity mode for the subnet; deploying the subnet custom resource object such that the subnet is created in the virtual private cloud with the connectivity mode specified for the subnet; defining, by the user, a subnet port custom resource object for assigning a node to the subnet, wherein one or more containerized workloads are running on the node; and deploying the subnet port custom resource object such that the node is assigned to the subnet.
Some embodiments provide a novel method of migrating a particular virtual machine (VM) from a first host computer to a second host computer. The first host computer of some embodiments has a physical network interface card (PNIC) that performs at least one of network forwarding operations and middlebox service operations for the particular VM. The first host computer sends, to the PNIC of the first host computer, a request for state information relating to at least one of network forwarding operations and middlebox service operations that the PNIC performs for the particular VM. The first host computer receives the state information from the PNIC. The first host computer provides the state information received from the PNIC to the second host computer as part of a data migration that is performed to migrate the particular VM from the first host computer to the second host computer.
G06F 9/455 - Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
H04L 41/0897 - Bandwidth or capacity management, i.e. automatically increasing or decreasing capacities by horizontal or vertical scaling of resources, or by migrating entities, e.g. virtual resources or entities
51.
MANAGING CONFIGURATION OF SUPERNETS FOR A ROUTE TABLE BASED ON AVAILABLE CAPACITY IN THE ROUTE TABLE
Described herein are systems, methods, and software to manage prefixes for a route table in a gateway according to an implementation. In one implementation, a management service monitors a quantity of prefix routes associated with a route table in a gateway and determines when the quantity satisfies one or more criteria. When the quantity satisfies the one or more criteria, the management service determines one or more supernets that each represent a subset of the prefix routes and adds the one or more supernets to the route table to replace the subset of the prefix routes.
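The supernet-replacement step can be sketched with Python's standard `ipaddress` module. This sketch uses exact aggregation (`collapse_addresses` only merges prefixes whose union is exactly covered by the supernet); the capacity criterion and function names are hypothetical, and a real implementation may use different criteria or over-covering supernets:

```python
import ipaddress

ROUTE_TABLE_CAPACITY = 4  # hypothetical criterion: max prefix routes

def compress_route_table(prefixes, capacity=ROUTE_TABLE_CAPACITY):
    """If the table exceeds capacity, replace groups of prefix routes with
    covering supernets; otherwise return the table unchanged."""
    nets = [ipaddress.ip_network(p) for p in prefixes]
    if len(nets) <= capacity:
        return sorted(str(n) for n in nets)
    # Merge adjacent/overlapping prefixes into their exact supernets.
    collapsed = ipaddress.collapse_addresses(nets)
    return sorted(str(n) for n in collapsed)
```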
An apparatus disclosed herein includes memory; computer readable instructions; and programmable circuitry to be programmed by the computer readable instructions to: generate a reclamation recommendation based on a subset of entities eligible for reclamation, the subset of the entities meeting a resource requirement of a failed entity; reconfigure the subset of the entities to reclaim resources of the subset of the entities based on the reclamation recommendation; and execute the failed entity using the reclaimed resources of the subset of the entities.
Described herein are systems, methods, and software to manage an active/standby gateway configuration using Duplicate Address Detection (DAD) packets. In one implementation, a first gateway determines that a heartbeat connection with a second gateway has failed. In response to the failed heartbeat connection, the first gateway implements a packet filter for the data plane that permits DAD packets but blocks one or more other protocols. The first gateway then determines whether a response is received to the DAD packets within a timeout period. If received, the first gateway will revert to a standby state. If not received, the first gateway will assume the active state in place of the second gateway.
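The failover decision reduces to a small state transition: on heartbeat failure, the DAD probe's outcome determines whether the peer is still active. A minimal sketch (function and state names are hypothetical; the packet-filter installation is omitted):

```python
def resolve_failover(heartbeat_ok, dad_response_received):
    """Decide the first gateway's next state after probing.
    A DAD response means the second gateway is still reachable and active,
    so the first gateway reverts to standby; no response within the timeout
    means the first gateway assumes the active state."""
    if heartbeat_ok:
        return "no-change"
    return "standby" if dad_response_received else "active"
```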
Some embodiments provide a method for monitoring a multi-tenant system deployed in a cloud, at a monitoring service deployed in the cloud. The method deploys a first service instance in the cloud for a first tenant that is based on a monitoring service configuration defined by an administrator of the multi-tenant system. The method collects (i) a first set of metrics of the first service instance and (ii) a second set of metrics of a second, existing service instance deployed in the cloud for a second, existing tenant of the multi-tenant system. The method uses the second set of metrics to determine an effect on the second service instance of the deployment of the first service instance.
H04L 41/40 - Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks using virtualisation of network functions or resources, e.g. SDN or NFV entities
55.
METHODS AND APPARATUS TO GENERATE CODE AS A PLUG-IN IN A CLOUD COMPUTING ENVIRONMENT
Methods, apparatus, systems, and articles of manufacture are disclosed to generate code as a plug-in in a cloud computing environment. An example system includes at least one memory, programmable circuitry, and machine readable instructions to program the programmable circuitry to introspect code in a library to obtain introspection data, the library corresponding to a resource that is to be deployed in a cloud infrastructure environment, generate a model based on the introspection data, the model to be a representation of the resource, cross-reference the model with a resource meta-model, the resource meta-model to map characteristics of the resource represented by the model to an actual state of the resource, and generate a plug-in based on the cross-referenced model.
An example system includes at least one memory; programmable circuitry; and machine-readable instructions to program the programmable circuitry to: select an orchestration integration based on capability tags of a plurality of orchestration integrations and based on constraints of an internet protocol address management (IPAM) integration; and cause execution of a workflow using the orchestration integration, the workflow to cause an IPAM system to allocate an internet protocol address for a resource of a cloud application.
Some embodiments provide a method for a monitoring service that monitors a multi-tenant system with multiple tenant-specific service instances executing in a cloud. For each tenant-specific service instance monitored by the monitoring service, the method collects values for metrics defined in a declarative configuration file for the tenant-specific service instance and compares the collected values to values specified in the declarative configuration file for the metrics to determine whether deployment of other service instances affects operation of the tenant-specific service instance. The metrics in the declarative configuration file are generated based on a service-level agreement for the tenant.
H04L 41/5009 - Determining service level performance parameters or violations of service level contracts, e.g. violations of agreed response time or mean time between failures [MTBF]
H04L 43/55 - Testing of service level quality, e.g. simulating service usage
58.
CREDIT UNITS-BASED ACCESS CONTROL FOR DATA CENTER RESOURCES
An example method may include generating a credit unit defining a value indicating a number of times an operation can be performed on a resource type in a data center. Further, the method may include assigning credits, a credit limit, and the credit unit to a user account. The credit limit may indicate the maximum credits that can be used to perform each operation. Furthermore, the method may include receiving a request to perform an operation on a data center resource from a user associated with the user account. Upon receiving the request, the method may include determining whether the user is permitted to perform the operation on the data center resource based on available credits of the assigned credits, the credit limit, and the credit unit. Further, the method may include executing or denying execution of the operation on the data center resource based on the determination.
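The permit-or-deny check combines three quantities: the per-operation credit-unit cost, the account's per-operation credit limit, and its remaining credits. A sketch with hypothetical names and costs:

```python
from dataclasses import dataclass, field

@dataclass
class UserAccount:
    credits: int          # available credits assigned to the account
    credit_limit: int     # max credits spendable on any single operation
    credit_units: dict = field(default_factory=dict)  # operation -> cost

    def try_execute(self, operation):
        """Permit the operation only if it has a defined credit-unit cost,
        the cost is within the per-operation credit limit, and the account
        has enough remaining credits; deduct credits on success."""
        cost = self.credit_units.get(operation)
        if cost is None or cost > self.credit_limit or cost > self.credits:
            return False  # deny execution
        self.credits -= cost
        return True       # execute the operation
```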
Improved techniques for compressing gradient information that is communicated between clients and a parameter server in a distributed or federated learning training procedure are disclosed. In certain embodiments these techniques enable bi-directional gradient compression, which refers to the compression of both (1) the gradients sent by the participating clients in a given round to the parameter server and (2) the global gradient returned by the parameter server to those clients. In further embodiments, the techniques of the present disclosure eliminate the need for the parameter server to decompress each received gradient as part of computing the global gradient, thereby improving training performance.
Methods, apparatus, systems, and articles of manufacture are disclosed to dynamically monitor and control compute device identities during operations. Disclosed is an apparatus comprising interface circuitry, machine readable instructions, and processor circuitry to at least one of instantiate or execute the machine readable instructions to generate a unique label for a node from a data plane, the unique label to identify the node, perform an operation on the node, the operation to be performed on the node by identifying the node associated with the unique label, and maintain the unique label until the operation on the node is successful.
G06F 15/173 - Interprocessor communication using an interconnection network, e.g. matrix, shuffle, pyramid, star or snowflake
H04L 41/5003 - Managing SLA; Interaction between SLA and QoS
H04L 43/0817 - Monitoring or testing based on specific metrics, e.g. QoS, energy consumption or environmental parameters by checking availability by checking functioning
Some embodiments provide a method for a health monitoring service that monitors a system with a set of services executing across a set of one or more datacenters. For each of multiple services monitored by the health monitoring service, the method (1) contacts an API exposed by the service to provide health monitoring data for the service and (2) receives health monitoring data for the service that provides, for each of multiple aspects of the service, (i) a status and (ii) an explanation for the status in a uniform format used by the APIs of each of the services. At least two different services provide health monitoring data in the uniform format for different groups of aspects of the services.
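The uniform format — per aspect, a status plus an explanation for that status — might look like the following. The field names and example values are hypothetical; the point is only that every service's API returns the same shape regardless of which aspects it reports:

```python
def make_health_report(service, aspects):
    """Build a health report in a uniform shape: for each aspect of the
    service, a status and an explanation for that status."""
    return {
        "service": service,
        "aspects": [
            {"name": name, "status": status, "explanation": why}
            for name, status, why in aspects
        ],
    }
```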
H04L 43/0817 - Monitoring or testing based on specific metrics, e.g. QoS, energy consumption or environmental parameters by checking availability by checking functioning
H04L 61/3015 - Name registration, generation or assignment
62.
RAN APPLICATIONS FOR INTER-CELL INTERFERENCE MITIGATION FOR MASSIVE MIMO IN A RAN
Some embodiments of the invention provide a method for operating a first base station of a radio access network (RAN). At the first base station, the method receives a set of allow and block policies for allocating carrier resources to carrier beams utilized by the first base station for mobile devices within a first region serviced by the first base station, said first region located near a second region serviced by a second base station. At the first base station, the method identifies a first mobile device operating in the first region. At the first base station, the method uses the set of allow and block policies to allocate carrier resources to a carrier beam used to communicate with the first mobile device in the first region.
H04W 72/0453 - Resources in frequency domain, e.g. a carrier in FDMA
H04B 7/06 - Diversity systems; Multi-antenna systems, i.e. transmission or reception using multiple antennas using two or more spaced independent antennas at the transmitting station
Some embodiments of the invention provide a method for detecting port scans in a container orchestration system cluster that includes at least a first machine executing on a host computer. The method identifies a packet stream between the first machine and a second machine operating outside of the host computer. The method determines that the packet stream is potentially part of a port scanning operation based on an assessment that the packet stream includes less than a threshold number of packets during a particular time period. Based on said determination, the method identifies an amount of payload data exchanged between the first and second machines in the packet stream during the particular time period. When the identified amount of payload data is less than or equal to a threshold amount of payload data, the method classifies the stream as a probable port-scanning stream.
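The two-stage classification can be sketched directly: a short flow is only a candidate, and it is classified as a probable port-scanning stream when its exchanged payload is also at or below the payload threshold. Threshold values and names below are hypothetical:

```python
PKT_THRESHOLD = 5        # hypothetical: fewer packets than this is suspicious
PAYLOAD_THRESHOLD = 64   # hypothetical: bytes; scans carry little payload

def classify_stream(packet_count, payload_bytes):
    """Stage 1: flows with at least PKT_THRESHOLD packets in the window are
    treated as normal. Stage 2: for candidate short flows, check the amount
    of payload data exchanged between the two machines."""
    if packet_count >= PKT_THRESHOLD:
        return "normal"
    if payload_bytes <= PAYLOAD_THRESHOLD:
        return "probable-port-scan"
    return "normal"
```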
Some embodiments provide a method for providing health status for a system implemented in a network. The method displays, in a graphical user interface (GUI), representations of health status for multiple different services of the system. Each representation for a respective service shows health status for the respective service over a first particular time period. Upon receiving selection of a particular service, the method displays representations of health status data for each of multiple different aspects of the particular service. Each representation for a respective aspect of the service shows operational status for the respective aspect of the service over a second particular time period.
Some embodiments provide a method for monitoring a system deployed in a cloud. The method deploys a first health monitoring service that monitors a first set of common services of the system deployed in the cloud by directly communicating with each service in the first set of services to determine whether a respective set of aspects of each respective service are properly operational. The first set of common services are accessed by multiple tenants of the system. Within each respective tenant-specific service instance of multiple tenant-specific service instances deployed in the cloud for the tenants, the method deploys a respective health monitoring service that monitors a respective group of microservices of the service instance by directly communicating with each microservice of the tenant-specific service instance to determine whether a respective set of aspects of each respective microservice are properly operational.
Some embodiments of the invention provide a method for mitigating inter-region interference for multiple regions serviced by multiple RAN (Radio Access Network) base stations. The method is performed for each region serviced by each particular RAN base station. The method identifies a set of one or more sub-regions receiving interfering signals from other RAN base stations. The method specifies, for each particular sub-region in the identified set of sub-regions that receives interfering signals from the particular RAN base station and another RAN base station, (1) an allow policy that identifies an allowed first set of carrier resources of the particular RAN base station that are to be allocated to a set of one or more user equipments operating in the particular sub-region, and (2) a block policy that identifies a blocked second set of carrier resources of the other RAN base station that correspond to the first set of resources and that cannot be allocated to the set of user equipments operating in the particular sub-region. The method distributes the specified allow and block policies to the RAN base stations.
Some embodiments of the invention provide a system for mitigating inter-region interference for multiple regions serviced by multiple RAN (Radio Access Network) base stations. The system includes a first RAN application for generating a map that identifies, for each particular region serviced by each particular RAN base station, a set of one or more sub-regions receiving interfering signals from other RAN base stations. The system includes a second RAN application for (1) using the generated map and a set of input received from the plurality of RAN base stations to define, for each sub-region in the set of sub-regions, policies for allocating carrier resources of the particular RAN base station to carrier beams transmitted by the particular RAN base station to the sub-regions with the interfering signals, and (2) providing the defined policies to the RAN base stations for which the policies are defined.
An example method may include obtaining, at a first instance, first compatibility metadata associated with a product from a webserver, wherein the compatibility metadata includes an indication of compatibility or incompatibility between a plurality of versions associated with the product in a first format. Further, the method may include transforming, using a data structure, the compatibility metadata from the first format to a second format and storing the transformed compatibility metadata on a local datastore. The second format may indicate a list of candidate upgrade versions that are compatible with a current version of the product. Furthermore, the method may include rendering the stored compatibility metadata including the list of candidate upgrade versions that are compatible with the current version of the product on a user interface of a client device in response to receiving an upgrade request.
Described herein are systems, methods, and software to manage internet protocol (IP) address allocation for tenants in a computing environment. In one implementation, a logical router associated with a tenant in the computing environment requests a public IP address for a new segment instance from a controller. In response to the request, the controller may select a public IP address from a pool of available IP addresses and update network address translation (NAT) on the logical router to associate the public IP address with a private IP address allocated to the new segment instance.
Some embodiments of the invention provide a method of detecting and remediating anomalies in an SD-WAN implemented by multiple forwarding elements (FEs) located at multiple sites connected by the SD-WAN. The method receives, from the multiple FEs, multiple sets of flow data associated with application traffic that traverses the multiple FEs. The method uses a first set of machine-trained processes to analyze the multiple sets of flow data in order to identify at least one anomaly associated with at least one particular FE in the multiple FEs. The method uses a second set of machine-trained processes to identify at least one remedial action for remediating the identified anomaly. The method implements the identified remedial action by directing an SD-WAN controller deployed in the SD-WAN to implement the identified remedial action.
H04L 41/0604 - Management of faults, events, alarms or notifications using filtering, e.g. reduction of information by using priority, element types, position or time
H04L 41/0654 - Management of faults, events, alarms or notifications using network fault recovery
H04L 41/0816 - Configuration setting characterised by the conditions triggering a change of settings the condition being an adaptation, e.g. in response to network events
71.
USING DIFFERENT EVENT-DISTRIBUTION POLICIES TO STREAM EVENT DATA TO DIFFERENT EVENT CONSUMERS
Some embodiments provide a novel policy-driven method for providing event data to several event consumers. An event server stores in a set of one or more data storages event data published by a set of one or more event data publishers. The event server receives first and second different event-distribution policies from first and second event consumers for first and second streams of event data tuples that the first and second event consumers register with the event server to receive. Each event consumer has multiple consumer instances. The event server uses the first and second event-distribution policies to differently distribute the first and second streams of event data tuples to the consumer instances of the first and second event consumers.
G06F 16/27 - Replication, distribution or synchronisation of data between databases or within a distributed database system; Distributed database system architectures therefor
Techniques for implementing a hardware-based cache controller in, e.g., a tiered memory computer system are provided. In one set of embodiments, the cache controller can flexibly operate in a number of different modes that aid the OS/hypervisor of the computer system in managing and optimizing its use of the system's memory tiers. In another set of embodiments, the cache controller can implement a hardware architecture that enables it to significantly reduce the probability of tag collisions, decouple cache capacity management from cache lookup and allocation, and handle multiple concurrent cache transactions.
A memory hierarchy includes a first memory and a second memory that is at a lower position in the memory hierarchy than the first memory. A method of managing the memory hierarchy includes: observing, over a first period of time, accesses to pages of the first memory; in response to determining that no page in a first group of pages was accessed during the first period of time, moving each page in the first group of pages from the first memory to the second memory; and in response to determining that the number of pages in other groups of pages of the first memory, which were accessed during the first period of time, is less than a threshold number of pages, moving each page in the other groups of pages that was not accessed during the first period of time from the first memory to the second memory.
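The two demotion rules can be sketched over simple access-bit bookkeeping. Names and data structures below are hypothetical stand-ins for the hardware/OS mechanisms the method would actually use:

```python
def pages_to_demote(access_bits, groups, threshold):
    """access_bits: page -> True if accessed during the observation window.
    groups: groups of pages resident in the first (upper) memory tier.
    Rule 1: a group with no accessed page is demoted wholesale.
    Rule 2: if the accessed pages across the remaining groups number fewer
    than `threshold`, the unaccessed pages of those groups are demoted too."""
    demote, accessed_groups = [], []
    for group in groups:
        if not any(access_bits.get(p, False) for p in group):
            demote.extend(group)           # rule 1: wholly idle group
        else:
            accessed_groups.append(group)
    accessed = [p for g in accessed_groups for p in g if access_bits.get(p, False)]
    if len(accessed) < threshold:          # rule 2
        demote.extend(p for g in accessed_groups for p in g
                      if not access_bits.get(p, False))
    return demote
```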
G06F 12/0891 - Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches using clearing, invalidating or resetting means
G06F 12/1009 - Address translation using page tables, e.g. page table structures
74.
PROVISIONING IMAGES TO DEPLOY CONTAINERIZED WORKLOADS IN A VIRTUALIZED ENVIRONMENT
A method for provisioning images to deploy containerized workloads in a virtualized environment can include bringing up a containerized workload in a virtualized computing environment responsive to receiving a request to run the containerized workload in the virtualized computing environment. Bringing up the containerized workload can include creating a VMDK that includes a container image in shared storage of an image registry responsive to authenticating with the image registry, attaching the VMDK to a virtual computing instance (VCI), responsive to receiving a request, made by a container running in the VCI, for a file of the container image in the attached VMDK, retrieving the file from the shared storage, and bringing up the containerized workload using the file.
A system and computer-implemented method for managing lifecycles of network functions in multiple cloud environments uses declarative requests to execute lifecycle management operations for network functions running in the multiple cloud environments, the declarative requests having been transformed from imperative requests at a declarative service. Execution of the lifecycle management operations at the multiple cloud environments is managed from a central network function lifecycle orchestrator based on the declarative requests.
System and method for backing up management components of a software-defined data center (SDDC) managed by a cloud-based service uses backup rules for the SDDC, which are used to configure a backup manager agent in the SDDC. The backup rules are then used by the backup manager agent to determine whether at least one of system logs generated by the management components in the SDDC, which are monitored by the backup manager agent, satisfies the backup rules to initiate a backup operation for at least one of the management components of the SDDC.
G06F 16/14 - File systems; File servers - Details of searching files based on file metadata
G06F 11/14 - Error detection or correction of the data by redundancy in operation, e.g. by using different operation sequences leading to the same result
77.
FULLY ASSOCIATIVE CACHE LOOKUP WITH MULTIPLE CHOICE HASHING
Techniques for implementing a hardware-based cache controller in, e.g., a tiered memory computer system are provided. In one set of embodiments, the cache controller can flexibly operate in a number of different modes that aid the OS/hypervisor of the computer system in managing and optimizing its use of the system's memory tiers. In another set of embodiments, the cache controller can implement a hardware architecture that enables it to significantly reduce the probability of tag collisions, decouple cache capacity management from cache lookup and allocation, and handle multiple concurrent cache transactions.
G06F 12/0864 - Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches using pseudo-associative means, e.g. set-associative or hashing
G06F 12/0895 - Caches characterised by their organisation or structure of parts of caches, e.g. directory or tag array
78.
DECOUPLING CACHE CAPACITY MANAGEMENT FROM CACHE LOOKUP AND ALLOCATION
Techniques for implementing a hardware-based cache controller in, e.g., a tiered memory computer system are provided. In one set of embodiments, the cache controller can flexibly operate in a number of different modes that aid the OS/hypervisor of the computer system in managing and optimizing its use of the system's memory tiers. In another set of embodiments, the cache controller can implement a hardware architecture that enables it to significantly reduce the probability of tag collisions, decouple cache capacity management from cache lookup and allocation, and handle multiple concurrent cache transactions.
G06F 12/0846 - Cache with multiple tag or data arrays being simultaneously accessible
G06F 12/0891 - Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches using clearing, invalidating or resetting means
A method of reducing data transmission between neural networks in a distributed or federated learning environment, includes the steps of: training a quantization neural network by using a plurality of training vectors each having a dimension k, wherein the quantization neural network is configured to, based on said training, output quantization levels for approximating input vectors having the dimension k; after training the quantization neural network, randomly sampling coordinates of a vector having a dimension d, to provide a first set of k coordinates, wherein d is greater than k; inputting the first set of k coordinates to the quantization neural network to determine first quantization levels for approximating the first set of k coordinates; quantizing the vector having the dimension d based on the determined first quantization levels; and using the quantized vector in the distributed or federated learning environment.
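The sampling-and-quantization steps above can be sketched as follows. This is a minimal toy, not the patented method: a hand-rolled stand-in returning evenly spaced levels replaces the trained quantization neural network, and all names are illustrative.

```python
import random

def quantizer_stub(coords):
    """Stand-in for the trained quantization network: given k sampled
    coordinates, return a small set of quantization levels spanning their
    range. (An embodiment would run a trained neural network here.)"""
    lo, hi = min(coords), max(coords)
    n_levels = 4
    if lo == hi:
        return [lo]
    step = (hi - lo) / (n_levels - 1)
    return [lo + i * step for i in range(n_levels)]

def quantize_vector(vec, k, quantizer, rng):
    """Randomly sample k of the d coordinates (k < d), derive quantization
    levels from the sample only, then snap every coordinate of the full
    d-dimensional vector to its nearest level."""
    d = len(vec)
    sample = [vec[i] for i in rng.sample(range(d), k)]
    levels = quantizer(sample)
    return [min(levels, key=lambda q: abs(q - x)) for x in vec]

rng = random.Random(0)
vec = [rng.uniform(-1.0, 1.0) for _ in range(32)]        # d = 32
qvec = quantize_vector(vec, k=8, quantizer=quantizer_stub, rng=rng)
# The quantized vector is cheaper to transmit: every coordinate is one of
# at most 4 levels, so only small level indices need to cross the network.
```

The data reduction comes from deriving the levels from only k of the d coordinates, so the quantizer never has to be trained on, or evaluated over, full d-dimensional inputs.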
A distributed file system operating over a plurality of hosts is built on top of a tree structure having a root node, internal nodes, and leaf nodes. Each host maintains at least one node and non-leaf nodes are allocated buffers according to a workload of the distributed file system. A write operation is performed by inserting write data into one of the nodes of the tree structure having a buffer. A read operation is performed by traversing the tree structure down to a leaf node that stores read target data, collecting updates to the read target data, which are stored in buffers of the traversed nodes, applying the updates to the read target data, and returning the updated read target data as read data.
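The buffered write path and the update-collecting read path described above can be sketched with a minimal in-memory tree. This is a toy with one buffered root and a fixed fanout, standing in for the multi-host tree; all names are illustrative.

```python
class Node:
    def __init__(self, leaf=False):
        self.leaf = leaf
        self.buffer = {}        # pending writes (non-leaf nodes only)
        self.children = {}      # child index -> child node
        self.data = {}          # key -> value (leaf nodes only)

FANOUT = 4

def make_tree():
    root = Node()
    for i in range(FANOUT):
        root.children[i] = Node(leaf=True)
    return root

def write(root, key, value):
    """A write is absorbed into the buffer of a buffered (non-leaf) node;
    it is not pushed down to the leaf immediately."""
    root.buffer[key] = value

def read(root, key):
    """Traverse toward the leaf, collecting buffered updates to the target
    along the way, then apply the newest update on top of the leaf value."""
    updates = []
    node = root
    while not node.leaf:
        if key in node.buffer:
            updates.append(node.buffer[key])
        node = node.children[key % FANOUT]
    value = node.data.get(key)
    return updates[-1] if updates else value

root = make_tree()
root.children[1].data[1] = "old"   # data already flushed to a leaf
write(root, 1, "new")              # absorbed by the root's buffer
```

A read of key 1 now returns "new" even though the leaf still holds "old", which is exactly the update-collection step the abstract describes.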
A method of managing a network file copy (NFC) operation, includes the steps of: transmitting a request to execute a first NFC operation on at least a first data store, wherein the first NFC operation comprises creating a full copy of a file that is stored in the first data store; after transmitting the request to execute the first NFC operation, determining that the first NFC operation should be stopped; and based on determining that the first NFC operation should be stopped: transmitting a request to stop the first NFC operation, selecting a second data store, and transmitting a request to execute a second NFC operation on at least the second data store, wherein the second NFC operation comprises creating a copy of at least a portion of the file.
Some embodiments provide a novel method for providing data regarding events that is stored in a data store. An event server receives a registration to receive data for a first event from a particular consumer. The event server uses an identity associated with the particular consumer to identify a set of one or more partitions of the data store that store data for the first event. The event server provides, to the particular consumer, a stream of data regarding the first event that is stored in the identified partition set for the particular consumer to process.
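The identity-based partition lookup and streaming can be sketched as below. The in-memory partition table and the tenant names are illustrative stand-ins for the event server's data store.

```python
# Toy data store: event -> list of partitions; each partition is tagged
# with the consumer identity whose data it holds (names are illustrative).
PARTITIONS = {
    "orders": [
        {"consumer": "tenant-a", "records": ["a1", "a2"]},
        {"consumer": "tenant-b", "records": ["b1"]},
    ],
}

def register(event, consumer):
    """Use the consumer's identity to identify the set of partitions of
    the data store that hold that consumer's data for the event."""
    return [p for p in PARTITIONS.get(event, []) if p["consumer"] == consumer]

def stream(partitions):
    """Provide a stream of event data out of the identified partition set."""
    for p in partitions:
        yield from p["records"]
```

On registration the server resolves the partition set once; the consumer then receives only records from its own partitions rather than scanning the whole store.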
Disclosed are various embodiments for binding the configuration state of client devices to the blockchain and utilizing the binding for managing compliance. A management agent can send a request to a smart contract hosted by a blockchain network for a configuration state for a computing device, the state including data sovereignty and governance policies of the computing device. The management agent can update the configuration of the computing device based upon the configuration state obtained from the blockchain network.
H04L 9/32 - Arrangements for secret or secure communications; Network security protocols including means for verifying the identity or authority of a user of the system
84.
TRUSTED PLATFORM MODULE ATTESTATION FOR SOFT REBOOTS
TPM attestation for soft reboots is described herein. One embodiment includes instructions to receive a request to perform a soft reboot of a computing device executing an existing OS instance and having a TPM, and perform a soft reboot process on the computing device responsive to receiving the request. The soft reboot process can include loading a new kernel and boot modules associated with a new OS instance into a memory of the computing device, measuring the boot modules into PCRs of the TPM, generating entries in an event log of the TPM corresponding to the boot modules and the new kernel, exporting the event log and a metadata file associated with the existing OS instance to storage, importing the event log from storage to the new kernel, copying the metadata file from storage to a server, and storing a new metadata file created from manifests of the new OS instance at the server.

Systems and methods are included for causing a computing device to boot by retrieving hardware information from a device tree and further properties by utilizing a native access method call identified in the device tree. The access method can allow for getting a property, getting a property length, or setting a property. A table within firmware can identify the method, which then can retrieve the property information from memory. This Device tree Runtime (“DTRT”) mechanism can allow the computing device to retrieve the hardware configuration and act as a power management interface for turning on the correct hardware and hardware properties on the computing device.
Methods, systems, and articles of manufacture are disclosed to provide high availability to a cluster of nodes. Example apparatus disclosed herein are to identify member nodes of a cluster, determine whether an instance of an infrastructure supervisor is operating on any of the nodes, when an infrastructure supervisor is determined to not be operating, instantiate an infrastructure supervisor, and broadcast a discovery message to other nodes.
The current document is directed to an improved communications protocol that encompasses XOR-based forward error correction and that uses dynamic check-packet graphs that provide for efficient recovery of packets for which transmission has failed. During the past 20 years, XOR-based forward-error-correction (“FEC”) communications protocols have been developed to provide reliable multi-packet message transmission with relatively low latencies and computational complexity. These XOR-based FEC communications protocols, however, are associated with a significant amount of redundant-data transmission to achieve reliable multi-packet message transmission. The currently disclosed XOR-based FEC communications protocol employs dynamic, sparse check-packet graphs that provide for receiver-side packet recovery with significantly less redundant-data transmission. Because less redundant data needs to be transmitted in order to guarantee reliable multi-packet message delivery, the currently disclosed XOR-based FEC communications protocols are associated with significantly smaller temporal latencies and provide for greater data-transmission bandwidth.
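The core XOR mechanics behind such a protocol can be sketched as follows: a check packet carries the bytewise XOR of a sparse subset of data packets, and the receiver recovers a single missing member by XORing the check payload with the members it did receive (the basic step of peeling decoding over a check-packet graph). This is a minimal single-check sketch, not the disclosed dynamic-graph protocol.

```python
def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def make_check_packet(packets, members):
    """A check packet covers a sparse subset ('members') of the data
    packets and carries their bytewise XOR."""
    acc = bytes(len(packets[members[0]]))
    for i in members:
        acc = xor_bytes(acc, packets[i])
    return {"members": members, "xor": acc}

def recover(check, received):
    """If exactly one covered packet is missing, XORing the check payload
    with the received members reproduces it; with more than one missing,
    this check alone cannot help (another check in the graph must peel
    first)."""
    missing = [i for i in check["members"] if i not in received]
    if len(missing) != 1:
        return None
    acc = check["xor"]
    for i in check["members"]:
        if i in received:
            acc = xor_bytes(acc, received[i])
    return missing[0], acc

packets = {0: b"hel", 1: b"lo ", 2: b"wor"}
check = make_check_packet(packets, members=[0, 1, 2])
received = {0: packets[0], 2: packets[2]}   # packet 1 lost in transit
```

Sparser member sets make each check cheaper and each loss more likely to be peelable, which is why sparse dynamic graphs reduce the redundant data needed for reliable delivery.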
An example method of beacon probing in a computing system includes: sending, by cross-host beacon probing (CHBP) software executing in a first host of the computing system, a first beacon probe from a first network interface controller (NIC) of the first host to NICs on a same layer 2 domain as the first NIC, the NICs including a second NIC of the first host and cross-host NICs of at least one host other than the first host; receiving, at the CHBP software through the first NIC, acknowledgements (ACKs) to the first beacon probe from the cross-host NICs; and determining, in response to the first beacon probe, connectivity statuses of the first NIC and the second NIC by the CHBP software based on the ACKs and on whether the second NIC receives the first beacon probe.
Methods, apparatus, systems, and articles of manufacture are disclosed to predict power consumption in a server. An example apparatus includes interface circuitry to obtain a power prediction request corresponding to the server; range determiner circuitry to divide a training data set into a first sub-range of the data and a second sub-range of the data, a data point in the training data set being representative of resource utilization of a workload and a corresponding power consumption metric of the workload; model trainer circuitry to train first candidate models based on the first sub-range of the data and second candidate models based on the second sub-range of the data; and prediction selector circuitry to: select a first prediction model from the first candidate models; and select a second prediction model from the second candidate models, outputs of the first and the second prediction models to predict the power consumption of the server.
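The sub-range split, per-range training, and range-routed prediction can be sketched as below. This is a simplified sketch: a closed-form linear fit stands in for the candidate models, with one candidate per sub-range so the selection step is trivial; the split point and data are illustrative.

```python
def fit_linear(points):
    """Closed-form least-squares fit of power = a * util + b."""
    n = len(points)
    sx = sum(x for x, _ in points); sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points); sxy = sum(x * y for x, y in points)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b

def train_per_subrange(data, split):
    """Divide the training set into two utilization sub-ranges and train a
    model on each, so each model only has to capture its own regime."""
    low = [(x, y) for x, y in data if x < split]
    high = [(x, y) for x, y in data if x >= split]
    return {"split": split, "low": fit_linear(low), "high": fit_linear(high)}

def predict_power(models, util):
    """Route the request to the model selected for the matching sub-range."""
    a, b = models["low"] if util < models["split"] else models["high"]
    return a * util + b

# Synthetic data whose low and high utilization regimes have different
# slopes -- the case where a single global model would fit poorly.
data = [(x, 2 * x + 10) for x in range(0, 50, 10)] \
     + [(x, 5 * x - 80) for x in range(50, 100, 10)]
models = train_per_subrange(data, split=50)
```

Splitting by sub-range lets each model match its regime exactly here, whereas one line fitted to all ten points would miss both slopes.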
G06F 30/27 - Design optimisation, verification or simulation using machine learning, e.g. artificial intelligence, neural networks, support vector machines [SVM] or training a model
90.
STUN FREE SNAPSHOTS IN VIRTUAL VOLUME DATASTORES USING DELTA STORAGE STRUCTURE
The disclosure provides a method for virtual volume snapshot creation by a storage array. The method generally includes receiving a request to generate a snapshot of a virtual volume associated with a virtual machine, in response to receiving the request, preparing a file system of the storage array to generate the snapshot, wherein preparing the file system comprises creating a delta storage structure to receive write input/output (I/O) requests directed for the virtual volume when generating the snapshot of the virtual volume, deactivating the virtual volume, activating the delta storage structure, generating the snapshot of the virtual volume, and during the generation of the snapshot of the virtual volume: receiving a write I/O directed for the virtual volume and committing the write I/O in the delta storage structure.
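The delta-redirection that makes the snapshot stun-free can be sketched as below. This in-memory toy uses a dict of blocks in place of the storage array's file system; all names are illustrative.

```python
class VirtualVolume:
    def __init__(self, blocks):
        self.blocks = dict(blocks)
        self.active = True
        self.delta = None   # delta storage structure, live during snapshot

def begin_snapshot(vvol):
    """Prepare the file system: create the delta structure to absorb
    writes, deactivate the volume, activate the delta, and take a stable
    copy as the snapshot."""
    vvol.delta = {}
    vvol.active = False
    return dict(vvol.blocks)

def write(vvol, addr, data):
    """A write arriving while the snapshot is being generated commits to
    the delta structure instead of blocking (no VM stun needed)."""
    if vvol.delta is not None:
        vvol.delta[addr] = data
    else:
        vvol.blocks[addr] = data

def end_snapshot(vvol):
    """Reactivate the volume and fold the accumulated delta back into it."""
    vvol.blocks.update(vvol.delta)
    vvol.delta = None
    vvol.active = True

vvol = VirtualVolume({0: "blk-a", 1: "blk-b"})
snap = begin_snapshot(vvol)
write(vvol, 0, "blk-a2")     # arrives mid-snapshot: committed to the delta
end_snapshot(vvol)
```

The snapshot keeps the pre-write contents while the live volume ends up with the new write, so the VM never has to pause for snapshot consistency.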
The disclosure provides a method for virtual volume (vvol) recovery. The method generally includes determining to initiate recovery of a compromised vvol associated with a virtual machine (VM), transmitting a query requesting a list of snapshots previously captured for the compromised vvol, receiving the list of the snapshots previously captured for the compromised vvol and information about one or more snapshots in the list of snapshots, wherein for each of the snapshots, the information comprises an indication of at least one change between the snapshot and a previous snapshot, determining a recovery point snapshot among snapshots in the list of the snapshots based, at least in part, on the information about the one or more snapshots, creating a clone of the recovery point snapshot to generate a recovered virtual volume, creating a virtual disk from the recovered virtual volume, and attaching the virtual disk to the VM.
G06F 11/14 - Error detection or correction of the data by redundancy in operation, e.g. by using different operation sequences leading to the same result
G06F 9/455 - Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
G06F 21/78 - Protecting specific internal or peripheral components, in which the protection of a component leads to protection of the entire computer to assure secure storage of data
92.
ONLINE FORMAT CONVERSION OF VIRTUAL DISK FROM REDO-LOG SNAPSHOT FORMAT TO SINGLE-CONTAINER SNAPSHOT FORMAT
System and method for converting a storage object in a redo-log snapshot format to a single-container snapshot format in a distributed storage system uses a temporary snapshot object, which is created by taking a snapshot of the storage object, and an anchor object, which points to a root object of the storage object. For each object chain of the storage object, each selected object is processed for format conversion. For each selected object, difference data between the selected object and a parent object of the selected object is written to the anchor object, a child snapshot of the anchor object is created in the single-container snapshot format, and the anchor object is updated to point to the selected object. The data of the running point object of the storage object is then copied to the anchor object, and each processed object and the temporary snapshot object are removed.
Systems and methods for dynamic migration between Receive Side Scaling (RSS) engine states include monitoring a traffic load of a first shared RSS engine of a physical network interface card (PNIC) of a host machine, the first shared RSS engine being shared among a first plurality of virtual machines (VMs) running on the host machine, determining the traffic load of the first shared RSS engine exceeds a threshold, and, in response to determining that the traffic load of the first shared RSS engine exceeds the threshold, migrating a first VM of the first plurality of VMs to either a dedicated RSS engine of the PNIC or to a second shared RSS engine of the PNIC.
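The threshold-triggered migration decision can be sketched as below. The engine names, VM names, and the pick-the-first-VM rule are illustrative simplifications of the monitoring and placement logic.

```python
THRESHOLD = 0.8   # illustrative traffic-load threshold

def plan_migration(shared_engines, dedicated_free, loads):
    """If a shared RSS engine's traffic load exceeds the threshold, pick
    one of its VMs and move it to a free dedicated RSS engine if one is
    available, otherwise to the least-loaded other shared engine."""
    for eng, vms in shared_engines.items():
        if loads[eng] > THRESHOLD and vms:
            vm = vms[0]
            if dedicated_free:
                return vm, dedicated_free[0]
            others = [e for e in shared_engines if e != eng]
            return vm, min(others, key=lambda e: loads[e])
    return None   # no engine over threshold: nothing to migrate

shared = {"rss0": ["vm1", "vm2"], "rss1": ["vm3"]}
loads = {"rss0": 0.9, "rss1": 0.3}
```

With rss0 over threshold, the planner prefers a free dedicated engine and falls back to the lightly loaded shared engine rss1 when none exists.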
Systems and methods for unified virtual infrastructure and containerized workload deployment via a deployment platform include receiving, at the deployment platform, a definition of the virtual infrastructure and the containerized workload, sending, by the deployment platform, first information comprising the definition of the virtual infrastructure to an infrastructure manager configured to deploy the virtual infrastructure including a container orchestrator, and sending, by the deployment platform, second information comprising the definition of the containerized workload to the container orchestrator configured to deploy the containerized workload on the deployed virtual infrastructure.
In one set of embodiments, a computer system can receive a request to insert or delete a key into or from a plurality of keys maintained by a dynamic search data structure, where the dynamic search data structure is implemented using a balanced binary search tree (BBST) comprising a plurality of nodes corresponding to the plurality of keys, where a first subset of the plurality of nodes are stored in a first memory tier of the computer system, and where a second subset of the plurality of nodes are stored in a second memory tier of the computer system. The computer system can further execute the request to insert or delete the key, where the executing results in a change in height of at least one node in the plurality of nodes. In response to the executing, the computer system can move one or more nodes in the plurality of nodes between the first and second memory tiers, the moving causing a threshold number of nodes of highest height in the BBST to be stored in the first memory tier.
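The height-based tier placement can be sketched as below. For brevity a plain (unbalanced) BST stands in for the BBST, rebalancing is omitted, and the tiers are modeled as two sets of keys; all names are illustrative.

```python
class Node:
    def __init__(self, key):
        self.key, self.left, self.right = key, None, None

def insert(root, key):
    if root is None:
        return Node(key)
    if key < root.key:
        root.left = insert(root.left, key)
    elif key > root.key:
        root.right = insert(root.right, key)
    return root

def heights(root, out=None):
    """Height of a node = length of the longest path down to a leaf."""
    if out is None:
        out = {}
    if root is None:
        return -1, out
    hl, _ = heights(root.left, out)
    hr, _ = heights(root.right, out)
    out[root.key] = 1 + max(hl, hr)
    return out[root.key], out

def retier(root, fast_capacity):
    """After an insert/delete changes heights, keep the fast_capacity
    nodes of greatest height (nearest the root, touched by the most
    lookups) in the fast tier; everything else goes to the slow tier."""
    _, h = heights(root)
    ranked = sorted(h, key=h.get, reverse=True)
    return set(ranked[:fast_capacity]), set(ranked[fast_capacity:])

root = None
for k in (4, 2, 6, 1, 3, 5, 7):
    root = insert(root, k)
fast, slow = retier(root, fast_capacity=3)
```

Every lookup passes through the root and its neighborhood, so pinning the highest nodes in the fast tier puts the hottest part of every search path in fast memory.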
In one set of embodiments, a computer system having a first memory tier and a second memory tier can receive a request to insert or delete a key into or from a plurality of keys maintained by a dynamic search data structure, where the first memory tier is faster than the second memory tier, where the dynamic search data structure is implemented using a treap comprising a plurality of nodes corresponding to the plurality of keys, and where each node in the plurality of nodes is identified by a key in the plurality of keys and a random priority. The computer system can then execute the request in a manner that causes a threshold number of nodes of highest priority in the treap to be stored in the first memory tier.
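The priority-based placement rule can be sketched as below. The rotation machinery of a full treap is omitted: in a treap, higher-priority nodes sit nearer the root, so the key-to-priority map alone is enough to express which nodes belong in the fast tier. All names are illustrative.

```python
import random

def insert_key(priorities, key, rng):
    """On insert, assign the key a random priority, as a treap does."""
    priorities[key] = rng.random()

def place_tiers(priorities, fast_capacity):
    """Keep the fast_capacity nodes of highest priority (the top of the
    treap, visited by the most searches) in the fast tier; the rest live
    in the slow tier."""
    ranked = sorted(priorities, key=priorities.get, reverse=True)
    return set(ranked[:fast_capacity]), set(ranked[fast_capacity:])

rng = random.Random(1)
prio = {}
for k in range(10):
    insert_key(prio, k, rng)
fast, slow = place_tiers(prio, fast_capacity=3)
```

Because priorities are fixed at insert time, the fast-tier membership is known without measuring heights, which is the practical difference from the BBST variant above.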
The present disclosure relates to moving workloads between cloud providers. A traceability application can receive a request to register a workload from a first virtualization service associated with a first cloud computing environment. To register the workload, the traceability application can generate an identification token in a distributed data store and an asset record corresponding to the identification token. The identification token can uniquely identify the workload among a plurality of workloads associated with a plurality of cloud computing environments. The traceability application can detect a migration of the workload from the first virtualization service associated with the first cloud computing environment to a second virtualization service associated with a second cloud computing environment. The traceability application can cause a transfer of an ownership of the identification token from the first virtualization service to the second virtualization service. The traceability application can update the asset record to reflect the transfer of the ownership of the identification token from the first virtualization service to the second virtualization service.
The technology disclosed herein enables filtering of network address advertisements from a tenant gateway of a software-defined data center. In a particular example, a control plane for a software-defined data center performs a method including identifying a tenant network address space for use by a tenant of the software-defined data center. The method further includes generating a filter rule for a tenant gateway between the tenant network address space and a provider gateway outside of the tenant network address space. Also, the method includes implementing the filter rule in the tenant gateway, wherein the filter rule prevents the tenant gateway from advertising network addresses outside of the tenant network address space.
This disclosure is directed to automated computer-implemented methods and systems for prioritizing recommended suboptimal resources of a data center. Methods and systems described herein save time and increase the accuracy of identifying actual suboptimal resources and executing remedial measures to correct the suboptimal resources.
G06F 9/48 - Program initiating; Program switching, e.g. by interrupt
G06F 18/2413 - Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on distances to training or reference patterns
Disclosed are various embodiments for determining whether to initiate a remote device wipe in a mobile device management context. In one example, a system comprises a computing device configured to identify a device wipe condition for a client device and determine a wipe policy associated with the device wipe condition. A timer for a time delay is initiated for a device wipe action of the client device. A wipe instruction is transmitted to execute the device wipe action based on an expiration of the time delay for the device wipe action.
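The delayed-wipe flow can be sketched as below. The condition names, the per-condition delays, and the cancellation callback are illustrative stand-ins for the management console's wipe policies.

```python
import time

# Illustrative wipe policies: seconds to wait before the wipe instruction
# is transmitted (real policies would come from the management service).
WIPE_POLICIES = {"device-lost": 0.2, "compromised": 0.0}

def handle_wipe_condition(condition, cancelled, clock=time.monotonic):
    """Determine the wipe policy for the condition, start its time delay,
    and transmit the wipe instruction only if the delay expires without
    the condition having been cleared in the meantime."""
    deadline = clock() + WIPE_POLICIES[condition]
    while clock() < deadline:
        if cancelled():                  # e.g. the device checked back in
            return "wipe-cancelled"
        time.sleep(0.01)
    return "wipe-instruction-sent"
```

The delay gives a transiently unreachable device a window to clear the condition before a destructive wipe is issued, while a zero-delay policy wipes immediately.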