A cloud computing infrastructure hosts a web service with customer accounts. In a customer account, files of the customer account are listed in an index. Files indicated in the index are arranged in groups, with files in each group being scanned using scanning serverless functions in the customer account. The files in the customer account include a compressed tar archive of a software container. Member files of a compressed tar archive in a customer account are randomly-accessed by way of locators that indicate a tar offset, a logical offset, and a decompressor state for a corresponding member file. A member file is accessed by seeking to the tar offset in the compressed tar archive, restoring a decompressor to the decompressor state, decompressing the compressed tar archive using the decompressor, and moving to the logical offset in the decompressed data.
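The seek-restore-decompress flow above can be sketched with Python's `zlib`, whose decompression objects support `copy()` snapshots of decompressor state. The checkpoint interval, helper names, and use of gzip framing are illustrative assumptions, not the described implementation:

```python
import gzip
import zlib

def build_checkpoints(compressed: bytes, interval: int = 64):
    """Walk a gzip stream, snapshotting the decompressor state every
    `interval` compressed bytes along with the raw and logical offsets."""
    decomp = zlib.decompressobj(wbits=31)  # wbits=31 selects gzip framing
    checkpoints = []
    logical = 0
    for raw_off in range(0, len(compressed), interval):
        checkpoints.append((raw_off, logical, decomp.copy()))
        logical += len(decomp.decompress(compressed[raw_off:raw_off + interval]))
    return checkpoints

def read_at(compressed: bytes, checkpoints, logical_off: int, size: int) -> bytes:
    """Restore the nearest checkpoint at or before `logical_off`,
    decompress forward, and slice out `size` bytes."""
    raw_off, logical, snapshot = max(
        (c for c in checkpoints if c[1] <= logical_off), key=lambda c: c[1])
    decomp = snapshot.copy()  # keep the stored snapshot reusable
    data = decomp.decompress(compressed[raw_off:]) + decomp.flush()
    skip = logical_off - logical
    return data[skip:skip + size]

payload = bytes(range(256)) * 40          # 10,240 bytes of sample data
blob = gzip.compress(payload)
index = build_checkpoints(blob)
assert read_at(blob, index, 1000, 16) == payload[1000:1016]
```

A production index for a tar archive would additionally record, per member file, the tar offset and the logical offset of the member within the decompressed stream, as the abstract describes.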
Anomalous activities on a computer network are detected from audit or sign-in activity information of a target entity as recorded in an audit or sign-in log. A baseline graph of the target entity is generated using information on activities of the target entity during a collection period. A predict graph of the target entity is generated with information on activities of the target entity during another collection period, which follows and is shorter than the earlier collection period. A residual graph that indicates nodes or edges that are in the predict graph but not in the baseline graph is generated. The residual graph is scored and the score is compared to a threshold to determine whether the target entity has performed an anomalous activity.
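The baseline/predict/residual construction lends itself to a set-difference sketch. Representing graphs as edge sets and scoring the residual by edge count are simplifying assumptions here; the actual scoring may weight nodes and edges differently:

```python
def residual_graph(baseline_edges: set, predict_edges: set) -> set:
    """Edges seen in the (shorter, more recent) predict window but
    absent from the baseline window."""
    return predict_edges - baseline_edges

def is_anomalous(baseline_edges, predict_edges, threshold=2) -> bool:
    # Score the residual graph by its edge count; a weighted scheme
    # (e.g. per-operation weights) would slot in here instead.
    return len(residual_graph(baseline_edges, predict_edges)) >= threshold

baseline = {("alice", "fileserver"), ("alice", "mail")}
predict = {("alice", "mail"), ("alice", "domain-controller"), ("alice", "backup")}
assert residual_graph(baseline, predict) == {("alice", "domain-controller"),
                                             ("alice", "backup")}
assert is_anomalous(baseline, predict)
```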
Systems and methods for Internet access control are presented. A third-party application is hosted by a third-party server on the Internet. The third-party application has third-party data of a user. An Internet access control device detects an Internet access by the user to a target server on the Internet. The Internet access control device allows or blocks the Internet access depending on whether the Internet access is permitted or prohibited based on the third-party data.
Behavior report generation monitors the behavior of unknown sample files executing in a sandbox. Behaviors are encoded and feature vectors created based upon a q-gram for each sample. Prototypes extraction includes extracting prototypes from the training set of feature vectors using a clustering algorithm. Once prototypes are identified in this training process, the prototypes with unknown labels are reviewed by domain experts who add a label to each prototype. A K-Nearest Neighbor Graph is used to merge prototypes into fewer prototypes without using a fixed distance threshold and then assigning a malware family name to each remaining prototype. An input unknown sample can be classified using the remaining prototypes and using a fixed distance. For the case that no such prototype is close enough, the behavior report of a sample is rejected and tagged as an unknown sample or that of an emerging malware family.
G06F 18/23213 - Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions with fixed number of clusters, e.g. K-means clustering
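The q-gram feature encoding described in the behavior-report abstract above can be sketched as a sliding window over an encoded behavior sequence; the event names and window size `q` are hypothetical:

```python
from collections import Counter

def qgram_vector(behavior_events, q=3):
    """Count overlapping q-grams over an encoded behavior sequence,
    yielding a sparse feature vector keyed by q-gram."""
    grams = [tuple(behavior_events[i:i + q])
             for i in range(len(behavior_events) - q + 1)]
    return Counter(grams)

events = ["open", "write", "connect", "write", "connect", "write"]
vec = qgram_vector(events, q=2)
assert vec[("write", "connect")] == 2
```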
5.
Automated mitigation of cyber threats using a semantic cybersecurity database
Systems and methods are presented for mitigating cyber threats. Cybersecurity-related data are stored in a semantic cybersecurity database. A user interface converts a user input to a command utterance. A command node that corresponds to the command utterance is identified in the cybersecurity database. The command node is resolved to one or more action nodes that are connected to the command node, and each action node is resolved to one or more parameter nodes that are connected to the action node. The command node has a command that implements actions indicated in the action nodes. Each action can have one or more required parameters indicated in the parameter nodes. The values of the required parameters are obtained from the command utterance, prompted from the user, or obtained from the cybersecurity database. Actions with their parameter values are executed to mitigate a cyber threat in accordance with the user input.
A method for preventing spam comments from populating a web site is provided. The method includes intercepting an HTTP (Hypertext Transfer Protocol) response, which includes a web page with a form for enabling a client's comments to be published on the web site. The method also includes modifying the web page with the form to create a modified web page with a randomized form. The modifying includes randomly adding a set of randomized variable names to the web page with the form. The set of randomized variable names is a set of randomly generated character strings. The method further includes forwarding the modified web page with the randomized form to the client. The method additionally includes adding the set of randomized variable names to a form database, which is configured for storing data about the modified web page with the randomized form.
G06F 16/958 - Organisation or management of web site content, e.g. publishing, maintaining pages or automatic linking
G06F 21/54 - Monitoring users, programs or devices to maintain the integrity of platforms, e.g. of processors, firmware or operating systems during program execution, e.g. stack integrity, buffer overflow or preventing unwanted data erasure by adding security routines or objects to programs
G06F 21/36 - User authentication by graphic or iconic representation
H04L 51/52 - User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail for supporting social networking services
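The form randomization step in the spam-prevention method above can be sketched as a mapping from original field names to random aliases; the alias length and alphabet are illustrative, and the mapping is what a form database would persist:

```python
import secrets
import string

def randomize_form_fields(field_names, length=12):
    """Map each original form field name to a random alias; the mapping
    would be stored in the form database keyed by the modified page."""
    alphabet = string.ascii_letters + string.digits
    return {name: "".join(secrets.choice(alphabet) for _ in range(length))
            for name in field_names}

mapping = randomize_form_fields(["name", "email", "comment"])
assert len(set(mapping.values())) == 3
assert all(len(alias) == 12 for alias in mapping.values())
```

On submission, the server looks the aliases up in the form database to recover the original field names; bots replaying the original names fail the lookup.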
7.
Systems and methods for preventing information leakage
A system for preventing information leakage due to access by an application to a file is provided. The system includes an application identification module configured to obtain data associated with the application. The system also includes an association table containing file-type data and trusted-application data. In addition, the system includes an access control module configured to consult the application identification data and the association table. Based on these, the system determines whether to deny content access by the application to the content saved in the file.
Features of sample files that are known to be normal are extracted by random projection. The random projection values of the sample files are used as training data to generate one or more anomaly detection models. Features of a target file being inspected are extracted by generating a random projection value of the target file. The random projection value of the target file is input to an anomaly detection model to determine whether or not the target file has features that are novel relative to the sample files. The target file is declared to be an outlier when an anomaly detection model generates an inference that indicates that the target file has novel features.
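The random projection step above can be sketched with a seeded Gaussian matrix so every file is projected consistently; the output dimension, seed, and pure-Python matrix are illustrative assumptions (a real pipeline would feed the projection into a trained anomaly detection model):

```python
import random

def random_projection(features, out_dim=8, seed=42):
    """Project a feature vector to `out_dim` values using a seeded
    Gaussian random matrix, so the same file always projects the same."""
    rng = random.Random(seed)
    matrix = [[rng.gauss(0, 1) for _ in range(len(features))]
              for _ in range(out_dim)]
    return [sum(w * x for w, x in zip(row, features)) for row in matrix]

sample = [0.0, 1.0, 0.5, 0.25]
proj_a = random_projection(sample)
proj_b = random_projection(sample)
assert proj_a == proj_b          # same seed, same projection
assert len(proj_a) == 8
```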
Methods and apparatus for detecting, in a gateway device configured for facilitating communication between an intranet and an external network, the existence of an unauthorized wireless access point in the intranet. The method includes determining whether a packet received at the gateway originates from one of the wireless devices. If a packet received at the gateway originates from a wireless device, the method includes determining whether a source MAC address associated with the packet is one of the set of known MAC addresses. If not, the method further includes taking a remedial action to prevent the unauthorized wireless access point from accessing one of the intranet and the external network.
A multiclass classifier generates a probability vector for individual data units of an input data stream. The probability vector has prediction probability values for classes that the multiclass classifier has been trained to detect. A class with the highest prediction probability value among the classes in a probability vector is selected as the predicted class. A confidence score is calculated based on the prediction probability value of the class. Confidence scores of the class are accumulated within a sliding window. The class is declared to be the detected class of the input data stream when the accumulated value of the class meets an accumulator threshold. A security policy for an application program that is mapped to the class is enforced against the input data stream.
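The per-class accumulation over a sliding window described above can be sketched as follows; using the raw prediction probability as the confidence score, and the window size and threshold values, are simplifying assumptions:

```python
from collections import deque, defaultdict

class ClassAccumulator:
    """Accumulate per-class confidence scores over a sliding window and
    declare a class once its accumulated score meets a threshold."""
    def __init__(self, window=5, threshold=3.0):
        self.window = deque(maxlen=window)
        self.threshold = threshold

    def update(self, probabilities):
        # pick the class with the highest prediction probability
        top = max(probabilities, key=probabilities.get)
        self.window.append((top, probabilities[top]))
        totals = defaultdict(float)
        for cls, score in self.window:
            totals[cls] += score
        winner, total = max(totals.items(), key=lambda kv: kv[1])
        return winner if total >= self.threshold else None

acc = ClassAccumulator(window=5, threshold=2.5)
detected = None
for _ in range(4):
    detected = acc.update({"http": 0.9, "dns": 0.1})
assert detected == "http"
```

Once a class is declared, the security policy mapped to it would be enforced against the stream.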
A computer network includes a camera node, a network access node, a verification node, and a display node. Video content recorded by a camera at the camera node is transmitted to the display node and to the verification node for verification. The video content is verified at the display node and at the verification node. Recording metadata of the video content is stored in a distributed ledger and retrieved by the display node to verify the video content. The verification node receives, from the network access node, verification data for verifying the video content.
G06F 16/787 - Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using geographical or spatial information, e.g. location
G06F 16/783 - Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
A scam detection system includes a user computer that runs a security application and a backend system that runs a scam detector. An email is received at the user computer. The security application extracts and forwards a content of the email, which includes a body of the email, to the backend system. The email body of the email is anonymized by removing personally identifiable information from the email body. A hash of the anonymized email body is generated and compared against hashes of a whitelist and a blacklist. The anonymized email body is classified. A segment of text of the anonymized email body is identified and provided to the user computer when the anonymized email body is classified as scam.
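The anonymize-then-hash step above can be sketched with simple PII patterns and SHA-256; the regexes are illustrative, and an exact hash is used for simplicity where a locality-sensitive hash would tolerate more variation between scam templates:

```python
import hashlib
import re

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
PHONE = re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b")

def anonymize(body: str) -> str:
    """Strip obvious PII (email addresses, phone numbers) before hashing."""
    return PHONE.sub("<phone>", EMAIL.sub("<email>", body))

def body_hash(body: str) -> str:
    return hashlib.sha256(anonymize(body).encode("utf-8")).hexdigest()

a = "Call 555-123-4567 or reply to win@scam.example to claim your prize"
b = "Call 555-999-0000 or reply to other@scam.example to claim your prize"
assert body_hash(a) == body_hash(b)   # same template hashes alike once PII is removed
```

The resulting hash is what would be compared against the whitelist and blacklist before classification.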
A system for stateful detection of cyberattacks includes an endpoint computer and a backend computer system. The endpoint computer monitors for cyberattacks and sends out queries for detected security events. The backend computer system stores observation data that are included in the queries. The backend computer system combines current observation data from a current query, relevant observation data from previous queries, and relevant cybersecurity data. The combined data are evaluated for cyberattacks.
A login authentication process to access a computer service includes displaying a virtual keyboard on a display screen of a computer. A user enters a password by clicking on the virtual keyboard. The manner the user clicked on the virtual keyboard to enter the password is compared to the manner an authorized user of the computer service clicked on the virtual keyboard to enter an authorized password during a learning phase. The login authentication is deemed to be a success when the password matches the authorized password, and the manner the user clicked on the virtual keyboard to enter the password matches the manner the authorized user clicked on the virtual keyboard to enter the authorized password.
G06F 3/048 - Interaction techniques based on graphical user interfaces [GUI]
G06F 21/36 - User authentication by graphic or iconic representation
G06F 3/0354 - Pointing devices displaced or positioned by the user; Accessories therefor with detection of 2D relative movements between the device, or an operating part thereof, and a plane or surface, e.g. 2D mice, trackballs, pens or pucks
G06F 3/0482 - Interaction with lists of selectable items, e.g. menus
G06F 3/04886 - Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures by partitioning the display area of the touch-screen or the surface of the digitising tablet into independently controllable areas, e.g. virtual keyboards or menus
G06F 3/04815 - Interaction with a metaphor-based environment or interaction object displayed as three-dimensional, e.g. changing the user viewpoint with respect to the environment or object
A system for facilitating Internet security for devices on a local area network (LAN) is disclosed. The LAN may connect to a rating server through the Internet and may include at least an anti-malware application for detecting malware. The system may include a black list implemented on the LAN for storing identifiers of a set of forbidden sites. The devices may be prevented from accessing content provided by each of the forbidden sites. The system may also include a profiler implemented on the LAN for updating the black list utilizing a set of result data. The result data may include scan result data and rating result data. The scan result data may pertain to results of scans performed by the anti-malware application; the rating result data may pertain to results of rating performed by the rating server.
A target binary file is clustered by reducing the target binary file to its architecture-agnostic functions, which are converted into an input string. The target digest of the input string is calculated and compared to digests of malicious binary files. A cluster having digests of malicious binary files that are similar to the target digest is identified. In response to identifying the cluster, the target binary file is detected to be malicious and of the same malware family as the malicious binary files of the cluster.
An endpoint system receives a target file for evaluation for malicious scripts. The original content of the target file is normalized and stored in a normalized buffer. Tokens in the normalized buffer are translated to symbols, which are stored in a tokenized buffer. Strings in the normalized buffer are stored in a string buffer. Tokens that are indicative of syntactical structure of the normalized content are extracted from the normalized buffer and stored in a structure buffer. The content of the tokenized buffer and counts of tokens represented as symbols in the tokenized buffer are compared against heuristic rules indicative of malicious scripts. The contents of the tokenized buffer and string buffer are compared against signatures of malicious scripts. The contents of the tokenized buffer, string buffer, and structure buffer are input to a machine learning model that has been trained to detect malicious scripts.
A file is stored in a public cloud storage. A serverless computing platform receives an event notification that the file has been stored and, in response, creates an instance of an ephemeral environment wherein a security module is executed. The security module creates a memory-mapped space with memory locations that are mapped to the entire content of the file but does not allocate memory for all of the memory locations. Instead, the security module retrieves sections of the file from the public cloud storage as these sections are accessed in their designated memory locations in accordance with the memory mapping, allocates memory for the retrieved sections, stores the retrieved sections in their designated memory locations, and scans the retrieved sections in their designated memory locations for malicious code. The security module continues scanning the file in sections until relevant sections of the file have been scanned.
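The on-demand section fetching above can be sketched with a lazy section cache; the section size, the `fetch_section` callable standing in for cloud-storage range reads, and the class name are all hypothetical:

```python
class LazyFileScanner:
    """Fetch and scan a remote object in fixed-size sections on demand,
    allocating memory only for sections actually touched."""
    SECTION = 4096

    def __init__(self, fetch_section, total_size):
        self.fetch_section = fetch_section   # callable: section index -> bytes
        self.total_size = total_size
        self.sections = {}                   # section index -> bytes (lazy cache)

    def read(self, offset, size):
        out = bytearray()
        first = offset // self.SECTION
        last = (offset + size - 1) // self.SECTION
        for idx in range(first, last + 1):
            if idx not in self.sections:     # allocate only on first access
                self.sections[idx] = self.fetch_section(idx)
            out += self.sections[idx]
        start = offset - first * self.SECTION
        return bytes(out[start:start + size])

blob = bytes(range(256)) * 64                # stands in for the cloud object
scanner = LazyFileScanner(
    lambda i: blob[i * 4096:(i + 1) * 4096], len(blob))
assert scanner.read(5000, 8) == blob[5000:5008]
assert len(scanner.sections) == 1            # only the touched section was fetched
```

A scanner built this way can stop once the sections relevant to its signatures have been read, as the abstract describes.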
A machine learning system includes multiple machine learning models. A target object, such as a file, is scanned for machine learning features. Context information of the target object, such as the type of the object and how the object was received in a computer, is employed to select a machine learning model among the multiple machine learning models. The machine learning model is also selected based on threat intelligence, such as census information of the target object. The selected machine learning model makes a prediction using machine learning features extracted from the target object. The target object is allowed or blocked depending on whether or not the prediction indicates that the target object is malicious.
A virtual keyboard rendered on a separate computing device is independent of the user's computer. A virtual keyboard displayed on the user's computer screen is blank without any alphanumeric characters. Another virtual keyboard displayed on the user's independent computing device has a randomly generated layout of alphanumeric characters on a keypad. The user enters a password by pressing the blank keys of the blank keyboard on his computer screen with reference to the other virtual keyboard. The position sequence of these entered keys is sent to an application on a remote server computer. The remote server computer shares a virtual keyboard having the randomly generated layout of characters with the independent computing device via an online or off-line technique. When online, an encoded image of the encrypted layout is sent to the client computer and displayed for scanning by the device. When off-line, both the application and the device generate the same random key sequence by using the same pseudo random number generator and the same seed value.
G06F 3/04886 - Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures by partitioning the display area of the touch-screen or the surface of the digitising tablet into independently controllable areas, e.g. virtual keyboards or menus
H04L 9/32 - Arrangements for secret or secure communications; Network security protocols including means for verifying the identity or authority of a user of the system
G06K 7/14 - Methods or arrangements for sensing record carriers by corpuscular radiation using light without selection of wavelength, e.g. sensing reflected white light
G06F 21/42 - User authentication using separate channels for security data
G06F 21/34 - User authentication involving the use of external additional devices, e.g. dongles or smart cards
21.
Decryption of encrypted network traffic using an inline network traffic monitor
An inline network traffic monitor is deployed inline between two endpoints of a computer network. A particular endpoint of the two endpoints works in conjunction with the inline network traffic monitor to decrypt encrypted network traffic transmitted between the two endpoints. A series of Change Cipher Spec (CCS) messages is exchanged between the inline network traffic monitor and the particular endpoint during a Transport Layer Security (TLS) handshake between the two endpoints. The series of CCS messages allows the particular endpoint and the inline network traffic monitor to detect each other on the computer network. After detecting each other's presence, the particular endpoint sends the inline network traffic monitor a session key that is used by the two endpoints to encrypt their network traffic. The inline network traffic monitor uses the session key to decrypt encrypted data of the network traffic transmitted between the two endpoints.
An attachment to an e-mail message received at an e-mail gateway is scanned by a scan server and then is converted into an HTML file. The HTML file includes preview data of the attachment (minus any macro scripts), the entire original data of the attachment, scan functionality enabling a user to send the attachment back to a scan server for a second scan, or extract functionality enabling a user to extract the original attachment data for saving or opening in an application. The recipient is able to open or save the attachment directly if he or she believes it comes from a trusted sender. If the attachment seems suspicious, the recipient previews the attachment first before performing a scan, opening the attachment or deleting it. The recipient performs a scan of the attachment by clicking a “scan” button to send the attachment to a backend server for a second scan where an updated virus pattern file may be available to detect any zero-day malware.
A pause command is sent to a Subscriber Identity Module (SIM) card of a cellular device in response to detecting a cyberattack against the cellular device on the cellular network. To mitigate the cyberattack, the SIM card temporarily disconnects the cellular device from the cellular network for a pause time. The SIM card prohibits the cellular device from connecting to the cellular network during the pause time and automatically allows the cellular device to reconnect to the cellular network after the pause time.
A locality-sensitive hash value is calculated for a suspect file in an endpoint computer. A similarity score is calculated for the suspect hash value by comparing it to similarly-calculated hash values in a cluster of known benign files. A suspiciousness score is calculated for the suspect hash value based upon similar matches in a cluster of benign files and a cluster of known malicious files. The similarity score and the suspiciousness score are combined to determine whether or not the suspect file is malicious. Feature extraction and a set of features for the suspect file may be used instead of the hash value; the clusters would then contain sets of features rather than hash values. The clusters may reside in a cloud service database. The suspiciousness score is calculated using a modified Tarantula technique. Matching of locality-sensitive hashes may be performed by traversing tree structures of hash values.
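A Tarantula-style score, adapted from fault localization to cluster matching, can be sketched as the normalized fraction of malicious-cluster matches relative to benign-cluster matches; the exact modification used is not specified in the abstract, so this is an illustrative variant:

```python
def suspiciousness(malicious_matches, benign_matches,
                   total_malicious, total_benign):
    """Tarantula-style score: the match rate against the malicious
    cluster, normalized by the combined malicious and benign match rates."""
    mal = malicious_matches / total_malicious if total_malicious else 0.0
    ben = benign_matches / total_benign if total_benign else 0.0
    return mal / (mal + ben) if (mal + ben) else 0.0

# A hash matching 8 of 10 malicious samples but only 1 of 100 benign
# samples scores near 1.0; an all-benign match pattern scores 0.0.
assert suspiciousness(8, 1, 10, 100) > 0.95
assert suspiciousness(0, 50, 10, 100) == 0.0
```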
A method protects a daemon in an operating system of a host computer. The operating system detects that there is an access of a plist file of a daemon by a process in the computer. If so, then it executes a callback function registered for the plist file. The callback function sends to a kernel extension a notification of the attempted access. The kernel extension returns a value to the operating system indicating that the access should be denied. The operating system denies access to the plist file of the daemon by the process. The extension may also notify an application which prompts the user for instruction. The kernel extension also protects itself by executing its exit function when a command is given to unload the extension, and the exit function determines whether or not the command is invoked by an authorized application, such as by checking a flag.
G06F 21/56 - Computer malware detection or handling, e.g. anti-virus arrangements
G06F 21/52 - Monitoring users, programs or devices to maintain the integrity of platforms, e.g. of processors, firmware or operating systems during program execution, e.g. stack integrity, buffer overflow or preventing unwanted data erasure
26.
Automatic charset and language detection with machine learning
A language-based machine learning approach for automatically detecting the universal charset and the language of a received document is disclosed. The approach employs a plurality of text document samples in different languages, after converting them to a selected Unicode style (if their original encoding schemes are not the selected Unicode style), to generate a plurality of language-based machine learning models during the training stage. During the application stage, vector representations of the received document for different combinations of charsets and their respective applicable languages are tested against the plurality of machine learning models to ascertain the charset and language combination that is most similar to its associated machine learning model, thereby identifying the charset and language of the received document.
An automation task program is inspected for unsecure data flow. The task program is parsed to generate a parse tree, which is visited to generate control flow graphs of functions of the task program. The control flow graphs have nodes, which have domain-agnostic intermediate representations. The control flow graphs are connected to form an intermediate control flow graph. The task program is deemed to have an unsecure data flow when data is detected to flow from a data source to a data sink, with the data source and the data sink forming a source-sink pair that is indicative of an unsecure data flow.
G06F 21/57 - Certifying or maintaining trusted computer platforms, e.g. secure boots or power-downs, version controls, system software checks, secure updates or assessing vulnerabilities
Systems and methods are presented for performing sandboxing to detect malware. Sample files are received and activated individually in separate sandboxes in one mode of operation. In another mode of operation, sample files are assigned to pools. Sample files of a pool are activated together in the same sandbox. The sample files of the pool are deemed to be normal when no anomalous event is detected in the sandbox. Otherwise, when an anomalous event is detected in the sandbox, the sample files of the pool are activated separately in separate sandboxes to isolate and identify malware among the sample files.
G06F 21/53 - Monitoring users, programs or devices to maintain the integrity of platforms, e.g. of processors, firmware or operating systems during program execution, e.g. stack integrity, buffer overflow or preventing unwanted data erasure by executing in a restricted environment, e.g. sandbox or secure virtual machine
G06F 21/56 - Computer malware detection or handling, e.g. anti-virus arrangements
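The pooled-then-isolated detonation strategy of the sandboxing abstract above can be sketched as a two-pass triage; `detonate` here is a hypothetical stand-in for running samples in a sandbox and reporting whether any anomalous event was observed:

```python
def triage(pools, detonate):
    """Detonate each pool together in one sandbox; if an anomaly is
    flagged, re-run that pool's samples one per sandbox to isolate
    the malware. `detonate` returns True when an anomaly is observed."""
    verdicts = {}
    for pool in pools:
        if not detonate(pool):               # one sandbox, whole pool
            verdicts.update({s: "normal" for s in pool})
        else:
            for sample in pool:              # fall back: one sandbox each
                verdicts[sample] = "malicious" if detonate([sample]) else "normal"
    return verdicts

bad = {"dropper.exe"}
fake_detonate = lambda samples: any(s in bad for s in samples)
v = triage([["a.doc", "b.pdf"], ["c.js", "dropper.exe"]], fake_detonate)
assert v == {"a.doc": "normal", "b.pdf": "normal",
             "c.js": "normal", "dropper.exe": "malicious"}
```

Pooling saves sandbox runs for the common case where all samples in a pool are benign.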
A mobile app is in a form of a package file. A structural feature digest is generated from contents of a manifest part, bytecode part, and resource part of the package file. A mobile device receives an unknown mobile app, generates a structural feature digest of the unknown mobile app, and sends the structural feature digests to a backend system over a computer network. In the backend system, the structural feature digest of the unknown mobile app is compared to structural feature digests of known malicious mobile apps. The unknown mobile app is detected to be malicious when its structural feature digest is similar to that of a known malicious mobile app.
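One way to sketch a per-part structural digest and its comparison is below; hashing each part exactly and scoring by the fraction of matching parts are rough illustrative assumptions (a real scheme would more plausibly use fuzzy hashing within each part):

```python
import hashlib

def structural_digest(manifest, bytecode, resources):
    """Digest each structural part of the package, then join; a
    hypothetical stand-in for the structural feature digest."""
    return "-".join(hashlib.sha256(blob).hexdigest()[:8]
                    for blob in (manifest, bytecode, resources))

def similarity(digest_a, digest_b):
    """Fraction of structural parts whose sub-digests agree."""
    a, b = digest_a.split("-"), digest_b.split("-")
    return sum(x == y for x, y in zip(a, b)) / len(a)

known_bad = structural_digest(b"<manifest/>", b"\x01\x02", b"icon.png")
unknown = structural_digest(b"<manifest/>", b"\x01\x02", b"logo.png")
assert similarity(known_bad, unknown) == 2 / 3   # two of three parts match
```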
An agent on an endpoint computer computes a locality-sensitive hash value for an API call sequence of an executing process. This value is sent to a cloud computer which includes an API call sequence blacklist database of locality-sensitive hash values. A search is performed using a balanced tree structure of the database using the received hash value and a match is determined based upon whether or not a metric distance is under or above a distance threshold. The received value may also be compared to a white list of locality-sensitive hash values. Attribute values of the executing process are also received from the endpoint computer and may be used to inform whether or not the executing process is deemed to be malicious. An indication of malicious or not is returned to the endpoint computer and if malicious, the process may be terminated and its subject file deleted.
Taint is dynamically tracked on a mobile device. Taint virtual instructions are added to virtual instructions of a control-flow graph (CFG). A taint virtual instruction has a taint operand that corresponds to an operand of a virtual instruction and has a taint output that corresponds to an output of the virtual instruction in a block of the CFG. Registers are allocated for the taint virtual instruction and the virtual instructions. After register allocation, the taint virtual instruction and the virtual instructions are converted to native code, which is executed to track taint on the mobile device.
A system includes Internet of things (IOT) devices that are paired with corresponding edge computers. Smart contracts are generated for edge computers, and deployed in a blockchain. Upon receipt of a message, a smart contract compares a sender of the message to a designated owner of the smart contract. The smart contract has a privilege checker that allows a message from the owner of the smart contract to initiate execution of a function that modifies a variable of the smart contract, but prevents messages from non-owners from initiating execution of the function.
H04L 67/12 - Protocols specially adapted for proprietary or special-purpose networking environments, e.g. medical networks, sensor networks, networks in vehicles or remote metering networks
H04L 67/1097 - Protocols in which an application is distributed across nodes in the network for distributed storage of data in networks, e.g. transport arrangements for network file system [NFS], storage area networks [SAN] or network attached storage [NAS]
The system executes online on corporate premises or in a cloud service, or offline. An e-mail message is received at a server within a corporate network or cloud service. A header of the e-mail message is parsed to determine locations of server computers through which the e-mail message has traveled. The geographic locations of these server computers are placed into a routing map. A banner is inserted into the e-mail message that includes the routing map or a link to the routing map. The routing map is stored by the e-mail gateway server at a storage location identified by the link. The modified e-mail message is delivered or downloaded from the e-mail server to a user computer in real time. The sender's Web site is parsed to identify sender domain information to be inserted into the banner. If offline, a product fetches and modifies the e-mail message using an API of the e-mail server.
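The header-parsing step above can be sketched with the standard `email` module; the sample headers and the simple `from` extraction are illustrative (a geolocation lookup on each hop's IP would follow to build the map):

```python
import re
from email import message_from_string

RAW = """\
Received: from relay2.example.net (relay2.example.net [203.0.113.9])
Received: from mail.example.org (mail.example.org [198.51.100.7])
Subject: hello

body
"""

def routing_hops(raw_message: str):
    """Extract the server hops from Received headers, oldest first."""
    msg = message_from_string(raw_message)
    hops = []
    for header in msg.get_all("Received", []):
        m = re.search(r"from\s+(\S+)", header)
        if m:
            hops.append(m.group(1))
    return list(reversed(hops))   # Received headers appear newest-first

assert routing_hops(RAW) == ["mail.example.org", "relay2.example.net"]
```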
A system is implemented in browser plug-in software or in endpoint agent software on a user computer. The user accesses a Web site and fills in a login request form and submits it to the Web site. The system triggers a “forgot password” feature and detects a phishing Web site by determining that it does not send a reset link to a valid user e-mail address, or, the system detects a phishing Web site by determining that it does send a reset link to an invalid e-mail address. Or, the system detects a phishing Web site by determining that it sends a reset link to a user e-mail address from a domain different from the domain of a login request form. Or, the system fills in an incorrect account name or password in a login request form and detects a phishing Web site by determining that the Web site does not indicate that the incorrect user name or incorrect password are incorrect. Or, the system submits incorrect credentials and detects a phishing Web site by determining that the Web site does not implement any way to reset the account name or password.
G06F 15/16 - Combinations of two or more digital computers each having at least an arithmetic unit, a program unit and a register, e.g. for a simultaneous processing of several programs
H04L 29/06 - Communication control; Communication processing characterised by a protocol
G06F 16/954 - Navigation, e.g. using categorised browsing
G06F 11/32 - Monitoring with visual indication of the functioning of the machine
35.
System and method for detecting leakage of email addresses
A system for detecting leakage of email addresses generates an alias email address that will be used by a user to register with a web service. The alias email address is an alias for a primary email address of the user, and is paired with the web service. The web service is included in a whitelist upon confirmation from the web service that the alias email address has been registered with the web service. Emails that are addressed to the alias email address and from the web service are forwarded to the primary email address. Emails that are addressed to the alias email address but are not from the web service are detected to be suspicious.
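The alias-pairing check above can be sketched as a lookup against the alias/service pairings; the plus-addressed alias format and verdict labels are illustrative:

```python
def classify_inbound(to_alias: str, sender_domain: str, pairings: dict):
    """Forward mail from the paired web service; flag anything else as a
    possible leak of the alias address."""
    expected = pairings.get(to_alias)
    if expected is None:
        return "unknown-alias"
    return "forward" if sender_domain == expected else "suspicious"

pairings = {"user+shop123@mail.example": "shop.example"}
assert classify_inbound("user+shop123@mail.example", "shop.example",
                        pairings) == "forward"
assert classify_inbound("user+shop123@mail.example", "spammer.example",
                        pairings) == "suspicious"
```

A "suspicious" verdict implies the alias leaked beyond the service it was registered with.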
Critical network assets of a private computer network are automatically identified by training a machine learning model with histograms of features obtained by aggregating data of log entries. The model is deployed in a private computer network and retrained using a training data set of the private computer network. Data from log entries of a target network asset are aggregated, numerically transformed, and converted into feature histograms. The feature histograms are concatenated into a single file, which is provided to the machine learning model for prediction. The machine learning model outputs a prediction score that gives an indication of whether or not the target network asset is critical.
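A minimal sketch of the aggregation-to-histogram step, assuming a numeric log field and illustrative bin edges (the abstract does not specify the features or binning):

```python
def feature_histogram(values, bins):
    """Bucket aggregated feature values into fixed bins."""
    counts = [0] * len(bins)
    for v in values:
        for i, (lo, hi) in enumerate(bins):
            if lo <= v < hi:
                counts[i] += 1
                break
    return counts

# hypothetical log entries of one asset; per-feature histograms would be
# concatenated into the single file that is fed to the model
entries = [{"bytes": 120}, {"bytes": 900}, {"bytes": 70}]
hist = feature_histogram([e["bytes"] for e in entries],
                         bins=[(0, 100), (100, 1000), (1000, 10**9)])
```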
Network attacks are detected by a protocol engine that works in conjunction with one or more streaming protocol analyzers. The protocol engine receives network packets over a computer network and generates metadata of the network packets. The metadata are placed in a transport envelope, which is streamed over the computer network. After the transport envelope is received over the computer network, the metadata are extracted from it and provided to the one or more streaming protocol analyzers, which analyze the metadata to detect network attacks.
A cyber threat intelligence of a cyber threat includes a threat chain that describes objects involved in the cyber threat and relationships between the objects. A related object hash of an object is calculated by calculating a hash of one or more objects that are linked to the object as indicated in the cyber threat intelligence. A related object sequence hash of the threat chain is generated by calculating a total of the related object hashes. The related object sequence hash of the threat chain is compared to a related object sequence hash of another threat chain to detect cyber threats.
H04L 29/06 - Communication control; Communication processing characterised by a protocol
H04L 9/06 - Arrangements for secret or secure communications; Network security protocols the encryption apparatus using shift registers or memories for blockwise coding, e.g. D.E.S. systems
H04L 9/32 - Arrangements for secret or secure communications; Network security protocols including means for verifying the identity or authority of a user of the system
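The related-object hashing described in the abstract above can be sketched as follows; the string encoding of objects and the 64-bit truncation are assumptions, since the abstract does not specify a hash construction.

```python
import hashlib

def related_object_hash(obj, links):
    """Hash of the one or more objects linked to `obj` in the threat
    chain, as indicated in the cyber threat intelligence."""
    joined = "|".join(sorted(links.get(obj, [])))
    return int.from_bytes(hashlib.sha256(joined.encode()).digest()[:8], "big")

def sequence_hash(chain, links):
    """Total of the related-object hashes over the threat chain,
    kept to 64 bits."""
    return sum(related_object_hash(obj, links) for obj in chain) % 2**64

# hypothetical threat chain: a dropper contacts a C2 and fetches a payload
links = {"dropper": ["c2", "payload"], "c2": ["payload"]}
```

Because the sequence hash is a sum, it is insensitive to the order in which chain objects are visited, which eases comparison between threat chains.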
39.
Generation of file digests for detecting malicious executable files
A cybersecurity server receives an executable file that has bytecode and metadata of the bytecode. Strings are extracted from the metadata, sorted, and merged into data streams. The data streams are merged to form a combined data stream. A digest of the combined data stream is calculated using a fuzzy hashing algorithm. The similarity of the digest to another digest is determined to detect whether or not the executable file is malware or a member of a malware family.
G06F 21/51 - Monitoring users, programs or devices to maintain the integrity of platforms, e.g. of processors, firmware or operating systems at application loading time, e.g. accepting, rejecting, starting or inhibiting executable software based on integrity or source reliability
G06F 21/56 - Computer malware detection or handling, e.g. anti-virus arrangements
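The sort-merge-digest pipeline in the abstract above can be sketched like this; SHA-256 stands in for the fuzzy hashing algorithm (e.g. ssdeep or TLSH), which would need an external library, and the stream separators are assumptions.

```python
import hashlib

def metadata_digest(string_sets):
    """Sort each extracted set of metadata strings, merge the sorted
    streams into one combined stream, and digest the result."""
    streams = ["\n".join(sorted(strings)) for strings in string_sets]
    combined = "\0".join(streams).encode()
    return hashlib.sha256(combined).hexdigest()
```

Sorting before merging makes the digest independent of the order in which strings were extracted from the metadata.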
40.
Systems and methods for distributed digital rights management with decentralized key management
One embodiment disclosed relates to a system for digital data distribution with decentralized key management. The system utilizes a data provider, a data demander, cloud storage, a blockchain, and a smart contract registered with the blockchain. The data provider encrypts the digital data using a session key and uploads the encrypted digital data to the cloud storage, which returns a URL for the stored digital data. In addition, the session key is itself encrypted using the public key of the data demander. The access data at the smart contract is updated with the encrypted session key and the URL. The data demander uses its own private key to decrypt the session key and then uses the session key to decrypt the digital data. Other embodiments and features are also disclosed.
A cybersecurity system includes sensors that detect and report computer security events. Collected reports of computer security events are formed into state sequences, which are used as training data to train and build a prediction model. A current computer security event is detected and used as an input to the prediction model, which provides a prediction of a next computer security event. A monitoring level of a cybersecurity sensor is adjusted in accordance with the predicted next computer security event.
An e-mail message is sent from a public e-mail address via the e-mail account of a user and delivered to an e-mail gateway. The message is destined for the e-mail account of a recipient. The gateway determines that the public e-mail address is on a list of users desiring two-factor authentication. The gateway determines that the message contains an anomaly indicating fraud or possible forgery. The gateway sends a two-factor authentication message to a hidden e-mail account of the user. The user reviews the message and responds with a confirmation message either confirming that the message is legitimate or indicating that it is a forgery. If the message is legitimate the gateway allows the message to be delivered to the recipient; if not, the message remains in quarantine and is not delivered. The gateway exists at the user's corporation, the recipient's corporation or is hosted at a third-party cloud service.
G06F 15/16 - Combinations of two or more digital computers each having at least an arithmetic unit, a program unit and a register, e.g. for a simultaneous processing of several programs
A system for evaluating files for cyber threats includes a machine learning model and a locality sensitive hash (LSH) repository. When the machine learning model classifies a target file as normal, the system searches the LSH repository for a malicious locality sensitive hash that is similar to a target locality sensitive hash of the target file. When the machine learning model classifies the target file as malicious, the system checks if response actions are enabled for the target file. The system reevaluates files that have been declared as normal, and updates the LSH repository in the event of false negatives. The system disables response actions for files that have been reported as false positives.
A network security device has a local area network (LAN) interface and a wide area network (WAN) interface, with a capability to route packets of a network connection along a fast path that bypasses a network stack of an operating system of the network security device. A packet of a network connection that is received at the LAN interface is routed to a virtual network interface. A packet inspector reads the packet from the virtual network interface, inspects the packet, and writes the packet back to the virtual network interface after inspection. The packet is routed from the virtual network interface to the WAN interface, and exits the WAN interface towards the destination network address of the packet. After inspecting one or more packets of the network connection, subsequently received packets of the network connection are routed along the fast path.
A global locality sensitive hash (LSH) database stores global locality sensitive hashes of files of different private computer networks. Each of the private computer networks has a corresponding local LSH database that stores local locality sensitive hashes of files of the private computer network. A target locality sensitive hash is generated for a target file of a private computer network. The global and local LSH databases are searched for a locality sensitive hash that is similar to the target locality sensitive hash. The target file is marked for further evaluation for malware or other cybersecurity threats when the target locality sensitive hash is not similar to any of the global and local locality sensitive hashes.
G06F 21/56 - Computer malware detection or handling, e.g. anti-virus arrangements
G06F 16/22 - Indexing; Data structures therefor; Storage structures
G06F 21/53 - Monitoring users, programs or devices to maintain the integrity of platforms, e.g. of processors, firmware or operating systems during program execution, e.g. stack integrity, buffer overflow or preventing unwanted data erasure by executing in a restricted environment, e.g. sandbox or secure virtual machine
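The lookup flow over the global and local LSH databases (abstract above) can be sketched as follows; the position-match similarity is a toy stand-in, since real LSH schemes such as ssdeep and TLSH define their own distance functions.

```python
def similarity(h1, h2):
    """Toy similarity between equal-length hash strings: the fraction
    of matching positions."""
    return sum(a == b for a, b in zip(h1, h2)) / len(h1)

def needs_evaluation(target_hash, global_db, local_db, threshold=0.8):
    """Mark a file for further evaluation when no stored hash in either
    database is similar to its locality sensitive hash."""
    known = list(global_db) + list(local_db)
    return not any(similarity(target_hash, h) >= threshold for h in known)
```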
46.
Systems and methods for data certificate notarization utilizing bridging from private blockchain to public blockchain
One embodiment disclosed relates to a system for managing data for logistics, sourcing and/or production. The system includes: a private blockchain maintained by a first network of nodes; a trusted public blockchain maintained by a second network of nodes; a private agent system; and a bridge system connected to both the private blockchain and the public blockchain. The private agent system operates to extract blocks of metadata from the private blockchain and utilize a hash tree structure to generate a first root hash value from the blocks of metadata. The bridge system operates to verify the first root hash value and store the first root hash value as a notarized data certificate in the trusted public blockchain. Another embodiment disclosed relates to a method for data certificate notarization utilizing a bridging system from a private blockchain to a trusted public blockchain. Other embodiments and features are also disclosed.
H04L 9/06 - Arrangements for secret or secure communications; Network security protocols the encryption apparatus using shift registers or memories for blockwise coding, e.g. D.E.S. systems
H04L 9/32 - Arrangements for secret or secure communications; Network security protocols including means for verifying the identity or authority of a user of the system
An attachment to an e-mail message is replaced with a URL before that message is delivered to an end user, thus providing more time to perform a better scan at a cloud server computer. The attachment is removed from the e-mail message and sent to the cloud server computer for a dynamic scan and a static scan which will likely include updates better able to detect malicious software. The e-mail message with the URL is delivered to the end user and there is a delay before the end user reads the message or attempts to open the attachment. An artificial delay may be introduced at an e-mail gateway before the message is delivered to the end-user. If the attachment is benign then the end user is allowed to download it via the URL; if the attachment is malicious then the end user is only given a warning message.
One embodiment of the presently-disclosed invention relates to an intrusion prevention system that includes a plurality of FPGA instances and a plurality of compute instances in a cloud network. The plurality of FPGA instances perform pre-processing that determines whether data packets received from the network gateway are associated with suspicious flows. The data packets associated with the suspicious flows are communicated from the plurality of FPGA instances to a plurality of compute instances in the cloud network. The plurality of compute instances perform post-processing that determines whether a suspicious flow is malicious. Other embodiments, aspects and features are also disclosed.
One embodiment disclosed relates to a system for detecting anomalous messaging, discovering compromised accounts, and generating responses to threatened attacks. The system utilizes API commands and log forwarding for interaction and communication between a messaging and account hunting platform, other hunting platforms, an action center, and a security operations center. Another embodiment relates to a method of, and system for, performing a complete root cause analysis. Another embodiment relates to a method of, and system for, anomaly discovery which may advantageously utilize reference data to correlate different anomalies for reporting as a single incident.
An intrusion prevention system includes a machine learning model for inspecting network traffic. The intrusion prevention system receives and scans the network traffic for data that match an anchor pattern. A data stream that follows the data that match the anchor pattern is extracted from the network traffic. Model features of the machine learning model are identified in the data stream. The intrusion prevention system classifies the network traffic based at least on model coefficients of the machine learning model that are identified in the data stream. The intrusion prevention system applies a network policy on the network traffic (e.g., block the network traffic) when the network traffic is classified as malicious.
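The anchor-then-score flow above can be sketched as follows; since the abstract mentions model coefficients, a linear scorer is assumed, and the pattern, features, and weights shown are illustrative.

```python
import re

def classify_traffic(payload, anchor, features, weights, bias, threshold=0.0):
    """Scan for the anchor pattern, extract the data stream that follows
    it, count model features in that stream, and score them with an
    assumed linear model."""
    match = re.search(anchor, payload)
    if not match:
        return "benign"  # anchor pattern absent, nothing to extract
    stream = payload[match.end():]
    score = bias + sum(weights[f] * stream.count(f) for f in features)
    return "malicious" if score > threshold else "benign"
```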
A computer network includes a camera node, a network access node, a verification node, and a display node. Video content recorded by a camera at the camera node is transmitted to the display node and to the verification node for verification. The video content is verified at the display node and at the verification node. Recording metadata of the video content is stored in a distributed ledger and retrieved by the display node to verify the video content. The verification node receives, from the network access node, verification data for verifying the video content.
H04N 21/84 - Generation or processing of descriptive data, e.g. content descriptors
G06F 16/787 - Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using geographical or spatial information, e.g. location
G06F 16/783 - Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
A cybersecurity server receives an executable file. The executable file is disassembled to generate assembly code of the executable file. High-entropy blocks and blocks of printable American Standard Code for Information Interchange (ASCII) characters are removed from the assembly code. Instructions of the assembly code are normalized, chunked, and merged into a data stream. The digest of the data stream is calculated using a fuzzy hashing algorithm. The similarity of the digest to a malicious digest is determined to evaluate the executable file for malware.
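The high-entropy-block filtering step above can be sketched with a Shannon entropy measure; the 6.0-bit cutoff is an illustrative assumption, not a value from the abstract.

```python
import math
from collections import Counter

def shannon_entropy(block):
    """Bits per byte of a block; values near 8 suggest packed or
    encrypted data rather than meaningful instructions."""
    counts = Counter(block)
    n = len(block)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def drop_high_entropy(blocks, limit=6.0):
    """Filter out high-entropy blocks before the assembly code is
    normalized, chunked, and hashed."""
    return [b for b in blocks if shannon_entropy(b) < limit]
```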
In one embodiment, a network security device monitors network communications between a computer and another computer. A periodicity of transmissions made by one computer to the other computer is determined, with the periodicity being used to identify candidate time point pairs having intervals that match the periodicity. A graph is constructed with time points of the candidate time point pairs as nodes and with intervals of time point pairs as edges. A longest path that continuously links one time point to another time point on the graph is compared to a threshold length to verify that the transmissions are periodic, and are thus potentially indicative of malicious network communications.
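The longest-path check above can be sketched as follows; dynamic programming over the sorted time points stands in for the explicit graph construction, and the tolerance value is an assumption.

```python
def longest_periodic_chain(times, period, tol=1):
    """Find the longest chain of time points whose successive intervals
    match the period (within a tolerance). A long enough chain verifies
    that the transmissions are periodic."""
    times = sorted(times)
    best = {t: 1 for t in times}
    for i, t in enumerate(times):
        for u in times[:i]:
            if abs((t - u) - period) <= tol:
                best[t] = max(best[t], best[u] + 1)
    return max(best.values())
```

For transmissions at seconds 0, 10, 20, 30 with a stray event at 95, the chain of length 4 would be compared against the threshold length.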
A network device has a Local Area Network (LAN) port and several Wide Area Network (WAN) ports. The network device detects a computing device that is connected to the LAN port initiating establishment of a TCP connection. The network device creates a TCP socket that establishes the TCP connection with the computing device and inspects TCP packets on the TCP connection to identify a cloud application associated with the TCP packets. The network device creates another TCP socket that establishes a TCP connection to the identified cloud application by way of a WAN port that is designated to be an output port for the identified cloud application. A routing path is created between the LAN port and the designated WAN port. Subsequent TCP packets originated by the computing device for the identified cloud application are forwarded along the routing path.
G06F 15/16 - Combinations of two or more digital computers each having at least an arithmetic unit, a program unit and a register, e.g. for a simultaneous processing of several programs
H04L 29/08 - Transmission control procedure, e.g. data link level control procedure
H04L 12/28 - Data switching networks characterised by path configuration, e.g. LAN [Local Area Networks] or WAN [Wide Area Networks]
H04L 29/06 - Communication control; Communication processing characterised by a protocol
H04L 12/741 - Header address processing for routing, e.g. table lookup
H04L 29/12 - Arrangements, apparatus, circuits or systems, not covered by a single one of groups characterised by the data terminal
55.
Methods and apparatus for intrusion prevention using global and local feature extraction contexts
In one embodiment, local begin and end tags are detected by a network security device to determine a local context of a network traffic flow, and a local feature vector is obtained for that local context. At least one triggering machine learning model is applied by the network security device to the local feature vector, and the result determines whether or not deeper analysis is warranted. In most cases, very substantial resources are not required because deeper analysis is not indicated. If deeper analysis is indicated, one or more deeper machine learning models may then be applied to global and local feature vectors, and regular expressions may be applied to packet data, which may include the triggering data packet and one or more subsequent data packets. Other embodiments, aspects and features are also disclosed.
An attachment to an e-mail message received at an e-mail gateway is scanned by a scan server and then is converted into an HTML file. The HTML file includes preview data of the attachment (minus any macro scripts), the entire original data of the attachment, scan functionality enabling a user to send the attachment back to a scan server for a second scan, or extract functionality enabling a user to extract the original attachment data for saving or opening in an application. The recipient is able to open or save the attachment directly if he or she believes it comes from a trusted sender. If the attachment seems suspicious, the recipient previews the attachment first before performing a scan, opening the attachment or deleting it. The recipient performs a scan of the attachment by clicking a “scan” button to send the attachment to a backend server for a second scan where an updated virus pattern file may be available to detect any zero-day malware.
A smart home includes Internet of things (IOT) devices that are paired with an IOT gateway. A backend system is in communication with the IOT gateway to receive IOT operating data of the IOT devices. The backend system generates a machine learning model for an IOT device. The machine learning model is consulted with IOT operating data of the IOT device to detect anomalous operating behavior of the IOT device. The machine learning model is updated as more and newer IOT operating data of the IOT device are received by the backend system.
The presently-disclosed solution provides an innovative system and method to protect a computer user from a phishing attack. Computer vision is effectively applied to match identifiable key information in suspect content against a database of identifiable key information of legitimate content. In one embodiment, the presently-disclosed solution converts suspect content to a digital image format and searches a database of logos and/or banners to identify a matching logo/banner image. Once the matching logo/banner image is found, the legitimate domain(s) associated with the matching logo/banner image is (are) determined. In addition, the presently-disclosed solution extracts all the URLs (uniform resource locators) directly from the textual data of the suspect content and further extracts the suspect domain(s) from those URLs. The suspect domain(s) is (are) then compared against the legitimate domain(s) to detect whether the suspect content is phishing content or not. Other embodiments and features are also disclosed.
H04L 29/06 - Communication control; Communication processing characterised by a protocol
G06K 9/62 - Methods or arrangements for recognition using electronic means
G06F 16/583 - Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
G06F 16/955 - Retrieval from the web using information identifiers, e.g. uniform resource locators [URL]
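The URL-extraction and domain-comparison steps in the abstract above can be sketched like this; the URL regex is a simplification, and the logo/banner matching step (which yields the legitimate domains) is assumed to have already run.

```python
import re
from urllib.parse import urlparse

def extract_domains(text):
    """Pull URLs out of the textual data and reduce them to domains."""
    urls = re.findall(r"https?://[^\s\"'<>]+", text)
    return {urlparse(u).hostname for u in urls}

def looks_like_phishing(suspect_text, legitimate_domains):
    """Suspect phishing when the content contains URLs but none of
    their domains belong to the brand matched from the logo/banner."""
    found = extract_domains(suspect_text)
    return bool(found) and found.isdisjoint(legitimate_domains)
```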
A method protects a daemon in an operating system of a host computer. When the operating system detects an access of a plist file of a daemon by a process in the computer, it executes a callback function registered for the plist file. The callback function sends to a kernel extension a notification of the attempted access. The kernel extension returns a value to the operating system indicating that the access should be denied, and the operating system denies the process access to the plist file of the daemon. The extension may also notify an application, which prompts the user for instruction. The kernel extension also protects itself by executing its exit function when a command is given to unload the extension; the exit function determines whether or not the command is invoked by an authorized application, such as by checking a flag.
G06F 21/56 - Computer malware detection or handling, e.g. anti-virus arrangements
G06F 21/52 - Monitoring users, programs or devices to maintain the integrity of platforms, e.g. of processors, firmware or operating systems during program execution, e.g. stack integrity, buffer overflow or preventing unwanted data erasure
60.
Methods and apparatus for controlling internet access
Apparatus and methods for controlling access by a browser to one or more Internet servers are disclosed. Access control is performed by ascertaining an IP address of an internet server that the user is trying to access and performing a lookup of the IP address in an IP address rating database. If the lookup reveals the IP address to be suspicious and data received from the internet server is encrypted, the access to the internet server is blocked. Alternatively, if the lookup reveals the IP address to be suspicious, the access to the internet server by the browser is blocked without first performing content analysis on the data from the internet server.
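The rating-lookup decision above reduces to a small sketch; the rating labels and database shape are illustrative, and per the abstract a suspicious IP is blocked without content analysis.

```python
def access_decision(ip_address, rating_db):
    """Allow or block an Internet access based on a lookup of the
    server's IP address in a rating database (labels assumed)."""
    if rating_db.get(ip_address) == "suspicious":
        return "block"  # no content analysis performed
    return "allow"
```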
A cybersecurity server receives an executable file to be classified. A call graph of the executable file is generated. Functions of the executable file are represented as vertices in the call graph, and a vertex value is generated for each vertex. The vertex values are arranged in traversal order of the call graph to generate a call graph pattern. A digest of the call graph pattern is calculated and compared to one or more malicious digests.
H04L 29/06 - Communication control; Communication processing characterised by a protocol
G06F 21/56 - Computer malware detection or handling, e.g. anti-virus arrangements
H04L 9/06 - Arrangements for secret or secure communications; Network security protocols the encryption apparatus using shift registers or memories for blockwise coding, e.g. D.E.S. systems
G06F 21/52 - Monitoring users, programs or devices to maintain the integrity of platforms, e.g. of processors, firmware or operating systems during program execution, e.g. stack integrity, buffer overflow or preventing unwanted data erasure
G06F 16/14 - File systems; File servers - Details of searching files based on file metadata
G06F 21/57 - Certifying or maintaining trusted computer platforms, e.g. secure boots or power-downs, version controls, system software checks, secure updates or assessing vulnerabilities
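The call graph pattern digest described in the abstract above can be sketched as follows; a short hash of each function name stands in for the vertex value (the abstract does not specify how vertex values are derived), and a depth-first traversal is assumed.

```python
import hashlib

def call_graph_digest(call_graph, entry):
    """Traverse the call graph depth-first from the entry function,
    concatenate vertex values in traversal order to form the call graph
    pattern, and digest the pattern for comparison against malicious
    digests."""
    pattern, seen, stack = [], set(), [entry]
    while stack:
        fn = stack.pop()
        if fn in seen:
            continue
        seen.add(fn)
        pattern.append(hashlib.sha256(fn.encode()).hexdigest()[:8])
        stack.extend(reversed(call_graph.get(fn, [])))
    return hashlib.sha256("".join(pattern).encode()).hexdigest()
```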
62.
Anomalous logon detector for protecting servers of a computer network
A server hosted by a server computer is protected against anomalous logons. A working time profile is generated from an access log that has a record of logons to the server. Counts of access events per time period (e.g., per hour) are parsed from the access log, and processed using statistical procedures to find candidate working hours. A working time range includes candidate working hours. An account logging on the server is detected. The logon by the account is deemed to be anomalous when the logon is at a time outside the candidate working hours.
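The working-time-profile logic above can be sketched as follows; the abstract's statistical procedure is unspecified, so a peak-fraction threshold rule is assumed here.

```python
from collections import Counter

def candidate_working_hours(logon_hours, fraction=0.25):
    """Hours whose logon count reaches a fraction of the peak hourly
    count are treated as candidate working hours."""
    counts = Counter(logon_hours)
    peak = max(counts.values())
    return {hour for hour, c in counts.items() if c >= peak * fraction}

def is_anomalous(logon_hour, working_hours):
    """A logon outside the candidate working hours is deemed anomalous."""
    return logon_hour not in working_hours
```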
The present disclosure provides effective solutions to security inspection and monitoring of operations within security containers. The solutions overcome the challenges and difficulties caused by the isolation of the containers. One embodiment relates to a computer-implemented method in which a security agent is migrated between one or more containers and the host machine by changing its namespace. Another embodiment relates to a computer-implemented method for user-mode object monitoring of one or more containers in which a security agent migrates serially to multiple containers while keeping user-mode object-monitoring handles for the containers. Thereafter, the security agent may migrate into the host machine and continue monitoring events within the containers using the user-mode object-monitoring handles. Another embodiment relates to a host machine which includes a master agent that communicates with multiple security agents holding user-mode object-monitoring handles for corresponding containers. Other embodiments and features are also disclosed.
G06F 21/53 - Monitoring users, programs or devices to maintain the integrity of platforms, e.g. of processors, firmware or operating systems during program execution, e.g. stack integrity, buffer overflow or preventing unwanted data erasure by executing in a restricted environment, e.g. sandbox or secure virtual machine
G06F 21/56 - Computer malware detection or handling, e.g. anti-virus arrangements
64.
Detection of abusive user accounts in social networks
Abusive user accounts in a social network are identified from social network data. The social network data are processed to compare postings of the user accounts to identify a group of abusive user accounts. User accounts in the group of abusive user accounts are identified based on posted message content, images included in the messages, and/or posting times. Abusive user accounts can be canceled, suspended, or rate-limited.
An email attempting to perpetrate a business email compromise (BEC) attack is detected based on similarity of the email to a known BEC email and on similarity of the email to a user email that would have been sent by the purported sender of the email. Metadata of the email is extracted and input to a BEC machine learning model to find the known BEC email among BEC email samples. The extracted metadata are also input to a personal user machine learning model of the purported sender to generate the user email.
A method in an internet server for implementing internet service, the method including exclusively binding a first socket handle object of a first process with a first port. The method also includes generating a first child process from the first process and creating a first duplicate socket handle of the first socket handle object in a first file, the first file being associated with an id of the first child process. The method further includes forming, using the first child process, a first child socket handle object from the first duplicate socket handle in the first file, thereby causing the first child socket handle object to be associated with the first port.
Encrypted network traffic between a server device and an application program running on a client device is monitored by a network security device in an enterprise computer network. Metadata of the application program is sent to a cloud security system to generate a reputation of the application program. The encrypted network traffic is decrypted and inspected for conformance with security policies when the application program is determined to be a browser application. When the application program is determined to be a non-browser application, the reputation of the application program is determined and the encrypted network traffic is blocked when the application program has a bad reputation. In a bypass mode of operation, the encrypted network traffic is allowed to pass through without inspection when the application program is determined to be a non-browser application.
H04L 29/06 - Communication control; Communication processing characterised by a protocol
H04L 9/32 - Arrangements for secret or secure communications; Network security protocols including means for verifying the identity or authority of a user of the system
68.
Virtualization of smartphone functions in a virtual reality application
A mobile virtualization application allows a VR application user to access basic mobile telephone functions from within a third-party VR application. This virtualization application may be a virtualization plugin or an independent application that virtualizes mobile functions and creates VR models. The virtualization plugin bridges between the VR application and the mobile telephone operating system, allowing the user to directly use basic mobile telephone functions in the VR application. VR application users can directly read their incoming text messages, e-mail messages, application notifications, etc., in the form of VR models, and they can use a VR application input device to control their basic mobile telephone functions in order to send messages, control a camera, etc.
Executable files are evaluated for malware in one or more lightweight executors, such as lightweight executor processes. An executable file is loaded and executed in a lightweight executor. Instructions in an execution path of the executable file are executed. Instructions in another execution path of the executable file are executed in another lightweight executor when a conditional branch instruction in an execution path has a suspicious condition. A fake kernel that mimics a real operating system kernel receives system calls, and responds to the system calls without servicing them in a way the real operating system kernel would. Runtime behavior of the executable file is observed for malware behavior. A response action, such as preventing the executable file from subsequently executing in a computer, is performed when the executable file is detected to be malware.
H04L 29/06 - Communication control; Communication processing characterised by a protocol
G06F 21/56 - Computer malware detection or handling, e.g. anti-virus arrangements
G06F 21/52 - Monitoring users, programs or devices to maintain the integrity of platforms, e.g. of processors, firmware or operating systems during program execution, e.g. stack integrity, buffer overflow or preventing unwanted data erasure
G06F 21/53 - Monitoring users, programs or devices to maintain the integrity of platforms, e.g. of processors, firmware or operating systems during program execution, e.g. stack integrity, buffer overflow or preventing unwanted data erasure by executing in a restricted environment, e.g. sandbox or secure virtual machine
70.
Automatic credential input to a user interface of a remote mobile app
A server computer runs several remote mobile operating systems. A remote mobile app running on one of the remote mobile operating systems generates a user interface that includes an input field for receiving a credential. The user interface is displayed on a touchscreen of a mobile device that is in communication with the server computer. A touchscreen keyboard with an autofill button is displayed on the touchscreen. When a user of the mobile device clicks on the autofill button, the credential of the user is retrieved and sent from the mobile device to the server computer, where the credential is entered into the input field.
G06F 3/0489 - Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using dedicated keyboard keys or combinations thereof
H04W 4/20 - Services signalling; Auxiliary data signalling, i.e. transmitting data via a non-traffic channel
H04W 12/00 - Security arrangements; Authentication; Protecting privacy or anonymity
G06F 3/0488 - Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
Examples of implementations relate to metadata extraction. For example, a system of privacy preservation comprises a physical processor that executes machine-readable instructions that cause the system to normalize a network traffic payload with a hardware-based normalization engine controlled by a microcode program; parse the normalized network traffic payload, as the network traffic payload passes through a network, by performing a parsing operation of a portion of the normalized network traffic payload with a hardware-based function engine of a plurality of parallel-distributed hardware-based function engines controlled by the microcode program; and provide the hardware-based function engine with a different portion of the normalized network traffic payload responsive to an indication, communicated through a common status interface, that the different portion of the normalized network traffic payload is needed to complete the parsing operation.
In one embodiment, local begin and end tags are detected by a network security device to determine a local context of a network traffic flow, and a local feature vector is obtained for that local context. At least one triggering machine learning model is applied by the network security device to the local feature vector, and the result determines whether or not deeper analysis is warranted. In most cases, deeper analysis is not indicated, so very substantial resources are not required. If deeper analysis is indicated, one or more deeper machine learning models may then be applied to global and local feature vectors, and regular expressions may be applied to packet data, which may include the triggering data packet and one or more subsequent data packets. Other embodiments, aspects and features are also disclosed.
Examples relate to organizing and storing network communications. In one example, a programmable hardware processor may: receive a first set of network packets; identify, for each network packet included in the first set, a network flow, each network flow including at least one related packet; store each network packet included in a subset of the first set in a first data storage device; for each network packet included in the subset, organize the network packet according to the network flow identified for the network packet; identify, from the network flows, a set of network flows that each have at least one characteristic of interest; and store, in a second data storage device, each network packet included in each network flow of the set of network flows.
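The flow-identification and grouping steps above can be sketched in Python. The tuple packet layout, the direction-normalized 5-tuple key, and the predicate-based notion of a "characteristic of interest" are illustrative assumptions of this sketch, not details from the claims:

```python
from collections import defaultdict

# Hypothetical packet representation: (src, dst, src_port, dst_port, proto, payload)
def flow_key(packet):
    """Identify the network flow of a packet by its 5-tuple,
    normalized so both directions map to the same flow."""
    src, dst, sport, dport, proto, _ = packet
    endpoints = sorted([(src, sport), (dst, dport)])
    return (endpoints[0], endpoints[1], proto)

def organize_by_flow(packets):
    """Group packets into flows; each flow holds its related packets in order."""
    flows = defaultdict(list)
    for pkt in packets:
        flows[flow_key(pkt)].append(pkt)
    return flows

def flows_of_interest(flows, predicate):
    """Select the flows that have at least one characteristic of interest,
    here modeled as a payload predicate."""
    return {k: v for k, v in flows.items() if any(predicate(p) for p in v)}
```

Sorting the two endpoints makes request and response packets of the same conversation land in the same flow.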
Targeted email attacks are detected using feature combinations of known abnormal emails, interflow shapes formed by an email with other emails, or both. An email received in an endpoint computer system is scanned to identify abnormal features indicative of a targeted email attack and the abnormal features of the email are checked against abnormal feature combinations. The email can also be scanned to identify an interflow shape formed by the email with other emails and the interflow shape is checked against interflow shapes of known targeted email attacks.
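The check of an email's abnormal features against known abnormal feature combinations amounts to subset matching. A minimal sketch, with hypothetical feature names:

```python
def matches_known_attack(email_features, known_combinations):
    """Flag an email when its identified abnormal features contain
    any known abnormal feature combination as a subset."""
    feats = set(email_features)
    return any(set(combo) <= feats for combo in known_combinations)
```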
A computer-implemented method for detecting a phishing attempt by a given website is provided. The method includes receiving a webpage from the given website, which includes computer-readable code for the webpage. The method also includes ascertaining hyperlink references in the computer-readable code. Each hyperlink reference refers to at least a component of another webpage. The method further includes performing linking relationship analysis on at least a subset of websites identified to be referenced by the hyperlink references, which includes determining whether a first website is in a bi-directional/uni-directional linking relationship with the given website. The first website is one of the subset of websites. The method yet also includes, if the first website is in the bi-directional linking relationship, designating the given website a non-phishing website. The method yet further includes, if the first website is in the uni-directional linking relationship, performing anti-phishing measures with respect to the given website.
H04L 29/06 - Communication control; Communication processing characterised by a protocol
G06F 21/51 - Monitoring users, programs or devices to maintain the integrity of platforms, e.g. of processors, firmware or operating systems at application loading time, e.g. accepting, rejecting, starting or inhibiting executable software based on integrity or source reliability
G06F 21/55 - Detecting local intrusion or implementing counter-measures
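The linking-relationship analysis above can be sketched as follows, assuming an outlink map (site → set of sites it hyperlinks to) has already been built; the map shape and site names are hypothetical:

```python
def linking_relationship(given_site, other_site, outlinks):
    """Classify the relationship between two sites from an outlink map."""
    forward = other_site in outlinks.get(given_site, set())
    backward = given_site in outlinks.get(other_site, set())
    if forward and backward:
        return "bi-directional"
    if forward or backward:
        return "uni-directional"
    return "none"

def classify_site(given_site, referenced_sites, outlinks):
    """Per the abstract: a bi-directional relationship with a referenced site
    designates the given site non-phishing; uni-directional relationships
    alone warrant anti-phishing measures."""
    rels = [linking_relationship(given_site, s, outlinks) for s in referenced_sites]
    if "bi-directional" in rels:
        return "non-phishing"
    if "uni-directional" in rels:
        return "suspect"
    return "unknown"
```

The intuition: a phishing page typically links to the site it impersonates, but the impersonated site does not link back.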
A method for determining which web page among multiple candidate web pages is similar to a given web page. For each candidate web page, a set of scoring rules is provided to score the components therein. When the given web page is compared against a candidate web page, each component that is found in both the given web page and the candidate web page under examination is given a score in accordance with the set of scoring rules that is specific to that web page under examination. A composite similarity score is computed for each comparison between the given web page and a candidate web page. If the composite similarity score for a comparison between the given web page and a candidate web page exceeds a predefined threshold value, that candidate web page is deemed similar to the given web page.
H04L 29/06 - Communication control; Communication processing characterised by a protocol
G06F 21/51 - Monitoring users, programs or devices to maintain the integrity of platforms, e.g. of processors, firmware or operating systems at application loading time, e.g. accepting, rejecting, starting or inhibiting executable software based on integrity or source reliability
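A minimal sketch of per-candidate scoring rules and the composite similarity score; the component names, rule scores, and threshold are hypothetical:

```python
def composite_score(given_components, candidate_rules):
    """Each component found in both the given page and the candidate
    contributes the candidate-specific rule score."""
    return sum(score for comp, score in candidate_rules.items()
               if comp in given_components)

def most_similar(given_components, candidates, threshold):
    """candidates: name -> that candidate's scoring rules.
    Return the candidate whose composite score exceeds the threshold."""
    best, best_score = None, threshold
    for name, rules in candidates.items():
        s = composite_score(given_components, rules)
        if s > best_score:
            best, best_score = name, s
    return best
```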
A computer-implemented method for generating a first set of longest common sequences from a plurality of known malicious webpages, the first set of longest common sequences representing input data from which a human generates a set of regular expressions for detecting phishing webpages. There is included obtaining HTML source strings from the plurality of known malicious webpages and transforming the HTML source strings to reduce the number of at least one of stop words and repeated tags, thereby obtaining a set of transformed source strings. There is further included performing string alignment on the set of transformed source strings, thereby obtaining at least a scoring matrix. There is additionally included obtaining a second set of longest common sequences responsive to the performing the string alignment. There is further included filtering the second set of longest common sequences, thereby obtaining the first set of longest common sequences.
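The string-alignment step can be illustrated with a classic dynamic-programming longest-common-subsequence over two transformed strings. The actual method aligns many known-malicious pages and hands the resulting sequences to a human; this pairwise sketch, with a crude stand-in for the stop-word/repeated-tag transform, only shows the scoring-matrix mechanics:

```python
import re

def transform(html):
    """Crude transform: drop whitespace runs and collapse immediately
    repeated tags (a stand-in for the reduction step in the abstract)."""
    s = re.sub(r"\s+", "", html)
    return re.sub(r"(<[^>]+>)\1+", r"\1", s)

def lcs(a, b):
    """Build the alignment scoring matrix, then trace back one
    longest common subsequence."""
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m):
        for j in range(n):
            dp[i + 1][j + 1] = (dp[i][j] + 1 if a[i] == b[j]
                                else max(dp[i][j + 1], dp[i + 1][j]))
    out, i, j = [], m, n
    while i and j:
        if a[i - 1] == b[j - 1]:
            out.append(a[i - 1]); i -= 1; j -= 1
        elif dp[i - 1][j] >= dp[i][j - 1]:
            i -= 1
        else:
            j -= 1
    return "".join(reversed(out))
```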
A method for designating a given image as similar/dissimilar with respect to a reference image is provided. The method includes normalizing the given image. Normalizing includes performing pre-processing and a lossy compression on the given image to obtain a lossy representation. The pre-processing includes at least one of cropping, fundamental extracting, gray scale converting and lower color bit converting. The method also includes comparing the lossy representation of the given image with a reference representation, which is a version of a reference spam image after the reference spam image has undergone the same normalizing process. The method further includes, if the lossy representation of the given image matches the reference representation, designating the given image similar to the reference image. The method yet also includes, if the lossy representation of the given image does not match the reference representation, designating the given image dissimilar to the reference image.
G06K 9/36 - Image preprocessing, i.e. processing the image information without deciding about the identity of the image
G06K 9/64 - Methods or arrangements for recognition using electronic means using simultaneous comparisons or correlations of the image signals with a plurality of references, e.g. resistor matrix
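A toy version of the normalize-then-compare scheme above, using only gray-scale conversion and lower-color-bit conversion as the pre-processing/lossy steps (cropping and fundamental extracting are omitted); the 3-bit depth is an arbitrary choice:

```python
def to_grayscale(pixels):
    """pixels: rows of (r, g, b) tuples. Integer luma approximation."""
    return [[(r * 299 + g * 587 + b * 114) // 1000 for (r, g, b) in row]
            for row in pixels]

def reduce_bits(gray, bits=3):
    """Lossy step: keep only the top `bits` bits of each gray value,
    so small color differences collapse to the same representation."""
    shift = 8 - bits
    return [[v >> shift for v in row] for row in gray]

def normalize(pixels, bits=3):
    return reduce_bits(to_grayscale(pixels), bits)

def similar(image, reference_norm, bits=3):
    """Designate similar iff the lossy representations match exactly."""
    return normalize(image, bits) == reference_norm
```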
79.
Detection and prevention of malicious remote file operations
A detection module monitors, at a network layer, the network traffic between a client computer and a server computer. The detection module extracts application layer data from the network traffic and decodes the application layer data to identify a remote file operation that targets a shared file stored in the server computer. The detection module evaluates the remote file operation to determine if it is a malicious remote file operation. The detection module deems the remote file operation to be malicious when the remote file operation will corrupt the shared file.
A sample program being evaluated for malware is scanned for presence of a critical code block. A path guide is generated for the sample program, with the path guide containing information on executing the sample program so that an execution path that leads to the critical code block is taken at runtime of the sample program. The path guide is applied to the sample program during dynamic analysis of the sample program so that behavior of the sample program during execution to the critical code block can be observed. This advantageously allows malicious samples to be detected so that a response action can be taken against them.
G06F 12/14 - Protection against unauthorised use of memory
H04L 29/06 - Communication control; Communication processing characterised by a protocol
G06F 21/56 - Computer malware detection or handling, e.g. anti-virus arrangements
G06F 21/53 - Monitoring users, programs or devices to maintain the integrity of platforms, e.g. of processors, firmware or operating systems during program execution, e.g. stack integrity, buffer overflow or preventing unwanted data erasure by executing in a restricted environment, e.g. sandbox or secure virtual machine
G06F 21/55 - Detecting local intrusion or implementing counter-measures
81.
Method and system to identify and rectify input dependency based evasion in dynamic analysis
The present disclosure provides an automated technique to detect and rectify input-dependent evasion code in a generic manner during runtime. Pattern-based detection is used to detect the evasion code and trigger an identification process. The identification process marks the evasion code and rectifies the execution flow to a more “significant” path. The execution then moves on by following this path to bypass the evasion code. Other embodiments, aspects and features are also disclosed.
G06F 21/00 - Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
G06F 21/56 - Computer malware detection or handling, e.g. anti-virus arrangements
G06F 21/53 - Monitoring users, programs or devices to maintain the integrity of platforms, e.g. of processors, firmware or operating systems during program execution, e.g. stack integrity, buffer overflow or preventing unwanted data erasure by executing in a restricted environment, e.g. sandbox or secure virtual machine
Adaptive network security policies can be selected by assigning a number of risk values to security intelligence associated with network traffic, and identifying a number of security policies to implement based on the risk values.
A proxy server is implemented between a user computer and the Web. The user accesses an IAM service and selects a cloud service. The proxy server intercepts the login form from the user, stores the identifier and password, and replaces them with substitute values. The proxy server allows the form to continue to the IAM service, which registers the cloud service. Later, the user accesses the IAM service and selects the cloud service. The IAM service returns a login form for the cloud service with the substitute identifier and password and redirects the user's computer to the cloud service. The proxy server intercepts the form and replaces the substitute identifier and password with the correct identifier and password. The proxy server then allows the form to continue to the cloud service. The user is then authenticated by the cloud service and receives a Web page from the cloud service indicating that the user is logged in.
Applications running in an API-proxy-based emulator are prevented from infecting a PC's hard disk when executing file I/O commands. Such commands are redirected to an I/O redirection engine instead of going directly to the PC's normal operating system, where they could potentially harm files on the hard disk. The redirection engine executes the file I/O command using a private storage area in the hard disk that is not accessible by the PC's normal operating system. If a file that is the subject of a file I/O command from an emulated application is not in the private storage area, a copy is made from the original that is presumed to exist in the public storage area. This copy is then acted on by the command and is stored in the private storage area, which can be described as a controlled, quarantined storage space on the hard disk. In this manner the PC's (or any computing device's) hard disk is defended from potential malware that may originate from applications running in emulated environments.
A behavior of a computer security threat is described in a root-cause chain, which is represented by a detection rule. The detection rule includes the objects of the root-cause chain and computer operations that represent links of the root-cause chain. An endpoint computer establishes a link between objects described in the detection rule when a corresponding computer operation between the objects is detected. Detected computer operations are accumulated to establish the links between objects. The threat is identified to be in the computer when the links of the detection rule have been established.
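The accumulate-links-until-the-chain-completes logic above can be sketched as a small class; the (source, operation, target) triple encoding of a link is an assumption of this sketch:

```python
class DetectionRule:
    """A root-cause chain: objects linked by computer operations.
    The threat is identified when every link has been established."""

    def __init__(self, links):
        # links: set of (source_object, operation, target_object) triples
        self.required = set(links)
        self.established = set()

    def observe(self, source, operation, target):
        """Accumulate a detected computer operation between two objects;
        return True once all links of the rule are established."""
        if (source, operation, target) in self.required:
            self.established.add((source, operation, target))
        return self.established == self.required
```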
A computer-implemented method of detecting malware apps includes receiving a sample app for a mobile operating system. The sample app is executed in an emulator of the mobile operating system. The behavior of the sample app in the emulator is monitored to collect a string that the sample app uses to detect whether or not a target app is running in a foreground of the emulator. A bait app, which is generated using the collected string, is switched to run in the foreground. The sample app is deemed to be a malware app when the sample app instead of the bait app is running in the foreground.
Interprocess communication between processes that run on a host operating system of a computer is performed by way of a protected temporary file. File access operations on the temporary file are hooked to detect writing to the temporary file. When a process writes a message to the temporary file, a verification is performed to determine whether or not the process is authorized to access the temporary file. When the process is authorized to access the temporary file, the process is allowed to write the message to the temporary file. This allows another process that is intended to receive the message to read the message from the temporary file and act on the message. Otherwise, when the process is not authorized to access the temporary file, the process is blocked from writing the message to the temporary file.
G06F 21/62 - Protecting access to data via a platform, e.g. using keys or access control rules
G06F 21/56 - Computer malware detection or handling, e.g. anti-virus arrangements
H04L 9/32 - Arrangements for secret or secure communications; Network security protocols including means for verifying the identity or authority of a user of the system
A system for detecting computer security threats includes a machine learning model that has been trained using sequence codes generated from malware process chains that describe malware behavior. An endpoint computer monitors the behavior of a process and constructs a target process chain that describes the monitored behavior. The target process chain includes objects that are linked by computer operations of the monitored behavior. The target process chain is converted to a sequence code that is input to the machine learning model for classification. A response action is performed against one or more objects identified in the target process chain when the machine learning model deems the target process chain as describing malware behavior.
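A sketch of the chain-to-sequence-code conversion described above. The operation-to-code alphabet is invented here, and the trained machine learning model is replaced by a plain lookup of known-malicious codes purely so the example runs:

```python
# Hypothetical alphabet mapping monitored operations to code letters
OP_CODES = {"create": "C", "write": "W", "connect": "N", "inject": "I"}

def to_sequence_code(chain):
    """Flatten a target process chain (an ordered list of (operation, object)
    events) into a compact sequence code for classification."""
    return "".join(OP_CODES.get(op, "?") for op, _ in chain)

def classify(chain, malicious_codes):
    """Stand-in for the trained model: a lookup of sequence codes
    derived from known malware process chains."""
    return to_sequence_code(chain) in malicious_codes
```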
An email inspection system receives emails that are addressed to recipients of a private computer network. The emails are inspected for malicious content, and security information of emails that pass inspection is recorded. When an email is detected to have malicious content, the recorded security information of emails is checked to identify compromised emails, which are emails that previously passed inspection but include the same malicious content. A notification email is sent to recipients of compromised emails. The notification email includes Simple Mail Transfer Protocol headers that facilitate identification of the recipients, blocking of incoming emails with the same malicious content, and identification of public Mail Transfer Agents that send malicious emails, i.e., emails with malicious content.
Disclosed are a method and system for static behavior-predictive malware detection. The method and system use a transfer learning model from behavior prediction to malware detection based on static features. In accordance with an embodiment, machine learning is used to capture the relations between static features, behavior features, and other context information. For example, the machine learning may be implemented with a deep learning network model with multiple embedded layers pre-trained with metadata gathered from various resources, including sandbox logs, simulator logs and context information. Synthesized behavior-related static features are generated by projecting the original static features to the behavior features. A final static model may then be trained using the combination of the original static features and the synthesized features as the training data. The detection stage may be performed in real time with static analysis because only static features are needed. Other embodiments and features are disclosed.
A virtual mobile infrastructure includes mobile devices and server computers. A server computer runs multiple mobile operating systems. A quick response (QR) scan app runs on one of the mobile operating systems. A mobile device takes a photo of a QR code, decodes the QR code to generate a QR scan result, and provides the QR scan result to the server computer. There, the QR scan result is encoded into another QR code and camera data of the other QR code is provided to the remote QR scan app for scanning and processing.
H04M 1/02 - Constructional features of telephone sets
G06K 7/14 - Methods or arrangements for sensing record carriers by corpuscular radiation using light without selection of wavelength, e.g. sensing reflected white light
H04N 5/232 - Devices for controlling television cameras, e.g. remote control
H04M 1/03 - Constructional features of telephone transmitters or receivers, e.g. telephone hand-sets
G06K 7/10 - Methods or arrangements for sensing record carriers by corpuscular radiation
92.
Anti-malware system with evasion code detection and rectification
A malware detection system for evaluating sample programs for malware incorporates an evasion code detector. The evasion code detector includes semantic patterns for identifying conditional statements and other features employed by evasion code. The system inserts breakpoints at conditional statements, compares expected and actual evaluated values of conditional variables of the conditional statements, and changes the execution path of the sample program based on the comparison. Changing the execution path of the sample program to an expected execution path counteracts the evasion code, allowing for the true nature of the sample program to be revealed during runtime.
G06F 21/56 - Computer malware detection or handling, e.g. anti-virus arrangements
G06F 21/53 - Monitoring users, programs or devices to maintain the integrity of platforms, e.g. of processors, firmware or operating systems during program execution, e.g. stack integrity, buffer overflow or preventing unwanted data erasure by executing in a restricted environment, e.g. sandbox or secure virtual machine
93.
Methods and systems for identifying legitimate computer files
A system for evaluating a target file includes an endpoint computer that receives similarity digests of legitimate files, receives a target file, and generates a similarity digest of the target file. The endpoint computer determines whether or not the target file is legitimate based on a comparison of the similarity digest of the target file against the similarity digests of the legitimate files. The system further includes a backend computer system that receives the legitimate files, generates the similarity digests of the legitimate files, and provides the similarity digests of the legitimate files to the endpoint computer.
G06F 21/56 - Computer malware detection or handling, e.g. anti-virus arrangements
G06F 21/57 - Certifying or maintaining trusted computer platforms, e.g. secure boots or power-downs, version controls, system software checks, secure updates or assessing vulnerabilities
H04L 29/06 - Communication control; Communication processing characterised by a protocol
H04L 9/32 - Arrangements for secret or secure communications; Network security protocols including means for verifying the identity or authority of a user of the system
G06N 99/00 - Subject matter not provided for in other groups of this subclass
94.
Policy management in software container computing environments
A system for managing computer security policies includes a policy management system that provides computer security policies to container host machines. The policy management system retrieves images of software containers from an image registry and generates computer security policies that are specific for each image. A container host machine informs the policy management system when an image is pulled from the image registry into the container host machine. The policy management system identifies a computer security policy that is applicable to the image and provides the computer security policy to the container host machine. The container host machine can also locally identify the applicable computer security policy from among computer security policies that are received from the policy management system. The container host machine enforces the computer security policy and other currently existing computer security policies.
Abusive user accounts in a social network are identified from social network data. The social network data are processed to compare postings of the user accounts to identify a group of abusive user accounts. User accounts in the group of abusive user accounts are identified based on posted message content, images included in the messages, and/or posting times. Abusive user accounts can be canceled, suspended, or rate-limited.
G06F 15/16 - Combinations of two or more digital computers each having at least an arithmetic unit, a program unit and a register, e.g. for a simultaneous processing of several programs
Examples of implementations relate to metadata extraction. For example, a system of privacy preservation comprises a physical processor that executes machine-readable instructions that cause the system to normalize a network traffic payload with a hardware-based normalization engine controlled by a microcode program; parse the normalized network traffic payload, as the network traffic payload passes through a network, by performing a parsing operation of a portion of the normalized network traffic payload with a hardware-based function engine of a plurality of parallel-distributed hardware-based function engines controlled by the microcode program; and provide the hardware-based function engine with a different portion of the normalized network traffic payload responsive to an indication, communicated through a common status interface, that the different portion of the normalized network traffic payload is needed to complete the parsing operation.
Examples relate to identifying a signature for a data set. In one example, a computing device may: receive a data set that includes a plurality of data units; iteratively determine a measure of complexity for windows of data units included in the data set, each window including a distinct portion of the plurality of data units; identify, based on the iterative determinations, a most complex window of data units for the data set; and identify the most complex window as a data unit signature for the data set.
Examples relate to identifying signatures for data sets. In one example, a computing device may: for each of a plurality of first data sets, obtain a data set signature; generate a first data structure for storing each data set signature that is distinct from each other data set signature; for each of a plurality of second data sets, obtain at least one data subset; generate a second data structure for storing each data subset; remove, from the first data structure, each data set signature that matches a data subset included in the second data structure; and for each data set signature removed from the first data structure, identify each first data set from which the data set signature was obtained; and for each identified first data set, obtain a new data set signature.
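The two data structures above reduce to a signature-to-origins map and a subset index: signatures colliding with any second-set subset are dropped and their source data sets flagged for re-signing. The digest function in the test is a trivial stand-in:

```python
def refresh_signatures(first_sets, get_signature, second_subsets):
    """first_sets: name -> data set; get_signature: data set -> signature;
    second_subsets: data subsets obtained from the second data sets.
    Returns (kept signatures with their origins, data sets needing
    a new signature)."""
    signatures = {}                      # first data structure
    for name, data in first_sets.items():
        signatures.setdefault(get_signature(data), []).append(name)
    subset_index = set(second_subsets)   # second data structure
    needs_new = [name
                 for sig, names in signatures.items()
                 if sig in subset_index
                 for name in names]
    kept = {sig: names for sig, names in signatures.items()
            if sig not in subset_index}
    return kept, needs_new
```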
Examples relate to identifying malicious activity using data complexity anomalies. In one example, a computing device may: receive a byte stream that includes a plurality of bytes; determine, for at least one subset of the byte stream, a measure of complexity of the subset; determine that the measure of complexity meets a predetermined threshold measure of complexity for a context associated with the byte stream; and in response to determining that the measure of complexity meets the threshold, provide an indication that the byte stream complexity is anomalous.
G06F 21/55 - Detecting local intrusion or implementing counter-measures
G06F 21/56 - Computer malware detection or handling, e.g. anti-virus arrangements
H04L 29/06 - Communication control; Communication processing characterised by a protocol
G06F 21/57 - Certifying or maintaining trusted computer platforms, e.g. secure boots or power-downs, version controls, system software checks, secure updates or assessing vulnerabilities
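Assuming an entropy-style complexity measure, the context-dependent threshold check might look like this; the contexts, thresholds, and fixed chunking are all hypothetical:

```python
import math
from collections import Counter

def complexity(data):
    """Shannon entropy in bits per byte, as one possible complexity measure."""
    counts = Counter(data)
    n = len(data)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

# Hypothetical per-context thresholds: high entropy is expected in an
# encrypted payload but anomalous in, say, an HTTP header.
THRESHOLDS = {"http-header": 5.0, "tls-payload": 7.9}

def is_anomalous(byte_stream, context, window=64):
    """Indicate an anomaly when any subset's complexity meets the
    threshold for the stream's context."""
    return any(complexity(byte_stream[i:i + window]) >= THRESHOLDS[context]
               for i in range(0, len(byte_stream), window))
```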
100.
Methods and systems for finding compromised social networking accounts
Social messages sent or posted by users of a social networking service are collected. Compromised social networking accounts are identified from the collected social messages. Keywords indicative of compromised social networking accounts are extracted from social messages of identified compromised social networking accounts. The keywords are used as search terms in a search query for additional social messages. Additional compromised social networking accounts are identified from search results that are responsive to the search query.