In today’s ever-changing business landscape, those that operate using a software-driven model will be the most successful. These businesses recognize the power of transforming enormous volumes of data generated by digital operations into real-time insights that propel further success. The ability to do this in real-time, all the time, across multiple functional disciplines, lies at the heart of continuous intelligence.
A number of domain “forgeries,” or tricky look-alike domains, have been observed recently. These attack campaigns cleverly abuse Internationalized Domain Names (IDNs): domains registered in an ASCII Punycode form that, once rendered as Unicode by a standard browser, appear identical to a corporate or organization name, allowing that organization’s domains to be impersonated or hijacked. This attack has been researched and defined in past campaigns as an IDN homograph attack.
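To make the mechanics concrete, here is a minimal sketch using Python’s built-in idna codec. The look-alike domain is purely hypothetical, with a Cyrillic “а” (U+0430) standing in for the Latin letter; real campaigns register the ASCII Punycode form shown below.

```python
# A minimal sketch of an IDN homograph, using Python's built-in "idna" codec.
# The domain is hypothetical; its first character is the Cyrillic letter
# "а" (U+0430), not the Latin "a".
lookalike = "\u0430pple.com"

# The ASCII (Punycode) form is what actually gets registered and resolved in DNS.
punycode = lookalike.encode("idna")
print(punycode)                 # ASCII/Punycode form, e.g. b'xn--pple-...'

# Decoding it back yields the Unicode string a browser may render, which is
# visually indistinguishable from the legitimate brand name.
print(punycode.decode("idna"))
```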
The modern deployment pipeline is arguably one of the most important pieces of an organization’s infrastructure. The ability to take source code and turn it into a production application that’s scalable, reliable and highly available has become an enormous undertaking due to the pervasiveness of modern application architectures, multi- or hybrid-cloud deployment strategies, container orchestration and the shift-left movement of security into the pipeline.
An actor’s ability to remain undiscovered, or to obfuscate its activity while running a malicious campaign, usually determines how much that campaign gains. These gains can be measured in different ways, such as the time available to complete operations (exfiltration, movement of compromised data), the ability to remain operational before takedown notices are issued, or the ability to profit from for-profit crimeware (DDoS for hire, cryptomining).
Edge computing is likely the most interesting segment of the broader world of IoT. If IoT is about connecting all the devices to the Internet, edge computing is about giving more processing power to devices at the edge. Edge computing views these edge devices as mini clouds or mini data centers: each has its own mini servers, mini networking, mini storage, apps running on top of this infrastructure, and endpoint devices. Rather than sending data to the cloud for processing and receiving already-processed data back from a central hub, in edge computing the processing happens on the edge device itself, or close to it.
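As a rough illustration of that last point, the sketch below (plain Python, with hypothetical read_sensor and send_to_cloud stand-ins) aggregates raw readings locally and ships only a compact summary upstream, rather than streaming every raw data point to the cloud.

```python
# A minimal sketch of the edge-computing idea described above: process raw
# readings on (or near) the device and send only a compact summary upstream.
# All names here (read_sensor, send_to_cloud) are hypothetical placeholders.
import random
import statistics
import time

def read_sensor() -> float:
    """Stand-in for a local sensor read on the edge device."""
    return 20.0 + random.random() * 5.0

def send_to_cloud(summary: dict) -> None:
    """Stand-in for an upstream call; a real device might POST over HTTPS or MQTT."""
    print("shipping summary:", summary)

def run_edge_loop(window_seconds: int = 60, sample_hz: int = 10) -> None:
    readings = []
    for _ in range(window_seconds * sample_hz):
        readings.append(read_sensor())
        time.sleep(1.0 / sample_hz)
    # The heavy lifting happens locally; only the aggregate leaves the device.
    send_to_cloud({
        "count": len(readings),
        "mean": statistics.mean(readings),
        "max": max(readings),
    })

if __name__ == "__main__":
    run_edge_loop(window_seconds=1)
```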
Our digital attack surface is expanding rapidly and threats are becoming more sophisticated by the day. This is putting enormous strain on security teams, which are already stretched to their limits. Nonetheless, organizations remain skeptical of relieving this cybersecurity strain with AI and automation. Why does this situation persist when it seems to defy logic?
A type of credential reuse attack known as credential stuffing has recently been observed in growing numbers across industry verticals. Credential stuffing is the automated probing of, and access to, online services using credentials that typically come from data breaches or are bought in the criminal underground.
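One way to picture how defenders spot this is a simple heuristic: a single source IP attempting logins against many distinct usernames in a short window. The sketch below is illustrative only; the event fields and threshold are assumptions, not a production detection.

```python
# A minimal sketch of one common credential-stuffing heuristic: a single
# source IP attempting logins against many distinct usernames in a short
# window. The event format and threshold below are illustrative assumptions.
from collections import defaultdict

DISTINCT_USER_THRESHOLD = 50   # tune per service; illustrative only

def flag_stuffing_sources(events, threshold=DISTINCT_USER_THRESHOLD):
    """events: iterable of dicts like {"src_ip": "...", "username": "...", "success": bool}."""
    users_per_ip = defaultdict(set)
    for event in events:
        if not event["success"]:
            users_per_ip[event["src_ip"]].add(event["username"])
    return {ip for ip, users in users_per_ip.items() if len(users) >= threshold}

# Example usage with synthetic events:
sample = [{"src_ip": "203.0.113.7", "username": f"user{i}", "success": False}
          for i in range(60)]
print(flag_stuffing_sources(sample))   # {'203.0.113.7'}
```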
An ever-increasing number of organizations are working in the cloud. Which cloud delivery model they use depends on their business model. The three most common delivery models for cloud services are software-as-a-service (SaaS), platform-as-a-service (PaaS) and infrastructure-as-a-service (IaaS).
At Sumo Logic, we manage petabytes of unstructured log data as part of our core log search and analytics offering. Multiple terabytes of data are indexed every day and stored persistently in AWS S3. When a query is executed against this data via UI, API, scheduled search or pre-installed apps, the indexed files are retrieved from S3 and cached in a custom read-through cache for these AWS S3 objects.
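The read-through pattern itself is simple to sketch; the example below shows its shape against S3 using boto3. It is a much-simplified stand-in for the custom cache mentioned above: the in-memory dict replaces a real bounded, on-disk cache, and the bucket and key names are illustrative.

```python
# A simplified sketch of the read-through pattern: look in a local cache
# first, and only fall back to S3 on a miss. The in-memory dict and the
# bucket/key names are illustrative stand-ins, not the production design.
import boto3

class ReadThroughS3Cache:
    def __init__(self, bucket: str):
        self.bucket = bucket
        self.s3 = boto3.client("s3")
        self.cache = {}          # key -> bytes; a real cache would bound size and TTL

    def get(self, key: str) -> bytes:
        if key in self.cache:    # cache hit: serve locally, no S3 round trip
            return self.cache[key]
        # cache miss: read through to S3, then populate the cache
        body = self.s3.get_object(Bucket=self.bucket, Key=key)["Body"].read()
        self.cache[key] = body
        return body

# cache = ReadThroughS3Cache("example-index-bucket")
# data = cache.get("indexes/2020/01/segment-0001")   # first call hits S3, later calls do not
```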
Implementing and operationalizing DevOps best practices and capabilities in an organization is a key predictor of increased customer satisfaction, organizational productivity and profitability. Doing so successfully can be a challenging endeavour. Implementing DevOps can be particularly difficult because it often requires technology changes, process changes and a drastic change in mindset. Overcoming all three of these obstacles in a way that knocks down traditional barriers between development and operations teams at each stage of the software delivery lifecycle raises the bar even further.
MySQL has been one of the leading open source databases for the last couple of decades, and it underpins potentially millions of applications, from tiny prototypes to internet-scale ecommerce solutions. The beauty of MySQL is that it can be tuned as the application grows. For example, you can add high-availability options like clustering without having to refactor the application.
Logs are valuable. Logs generated by a major backend resource that provides clients with access to crucial data are more than just valuable; knowing where they are and being able to manage and understand the information that they contain can mean the difference between smooth, secure operation and degraded performance or even catastrophic failure for your application.
As the shift to cloud, modern app architectures and new technology stacks continues to accelerate, so does the demand for real-time analytics to monitor, troubleshoot, secure and speed new innovations in these environments. So we're not surprised to see that demand for continuous intelligence, which we define as real-time analytics from a cloud-native platform supporting multiple use cases, is accelerating as well.
As service architectures have transitioned from the monolith to microservices, some of the tougher problems that organizations have had to solve are service discovery and load balancing. Service mesh technologies seek to solve these and other problems, which have been exacerbated by the exponential growth in the number of hosts.
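To see the problem in miniature, here is a toy registry with round-robin selection; a service mesh pushes this kind of logic into sidecar proxies instead of application code. The class, service name and addresses below are purely illustrative.

```python
# A toy sketch of the two problems named above: a registry that tracks which
# instances back a service (discovery) and a round-robin picker (load
# balancing). A service mesh moves this logic into sidecar proxies; the names
# and in-memory structures here are purely illustrative.
import itertools
from collections import defaultdict

class ServiceRegistry:
    def __init__(self):
        self.instances = defaultdict(list)   # service name -> list of "host:port"
        self._cursors = {}                    # service name -> round-robin iterator

    def register(self, service: str, address: str) -> None:
        self.instances[service].append(address)
        self._cursors[service] = itertools.cycle(self.instances[service])

    def resolve(self, service: str) -> str:
        """Return the next instance for a service, round-robin."""
        if service not in self._cursors:
            raise LookupError(f"no instances registered for {service}")
        return next(self._cursors[service])

registry = ServiceRegistry()
registry.register("checkout", "10.0.0.5:8080")
registry.register("checkout", "10.0.0.6:8080")
print(registry.resolve("checkout"))   # 10.0.0.5:8080
print(registry.resolve("checkout"))   # 10.0.0.6:8080
```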
In this post, we continue our discussion of use cases involving account takeover and credential access in enterprise data sets. In the first part of this series, we defined a VIP account as any account that has privileged or root-level access to systems or services. These VIP accounts are important to monitor for changes in behavior, particularly because they have critical access to key parts of the enterprise. As a follow-up to our first post, this blog will describe a real-time approach for automatically profiling VIP accounts and detecting when they are potentially being misused.
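Before diving into the details, here is the general shape of behavioral profiling in a minimal sketch: build a per-account baseline of normal activity and flag large deviations. The sample values and the three-sigma rule are assumptions for illustration, not necessarily the approach described later in this post.

```python
# A minimal sketch of behavioral profiling: learn a per-account baseline of
# normal activity and flag large deviations. Field values and the 3-sigma
# rule are illustrative assumptions.
import statistics

def build_baseline(hourly_counts):
    """hourly_counts: historical list of events-per-hour for one VIP account."""
    return statistics.mean(hourly_counts), statistics.pstdev(hourly_counts)

def is_anomalous(current_count, baseline, sigmas=3.0):
    mean, stdev = baseline
    return current_count > mean + sigmas * max(stdev, 1.0)

history = [4, 6, 5, 3, 7, 5, 4, 6]   # typical hourly activity for a hypothetical "svc_admin"
baseline = build_baseline(history)
print(is_anomalous(40, baseline))     # True: sudden burst worth investigating
print(is_anomalous(6, baseline))      # False: within normal range
```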
You probably know that automation is an important component of DevOps. But how do you actually put automation into practice in order to advance the goals of DevOps? Let's answer that question by exploring what automation means in the context of DevOps, why automation is important to DevOps and which processes to prioritize to achieve DevOps automation.