In early March, the zero-day breach of Microsoft Exchange Server instantly became the cybersecurity story of 2021 so far. Coming on the heels of the SolarWinds breach of late 2020, it is the second suspected state-sponsored cyberattack in quick succession, delivering yet another wake-up call to many organizations.
When news broke about the four Exchange vulnerabilities on March 2, Pratum consultants immediately began contacting clients and instructing them to update their servers with Microsoft's available patches as soon as possible. However, it's crucial to understand that hackers exploited the vulnerabilities before the patches were released. So even if your servers have been patched, this remains a live situation as Pratum's cybersecurity experts continue to determine exactly what the attackers accomplished with the zero-day attack. The following summary covers what we know so far. We will continue to update this blog as more information becomes available.
The new Exchange Server vulnerabilities primarily affect on-premises e-mail servers frequently used by small- and medium-size businesses. This was a widespread attack that sought to compromise any Exchange server it could find through online scans. When the attackers located a vulnerable Exchange Server, they typically inserted malware that would allow them to develop full attacks on compromised organizations at a later date.
The breach impacts on-premises Exchange Server 2013, 2016 and 2019 and can give attackers access to e-mail accounts, as well as a foothold to act within the targeted environments over the long term. Microsoft stated that the attack was initially traced to HAFNIUM, a state-sponsored group operating out of China. The United States has seen the highest number of attacks.
The vulnerability was initially identified in January and became widely known when Microsoft announced its patches on March 2. As news of the vulnerability spread, attackers worldwide quickly began to exploit the vulnerability by implanting ransomware and other malware. In the second week of March, reports indicated that the number of attacks was doubling every few hours. Experts estimate that as many as 60,000 organizations have been hacked so far.
When the vulnerability was publicized, Pratum's incident response team began working around the clock, helping clients investigate their systems to identify when a compromise occurred and what type of activity took place during it.
Pratum's key recommendation as of this writing: if you need assistance in understanding exactly what vulnerabilities still exist in your system because of this breach, please contact Pratum to talk with one of our advisors.
Because the bad guys never sit still, your threat-hunting system can't afford to either. A managed XDR (Extended Detection and Response) service delivers the latest advances in endpoint protection and threat hunting to keep up with new attack vectors. Managed XDR provides multiple advantages over traditional security stacks made up of several loosely connected systems. In this blog, we'll focus specifically on managed XDR's ability to track suspicious activity and decide when it's time to intervene and stop a potential threat. Using a combination of machine learning and XDR rules programmed by analysts, these systems correlate actions across all corners of your technology stack to recognize threats that may previously have slipped by unnoticed.
To help you understand these threat-hunting capabilities, let's look at a day in the life of a managed XDR system as if it were the world's most secure airport.
In our scenario, John Doe drives to the airport to catch the same 10:07am flight he’s taken every Monday for the last month. At the airport, he checks into the flight via a kiosk, makes his way through the TSA security checkpoint and heads to his assigned gate.
That seems routine enough. But if John Doe were living within an XDR-supervised system, the situation would look more like the following. (Keep in mind that while our airport example involving humans plays out over a couple of hours, this sequence may happen almost instantly in a managed XDR setting.)
Because our airport features highly enhanced security, anyone entering the property must pass through a manned checkpoint. At the guard shack, Agent Chuck Norris greets John Doe and asks to see his ID. Chuck recognizes John from his visits every Monday and notes that he’s arriving at his usual time. But Chuck notices some changes. John is entering via the west gate instead of the north gate that he typically uses, and he has a passenger with him. (Because our agent is Chuck Norris, he personally mans every gate every day.)
XDR parallel: XDR monitors every aspect of your technology stack and recognizes John as a known system user. But it notes that his pattern has changed. He’s trying to log in from a different web browser and IP address. Is John working somewhere else today, or is this a hacker who stole John’s credentials trying to log in from their location? Since the login info is correct and the login time matches John’s typical pattern of starting work each day, XDR lets him proceed.
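In code, that first-pass check often amounts to a simple anomaly score against a known user profile. The sketch below is illustrative only; the profile fields, event shape, and point values are assumptions, not a real XDR platform's schema.

```python
from datetime import time

# Hypothetical baseline profile for a known user. Field names are
# illustrative assumptions, not any vendor's actual schema.
KNOWN_PROFILE = {
    "usual_ips": {"203.0.113.10"},
    "usual_agents": {"Chrome/Windows"},
    "usual_window": (time(7, 30), time(9, 30)),  # typical start-of-day logins
}

def score_login(event: dict, profile: dict) -> int:
    """Return an anomaly score: each deviation from the profile adds a point."""
    score = 0
    if event["source_ip"] not in profile["usual_ips"]:
        score += 1  # new IP address
    if event["user_agent"] not in profile["usual_agents"]:
        score += 1  # different browser
    start, end = profile["usual_window"]
    if not (start <= event["login_time"] <= end):
        score += 1  # off-hours login
    return score

# John's login: new IP and browser, but his usual time of day.
event = {"user": "jdoe", "source_ip": "198.51.100.7",
         "user_agent": "Firefox/Linux", "login_time": time(8, 5)}
print(score_login(event, KNOWN_PROFILE))  # 2
```

A score of 2 isn't enough to block the session on its own, so the login is allowed but flagged for closer watching, just as Chuck waves John through while making a mental note.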
Chuck finds something else sketchy. He knows that John took a different route from his house to the airport today. (Chuck gets a lot of intel.) But with a quick check of traffic reports, Chuck sees that there is road construction on the interstate. That would explain John’s atypical route to the airport.
XDR parallel: The system's job is to stop threats while avoiding false positives. If a quick check can explain why a user is connecting from an unusual location, the system will allow them to proceed. But John's activities have triggered enough rules for anomalous behavior that his session has moved to a higher alert level. XDR is now watching him more closely.
John makes his way through the main airport terminal to a kiosk, where he checks into his flight. His ID and flight reservation check out. A camera in the kiosk snaps a picture of his face, and his name and photo are instantly compared to every known no-fly list in the world. He comes back clear. He receives a boarding pass and heads to the security screening line.
XDR parallel: XDR systems are constantly improving their rules based on global threat analysis information. For example, Microsoft (Pratum’s XDR platform of choice) analyzed 31,700 indicators per second in 2020 and uses all of that information to constantly screen for emerging threats. If an attack happened on the other side of the globe last week, Microsoft’s XDR platform has probably taken note and learned to watch out for that technique.
At the entrance to the TSA screening area, John runs into—who else?—Chuck Norris. Chuck reviews John's boarding pass and notices that John is entering the regular screening line, even though he has TSA PreCheck. Anybody can mistakenly get into the wrong line. But Chuck knows that A) John flies every single week and B) anybody with PreCheck takes advantage of it every time. Chuck points John to the proper line but makes another mental note. John hasn't tried anything dangerous, but he's not quite acting normal.
XDR parallel: Getting into the wrong line equates to a failed login attempt. XDR knows we all mistype passwords, but it looks at how many failed attempts were made—and how quickly. In John’s case, enough low-level indicators of anomalous events are adding up to the fact that he increasingly looks like a real potential threat. Without XDR, your security stack may not be coordinating all those seemingly disconnected events into an overall image of a suspicious actor. In many security stacks built on solutions from multiple vendors, the TSA agent, for example, wouldn’t know what the guard shack saw. But just like Chuck is everywhere in this airport, XDR sees everything happening in your system.
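Counting failed attempts "and how quickly" usually means a sliding time window: a few typos spread over ten minutes are normal, while the same count packed into a few seconds looks like brute force. The threshold and window below are illustrative assumptions, not any vendor's defaults.

```python
from collections import deque

WINDOW_SECONDS = 60   # assumed window length
MAX_FAILURES = 5      # assumed tolerance before alerting

def is_suspicious(failure_times: list) -> bool:
    """True if any 60-second window holds more than MAX_FAILURES failures.

    failure_times are timestamps in seconds (any consistent epoch).
    """
    window = deque()
    for t in sorted(failure_times):
        window.append(t)
        # Drop failures that have aged out of the window.
        while t - window[0] > WINDOW_SECONDS:
            window.popleft()
        if len(window) > MAX_FAILURES:
            return True
    return False

# Three typos over ten minutes: normal.
print(is_suspicious([0, 200, 600]))                 # False
# Eight failures in under twenty seconds: likely brute force.
print(is_suspicious([0, 2, 5, 7, 9, 12, 15, 18]))   # True
```

On its own, even the brute-force pattern is just one more anomaly point; it's the correlation with the odd IP and browser earlier that builds the picture of a suspicious actor.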
John successfully passes through the check of his ID and boarding pass. At the podium, Chuck confirms that John has PreCheck on his boarding pass and allows him to proceed to that screening area.
XDR parallel: PreCheck is the equivalent of a known user coming from a known IP address or trusted device. Because the system recognizes John's identity and device, his login gets less scrutiny than others. In TSA terms, he can leave his shoes on during screening.
As John exits the screening area, Chuck remembers something from earlier today. A local cop—OK, that was actually Chuck, too—pulled John over for speeding on the way to the airport. Chuck had given John only a warning, but Chuck noticed at the time that John seemed very nervous during the conversation. Chuck decides he won’t let John out of his sight until John is on his plane and headed away from Chuck’s airport.
XDR parallel: It’s all adding up to the fact that John is acting strangely—and may not even really be the John Doe he’s claiming to be. His activities are now considered high-risk.
At the TSA checkpoint, Chuck is running the X-ray (Does this guy ever take a coffee break?). He spots a suspicious object in a bag ahead of John’s. Chuck inspects the bag and finds a pocketknife. If Chuck were a rookie, he might tackle the guy, handcuff him and haul him away for this. But Chuck’s no rookie, and he knows that people forget pocketknives in bags all the time. That doesn’t make them terrorists. So Chuck confiscates the knife and sends the man on his way.
XDR parallel: Good XDR rules don't overreact. Locking out a user or shutting down a system over such a small infraction causes significant inconvenience and business interruption for no good reason.
John makes it through the X-ray screening with no red flags, but Chuck notices him walking toward a restricted area. John types a code into the door’s keypad, and it opens. Why would a passenger have the code to a door leading to the runway? Chuck decides that John is launching some kind of attack. Chuck runs toward John, wrestles him to the floor and slaps on the cuffs. John is neutralized as a threat. But Chuck recalls that when John went through the guard shack earlier today, he had a passenger in the car. Where is that person now? Chuck knows the other man has already checked in for a different flight leaving from Terminal A, so Chuck orders Terminal A locked down until he can find John’s accomplice.
XDR parallel: When enough anomalous activities add up, XDR shuts down the perceived threat. In this case, John had a valid access code, but nothing in his normal profile indicates that he SHOULD have that code. So XDR would declare him a threat and shut down his access before he can do any damage.
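The "adding up" step can be sketched as a tiered escalation policy: anomaly points accumulate across a session, and the response steps up at thresholds. The point values, threshold numbers, and action names below are illustrative assumptions, not actual XDR rules.

```python
# Hypothetical escalation tiers, checked from most to least severe.
THRESHOLDS = [
    (8, "block_session"),    # enough evidence: shut the threat down
    (5, "require_mfa"),      # challenge the user before continuing
    (3, "watch_closely"),    # elevated monitoring, no interruption
]

def respond(total_score: int) -> str:
    """Return the response action for an accumulated anomaly score."""
    for threshold, action in THRESHOLDS:
        if total_score >= threshold:
            return action
    return "allow"

# John's session so far: each anomaly contributed points.
session_events = {
    "new_ip": 2,
    "new_browser": 1,
    "failed_logins": 2,
    "unusual_resource_access": 4,  # the restricted door
}
print(respond(sum(session_events.values())))  # block_session
```

Note that no single event crossed the blocking threshold; it's the correlated total that triggers the shutdown, which is exactly the behavior the airport story illustrates.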
Just as Chuck knew about our suspect’s associate, XDR can scan for other entities, such as users and IPs, that are associated with the threat.
Chuck made the key decision to lock down only one terminal, not the entire airport. If XDR makes a habit of shutting down entire systems, productivity comes to a standstill. So managed XDR rules are designed to make good decisions and to confine quarantines to the minimum elements necessary.
As you can see, XDR provides powerful capabilities for building awareness of an emerging threat and taking action to stop it. Most of this happens only after Security Operations Center (SOC) analysts have customized the XDR tool to recognize the kinds of patterns shown here and prevent real threats from slipping through undetected. For more information on how managed XDR could make your environment more secure and efficient, contact us today.
In security information and event management (SIEM), we rely on software to help identify patterns that indicate security threats. A series of failed login attempts, for example, will generate a ticket alerting a Security Operations Center (SOC) analyst that someone may be trying to hack into the system. (Note that SIEM solutions are increasingly being incorporated into overall Extended Detection and Response (XDR) solutions. Read this article for an overview of Managed XDR.)
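The failed-login rule just described boils down to counting failures per account and ticketing the ones that cross a threshold. The event shape and the threshold in this sketch are illustrative assumptions, not a real SIEM's rule syntax.

```python
from collections import Counter

def tickets_for(events: list, threshold: int = 5) -> list:
    """Return accounts with at least `threshold` failed logins."""
    failures = Counter(e["user"] for e in events
                       if e["action"] == "login_failed")
    return [user for user, n in failures.items() if n >= threshold]

# Six failures for alice, two for bob: only alice gets a ticket.
events = ([{"user": "alice", "action": "login_failed"}] * 6
          + [{"user": "bob", "action": "login_failed"}] * 2)
print(tickets_for(events))  # ['alice']
```

Written this broadly, the rule fires on every account that fails often, whatever the reason, which is exactly how false positives creep in, as discussed next.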
With any monitoring solution, one of the biggest challenges is the dreaded false positive. A false positive is any alert triggered by a rule that’s written too broadly, causing it to issue a ticket over an event that’s not a legitimate security threat. A false positive is the equivalent of a home motion-sensor alarm that goes off every time the wind blows through the backyard trees. Before long, the homeowner ignores the alarms, leaving them off-guard when it really IS a burglar setting off the alarm.
For IT teams that don’t have an in-house SOC or a managed service supporting them, the daily stream of false positives from a SIEM leads to alert fatigue, which produces frustration and growing inattention to alerts in general. One major IT survey found that 44% of alerts go uninvestigated.
Clearly, narrowing the focus to real threats raises an IT team’s chances of spotting problems and fixing them.
Discovering false positives using SIEM can be a lot like playing the game Guess Who. The player's objective is to guess the Mystery Person on the opponent's card by asking one question per turn (such as, "Are they a man?") and eliminating any gameboard faces that don't fit the Mystery Person's description. In a SIEM setting, we are working to eliminate false positives so that the only alerts we see represent actual threats.
Players usually start with generic questions, but broad-brush guesses still leave us with a board full of faces. On the other hand, asking questions that are too specific takes a long time to narrow down the options. In SIEM, if we write rules that are too generic, we'll face numerous false positives that only cause clutter and confusion. If we write rules that are too specific, we may miss critical incidents that leave our systems vulnerable. The key is to make educated decisions based on the data (or gameboard faces) in front of us. We start with a wide data set and use logic to narrow the results.
Continuing with our Guess Who analogy, let's say we've narrowed the field to two options. Our final choices look very similar: Both are male, Caucasian, and bald, and both have orange facial hair. But we know they aren't the same. If a SIEM solution's rule is searching for Bill using the criteria listed above, Herman represents a false positive. Herman and Bill meet all of the same "threat" criteria we've listed so far. The solution lies in finding a factor unique to Bill, such as a small nose. If we add this final condition to the original filter criteria, the false positive disappears.
This is where an experienced, well-trained security analyst proves invaluable to a SIEM solution. As good as machines are with calculations and patterns, they often need the human element to distinguish a real threat from a false positive. At Pratum, we constantly upgrade the ruleset of our SIEM solution based on the expertise of our security analysts and consultants.
Our security analysts examine event logs to identify pieces of information that the software wasn’t considering. For example, in a case of failed logons, an analyst would look in the raw log for the error code that gives the reason for the authentication failure. If the error code indicates that the password has expired, the analyst could typically conclude that it is not a serious security incident. By adding that insight to the existing rule, the analyst can eliminate future false positives from this kind of event.
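That refinement is just one more condition on the filter: keep a failed logon only if its error code isn't the benign one. In this sketch, the value 0x532 mirrors the Windows ERROR_PASSWORD_EXPIRED code, but treat the event shape and mapping as illustrative assumptions rather than a specific SIEM's configuration.

```python
# Windows error code 1330 (0x532) indicates an expired password; this
# mapping is used here as an illustrative assumption.
PASSWORD_EXPIRED = 0x532

def is_actionable(event: dict) -> bool:
    """Keep a failed logon only if it isn't explained by an expired password."""
    return (event["action"] == "login_failed"
            and event.get("error_code") != PASSWORD_EXPIRED)

events = [
    {"user": "alice", "action": "login_failed", "error_code": 0x532},  # expired
    {"user": "alice", "action": "login_failed", "error_code": 0x52E},  # bad password
]
print([e["error_code"] for e in events if is_actionable(e)])  # [1326]
```

Only the genuinely unexplained failure survives the filter, so the expired-password noise never reaches an analyst's queue again.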
Although most false positives don’t pose an immediate security threat, any false positive can be a major distraction from threatening incidents. For example, a DNS configuration problem might constantly produce authentication issues on a network. It may be tempting to ignore an alert once you’ve decided it’s a false positive. But if you do that with several false positives you’ve learned to ignore, and several of them generate multiple alerts each day, you’ll soon get lost in daily noise that distracts you from legitimate security problems.
Remember that it costs the same amount of money to license a poorly tuned SIEM system as a well-tuned one. It’s worth investing in a managed service that can help you get the most from the tool you’re paying for.