Pratum Blog

Computer caught by fishing hook

Phishing now appears in 36% of breaches, up from 25% the year before.

2021 Verizon Data Breach Investigations Report

Network end users are frontline defenders and a critical component of an organization's information security program. That’s why all cybersecurity training materials include sections on how to spot phishing, which is both rampant and increasingly sophisticated in the methods used to lure victims. When our consultants evaluate risk within an organization and discuss its phishing awareness and training efforts, we typically see advice such as “Don't click on suspicious links” and “Hover the mouse pointer over links in an email to check whether they are legitimate.” But how do you know whether a link and the associated Uniform Resource Locator (URL) lead to a legitimate site?

To evaluate links and URLs, you should understand generic Top-Level Domains (gTLDs), country code TLDs (ccTLDs), and other types of Internet domains. This article covers the basics about reading and interpreting links/URLs.

What Does "www1" Mean?

A web address looks pretty suspicious if you see “www1” or “www2” (or some other number) in the URL. But that's not a definite red flag. Some websites are very popular and therefore have multiple servers working in a load-balancing configuration to serve content when requested. Some companies choose to number their servers, so if you see www1 or www2, you’re simply seeing which of those servers is providing the content. Seeing www1, www2, etc., is not in itself an indicator of a phishing site.

One way to teach users to look for indicators like these is by developing a customized training program that includes phishing awareness and testing. A training consultant can develop a set of simulated phishing messages that help users learn to spot red flags. When users click the simulated malicious links, the program can point them to additional training.

How Links/URLs are Formed

So what’s the key to reading URLs in links? The basic answer is that interpreting the URL means focusing on the important stuff between the double forward-slash “//” and the first single slash, primarily in the highlighted area shown below.

The structure of a link/URL

Note: The framework above is the basic URL breakdown. In place of http:// or https://, you may see ftp:// or news://. These are different types of transfer protocols. In addition, though “www” appears in many URLs, it is not a required component. You may see additional fields prior to the generic top-level domain and secondary domain/server name. (After the first single forward slash, you’ll find less critical things such as directories, subdirectories, filenames and file types.)
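If you’d like to see that breakdown programmatically, here is a minimal sketch using Python's standard urllib.parse module (the example URL is made up for illustration):

from urllib.parse import urlparse

url = "https://www.example.com/catalog/items/view.html?ref=nav"
parts = urlparse(url)

print(parts.scheme)   # transfer protocol, e.g. "https"
print(parts.netloc)   # the critical part between "//" and the first single slash: "www.example.com"
print(parts.path)     # everything after the first single slash: "/catalog/items/view.html"
print(parts.query)    # optional query string: "ref=nav"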

Example Links/URLs

With that background information in mind, let’s look at some examples.

1. http://www.amazon.com

This is a well-known site, and the URL doesn’t include any suspicious modifications.
Assessment: LEGIT!

2. http://www.ama.zon.com/gp/cart/view.html/ref=nav_cart

URLs can be formed in almost any fashion, which makes it easy for site owners to build unique site names. It also makes it easy for phishers to build site names that closely approximate legitimate site names.

In this example, a period makes all the difference. If a person clicked on the link above, they wouldn’t go to amazon.com. The link leads to the site zon.com, which could be a site registered by phishers.
Assessment: SUSPECT!

3. http://www.amazon.com@66.161.153.155/catalog

In this case, a person would be directed to IP address 66.161.153.155, not amazon.com. If you see a link/URL with an “@” sign, be particularly careful. Phishers routinely use this URL-manipulation tactic.
Assessment: SUSPECT!

4. http://209.131.36.158/amazon.com/index.jsp

This URL is similar in function to #3 above. It leads to the IP address, not to amazon.com, which appears only after the first single forward slash.
Assessment: SUSPECT!

5. http://www.google.com/url?q=http://www.badsite.com

This URL would refer a person from one site (in this case, google.com) to another site, badsite.com (note the “=http://” nomenclature that allows this). Referrals are not in themselves bad, but a referral could lead to a phishing site. In this case, badsite.com doesn’t look legitimate.
Assessment: SUSPECT!
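For those who like to experiment, the manual checks from examples 2 through 5 can be expressed as a short script. The following is a rough sketch in Python (the patterns and example URLs are illustrative only, not a complete phishing detector):

import re
from urllib.parse import urlparse

def suspicious_url_flags(url: str) -> list[str]:
    # Collect red flags similar to those discussed in the examples above.
    flags = []
    host = urlparse(url).netloc

    # Example 3: an "@" sign means the browser ignores everything before it.
    if "@" in host:
        flags.append("'@' trick in host")
        host = host.split("@", 1)[1]

    # Examples 3 and 4: a raw IP address instead of a domain name.
    if re.fullmatch(r"\d{1,3}(\.\d{1,3}){3}", host.split(":")[0]):
        flags.append("IP address instead of domain name")

    # Example 5: an embedded "=http://" redirect in the query string.
    if "=http://" in url or "=https://" in url:
        flags.append("embedded redirect to another site")

    return flags

print(suspicious_url_flags("http://www.amazon.com@66.161.153.155/catalog"))
print(suspicious_url_flags("http://209.131.36.158/amazon.com/index.jsp"))
print(suspicious_url_flags("http://www.google.com/url?q=http://www.badsite.com"))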

To help users quickly determine the top-level and secondary domains within a URL, some companies and organizations have started to use “domain highlighting.” When a user visits a site, part of the URL will dim after a few seconds, leaving the top-level and secondary domains dark. For example:

PayPal domain

It’s always good to look for these signs of a legitimate, secure site:

  • closed padlock
  • https://
  • company name highlighted in green within the URL (such as in the PayPal example above).

If a site’s certificate is expired or otherwise invalid, browsers such as Internet Explorer and Firefox, as well as some security services, will warn users. Is it safe to proceed through the warning? Use the other available indicators (review the URL again) to help determine whether the site is legitimate. If in doubt, do not proceed.
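If you want to check a certificate's expiration yourself rather than relying solely on the browser warning, the sketch below uses Python's standard ssl and socket modules (the hostname is illustrative; note that the TLS handshake itself will raise an error if the certificate is expired or otherwise invalid):

import socket
import ssl
from datetime import datetime, timezone

def cert_days_remaining(hostname: str, port: int = 443) -> int:
    # An invalid or expired certificate will fail here with SSLCertVerificationError.
    context = ssl.create_default_context()
    with socket.create_connection((hostname, port), timeout=5) as sock:
        with context.wrap_socket(sock, server_hostname=hostname) as tls:
            cert = tls.getpeercert()
    # notAfter is formatted like "Jun  1 12:00:00 2025 GMT"
    not_after = datetime.strptime(cert["notAfter"], "%b %d %H:%M:%S %Y %Z")
    return (not_after.replace(tzinfo=timezone.utc) - datetime.now(timezone.utc)).days

print(cert_days_remaining("www.example.com"))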

Why Country Domains Matter

Fifty-four countries allow their ccTLDs to be used for commercial purposes. For example, .co, the ccTLD for Colombia, can be used in place of .com. It’s very popular because the .com namespace is so crowded, and it gives businesses alternative ways to form website names.

Have you seen the URL http://o.co? That’s Overstock.com providing an alternate way for you to get to the company through your browser.

You may have seen youtu.be. That’s a legitimate URL, registered by Google using Belgium’s ccTLD, .be.

Much of the entertainment industry uses Tuvalu’s ccTLD, .tv. It’s a great way for the island nation to make money.

When trying to determine whether a site is legitimate, realize that many ccTLDs are also used for commercial purposes. What looks like a suspicious site could be, in fact, legitimate. However, ccTLDs can also be used to form names for phishing sites, so when in doubt, don’t click!

A Short History of Generic Top-Level Domains

We are all used to seeing gTLDs. We use them almost every day, including familiar ones such as .com, .gov and .edu. They are a key part of the structure of the Internet. They are also well understood by phishers, who manipulate URLs for fraudulent use. To best assess links within emails, as well as URLs within browsers, it’s good to know how the various domains have evolved and how they work.

In 1984, Request for Comments (RFC) 920 defined the original “general purpose domains”: .com, .gov, .mil, .edu, and .org. Another domain, .net, was added in early 1985 and is also considered one of the “original” domains. In 1988, .int (international) was added in response to the North Atlantic Treaty Organization’s request for a domain. Over the years, other domains were added, such as .biz and .info (2001). By early 2011, 22 gTLDs had been established. In June 2011, the Internet Corporation for Assigned Names and Numbers (ICANN) voted to remove many of the restrictions on gTLD applications and implementation, effectively opening the door for almost any gTLD to be used. Under the new rules, about 1,500 gTLDs are currently registered and cleared for use on the Internet, including .auto, .computer, .network, .social, .pizza, and .organic. Some security experts consider this evolution in gTLDs a gift to phishers because it allows them to form a multitude of new phishing websites. For a full listing of the expanded gTLDs, see the Internet Assigned Numbers Authority (IANA) Root Zone Database (https://www.iana.org/domains/root/db).

Country Code TLDs

Country code TLDs are also part of many URLs, so you can expect to see them in links on occasion. Countries have ccTLDs to help distinguish what country a site is registered in or originates from. For example, the ccTLD for the United States, .us, is often used by state and local governments. Other ccTLD examples are Australia, .au; Japan, .jp; and the United Kingdom, .uk. When reading a link or URL, realize that the position of the ccTLD within the URL can shift (at the end of a URL, such as http://www.gov.uk, or earlier in a URL, such as https://uk.news.yahoo.com).
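To see which part of a URL is the registrable domain and which part is only a subdomain, a library that consults the public suffix list can help. The sketch below uses the third-party Python package tldextract (an assumption for illustration; install it with pip install tldextract):

import tldextract  # third-party package that knows gTLDs, ccTLDs and multi-part suffixes

for url in ("https://uk.news.yahoo.com", "http://o.co", "http://www.ama.zon.com"):
    ext = tldextract.extract(url)
    # subdomain | registrable domain | TLD suffix
    print(url, "->", ext.subdomain, "|", ext.domain, "|", ext.suffix)

For uk.news.yahoo.com, the registrable domain is yahoo.com; for www.ama.zon.com (example 2 above), it is zon.com, which is exactly the distinction that matters when judging a link.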

Conclusion

Phishing continues to be a global problem, exacerbated by users who are unaware of phishing tactics, increasingly sophisticated phishing methods, and now, an increasing set of generic Top-Level Domains. Though links in emails aren’t phishers’ only method, they’re very common. To reduce the risks posed by phishing, you should know how to interpret links and the associated URLs.

If you are interested in learning more about social engineering, awareness and training, and risk assessment services, please contact us today.

Can You Spot E-mail Phishing? Awareness Poster

PhishingPoster

There are a number of key red flags to look for in an e-mail to help identify whether it is malicious. This poster will help your employees learn how to spot them.

Get Poster
Intrusion Detection System

At one time, everyone considered intrusion detection (IDS) or prevention (IPS) systems critical to overall information security success. But in recent years, observers keep declaring IDS/IPS dead, only to see it keep hanging on. And while we’re still not ready to bury IDS/IPS today, we DO urge you to consider how you’re deploying these tools within your overall information security strategy. Without proper tuning and deployment, IDS/IPS solutions can't do their jobs properly. And the current landscape of cloud computing and dispersed workforces means protection tied to a firewall misses a lot of activity. Read on to learn how to properly leverage IDS/IPS in a modern environment.

How IDS/IPS Works

The goal of an IDS is to detect cyberattacks by analyzing the signatures of data packets as they traverse the network. When the system detects a suspicious packet, it generates an alert. An IDS is a passive tool that simply detects and alerts. An IPS goes a step further, actively adapting to the threat and blocking the traffic before it reaches the intended victim host. Most IDS/IPS solutions are now available bundled with your firewall subscription.
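As a toy illustration of the signature concept (not any vendor's actual engine), the sketch below scans packet payloads for known byte patterns and either alerts like an IDS or blocks like an IPS:

# A toy illustration of signature-based detection, not a real IDS/IPS engine.
# Each "signature" is just a byte pattern; real systems use far richer rules.
SIGNATURES = {
    b"/etc/passwd": "possible path traversal attempt",
    b"' OR '1'='1": "possible SQL injection attempt",
}

def inspect(payload: bytes, mode: str = "ids") -> bool:
    """Return True if the packet should be forwarded to its destination."""
    for pattern, description in SIGNATURES.items():
        if pattern in payload:
            print(f"ALERT: {description}")
            if mode == "ips":        # IPS: actively block the traffic
                return False
    return True                      # IDS: alert only, traffic still flows

print(inspect(b"GET /index.html HTTP/1.1"))               # clean -> True
print(inspect(b"GET /../../etc/passwd HTTP/1.1", "ips"))  # blocked -> False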

Intrusion Detection System Diagram showing how Endpoint Detection and Response will protect workstations that bypass company firewall

Weaknesses in IDS and IPS Systems

To effectively use IDS/IPS systems, you should be aware of a couple of inherent limitations:

  • They rely on signatures, which means they only watch for what you tell them to. These systems require constant tuning to keep up with changing attack vectors used by cybercriminals. Tuning signatures to eliminate false positives and alert fatigue is a full-time job. In fact, there’s an entire industry providing these services. Even if you purchase these feeds of updated signatures, you still need to test and tweak them to match each unique environment. This explains why most IT teams use IDS rather than IPS. They don’t have time to tune the system, so they just skip the protection tools rather than risk constant business interruptions caused by false positives.
  • They can see only traffic that passes by them. All too often, we see IDS/IPS implementations provide a false sense of security to an organization because of poor network design. Organizations frequently rely on a unified threat management (UTM) type of firewall to provide their IPS. In that setup, the IPS sees only the traffic that is routed through the firewall. Most of the time, this is only internet traffic to the DMZ servers (such as websites and email) and outbound traffic to the internet from the workstations on the local network.

    While a UTM setup is a start, it leaves major gaps in coverage. The setup typically lacks monitoring within security zones or between local workstations, servers and remote workforces. Compromised systems may be attempting to breach other internal systems, but you can’t see it because the IDS/IPS never sees the traffic on those network segments.

How to Use IDS/IPS Effectively

Follow these steps to ensure that these tools provide the protection you’re expecting:

  • Get a risk assessment. Many organizations implement IDS/IPS simply to fulfill a compliance checkbox. But you need a full information security risk assessment to get a true picture of your organizational risk. Plus, even with IDS/IPS in place you may still be non-compliant, because most compliance frameworks (HIPAA, PCI, FISMA, etc.) require a risk assessment.
  • Ingest IDS/IPS data into your SIEM. Your SIEM provides a centralized log and alerting system for the entire environment. An IDS keeps its own logs, but how often are you looking at them? By ingesting the IDS/IPS data into your SIEM, you’ll have a clear look at what’s happening (a minimal sketch of this kind of forwarding appears after this list). This process will probably show you just how noisy most IDS/IPS deployments are in terms of alerts generated, which will probably motivate you to do some tuning.
  • Add EDR (endpoint detection and response). Protection tied to your firewall doesn’t account for today’s distributed workforces. Many of your users now work remotely, which means their activities never pass through your corporate firewall. The solution is EDR, which bundles active detection and response into each workstation. A full Managed Extended Detection and Response (XDR) system protects workstations, IoT devices, BYOD issues and more.
  • Leverage XDR to make IDS/IPS more effective. With the detailed information and correlation provided by XDR, you’ll be able to spot poorly tuned IDS/IPS, antivirus and other tools and make the right adjustments.
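As a minimal sketch of the SIEM ingestion step mentioned above, the script below reads a local IDS alert file and forwards alert records to a SIEM over syslog. The file path, field names (Suricata-style eve.json output), and SIEM hostname are assumptions to adjust for your environment:

import json
import logging
import logging.handlers

SIEM_HOST = ("siem.example.internal", 514)   # hypothetical SIEM syslog collector

logger = logging.getLogger("ids-forwarder")
logger.setLevel(logging.INFO)
logger.addHandler(logging.handlers.SysLogHandler(address=SIEM_HOST))

with open("/var/log/ids/alerts.json") as alert_file:   # illustrative path
    for line in alert_file:
        record = json.loads(line)
        if record.get("event_type") == "alert":        # forward alerts only
            logger.info(json.dumps(record))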

For help reviewing your security system’s architecture, contact us today.

Image of computer alerts over dark background

Here’s the hard truth about monitoring solutions: Most companies haven’t properly configured their SIEM/XDR system. Logging millions of events per day may seem productive. But what good does it do if the IT team is overwhelmed with alert fatigue and learns to ignore most of the notifications it gets?

“The basic rules in your SIEM may be functioning, but they often aren’t functioning well,” says Pratum Chief Technology Officer Steve Healey. Read on to learn how trained SOC analysts leverage SIEM/XDR tuning to turn out-of-the-box rules into meaningful tools for reducing noise and alert fatigue while stopping attacks before they gain a foothold.

The Problem with Out-of-the-Box SIEM Rules

All SIEM solutions come pre-loaded with a large number of rules. Alert fatigue happens because standard rules can’t possibly work equally well in every environment. “The idea behind those rules is solid, but they’re generic,” Steve says. “The execution will lead to an enormous number of false positives and alert fatigue. You’ll have to tune the rules with additional logic specific to your business to create exceptions without impeding the rule’s original intent.”
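Conceptually, that tuning looks something like the sketch below: a generic rule kept intact, wrapped with exceptions that encode knowledge of one specific environment. The event fields, account names, and subnet are illustrative assumptions, not real SIEM syntax:

from ipaddress import ip_address, ip_network

APPROVED_SCANNERS = ip_network("10.20.30.0/24")      # e.g. vulnerability scanner subnet
SERVICE_ACCOUNTS = {"svc-backup", "svc-patching"}    # known noisy automation accounts

def generic_rule(event: dict) -> bool:
    # Out-of-the-box intent: alert on any burst of failed admin logons.
    return event["action"] == "logon_failure" and event["count"] > 5

def tuned_rule(event: dict) -> bool:
    # Same intent, with exceptions specific to this environment.
    if event["user"] in SERVICE_ACCOUNTS:
        return False
    if ip_address(event["source_ip"]) in APPROVED_SCANNERS:
        return False
    return generic_rule(event)

event = {"action": "logon_failure", "count": 9,
         "user": "svc-backup", "source_ip": "10.20.30.44"}
print(generic_rule(event), tuned_rule(event))   # True False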

Beyond SIEM vendors, many other tech vendors regularly issue new detection rules to close gaps discovered in their own products. Many of those rules also generate a flood of false positives. Pratum’s SOC analysts (who have managed multi-tenant SIEM/XDR solutions for more than a decade) review each new rule’s goal and customize it for every customer’s environment. “We don’t just disable ineffective rules,” Steve says. “We take the core intent of the rule and build it out to get high-fidelity results.” With this kind of tuning, Pratum recently turned 266 million monthly security events in one client’s environment into just 41 alerts sent to the client’s IT team.

Reducing Alert Fatigue

The real art of creating SIEM/XDR rules lies in finding the sweet spot of writing rules sensitive enough to detect real threats but not so sensitive that they cause constant false positives. Nobody wants to get an alert every time someone logs in from a coffee shop using a different IP address. But if a legitimate user who normally uses an iPhone suddenly logs in through an Android device in a new geographic location, that’s worth an alert.

The solution is a team of SOC analysts trained to create models of normal activity. By identifying patterns of typical activity, analysts help the system recognize a scenario that checks all the boxes to be suspicious—but actually isn’t. “We can create threat models based on baseline behavior so we know what’s normal and only send an alert when the pattern changes,” Steve says. “Machine learning can figure that out over time.”
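A greatly simplified sketch of that baselining idea is shown below. Field names and the simple "known set" logic are illustrative; a production system would build statistical or machine-learning models over time:

from collections import defaultdict

baseline = defaultdict(lambda: {"devices": set(), "countries": set()})

def observe(user: str, device: str, country: str) -> None:
    # Record normal activity to build a per-user profile.
    baseline[user]["devices"].add(device)
    baseline[user]["countries"].add(country)

def should_alert(user: str, device: str, country: str) -> bool:
    profile = baseline[user]
    new_device = device not in profile["devices"]
    new_country = country not in profile["countries"]
    # Alert only when multiple attributes change at once, not on any single change.
    return new_device and new_country

observe("jdoe", "iPhone", "US")
print(should_alert("jdoe", "iPhone", "US"))    # False: matches the baseline
print(should_alert("jdoe", "Android", "RO"))   # True: new device AND new location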

(This blog provides a summary of the logic used to eliminate false positives.)

The following real-world scenarios illustrate how SIEM tuning modified standard rules into more accurate reporting tools that stop the alert fatigue.

Use Case #1:

Fighting Business Email Compromise

Pratum recently revised one rule intended to deal with the growing threat of business email compromise (BEC) attacks. In these situations, hackers take over a legitimate user account. Then they often create email forwarding rules that let them intercept a user’s messages and conceal the fact that the account has been compromised. Many SIEM solutions now include a stock alert designed to watch for the creation of suspicious forwarding rules. But Pratum’s analysts recognized that the stock rule wasn’t catching the forwarding rule hackers are using most right now. So Pratum’s SOC team wrote a new rule, had the Pratum penetration testing team attempt an exploit to validate the rule, then rolled the rule out to Pratum’s entire client base. The new rule not only identifies the activity, but can also automatically orchestrate a response to contain the threat.
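The specifics of Pratum's rule aren't given here, but the underlying detection idea can be sketched as follows: scan mailbox audit events for newly created forwarding rules that send mail outside the organization. The event structure and domain below are assumptions for illustration, not any specific vendor's API:

INTERNAL_DOMAIN = "example.com"   # hypothetical corporate domain

def suspicious_forwarding_rules(audit_events: list[dict]) -> list[dict]:
    findings = []
    for event in audit_events:
        if event.get("operation") != "New-InboxRule":
            continue
        target = event.get("forward_to", "")
        if target and not target.endswith("@" + INTERNAL_DOMAIN):
            findings.append(event)   # forwarding outside the organization
    return findings

events = [
    {"operation": "New-InboxRule", "user": "jdoe", "forward_to": "attacker@evil.example"},
    {"operation": "New-InboxRule", "user": "asmith", "forward_to": "asmith@example.com"},
]
print(suspicious_forwarding_rules(events))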

Use Case #2:

Eliminating False Positives

“The intent of most rules is terrific. A lot of rules would be amazing if they were accurate 100% of the time. But they aren’t,” Steve says. Pratum’s SOC team noticed that one stock rule started generating 50 tickets a day for every organization Pratum manages. Less than 5% of the alerts were legitimate threats because the rule kept triggering when normal software operations took place.

The analysts disabled the rule to stop the flood of unactionable data, then rewrote it with complex logic that cut the false positives to almost zero. “Within 72 hours of enabling the new rule, it saved one of our customers from an intrusion that the stock rule missed,” Steve says.

Use Case #3:

Tailoring Rules for SMBs

SIEM developers rightfully talk a lot about their solutions’ machine learning capabilities. But the developers tend to focus their machine learning work on big customers, which means some of the tools don’t do much for small organizations generating a limited amount of monthly data. So Pratum’s analysts devote a lot of attention to modifying rule logic so that companies with, say, 30 employees benefit from the next-gen tools as much as companies with 1,000 employees.

For more information on how Pratum’s custom SIEM/XDR rules could make your organization more secure and efficient, contact us today.
