Pratum Blog

Every advance in security technology reinforces a favorite industry cliché: It’s easier to hack people than servers. Clever code exploits may earn hackers bragging rights, but it’s a lot simpler to trick one user into clicking a bogus link and letting you in the front door.

That’s why social engineering continues to be the leading vector for cybersecurity incidents. Industry sources estimate that 80% of security breaches stem from phishing attacks and that 94% of malware arrives via e-mail.

It’s basic math for hackers. A bad actor can easily send out 1,000 e-mails at a time. If you assume an average ransom of $40,000, a success rate of even 1% yields $400,000. And in reality, phishing attacks can work 25% of the time.
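The arithmetic above can be sketched as a simple expected-value calculation (the figures are this article's illustrative numbers, not real campaign data):

```python
# Back-of-the-envelope math for a phishing campaign's expected payout.
def expected_payout(emails_sent, success_rate, avg_ransom):
    """Expected total ransom revenue for one batch of phishing e-mails."""
    return emails_sent * success_rate * avg_ransom

print(expected_payout(1_000, 0.01, 40_000))  # 1% success rate -> 400000.0
print(expected_payout(1_000, 0.25, 40_000))  # 25% success rate -> 10000000.0
```

Even the pessimistic 1% assumption makes a single batch of e-mails worth a hacker's afternoon, which is why the volume of phishing never drops.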

Why do so many people fall for phishing e-mails every week? Human nature explains a lot of it. We’re baited with messages that threaten the loss of a service, promise a financial windfall, hint at an important message from our employer or play off our basic confusion about technical terms. All this is dangled in a familiar-looking message promising resolution with a single click.

And we shouldn’t stereotype all phishing victims as that co-worker who just doesn’t get it. Hackers have honed their game massively since the days of foreign princes asking you to help transfer money. Modern phishing e-mails often include your company logo, a trusted partner logo (such as Dropbox), your colleague’s authentic-looking e-mail address, details about your specific business unit and more. In some of its phishing tests, for example, Pratum uses a convincing-looking Dropbox knockoff.

To stop e-mail phishing attacks, you must continually train your team to keep a wary eye on their inbox. That requires a combination of ongoing training and phishing simulations that keep everyone sharp. The first time you test employees with an internal phishing campaign, the results may be surprising. Pratum’s phishing campaigns often hook 20% of the recipients into clicking the link, and we regularly see 10-15% of recipients giving up their credentials in a simulated attack.

How to Run a Phishing Campaign

Here are some key questions to help you plan an effective phishing campaign:

  • Should I do it myself or hire a vendor? Multiple services let you create and execute your own campaigns. But hiring a partner lets you tap the expertise of teams who run dozens of campaigns each year.
  • How often should I conduct campaigns? In short, regularly. Many organizations run them monthly, but quarterly tests are a good baseline. It takes a lot of repetition to keep your employees current on hackers’ latest tricks, and regular tests let you measure your team’s progress in spotting phishing messages.
  • Who should receive the test? In larger organizations, it often makes sense to target campaigns at specific departments using custom messages. One Pratum client, for example, targets the IT team with decidedly more deceptive test messages, assuming these users should be far savvier. You also can customize messages to simulate the kinds of attacks you expect your team to see, such as fake invoices aimed at Accounts Payable.
  • What are next steps for users who fail the test? Make sure follow-up messages take a tone of coaching, not shaming. And consider the timing of training. You can set up a campaign to send someone straight to training if they click a fake link, for example. But if you’re in the midst of a larger campaign, that spreads the word that a phishing test is underway, tainting the results. Our experts frequently delay e-mails announcing required training until the test is over.

Creating Effective Phishing Test Messages

Once you’ve planned the test logistics, it’s time for the art of the project. Your phishing campaign is all about testing users’ ability to spot a fake, which makes the quality of test messages central to the process. Here are several tips for effective test messages:

  • Use a tool with plenty of templates. Top-level phishing campaign platforms like Pratum’s offer hundreds of e-mail templates rated by difficulty. They can take an official angle, such as asking someone to log into a fake company VPN. Or they can use a personal focus, such as telling users an Amazon delivery failed or offering them a great rate on a loan from a local bank.
  • Mix up the templates. If you follow the advice above and test regularly, you need a wide selection of templates to keep it fresh.
  • Test multiple attack vectors. Good tests dangle a variety of lures. Ask users to click on a link in the message. Try to get them to open an attachment. Request that they enter their credentials. A good campaign report shows how many users fell for each approach. A well-rounded test also includes pretexting phone calls and SMiShing (phishing via texts, or SMS messages).
  • Tie tests to current events. Hackers love capitalizing on the news with offers of Covid-related financial relief, promises of free sports tickets, etc. Your tests should simulate these tactics.
  • Customize templates with convincing details. If you’re testing savvy users, put your company logo in a message asking them to log into the company VPN. You can even add a downloadable file that displays a message on the user’s screen when they run the bogus .EXE file.
  • Target specific user groups. Hackers will do this, so your tests and training should cover this tactic. Use simulated spearphishing attacks, which include user-specific information. (If you’re testing executives, you can use the cool term “whaling.”)
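The per-vector reporting mentioned above can be sketched in a few lines. This is a minimal illustration assuming a hypothetical results format; real phishing platforms export far richer data:

```python
from collections import Counter

# Hypothetical per-user results from a simulated campaign. "fell_for" lists
# which lures each user took: clicking a link, opening an attachment,
# or entering credentials.
results = [
    {"user": "alice", "fell_for": ["clicked"]},
    {"user": "bob",   "fell_for": ["clicked", "credentials"]},
    {"user": "carol", "fell_for": []},
    {"user": "dan",   "fell_for": ["attachment"]},
]

def failure_rates(results):
    """Percentage of users who fell for each attack vector."""
    tally = Counter(v for r in results for v in r["fell_for"])
    n = len(results)
    return {vector: 100 * count / n for vector, count in tally.items()}

print(failure_rates(results))
# {'clicked': 50.0, 'credentials': 25.0, 'attachment': 25.0}
```

Breaking failures out by vector, rather than reporting one overall rate, tells you whether to focus follow-up training on link hygiene, attachments or credential handling.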

What to Learn From a Phishing Campaign

A good phishing campaign report includes a detailed summary of the results.

Some phishing solutions include an Outlook extension that users can click to report a phishing e-mail. Your IT team can track how many users report the test message as potential phishing, letting you measure their growing participation in spotting the problem. This kind of detailed information also lets you provide training targeted at the groups and individuals in your organization who are struggling to spot bogus e-mails.

Are you ready to learn how Pratum’s experts can help your organization train and test your team’s security awareness? Check out our services or contact us today!

We’re roughly six months into the world’s sudden, unplanned leap into a work-from-home (WFH) lifestyle. And most of the IT policies thrown together to handle the switch are definitely showing their gaps. We recently had a conversation with Pratum CEO Dave Nelson and PC Matic Federal President Terry McGraw to hear their insights on what we’ve learned so far and how organizations should be adapting to the new threat landscape.

Here we share the second of two blogs featuring an edited transcript of the conversation with these cybersecurity leaders. You can watch the full video through the player on this page.

What legal concerns arise with increased use of personal devices?

Terry: I’m not a lawyer, so I’m not providing legal advice. But anytime you’re extending a corporate view into a privately owned device, there are innumerable complications that open both the business and the person up to liability.

I think we need to train, coach and even enable. Provide for your home users’ antivirus or VPN solution. Help them lock down their own environment with training tips. Make sure the basic blocking and tackling is done. But when you try to extend your visibility down to that device, that’s when you open yourself up to a morass of liability concerns. Employees may wonder, “Who’s accessing my camera? Did they access my files?” As a business leader, I don’t think I want to get into that environment. I think the better way is to just assume everything is dirty and lock down your internal data systems rather than trying to play Whack-A-Mole at the end user level.

Dave: I totally agree. I’m also not an attorney, so this isn’t legal advice. But you see companies that have been sued because they wiped a phone when an employee left or they took a laptop that they thought had data on it, and it turned out to be an employee’s private laptop.

What happens when something doesn’t work at someone’s house? Are you going to send a tech out there to fix it? Is that a new job description that you’re supporting all that hardware?

Dave Nelson, CEO - Pratum

Once you start taking some responsibility, you’re taking some liability. What if that network becomes part of a breach and you had responsibility for keeping it secure? You have some liability for that breach.

Get your HR team involved. Get your legal team involved. Don’t just rely on your general business legal counsel. You need somebody who really understands tech law and how it’s applied. Also look at employment law as well, because in certain states you can do things with employees you can’t do in other states.

For example, how do you deal with break time or with leave? Do you disable somebody’s access when they’re on leave so that they can’t claim later that they weren’t really on leave because you made them check their e-mail?

Terry: You have to be sensitive to data being removed from your controlled environment. Put policies in place that say, “Don’t download docs.” You can’t always prevent it, but you can have a policy in place that says if you do this, we’ll terminate you.

There are clever ways you can control data at very low cost. A lot of the Google and Microsoft Office solutions are very inexpensive for small businesses. And using them can improve your business efficiency as well as your security posture.

What training should organizations be offering remote workers?

Terry: Don’t assume people know the control measures you have in the office. I guarantee they probably don’t. And if you have teenagers, I know doggone well security is probably the last thing on anybody’s mind at the house.

Don’t assume that workers’ home systems are patched, that they have a good antivirus solution or that their router is locked down. If you can’t control that physically, then you have to do it through policy and training.

Terry McGraw, President - PC Matic Federal

If you don’t have a formal training program, leverage one like KnowBe4. In my last organization, we sent out a newsletter with best practices to leverage with your family. Give it to mom. Give it to the kids. Show them things to do to be safe at home.

I would be sure to use some specific social engineering training in there.

Dave: The best scams? You won’t see them coming. The best con men are the ones you trust and believe. That’s where people get taken for big, huge scores. Train your employees so they understand they’re under attack and being targeted.

What are some new social engineering threats you’re seeing?

Terry: Sadly, some of the things are playing on human nature. At a company in the Middle East, one of their young IT members was blackmailed for his credentials by an e-crime group because they hacked into his home computer and found illegal material on it. In the end, he was caught both for giving up his credentials and for having the illegal material on his computer.

You’ve seen deep fake videos. Well, there’s enough of my voice samples out there that you can string together a deep fake of my voice, and it will sound like a nearly normal conversation. Let’s say one of my subordinates gets a call from me where I say, “I need you to wire this money for a merger and acquisition conversation we’re having, and I need you to do it by end of day today. Don’t let me down.” Click. My team should know the business process is to send a text or e-mail to me validating the request. Conversely, if I get a text or e-mail, I pick up the phone and confirm it.

With those two-party check systems, a lot of the social engineering stuff comes apart.
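The two-party check Terry describes boils down to one rule: never act on a sensitive request confirmed only on the channel it arrived on. A minimal sketch of that rule, with hypothetical channel names:

```python
# Out-of-band "two-party check": a request arriving on one channel is only
# acted on after confirmation arrives on at least one different channel.
def should_execute(request_channel, confirmation_channels):
    """Approve only if the request was confirmed out of band."""
    return any(ch != request_channel for ch in confirmation_channels)

print(should_execute("phone", []))          # no confirmation -> False
print(should_execute("phone", ["email"]))   # confirmed out of band -> True
print(should_execute("email", ["email"]))   # same channel doesn't count -> False
```

The point is that a deep-faked voice or spoofed e-mail compromises one channel; forcing a second, independent channel makes the attacker's job dramatically harder.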

What new technologies have the most potential for helping us spot emerging threats?

Dave: We’ve relied a lot on patterns in the past. Those are still useful. But we need to start looking more at user entity behavior analytics. What’s outside of the normal pattern? Maybe I just got an e-mail from Terry, but I’ve never gotten an e-mail from Terry before. The content of that message is that he’s asking for a specific transaction. That makes it a greater risk. So now I can assign a risk score that says this e-mail from Terry is really high risk, so we need to evaluate it.

It’s also about patterns on devices. If a device usually comes in through this API and accesses this data, but all of a sudden it tries to access other data or comes through another API, all of these things can increase the score of the riskiness of that behavior. How do I analyze that? What threat patterns do we have?

The machine learning and AI that can predict some of that are very much in their infancy and won’t change the world tomorrow. But there are some really good prospects for how we can start seeing these behaviors in a new light instead of relying on the hairs on the backs of our necks standing up.
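The risk-scoring approach Dave describes can be sketched in toy form. The baseline fields and weights below are illustrative assumptions, not a real user entity behavior analytics product:

```python
# Toy behavior-analytics scoring: an event accumulates risk for each
# attribute that deviates from the user's observed baseline.
baseline = {
    "terry": {"recipients": {"ops-team"}, "apis": {"/reports"}},
}

def risk_score(user, event):
    """Score an event higher the more it departs from the user's baseline."""
    profile = baseline.get(user, {})
    score = 0
    if event.get("recipient") not in profile.get("recipients", set()):
        score += 40   # first-time sender/recipient pair
    if event.get("api") not in profile.get("apis", set()):
        score += 30   # API this user has never touched
    if event.get("requests_transaction"):
        score += 30   # asks for money movement: always raises risk
    return score

# A first-ever e-mail to finance requesting a transaction scores high.
print(risk_score("terry", {"recipient": "finance", "api": "/reports",
                           "requests_transaction": True}))  # 70
```

A real system would learn the baseline statistically and tune the weights, but the shape is the same: score the deviation, then route high-scoring events for human review.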

What is your biggest concern over the next 3-6 months?

Dave: Distractions. People don’t know what they’re doing with their kids in terms of school. They don’t know if they’re ever going back to the office. They don’t know if the temporary processes they’re struggling with now will remain. All of those distractions take us away from security stuff.

Terry: Small businesses, especially, were already under pressure from the economic impact of COVID. The growing prevalence of ransomware attacks will put many more of them out of business. The average ransom last year was $63,000. Add in remediation and containment costs, and most small businesses will never recover from that.

I don’t think time is on our side. Now is the time to come together as a community. We always give lip service to it. But we’re still fighting in our own foxholes, and the enemy plays across that entire infrastructure.

What are you optimistic about?

Dave: I think the same thing that’s made it difficult is the same thing that will make it better. When things are going well, there’s this idea of “Let’s not change what’s working.”

It typically takes some kind of catastrophic event for businesses to stop and say, “Where do we go from here?” No one was willing to upset the apple cart before. Now business leaders are saying, “If I’m facing a massive transformation, let’s put it all on the table.”

Terry: I agree: Never let a good crisis go to waste. This is going to force us to accelerate things we’ve known about for a long time. Zero trust architectures are not new. Multifactor authentication is not new. Distributed environments are not new.

In the larger sense, I’ve always been impressed by the human spirit’s ability to endure and overcome the greatest hardships. It’s time for us as a community to share more and be less risk-averse about communicating across party lines. But at the end of the day, this too shall pass.

Are you ready to learn how Pratum’s experts can help your organization adjust to the fast-changing world of remote workforces? Check out our services or contact us today!

We’re roughly six months into the world’s sudden, unplanned leap into a work-from-home (WFH) lifestyle. And most of the IT policies thrown together to handle a sprint are showing clear gaps now that we’re running a marathon. To see what we’ve learned so far and how organizations should be adapting, we talked with Pratum CEO Dave Nelson and PC Matic Federal President Terry McGraw.

Here we share the first of two blogs featuring an edited transcript of the conversation with these cybersecurity leaders. You can watch the full video below.

What are the key threats at this point?

Dave: An uptick in social engineering attacks. With this shift to remote work, a lot of informal approvals in the office went away. Now you can’t just check in with your boss down the hall about a transaction they want you to make. There’s a lot of confusion, and processes weren’t solidified during the WFH transition. Attackers are capitalizing on the chaos with spearphishing, pretexting, and other attacks.

Terry: This scenario has accelerated and exacerbated parts of the cyberthreat landscape that have been there for a while but had a limited vector.

The measures we took to ensure the mobile workforce was secure now have to apply to your general organization, and I don’t think our architectures were well equipped to do this at scale.

Terry McGraw, President - PC Matic Federal

Social engineering and deep fakes still work because people lack two-party check systems. If I get an e-mail or a phone call that seems a little suspect, I should have a two-party check to verify it.

How have the threats changed?

Terry: The barrier to entry to being an e-criminal now is just a desire to commit crime. Five or eight years ago, people needed to know how to craft and employ these tools. Now you can lease the infrastructure to create an attack. The speed with which tradecraft becomes commoditized and then reused in the e-crime environment is one of the biggest upticks we’ve seen.

Dave: When you think of the physical tools you need to carry out a war, the U.S. was well-equipped with the infrastructure to build the tools for that. But in cyberspace, a small organization that’s not even backed by a nation state but wants to rain down terror can lease resources and target and overwhelm someone in a very short period of time.

How should we adjust IT architecture for this environment?

Dave: We have to move to an environment where I don’t care what device you’re accessing data from or what location you’re accessing it from. I need to protect data because that’s what moves around in a vendor environment or a client environment.

In WFH, we sent a lot of people home without a laptop. They went home, and we turned on VPNs we didn’t have turned on before. We allowed a personal computer that’s used by everyone in the home, and that probably has viruses running around all over it, to connect into the corporate network or the corporate cloud. Now we have all these unknown devices and unknown threats sitting there unmanaged.

We figured we could do that for 90 days. But now we’re in September, and we’re thinking it may be next June before people go back, if we’re lucky. So we have to reevaluate those risks we took early on.

We have to think about moving to a data-centric model. If you haven’t even begun, you’re behind the eight ball already.

Dave Nelson, CEO - Pratum

Terry: I’m a big fan of zero trust architecture, which, at its core, is being as granular as you can be in user object permission schema and validating that the data and the user are scoped to the exact access they need and validated every time.

I can’t tell you how many times I’ve walked into an organization that swore they had multifactor authentication (MFA), and it’s nowhere in sight. Sadly, the percentage barely moves year over year.

It’s primarily an architectural problem, but it’s exacerbated by the fact that we don’t have basic blocking and tackling in place. We don’t have MFA involved. We don’t have a good handle on our data. We don’t have full asset enumeration. Those were all problems we could gloss over because we had a somewhat contained office environment. But now you’ve broadened the aperture. You have to just assume everything is dirty. You have to look at containerization and segmentation and MFA.

What are the first steps for tackling these challenges?

Terry: I like to start with a macro model. There are lots of frameworks that deal with pieces of the problem. But if you raise it up one level, I need three major things to reduce my business risk in a cyber environment:

  1. I need to have sensing technologies that determine adversary access to my environment.
  2. I need a view of myself. I need to understand the limits of my environment and have good eyes on things accessing my data.
  3. I need to have a good handle on all the things that are mission-critical in my environment and those I do business with.

The one thing I would do today if I hadn’t already done it is implement MFA. I need to make sure everyone touching my environment is authenticated from the system they’re working on.

Dave: If you did a risk assessment before, the environment has changed. So ask four main questions:

  1. What data do I have? Assess your risk based on the confidentiality, integrity and availability of that data across its whole life cycle.
  2. Where does it come from?
  3. What do I do with it while I have it? Does it go outside the organization?
  4. What happens when I’m done with it? Does it need to be saved somewhere? Destroyed?

In each piece of the life cycle, your risk changes because different people and systems have different access. Assessing risk continually is really critical.

How much can we realistically expect end users to maintain home routers and handle other IT tasks?

Dave: I don’t think it’s realistic to expect anything out of them. So it goes back to zero trust architecture. Say, “I don’t trust you or the devices you’re coming from, even if it’s a device I manage.” If I assume that I’ve been breached, then I don’t care anymore about the workstation. I care about the user and what they can do. So the key is really restricting user access.

Let’s say I click a link and get ransomware. Anything I have access to is subject to being encrypted. If we restrict Dave’s access to only what he absolutely needs to do his job, then we can restrict the depth to which ransomware gets into our organization and starts encrypting files, which reduces the cost.
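Dave’s point about limiting ransomware’s blast radius can be illustrated with a toy access-control list. The share names and accounts below are hypothetical:

```python
# Ransomware running as a compromised account can only encrypt what that
# account can reach, so least-privilege access shrinks the blast radius.
acl = {
    "dave":  {"sales/", "shared/"},                     # least privilege
    "admin": {"sales/", "shared/", "hr/", "finance/"},  # broad access
}

def blast_radius(compromised_user, all_shares):
    """Shares at risk of encryption if this account is compromised."""
    return acl.get(compromised_user, set()) & all_shares

shares = {"sales/", "shared/", "hr/", "finance/"}
print(sorted(blast_radius("dave", shares)))   # ['sales/', 'shared/']
print(sorted(blast_radius("admin", shares)))  # all four shares at risk
```

The same phishing click costs far less when the compromised account can only reach two shares instead of the whole file server.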

Terry: Even in traditional networks, limiting lateral scope is important. Microsegmentation has been growing for a while, but it’s been cost-prohibitive. Now with more cloud data environments and necessity being the mother of invention, I think we’ll see more microsegmentation solutions hitting the market soon.

You should also validate that what you think about your environment is true. That probably means having a third-party organization doing a pen test.

Are you ready to learn how Pratum’s experts can help your organization adjust to the fast-changing world of remote workforces? Check out our services or contact us today!
