If you’ve never taken the time to create a data flow diagram (DFD) of your system, you should make that investment. The value doesn’t lie simply in having the diagram in your files. The actual process of creating the diagram will almost certainly reveal new things about how your data ecosystem works.
That’s why many IT auditors and frameworks (such as PCI DSS) consider network diagrams and DFDs so critical that they expect to see them as part of the audit process. Cybersecurity consultants frequently use DFDs to drive detailed conversations about the environment, knowing that the clarity they provide can be a deciding factor in your network’s security.
In this blog, we’ll explain how a DFD helps ramp up your overall cybersecurity and how to get started creating one.
A DFD shows exactly where your data is going so that you can make sure every step is secure. Diagramming your process may show, for example, that your data is far more widespread and replicated than you previously thought. Replicating data in multiple places, or passing it through several resources while in transit, can represent a security risk you need to analyze.
In large or complex organizations, the leadership team may rely upon a DFD to help them understand the entirety of what’s going on across what could add up to hundreds of business processes. Some organizations require several DFDs to accurately portray all the operations.
Be wary of telling yourself, “We get a good view of our system through other tools.” Don’t assume you’re covered just because someone on the team can verbally explain the system. They’re probably leaving things out, and that person may not be around forever. And don’t count on a typed explanation of the system, either. A written description almost certainly includes enough ambiguity to let problems hide in the fine print.
A thorough diagram follows your data beyond the limits of your own system. Here’s an example of the kind of process red flags that a DFD can reveal:
While making the diagram, you start asking how your payroll processor sends and receives sensitive data. It turns out that they’re sending it over unencrypted channels. If that’s happening, even the best cybersecurity posture on your end won’t protect sensitive data such as employees’ personal information. If client information were compromised through a similar process weakness, your organization could face immediate financial losses and long-term damage to your reputation.
In some cases, Pratum’s consultants create DFDs specifically for each third-party system connecting to the client organization. This clearly shows how data flows in/out of the third-party’s environment and where it may need additional protection.
Pratum follows three core steps to build an organization’s DFD, with the details varying based on the organization’s overall cybersecurity maturity.
1. Discussion – Leadership, possibly guided by cybersecurity consultants, reviews the potential risks facing the organization. They identify protective systems already in place, cybersecurity protocols, corporate governance, connected vendors, and key business processes. As you start building the DFD, be sure to look over the results of your most recent risk assessment for information on what you should map. This helps leadership identify risks, plan mitigating controls, and evaluate whether the remaining risk is acceptable.
2. Asset Inventory – Now the organization documents the hardware, software, and data used throughout the business. That includes both at-rest and in-transit systems.
3. Develop Diagrams – With all of the information gathered, you can start mapping out the DFD.
A DFD uses a universally accepted set of symbols to portray information flow within and between network segments as well as through the institution's perimeter to external parties. A basic Level 0 diagram shows the overall system, while Level 1 diagrams drill down into individual processes.
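The notation above can also be sketched in code. Here’s a minimal example, using hypothetical entity names, that stores each data flow as a tuple and emits Graphviz DOT text you can render into a simple Level 0 diagram:

```python
# Minimal sketch: representing a Level 0 DFD as data and emitting
# Graphviz DOT text. All entity and flow names are hypothetical examples.

flows = [
    # (source, destination, data description)
    ("Employee", "HR System", "timesheet data"),
    ("HR System", "Payroll Processor", "salary and SSN data"),
    ("Payroll Processor", "Bank", "direct deposit instructions"),
]

def to_dot(flows):
    """Render the flow list as a Graphviz digraph, left to right."""
    lines = ["digraph DFD {", "  rankdir=LR;"]
    for src, dst, label in flows:
        lines.append(f'  "{src}" -> "{dst}" [label="{label}"];')
    lines.append("}")
    return "\n".join(lines)

print(to_dot(flows))
```

Keeping the flow list as data, separate from the rendering, makes it easy to update the diagram as the environment changes rather than redrawing it by hand.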
While creating the DFD, you’ll identify everyone and everything that touches your data, starting with two types of users:
Employees - Look at the roles of each employee involved at the different steps of data flow. All employees have some level of responsibility for information, communication, and reporting between each level. Evaluating employee roles and access helps you spot gaps in the process. For example, organizations frequently realize that they should reduce the network access employees have at several steps, reducing the potential for a breach and for an attacker to pivot into the larger system.
Vendors - Working with vendors inevitably poses a threat by allowing outside access to internal processes, so evaluate each vendor’s access carefully.
Determine whether employees, vendors/partners or systems are writing, modifying, storing, or processing data. The answer is crucial to protecting data and mapping out the process with a DFD.
Knowing where individuals, resources, and entities are located helps you judge the risk of an interaction. For example, data moving between two resources in the same building may be less risky than data traveling from an office in the United States to resources in a foreign country, or vice versa.
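One way to operationalize that judgment is to record a location with each endpoint and flag any flow that crosses a border for closer review. A minimal sketch, with hypothetical system names and country codes:

```python
# Sketch under assumed data: each flow records where its endpoints live,
# and cross-border flows get flagged for extra review. System names and
# country codes are hypothetical examples.

flows = [
    {"src": "App Server", "dst": "DB Server",
     "src_country": "US", "dst_country": "US"},
    {"src": "HR System", "dst": "Payroll Vendor",
     "src_country": "US", "dst_country": "IN"},
]

def cross_border(flows):
    """Return the flows whose endpoints sit in different countries."""
    return [f for f in flows if f["src_country"] != f["dst_country"]]

for f in cross_border(flows):
    print(f'Review: {f["src"]} -> {f["dst"]} crosses a national border')
```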
For more information on how to properly establish a DFD, reach out to a Pratum consultant to find out how the process can look for your organization.
You’re following best practices, and you care about protecting your organization’s operations. So you’ve written a solid incident response plan and shared it with your team.
But does it work in the real world? To quote the philosopher Mike Tyson, “Everyone has a plan until they get punched in the mouth.” And when a real data security incident occurs, the punches will be flying. So you need to regularly test your cybersecurity incident response plan, along with the humans and technology that will carry it out. In this blog, we’ll share the three most common testing approaches, from basic discussions to full simulations.
If you’re not motivated to do regular testing, others may provide the incentive you need. Major third-party compliance frameworks such as SOC 2 and PCI DSS, for example, require an annual test of your incident response plan, even though they rarely specify an exact testing approach. And your organization’s cybersecurity maturity and risk level may indicate that you need semi-annual or quarterly tests. You’ll get more practice and more frequent opportunities to spot parts of the plan that have gone out of date.
Your biggest customers may be pickier than the frameworks about the testing approach. At Pratum, we increasingly see contracts requiring vendors to run annual incident response tests to prove they can deliver their services, even if they face a major cybersecurity incident. We’ve seen some companies recently tell potential vendors that their testing process isn’t rigorous enough. That left the vendor to decide whether the contract was valuable enough to justify the time and expense of running a thorough simulation every year to satisfy the customer.
Below are the three most common levels of incident response plan testing, arranged by order of escalating realism and, thus, ability to truly reveal your plan’s value.
The tabletop exercise is the most basic and theoretical test of your incident response plan. You’ll gather all the key players in a conference room, throw out several breach scenarios, and have everyone talk through their part of the response, as dictated by the plan.
You’ll get definite value from this approach. In most organizations, the questions that surface during these discussions tend to highlight some holes and generate meaningful to-do items after the meeting.
The tabletop exercise’s weakness, of course, is its theoretical nature. When someone answers a question confidently, do you trust that they’ve done the homework to prove that their answer is correct and capturing the full implications? Pratum vCISO Jeff Franklin points out that, “Colonial Pipeline paid the ransom to get the decryption key, and it still took them days to decrypt their data.” You could get a false sense of security if your only testing comes from conversations in a meeting room with no pressure bearing down as your organization goes dark.
In a walkthrough test, your team actually steps through the plan as it’s designed, stopping short only at the point of actually flipping a switch to do something like turning on the building’s alternative power source.
With all of these tests, remember to include the human resources aspects of the situation. If you plan to use a text alert to find out who is out of the building, do you expect people on vacation to answer that text? If your operations go down for a half day, do you send everyone home? Do they still get paid?
In this test, you’ll pull out all the stops to truly find out how your team handles a crisis. In a cutover, an incident response leader might walk into the IT department and declare that the team needs to switch from on-premises servers to the cloud. Right now. Then you see what happens. Does everything actually come up on the alternate system? Does it take longer than expected?
Some Pratum customers go so far as killing the power to their building to force the generator to kick in and see what happens. (Note that no one does a cutover of all systems simultaneously.)
One big eye-opener for most organizations comes when they try to restore data from backup. Many teams find that the restore takes far longer than expected, that the team isn’t fully trained to perform the restore quickly, or that the equipment can’t handle moving that much data in the required timeframe.
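A quick back-of-envelope calculation during planning can set realistic expectations before you ever run the test. This sketch uses illustrative numbers; substitute the data volumes and sustained throughput you actually measure:

```python
# Back-of-envelope restore-time estimate. The data size and throughput
# below are illustrative assumptions; substitute measured values from
# your own environment.

def restore_hours(data_tb, throughput_mb_per_s):
    """Hours to restore data_tb terabytes at a sustained MB/s rate."""
    total_mb = data_tb * 1_000_000        # 1 TB = 1,000,000 MB (decimal)
    return total_mb / throughput_mb_per_s / 3600

# Example: 10 TB over a link sustaining 100 MB/s
print(f"{restore_hours(10, 100):.1f} hours")  # roughly 27.8 hours
```

If your incident response plan promises an eight-hour recovery, a calculation like this can expose the gap long before a cutover test does.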
Clearly, a cutover test will create an operational interruption. If your plan allows eight hours to restore critical data, you could be down that long during a cutover test. For that reason, we’ve seen some clients turn down big contracts that required an annual cutover test. With that downtime and staff commitment factored in once a year, the contract’s profit margin wasn’t compelling enough.
No matter which test you choose, it’s wise to have outside observers/advisors present. If you believe that you’ve achieved a high level of cybersecurity maturity, you can probably manage the tests internally. But it’s still wise to have a third party watch, take notes and share feedback during the post-game analysis. An expert cybersecurity observer will always see issues that you don’t even know about.
“One side knows the business, and one side knows incident response planning,” says Pratum’s Franklin. “You want to marry those two to manage that responsibility.”
Even if you do hire a third-party advisor to run your tests, Pratum recommends that someone from your team provide the hands-on leadership during the test. This gives your team critical experience that will benefit your organization for a long time.
If you need help identifying the right testing method for you, contact Pratum’s consulting team today.
When spare bedrooms worldwide transformed into home offices in 2020, our physical workspaces finally caught up to our data habits. The line between professional and personal blurred years ago, thanks to laptop computers and smartphones that made company data accessible anywhere. And long after this work approach has gone mainstream (78% of companies report BYOD activities), many companies continue to limp along with half-baked Bring Your Own Device (BYOD) policies.
Connecting personal devices to your IT infrastructure creates a new world of security exposure and legal questions. So getting your BYOD policy right is critical to protecting your data—and potentially avoiding lawsuits from inadvertently violating an employee’s civil rights. In this post, we’ll cover key concepts that should guide every organization’s BYOD policy.
Despite widespread BYOD usage, only 39% of companies report having a formal BYOD policy. Start developing your policy by conducting an accurate inventory of all hardware and software assets in your environment. Without a survey, you probably don’t even know what’s connecting to your network. Then establish official policies for handling it all properly. (This blog provides advice on the first steps in creating an effective mobile device management policy.)
Personal smartphones and tablets are the obvious subjects of a BYOD policy. But your policy should specify which ones are allowable, including brand, model, and operating system version. Some organizations set up a DMZ (demilitarized zone) that devices must pass through before getting access to the network. This review process can flag devices with outdated operating systems and require patching before they connect to the network.
And at some companies, an employee logging onto the company VPN through their home computer must agree to let the company’s IT team manage the device. That could include wiping data off the device if it gets compromised. Does your policy specifically address these situations?
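A policy like that is easier to enforce when the allowable platforms and minimum OS versions live in one place. Here’s a minimal compliance-check sketch; the platform names and version numbers are hypothetical, not tied to any particular MDM product:

```python
# Hypothetical compliance gate: before a BYOD device joins the network,
# check its platform and OS version against a minimum-version allowlist.
# The platforms and versions here are illustrative examples only.

MIN_VERSIONS = {"ios": (17, 0), "android": (14, 0)}

def version_tuple(s):
    """Convert a dotted version string like '17.4' to a comparable tuple."""
    return tuple(int(part) for part in s.split("."))

def is_compliant(platform, version):
    """True if the platform is allowed and the OS meets the minimum."""
    minimum = MIN_VERSIONS.get(platform.lower())
    if minimum is None:
        return False            # unknown platforms are denied by default
    return version_tuple(version) >= minimum

print(is_compliant("iOS", "17.4"))      # True
print(is_compliant("Android", "12.0"))  # False
```

Denying unknown platforms by default keeps the policy conservative: anything not explicitly approved stays off the network until it’s reviewed.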
Always remember the classic information security triad of confidentiality, integrity and availability. Every policy should include basics like multifactor authentication (MFA) and strong password requirements, but if your BYOD policies are overly restrictive or hard to use, you’ll frustrate users on the availability front and motivate them to look for workarounds. So consider the human factor as you establish your policies. Determine whether you have different groups of users that could require different levels of security. A mobile e-mail user probably doesn’t require the kind of heavy-handed security necessary for someone who regularly accesses sensitive company information remotely.
Recognize that your policy has far-reaching HR and legal ramifications and get stakeholders from those areas involved in writing the policies. Most IT teams working in isolation probably won’t take all the relevant factors into consideration.
For example, if a manager asks the IT team to go hunting on a device for unapproved activities or if the team decides to wipe a device remotely without solid reasons, they may trigger a lawsuit. All data investigations and strong actions involving a personal device should require the approval of at least a second manager, or even a member of the executive team.
Carve out some boundaries for what your call center can help with when it comes to personal devices. As with anything in BYOD, the lines get fuzzy between troubleshooting that’s limited to the company-focused aspects of a device and the device’s general operation. If a personal app is interfering with a company-focused operation on the phone, how deeply do you expect your help desk team to wade into the problem? If an employee takes their device to the local cellular store for tech support, do you have systems in place to protect the data that those technicians could potentially access?
Some companies outlaw certain apps on personal devices on the grounds that the apps compromise the overall security of the device and data it accesses. (TikTok has been at the center of many of these reviews.) Some company policies state that BYOD devices should never be connected to public WiFi networks. Will you take those stances? How do you plan to enforce them?
In some cases, you may decide that certain company information is simply too sensitive to allow any access from a personal device.
Once your policy is ready, make sure that your HR onboarding process includes a walk-through of a clearly worded, written copy of your BYOD policy. For example, your organization may want to reserve the right to remotely wipe a personal device if it gets lost or stolen while it has a company e-mail account attached to it. Employees need to understand that by agreeing to abide by the BYOD policy, they accept that they may lose all the personal data on the device.
Most BYOD policy issues come from surprising employees by looking at data on their personal device, threatening to wipe the device’s data if it’s compromised, etc. So it’s critical to build clear discussion of the BYOD policy into your HR process before employees connect their personal device to your systems. One of the best ways to head off lawsuits over civil rights violations is to prove that you require all employees to sign a document stating that they have read and accepted the BYOD policy.
When an employee leaves your organization, the process for a company-owned device is straightforward: Turn the device in to HR, along with your ID badge and company credit card. But if an employee has been using their own phone, how do you ensure that you’ve shut down their access to your data? Your policy should include a clear deprovisioning process.
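A deprovisioning process is easier to audit when it’s tracked as an explicit checklist rather than tribal knowledge. A minimal sketch, with hypothetical step names rather than any specific MDM product’s workflow:

```python
# Sketch of a BYOD deprovisioning checklist tracked as data, so
# offboarding can verify every access-removal step was completed.
# The step names are hypothetical examples.

CHECKLIST = [
    "revoke VPN credentials",
    "remove company email profile",
    "disable SSO / directory account",
    "remove device from MDM enrollment",
    "confirm company data wiped from device",
]

def outstanding(completed):
    """Return the checklist steps not yet marked complete."""
    done = set(completed)
    return [step for step in CHECKLIST if step not in done]

remaining = outstanding(["revoke VPN credentials",
                         "remove company email profile"])
print(f"{len(remaining)} steps remaining:", remaining)
```

An offboarding ticket isn’t closed until the outstanding list is empty, which gives HR and IT a shared, verifiable definition of “access removed.”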
Don’t assume a BYOD policy is set in stone. Your policy should evolve over time as your company evolves and adjusts its IT practices and as available technology changes.
If you need help writing a BYOD policy that makes sense for your organization, contact Pratum today.