When spare bedrooms worldwide transformed into home offices in 2020, our physical workspaces finally caught up to our data habits. The line between professional and personal blurred years ago, thanks to laptops and smartphones that made company data accessible anywhere. Yet even though this way of working has long since gone mainstream (78% of companies report BYOD activity), many organizations continue to limp along with half-baked Bring Your Own Device (BYOD) policies.
Connecting personal devices to your IT infrastructure creates a new world of security exposure and legal questions. So getting your BYOD policy right is critical to protecting your data—and potentially avoiding lawsuits from inadvertently violating an employee’s civil rights. In this post, we’ll cover key concepts that should guide every organization’s BYOD policy.
Despite widespread BYOD usage, only 39% of companies report having a formal BYOD policy. Start developing your policy by conducting an accurate inventory of all hardware and software assets in your environment. Without a survey, you probably don’t even know what’s connecting to your network. Then establish official policies for handling it all properly. (This blog provides advice on the first steps in creating an effective mobile device management policy.)
Personal smartphones and tablets are the obvious subjects of BYOD. But your policy should specify which ones are allowable, including brand, model and operating system version. Some organizations set up a DMZ (demilitarized zone) that devices must pass through before getting access to the network. This review process can flag devices with outdated operating systems or missing patches and require updates before the device connects to the network.
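The DMZ review described above can be automated. Here is a minimal sketch of an OS-version compliance check; the platforms, version thresholds and device records are invented examples, and your policy would substitute its own minimums.

```python
# Hypothetical sketch: flag BYOD devices whose OS version is below the
# minimum your policy allows, before they leave the review DMZ.
# The version thresholds and device records here are invented examples.

MIN_OS_VERSIONS = {
    "ios": (16, 0),
    "android": (13, 0),
}

def parse_version(version: str) -> tuple:
    """Turn a dotted version string like '16.4.1' into a comparable tuple."""
    return tuple(int(part) for part in version.split("."))

def is_compliant(platform: str, version: str) -> bool:
    """Return True if the device's OS meets the policy minimum."""
    minimum = MIN_OS_VERSIONS.get(platform.lower())
    if minimum is None:
        return False  # unknown platforms stay in the DMZ for manual review
    return parse_version(version) >= minimum

devices = [
    {"owner": "alice", "platform": "iOS", "version": "16.4"},
    {"owner": "bob", "platform": "Android", "version": "11.0"},
]

for device in devices:
    status = "allow" if is_compliant(device["platform"], device["version"]) else "hold for patching"
    print(f'{device["owner"]}: {status}')
```

In practice an MDM or NAC product would supply the device inventory, but the allow/hold decision logic looks much like this.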
And at some companies, an employee logging onto the company VPN through their home computer must agree to let the company’s IT team manage the device. That could include wiping data off the device if it gets compromised. Does your policy specifically address these situations?
Always remember the classic information security triad of confidentiality, integrity and availability. Every policy should include basics like multifactor authentication (MFA) and strong password requirements, but if your BYOD policies are overly restrictive or hard to use, you’ll frustrate users on the availability front and motivate them to look for workarounds. So consider the human factor as you establish your policies. Determine whether you have different groups of users that could require different levels of security. A mobile e-mail user probably doesn’t require the kind of heavy-handed security necessary for someone who regularly accesses sensitive company information remotely.
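One way to capture the "different groups, different security levels" idea is a simple tier lookup. This is an illustrative sketch only; the role names and requirement flags are invented, and a real policy would define its own tiers.

```python
# Illustrative sketch: map user groups to BYOD security tiers.
# Role names and requirement flags are invented for this example.

SECURITY_TIERS = {
    "email_only": {"mfa": True, "mdm_enrollment": False, "vpn_required": False},
    "sensitive_data": {"mfa": True, "mdm_enrollment": True, "vpn_required": True},
}

ROLE_TIERS = {
    "sales": "email_only",
    "finance": "sensitive_data",
}

def requirements_for(role: str) -> dict:
    """Look up BYOD security requirements for a user's role.
    Unknown roles default to the strictest tier."""
    tier = ROLE_TIERS.get(role, "sensitive_data")
    return SECURITY_TIERS[tier]

print(requirements_for("sales"))
print(requirements_for("finance"))
```

Note that MFA stays mandatory in every tier; only the heavier controls (MDM enrollment, VPN) vary with the sensitivity of the data a group touches.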
Recognize that your policy has far-reaching HR and legal ramifications and get stakeholders from those areas involved in writing the policies. Most IT teams working in isolation probably won’t take all the relevant factors into consideration.
For example, if a manager asks the IT team to go hunting on a device for unapproved activities or if the team decides to wipe a device remotely without solid reasons, they may trigger a lawsuit. All data investigations and strong actions involving a personal device should require the approval of at least a second manager, or even a member of the executive team.
Carve out some boundaries for what your help desk can assist with when it comes to personal devices. As with anything in BYOD, the lines get fuzzy between troubleshooting that’s limited to the company-focused aspects of a device and the device’s general operation. If a personal app is interfering with a company-focused operation on the phone, how deeply do you expect your help desk team to wade into the problem? If an employee takes their device to the local cellular store for tech support, do you have systems in place to protect the data that those technicians could potentially access?
Some companies outlaw certain apps on personal devices on the grounds that the apps compromise the overall security of the device and data it accesses. (TikTok has been at the center of many of these reviews.) Some company policies state that BYOD devices should never be connected to public WiFi networks. Will you take those stances? How do you plan to enforce them?
In some cases, you may decide that certain company information is simply too sensitive to allow any access from a personal device.
Once your policy is ready, make sure that your HR onboarding process includes a walk-through of a clearly worded, written copy of your BYOD policy. For example, your organization may want to reserve the right to remotely wipe a personal device if it gets lost or stolen and has a company e-mail account attached to it. Employees need to understand that possibility and acknowledge, when they agree to abide by the BYOD policy, that they may lose all of their personal data.
Most BYOD policy issues come from surprising employees by looking at data on their personal device, threatening to wipe the device’s data if it’s compromised, etc. So it’s critical to build clear discussion of the BYOD policy into your HR process before employees connect their personal device to your systems. One of the best ways to head off lawsuits over civil rights violations is to prove that you require all employees to sign a document stating that they have read and accepted the BYOD policy.
When an employee leaves your organization, the process for a company-owned device is straightforward: turn it in to HR, along with your ID badge and company credit card. But if an employee has been using their own phone, how do you ensure that you’ve shut down their access to your data? Your policy should include a clear deprovisioning process.
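A deprovisioning process works best as an explicit, ordered checklist that leaves an audit trail. The sketch below is hypothetical: the step functions are stubs, and in practice each would call your identity provider, e-mail platform or MDM tool's real API.

```python
# Hedged sketch of an offboarding checklist runner. Each step function
# is a stub standing in for a real API call to your identity provider,
# e-mail platform or MDM tool.

def disable_account(user: str) -> str:
    return f"disabled SSO account for {user}"

def revoke_sessions(user: str) -> str:
    return f"revoked active sessions and tokens for {user}"

def remove_mdm_profile(user: str) -> str:
    return f"removed company MDM profile from {user}'s device"

def remove_company_email(user: str) -> str:
    return f"removed company e-mail account from {user}'s device"

DEPROVISION_STEPS = [
    disable_account,
    revoke_sessions,
    remove_mdm_profile,
    remove_company_email,
]

def deprovision(user: str) -> list:
    """Run every offboarding step in order and return an audit trail."""
    return [step(user) for step in DEPROVISION_STEPS]

for line in deprovision("jdoe"):
    print(line)
```

Keeping the steps in a single list makes it hard to forget one, and the returned audit trail documents that each access path was actually closed.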
Don’t assume a BYOD policy is set in stone. Your policy should evolve over time as your company evolves and adjusts its IT practices and as available technology changes.
If you need help writing a BYOD policy that makes sense for your organization, contact Pratum today.
This is the true e-mail phishing story of a recent Pratum case. While the hacker got away with a partial victory this time, the case study can help you understand and prevent similar cyber attacks.
On a recent Friday afternoon (because isn’t it always?), Pratum’s phone rang with a desperate-sounding professional services provider on the other end. “I think one of my clients just sent $400,000 to a hacker’s account. Can you help?” It turns out they had sent the money. And we could help. And to help others learn from the very bad day experienced by one Accounts Payable employee (let’s call him Finance Guy) and his company (let’s call it Acme Corp), we’re providing a behind-the-scenes look at this phishing attack. Read on to see how the hacker got away with a small fortune for nearly a week—and how good incident response and digital forensics work got most of it back.
Like most hackers, this one didn’t give himself away by grabbing the loot as soon as he got into the victim’s system. He slipped into an e-mail account at the victim company via a phishing attack, tricking Finance Guy into divulging his login credentials. (The hacker probably targeted Finance Guy specifically, knowing from his job title and LinkedIn profile that he had the ability to move money.)
After gaining access, the hacker spent several weeks monitoring Finance Guy’s e-mail messages to get a sense of his typical communication patterns and normal workflows.
After a few weeks, the hacker spotted his opportunity: an e-mail thread in which Finance Guy was discussing a payment of nearly $400,000 with a vendor. This information was time-sensitive, and the hacker understood that his window of opportunity was fleeting.
The hacker purchased a web domain that, if given only a quick glance, looked just like the vendor’s actual domain. Then the hacker created an e-mail address that closely matched the vendor’s real address, differing by only a character or two.
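Lookalike domains like this one can often be caught automatically by comparing a sender's domain against your known vendor domains. Here is a minimal sketch using Python's standard-library `difflib`; the example addresses and the 0.8 similarity threshold are assumptions you would tune for your own environment.

```python
# Sketch of a lookalike-domain check that could flag this kind of spoof.
# The example addresses and similarity threshold are illustrative only.
from difflib import SequenceMatcher

def domain_of(address: str) -> str:
    """Extract the lowercase domain from an e-mail address."""
    return address.rsplit("@", 1)[-1].lower()

def is_suspicious(sender: str, known_domain: str, threshold: float = 0.8) -> bool:
    """Flag senders whose domain is very similar to, but not exactly,
    a known vendor domain -- a common spoofing pattern."""
    domain = domain_of(sender)
    if domain == known_domain:
        return False  # exact match: the real vendor
    return SequenceMatcher(None, domain, known_domain).ratio() >= threshold

print(is_suspicious("ap@vend0r-corp.com", "vendor-corp.com"))  # lookalike spoof
print(is_suspicious("ap@vendor-corp.com", "vendor-corp.com"))  # legitimate sender
```

Commercial e-mail security gateways do a more sophisticated version of this (homoglyph detection, newly registered domain checks), but the core idea is the same: near-matches to trusted domains deserve extra scrutiny.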
The hacker copied the real wire transfer e-mail thread out of Finance Guy’s e-mail account. Switching to the spoofed e-mail account, the hacker pasted the e-mail thread into a new e-mail and matched the subject line. The hacker sent the e-mail to the victim explaining that the previously shared bank account couldn’t receive payment due to an audit. The hacker helpfully offered a new bank account that could accept the payment.
So Finance Guy unknowingly wired $400,000 to the hacker’s account.
The transfer was made on a Tuesday. And like all criminals who get caught, the hacker got greedy and decided to try for another payout. In the next couple of days, the hacker e-mailed Finance Guy (who still didn’t know he had been duped) to say that the original transfer didn’t work, and that Finance Guy needed to send the $400,000 to this “updated routing and account number.”
After seeing the third account number come through, Finance Guy became alarmed and called the vendor who was originally supposed to receive the transfer. The vendor, who hadn’t received any payment yet, knew nothing about the routing and account number changes. That’s when Acme’s team realized they had a big problem and called one of their financial service providers, who then called Pratum.
Pratum’s incident response team jumped into action. They immediately instructed the company to disable the compromised e-mail account and have the employee change all passwords. Examination of the e-mail chain uncovered the spoofed e-mail address, and review of the e-mail logs revealed that the compromised account had been accessed from Arizona. That’s a problem, since Finance Guy works in Iowa.
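The log review that exposed this compromise can be sketched as a simple anomaly check: flag successful logins from locations a user never signs in from. The log records and expected-location list below are invented for illustration; real sign-in logs from your e-mail platform would supply the events.

```python
# Sketch of the log review that exposed this compromise: flag successful
# logins from states outside each user's expected locations.
# The events and expected-location table are invented for illustration.

EXPECTED_LOCATIONS = {"finance_guy": {"IA"}}  # Finance Guy works in Iowa

login_events = [
    {"user": "finance_guy", "state": "IA", "result": "success"},
    {"user": "finance_guy", "state": "AZ", "result": "success"},  # the hacker
]

def anomalous_logins(events: list) -> list:
    """Return successful logins from states outside each user's expected set."""
    return [
        e for e in events
        if e["result"] == "success"
        and e["state"] not in EXPECTED_LOCATIONS.get(e["user"], set())
    ]

for event in anomalous_logins(login_events):
    print(f'ALERT: {event["user"]} logged in from {event["state"]}')
```

Many e-mail platforms can raise this kind of alert natively (often as "impossible travel" detection); the point is that geographic sign-in anomalies are cheap to monitor and would have flagged this intrusion weeks earlier.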
Pratum then assisted Acme with enabling multifactor authentication on all company e-mail accounts to prevent this type of incident in the future.
By working with law enforcement and the hacker’s bank, Acme recovered $320,000 of the $400,000 transferred to the hacker’s account.
Several cybersecurity best practices could have saved Acme the $80,000 that couldn’t be recovered, the cost of a digital forensics investigation and the notable stress of thinking they had lost $400,000. To prevent this kind of attack, Pratum recommends:
– Enabling multifactor authentication on all company e-mail accounts.
– Verifying any change to payment instructions by phone, using a number you already have on file for the vendor.
– Training employees to recognize phishing e-mails and lookalike sender domains.
To help ensure your company doesn’t fall for this kind of phishing attack (or to get your response team in place for the day it does), contact Pratum today.
This year’s non-stop ransomware wakeup call has motivated many organizations to dust off their incident response (IR) plans—or create one for the first time. If you’ve ever endured a breach, you know the value of a well-designed IR plan. By guiding decisions in the critical first hours of an incident, the IR plan can keep a minor situation from turning into an operational shutdown, as well as help your team track down the breach’s root cause, file cyber insurance claims, manage messages to customers and more. Use the following guidelines to make sure your IR plan includes all the essentials.
Start by determining what others require of you. In many industry sectors, IR plans are mandated by state law, federal guidelines (such as HIPAA) or your biggest customers’ vendor contracts. For example, more than a dozen states require any company in the insurance industry to maintain a written IR plan, among other best practices. And your cyber insurance underwriter will almost certainly offer you a better rate if you have policies such as an IR plan in place.
One go-to standard for IR plans is NIST publication 800-61, known as the “Computer Security Incident Handling Guide.” This 79-page document walks you through all the elements required in an IR plan considered up to industry standards. The guide provides details on tasks such as structuring an IR team, handling incidents as they occur and coordinating responses across departments and organizations. NIST’s approach boils down to this four-part Incident Response Life Cycle:
– Preparation
– Detection and Analysis
– Containment, Eradication and Recovery
– Post-Incident Activity
You should also review the SANS Institute’s more concise guide, known as the Incident Handler’s Handbook. SANS recommends that every plan provide a specific process for these six areas:
– Preparation
– Identification
– Containment
– Eradication
– Recovery
– Lessons Learned
Based on these resources and other industry guidelines, these are the key elements to include in your IR plan:
– An incident coordinator tasked with managing meetings, keeping notes and documenting actions.
– People with strong tech skills, IR experience and an understanding of the business.
– Multiple people with strong communications skills they can use to share information clearly and efficiently in the right directions.
– Representation from key related areas such as legal, HR, and the physical facilities team.
– An executive sponsor who can champion the team’s concerns up the ladder and provide visibility to the overall business.
– A system for rotating IR team members on a planned basis to avoid burnout and promote fresh perspectives.
As you write the plan, remember Einstein’s rule that “Everything should be as simple as possible, but no simpler.” It’s easy for an IR plan to get very long and complex, especially as you continue to revise it over the years. But in the excitement and confusion of a real incident, people can only follow so many policies. So streamline your plan to the essentials so that it’s more likely to see real-world use.
Just as critical as your organization’s internal team is the lineup of external service providers you’ll call on in an emergency. It’s essential to identify and get to know your providers in advance for two reasons. First, service providers that get to know your organization in normal times will be prepared to spring into action with an informed point of view at a moment’s notice. Second, securing the providers ahead of time will help you use your preferred vendors rather than being stuck with an unknown company from your cyber insurance carrier’s preferred provider list. Once you’ve picked a vendor, ask your cyber insurance company to add them to the preferred list to ensure you get to work with your selected partners.
Your external vendor team should include:
– An incident response and digital forensics provider.
– Legal counsel experienced with breach notification requirements.
– Your cyber insurance carrier and broker.
– A communications partner to help manage messages to customers and the media.
– Contacts at the relevant law enforcement agencies.
Your IR plan isn’t a set-it-and-forget-it proposition. You won’t know if it works unless you test it. And you won’t know if it continues to work unless you incorporate a specific, regular schedule for review. At minimum, review it once a year. If your business is highly dynamic, it may require more frequent review. Common elements that prompt plan updates include:
– Changes in key personnel or their contact information.
– New systems, vendors or cloud services.
– Organizational changes such as mergers or acquisitions.
– New regulatory or customer requirements.
Along with annual reviews, you should plan annual testing exercises that apply the plan to a specific simulated scenario. And whenever a breach actually occurs, review your IR plan at a “lessons learned” meeting to identify areas that need revision.
If you need help creating an IR plan tailored for your specific situation, contact Pratum today.