No risk has a bigger potential impact on a business and its technology than an IT security breach. From shutting down national infrastructure, as with Colonial Pipeline, to losing the private information of 150 million consumers in Equifax's 2017 breach, a security incident can create instant infamy for a company of any size. As these breaches, now occurring almost weekly, show, the material risks are exfiltration of your company's data or your customers' data, or attackers shutting down infrastructure and demanding large ransoms. Beyond this, though, is the less tangible risk of reputational damage. A major breach could be the tipping point that costs an organization its key customers.
Breaches are only increasing in frequency. The Identity Theft Resource Center's Q1 2021 Data Report shows that breaches are up 12 percent from Q4 of 2020, but the number of individuals impacted is up 564 percent! Supply chain attacks are a leading reason for this increase. The SolarWinds breach showed us how the breach of a "supply chain vendor" can have wide-ranging impacts on countless companies – even major players like Microsoft. When the companies that provide the backbone of our IT infrastructure can be breached through their suppliers, how can we trust any of our IT systems? We're rapidly approaching a point in history where the short answer to that is – you can't. Now the question becomes: if I can't trust my systems and my software, how can I protect my business, my data, and my customers' data? This is where new improvements in IT security, and new approaches to understanding the activity within your IT world, come into play.
In this article, we will look at how we got to this point in history, reviewing some of the common security concerns that dominated the first decade of this century. We'll also provide some thoughts on what it means to have "good" security in the modern era. And finally, we will close things out with a brief glimpse of what is to come in the next decade and how securing our IT may change.
What We Have Been Doing
In the early days of Internet development, teams were pushing new technologies and techniques out quickly. This led to code being published to the Internet with little or no security design built in, written by developers with little or no security training. SQL injection, broken authentication, unsanitized user input, and many other vulnerabilities were common in websites. As these vulnerabilities became commonly exploited, web developers, along with industries handling sensitive data such as payment cards, began to put together best practices and recommended (or required) steps for writing secure software. In 2004, the Payment Card Industry's ("PCI") founding members – American Express, Discover, JCB International, Mastercard, and Visa – published the first ever "Data Security Standard," the "PCI DSS." This standard became a requirement for all companies accepting payment via credit or debit cards.
Around the same time, the ISO 27001 standard was developed, and then published in 2005. It provided a strong information security framework that organizations could follow, whether or not they process payment cards. ISO 27001, PCI DSS, and other standards became the baseline for organizations dealing with sensitive information. Even earlier in IT security, the Open Web Application Security Project ("OWASP") was formed in 2001 to promote secure web development. It maintains a top 10 vulnerability list, many entries of which – such as injection vulnerabilities – date back to its origin and remain top problems today. Standards like PCI DSS lean on OWASP as a component of ensuring that software is written in a secure fashion.
In addition to leveraging OWASP as a practice for software development, organizations also have to follow many other security principles – such as "least privilege," network segmentation, multiple firewall layers (including web application firewalls), and multi-factor authentication for remote access. These have become part of an overall security standard over the last 10 to 15 years. Over the last few years, however, we have begun to see that while all of these steps are necessary, they are no longer sufficient to prevent modern breaches, which find novel and creative ways to circumvent traditional security.
What You Should Do Today
No target is too small; attackers put a price on organizations of every size, no matter what type of data you store. Even if your data isn't valuable to steal and sell on the black market, like credit cards or personal identities, it is still valuable to you. As such, ransomware, often distributed automatically, floats throughout the Internet, waiting to strike and lock up that key information. Once that happens, and the hackers see their hook has caught something, they determine the right-sized ransom for the data and the business. Automated phishing messages or links, distributed via email or chat, are a common means of gaining the access needed to perform these attacks. Larger companies are often directly targeted, with hackers intentionally probing their security for weak spots and trying to break in.
Because this activity is going on at all times, and at every level of the business, we are at the point in IT security where you should assume you are breached. For a while, the phrase was "assume you will be breached," but the guidance is now "assume you are breached." Someone in your organization has already clicked a phishing link in an email and handed over their password. Someone else is using a password that appears among the 8.4 billion passwords leaked in the RockYou2021 compilation. If you have VPNs, automated and manually targeted attacks are constantly probing to see whether these compromised accounts grant access to your network. So, if this is all happening right now, how would you know? The answer is that you must have good monitoring, logging, and analysis of that logging already set up, preferably before any of the "bad" activity started. If your systems are not logging, they need to start. And if they are logging, you need to be aggregating those logs into a Security Information and Event Management ("SIEM") product.
A SIEM allows you to take all the data from all your systems, and then analyze the trends and behaviors of the people and the systems. Early SIEMs were primarily advanced log searching and analysis – allowing IT security teams to monitor activity and look for anomalies. Modern SIEMs leverage machine learning to watch your environment, identify baselines, and then report on any activity that falls outside the baseline. Many common attacks also have their own unique signatures, and SIEMs are able to look for activity matching those signatures and highlight the suspicious behavior. To be fully effective, a SIEM should be monitoring every corner of your enterprise – from email activity to file share activity, and inbound and outbound network traffic. It can even be extended to your users' endpoints, to make sure that the software your employees use, and how they use it, fits the typical and expected patterns for your business.
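The baseline-and-deviation idea behind these machine-learning detections can be illustrated with a deliberately simple statistical sketch. Real SIEMs use far richer models; the function name and the sample login counts below are made up for illustration:

```python
from statistics import mean, stdev

def find_anomalies(baseline, current, threshold=3.0):
    """Flag observations that deviate sharply from a learned baseline.

    baseline: historical counts (e.g. logins per hour) for one user or system.
    current:  recent observations to check against that baseline.
    Any value whose z-score exceeds `threshold` is reported as anomalous.
    """
    mu = mean(baseline)
    sigma = stdev(baseline) or 1.0  # avoid division by zero on a flat baseline
    return [x for x in current if abs(x - mu) / sigma > threshold]

# Typical hourly login counts, then one burst a SIEM would surface.
history = [4, 5, 6, 5, 4, 6, 5, 5, 4, 6]
recent = [5, 6, 90]  # 90 logins in an hour: possible credential stuffing
print(find_anomalies(history, recent))  # -> [90]
```

In practice a SIEM learns baselines per user, per host, and per data source, and correlates anomalies across all of them rather than scoring a single metric in isolation.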
By watching all activity, understanding what "good" usage looks like, and alerting on any anomalies, you go a long way toward knowing where the "bad" actors are focused, and where they are starting to find their way in. One maxim in security is that the "bad guys" only have to find one weakness in your security, while your team has to find and fix all of them. The advantage will always skew toward the hackers.
Since we must now assume that we are breached, and with SolarWinds and the Exchange Hafnium attacks showing that the breach may be within our core infrastructure, we also have to bring a second key concept for modern security into play: Zero Trust. Zero Trust is a newer security standard that works just like it sounds – we, as IT architects, should have zero trust that a person or a system is who it claims to be. Instead, we use several different factors (like multi-factor authentication) to evaluate the connection request and whether it fits expected behavior and meets our requirements. On top of this, we validate access only at the point of request, and don't assume any subsequent access is valid.
The easiest way to understand zero trust is to compare it to the old way of securing access. In traditional remote access to a secure network, we would require users to connect to a VPN to reach secure systems. A username and password would be required, and sometimes a certificate or a multifactor token provides additional authentication. Once that is passed, the VPN connection is established, and that user is trusted to reach any part of the network the VPN is configured to access. The user is allowed to connect to resources on any server. They may have to log in to a server with separate credentials, but that is a disconnected authentication from the VPN – the VPN lets you into the building, and now you're free to try any of the doorknobs inside.
Contrast this with a zero-trust model, such as is used for Office 365. In its case, when you try to log in to your email, you are still prompted to enter your username and password, and hopefully you also must pass some multifactor step. But in addition, Microsoft runs additional heuristics on the login, noting where the traffic is coming from, the time of day, and other behavioral characteristics. All of this then feeds into algorithms that can either require additional information to authenticate or block the access outright as suspicious. Additionally, the zero-trust model means that even after you log in to your email, when you later try to access SharePoint Online, that access can go through the same hurdles behind the scenes – even though you're already logged in – to make sure it is allowed as well. Much of this zero-trust activity is transparent to end users – unless they try to access resources while on vacation.
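The per-request evaluation described above can be sketched as a simple risk-scoring function. The factor names, weights, and thresholds below are illustrative assumptions for this sketch, not Microsoft's (or any vendor's) actual policy:

```python
# Illustrative risk-based access check: every request is re-evaluated,
# rather than trusting a session once it is established.

KNOWN_COUNTRIES = {"US"}      # assumed list of expected login locations
WORK_HOURS = range(6, 20)     # assumed normal activity window (06:00-19:59)

def risk_score(request):
    """Score one access request; zero trust re-runs this on every request."""
    score = 0
    if request["country"] not in KNOWN_COUNTRIES:
        score += 40           # unfamiliar location
    if request["hour"] not in WORK_HOURS:
        score += 20           # unusual time of day
    if not request["mfa_passed"]:
        score += 50           # no second factor presented
    return score

def decide(request, block_at=60, challenge_at=30):
    score = risk_score(request)
    if score >= block_at:
        return "block"
    if score >= challenge_at:
        return "challenge"    # e.g. a step-up MFA prompt
    return "allow"

print(decide({"country": "US", "hour": 10, "mfa_passed": True}))  # -> allow
print(decide({"country": "FR", "hour": 3, "mfa_passed": True}))   # -> block
```

A production system would weigh many more signals (device health, impossible travel, session history) and learn the weights rather than hard-coding them, but the shape is the same: score each request, then allow, challenge, or block.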
Assuming breach, and having the systems in place to detect it, combined with following zero-trust standards, is the best option for keeping systems secure in the modern world. These approaches build upon the traditional security layers, such as good firewalls and good network and application architecture and design, and supplement those practices with additional layers of both keeping hackers out and knowing when they are present. Following this layered approach helps build a "defense in depth" practice of securing your IT resources. Improving, as your trusted consulting partner, can help you assess your current IT environment and provide recommended baselines and changes to be better protected against modern threats.
Where It's Headed
Zero trust and an "assume breach" mindset are key additions to maintaining a secure IT environment. However, history has shown us that hackers are more motivated to find a way in than many organizations are to keep them out. Maintaining an extensive IT security program is expensive and is often seen as slowing down the business – especially at companies that have not been breached (or don't realize that they have been). To make IT security less complex (and thus less expensive), researchers are leaning heavily on machine-learning-powered security – building Artificial Intelligence ("AI") security systems that can monitor traffic and not only alert, but also react and defend targeted resources.
As discussed above, SIEM products already watch all activity for suspicious patterns, using machine learning algorithms to understand what "normal" is and when behavior deviates from it. Newer AI systems will go beyond monitoring and become active searchers for vulnerabilities and exploits. Automated vulnerability scanning already exists, but in most cases it is not AI-powered – it is more of a brute-force method looking for common attack signatures. AI-powered vulnerability scans would operate far more like people, able to find connections between systems that people may not think to look for. The flip side is that while AI is a powerful tool for securing your infrastructure, it is also a powerful tool for the hackers. In the near future, we are likely to see AI-vs-AI attacks and defenses, as each side ups its game.
The other aspect of security that is rapidly getting better is the world of Identity Management. Authentication systems are getting far more advanced in their ability to know when a login attempt is coming from the right person (or system) versus a bad actor. Passwordless authentication relies on this and can be more secure than even traditional multifactor authentication. As we know, passwords are now a weak point in security. Users are forced to write down their passwords, since complexity requirements mean they cannot be remembered. Some users do use secure password vaults, but a vault is itself often protected by a single master password – so it still represents a weak point to protect.
Multifactor codes sent via email or SMS are easily compromised by phishing: an attacker can simply ask you to enter the code on a fake page, then relay it to the real target. Passwordless and app-based multifactor authentication goes beyond just something you know and something you have, adding back-end data – such as geographic location – as additional factors that are transparent to the user and cannot easily be faked by bad actors. By improving identity management and authentication, we can have more trust that users are who they claim to be, and that their actions are legitimate and not ill-intended.
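To make the "something you have" factor concrete: app-based authenticators typically derive their six-digit codes with the time-based one-time password (TOTP) algorithm from RFC 6238, keyed by a secret shared at enrollment. A minimal sketch using only the Python standard library (the variable names are our own):

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, at=None, step=30, digits=6):
    """RFC 6238 time-based one-time password (SHA-1, 30-second steps)."""
    key = base64.b32decode(secret_b32)
    # The moving factor is the number of time steps since the Unix epoch.
    counter = int((time.time() if at is None else at) // step)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    # Dynamic truncation (RFC 4226): pick 4 bytes based on the last nibble.
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# The RFC 6238 reference secret "12345678901234567890", base32-encoded.
SECRET = base64.b32encode(b"12345678901234567890").decode()
print(totp(SECRET, at=59))  # matches the RFC 6238 test vector for T=59
```

Because the code is derived from a device-bound secret and the current time, there is no static password to leak – though a relayed TOTP code can still be phished in real time, which is why the back-end behavioral factors described above matter as well.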
These trends build upon what exists today, extending "assume breach" and zero trust with more powerful versions of the same concepts. They also continue to extend all the other best practices and help move IT security forward. The hope is that it can move forward faster than the "bad guys," who are always advancing their techniques as well.
Securing your IT infrastructure is not something that can wait. If you're using technology, then you're vulnerable and you're a target. Improving can help you make sure the systems you are using support modern best practices, such as zero trust, and also make sure you have enabled them. If you don't have a SIEM deployed and monitoring your environment, get one. Most major vendors offer simple connectivity to SIEM products, and we're happy to help recommend and integrate your systems with the SIEM you select. There are also SIEM-as-a-Service companies that can help set up and monitor the information, so you don't need an in-house security team.
Make sure your other IT vendors follow best practices like OWASP when they write your software, and make sure your internal teams know what to do when they suspect a breach. Have an incident response plan and practice it; don't let the real event be the first time you try it out. Improving has years of experience in cybersecurity – reach out and work with the firm where "trust changes everything."