
XIOLOGIX SECURES YOUR DATA AND INFORMATION

Today's maze of security issues and technologies can be overwhelming. Enterprise security is a critical business requirement that needs to tightly integrate the right people, processes, and technology. Partnering with Xiologix gives you a guide through the latest security challenges and the proactive solutions that address them.

Security – General

Today’s maze of security issues and technologies can be overwhelming. Enterprise security is a critical business requirement that needs to tightly integrate the right people, processes, and technology.

Test your Metal

Are you protected? It's about more than just malware these days – attackers bypass antivirus and other detection methods by concealing code with different file types and compressions. Are you at risk? Find out with this easy test.

5 steps to stronger data security

Nearly everything we do in computer security is meant to protect data. After all, we don't deploy antimalware software, tighten security configurations, or implement firewalls to protect users, per se. Job No. 1 is to protect the organization's data — including employee and (especially) customer data.

But guess what? People need to work with that data or you wouldn't store it in the first place — which is why most data security measures focus on ensuring only trusted, authorized parties get access to it. Follow these five recommendations and your mission-critical data will be well protected.

1. Identify the crown jewels

First, you need to identify your most precious data. The hard part is finding it. As one CIO told me years ago, "If you think you know where all your data is, you're kidding yourself."

Precious data is stored in databases, application data repositories, and now the cloud, as well as on backup media and removable media. Precious data also includes critical subsystems that support delivering and securing actual data, including Active Directory domain controllers, credential databases, DNS, DHCP, network routers, and other services, all of which have their own security defenses. All data should be categorized for its business value and sensitivity, so keep your crown jewels to the smallest size possible. Store the least amount of data that needs to be stored, because nothing is as secure as data you didn't store in the first place.

All data should have an owner to whom all questions about its condition, treatment, and validity can be addressed. All data should be marked with a useful life, and at the end of that useful life, it should be disposed of.

2. Clean up credentials

Practice good credential hygiene — that is, clean up your privileged account memberships, with the goal of minimizing permanent membership to zero or near zero. Administrative duties should be performed with the least amount of permissions and privileges necessary (sometimes called "just enough" permissions). Any permissions or privileges should be granted only when needed and only for the time actually needed ("just in time" permissions).

Every organization should start by reviewing permanent memberships in each privileged group and removing members who do not need permanent, full-time access. Done with the appropriate rigor and analysis, this usually leaves less than a handful of permanent members; in the best cases, only one or zero permanent members remain. The majority of admins should be assigned elevated permissions or privileges on a limited basis, often by having the admin "check out" the elevated credentials with a preset expiration period.

Credential hygiene is essential to strong data security, because attackers often, if not nearly always, seek to compromise privileged accounts to gain access to confidential data. Minimizing permanent privileged accounts reduces the risk that one of those accounts will be compromised and used maliciously.

3. Set strict internal security boundaries

Long gone are the days when a network boundary firewall could be seen as sufficient security. The inside, chewy center of most corporate networks must be divided into separate, isolated security boundaries that only predefined accounts can access. Strict internal security boundaries can be created by host-based firewalls, internal routers, VLANs, logical networks, VPNs, IPsec, and a myriad of other access control methods.

For example, although a large majority of users may be able to access the Web front end of a multitier application, very few people should be able to directly access the back-end database. Perhaps only the assigned database admins and a few supporting servers and users should be able to access the database server, along with the front-end Web servers and any middle-tier services. That way, if attackers try to access the database directly without the necessary credentials, they can be blocked, or at least an auditing alert can be triggered.

4. Ensure encryption moves with the data

Traditional security defense touts two types of encryption: encryption for data in transit and encryption for data at rest. But this assumes the bad guys haven't already stolen legitimate credentials to access the data in question, which is often the case.

If you want solid data protection, make sure your encrypted data remains encrypted no matter where it is — and especially if it is moved to illegitimate locations. Nothing is more frustrating to the data thief. Many solutions encrypt individual data components and keep them encrypted no matter where they move. Some are application services, like Microsoft's Active Directory Rights Management Service, while others encrypt the data right within the database, such as Microsoft's SQL Transparent Data Encryption. That's the smart way to encrypt data: if someone steals it, it remains encrypted and useless.

5. Protect the client

Hackers rarely break into servers directly. It still happens — SQL injection attacks and remote buffer overflows, for example — but client-side attacks are far more common. If you want to protect your data, make sure you protect the people who access it. That means all critical patches are applied within a week or two, users are educated about social engineering, and workstations are securely configured.
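To make step 4 a little more concrete, here is a minimal sketch of field-level encryption in Python, where each sensitive field is encrypted before it is stored so the ciphertext travels with the record wherever it is copied. It uses the open-source cryptography package; the field names and records are purely illustrative assumptions, not part of any product named above, and real deployments would keep keys in a key-management service.

```python
# Minimal sketch: field-level encryption so data stays encrypted wherever it moves.
# Assumes the third-party "cryptography" package (pip install cryptography).
# Field names and sample records are illustrative only.
from cryptography.fernet import Fernet

# In practice the key lives in a key-management service, never beside the data.
key = Fernet.generate_key()
cipher = Fernet(key)

SENSITIVE_FIELDS = ("ssn", "card_number")  # hypothetical field names

def protect(record: dict) -> dict:
    """Return a copy of the record with sensitive fields encrypted."""
    protected = dict(record)
    for field in SENSITIVE_FIELDS:
        if field in protected:
            protected[field] = cipher.encrypt(str(protected[field]).encode()).decode()
    return protected

def reveal(record: dict) -> dict:
    """Decrypt sensitive fields for an authorized caller holding the key."""
    revealed = dict(record)
    for field in SENSITIVE_FIELDS:
        if field in revealed:
            revealed[field] = cipher.decrypt(revealed[field].encode()).decode()
    return revealed

row = protect({"name": "Alice", "ssn": "123-45-6789"})
print(row)          # the ciphertext is what gets stored, backed up, or stolen
print(reveal(row))  # only key holders recover the plaintext
```

The point of the sketch is simply that the protection follows the data itself rather than the storage location, which is the property the article describes.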

Zero-days aren’t the problem — patches are

There's a widely held view that our world is full of uber hackers who are too brilliant to stop. Thus, we should fear zero-day attacks because they're our biggest problem. Nothing could be further from the truth. Most hackers follow the path created by a very few smart ones — and zero-days make up a very small percentage of attacks. It turns out that patching vulnerable software, if implemented consistently, would stop most hackers cold and significantly reduce risk.

Fear of the zero-day exploit

Zero-days, where an attacker exploits a previously unknown vulnerability to attack a customer, aren't even the majority of bugs found. According to the most recent Microsoft Security Intelligence Report, around 6,300 unique vulnerabilities appeared in 2015. Symantec says that only 54 of them were classified as zero-days, a little less than 1 percent. If you tracked total attacks from all exploited vulnerabilities, I'm absolutely positive the zero-day share would be orders of magnitude less.

Most zero-days aren't used against many people, because as soon as they pop up with any frequency, they get "discovered," reported to the software vendor, and added to antimalware updates. A major undiscovered zero-day is often worth tens of thousands of dollars — sometimes more than $100,000. Once it's discovered, the value may drop to nothing. In other words, if a hacker uses a zero-day too much, it won't stay a zero-day for long. Hackers need to be "slow and low" with zero-days, and even then, they know the vulnerability will be discovered and patched soon enough.

Zero-day? How about 365-days?

Most exploits involve vulnerabilities that were patched more than a year ago. So why does it take so many people so long to apply the patch? Every patch management guide recommends that critical patches be applied within one week of their release. Overall, you'll be fine if you patch in the first month. Statistics show that the vast majority of organizations that suffer exploits are those that don't patch within the first year, or never patch at all.

Microsoft has long written about how most of its customers are exploited by vulnerabilities that were patched years ago. Microsoft's Security Intelligence Report lists the most popular exploits; you'll be hard-pressed to find an exploit discovered as recently as 2015 on that list. Most successful exploits are old. This year, most exploits date from 2010 to 2012 — and that's not only a Microsoft software issue. The Verizon Data Breach Report 2016 revealed that out of all detected exploits, most came from vulnerabilities dating to 2007; next was 2011. Vulnerabilities dating to 2003 still account for a large portion of hacks of Microsoft software. We're not talking about being a little late with patching. We're talking about persistent neglect.

Why people don't patch quickly

Most operating systems and applications come with self-patching mechanisms that will do their job if you let them. So why do so many people fail to patch? It comes down to a few factors. First, a lot of people — mostly home users — ignore all those update warnings. Some simply don't want to take the time to patch and keep putting it off. Others are probably unsure whether the patch notification message is real: how are they supposed to tell the difference between a fake patch warning and a legitimate one? They chicken out and don't patch.

Another huge component of unapplied patches stems from unlicensed software. Tens of millions of people use software illegally, and many fear that the latest patch will catch the unlicensed software and disable it. This is why, years ago, Microsoft decided not to require a valid license in order to patch an operating system. Yet another cause: a lot of computers are set up for neophytes by friends or hired professionals who never return — and the neophyte doesn't know enough to do anything. Very likely the vast majority of mom-and-pop computer stores sell computers that will never be patched during their useful lifetimes.

Lastly, some computers aren't patched because the owners or users make an explicit decision not to patch. Most companies I've consulted for run software they feel can't be patched due to operational concerns. One published interview reveals that the average organization takes 18 months to patch some critical vulnerabilities; I know many companies where that time lag stretches to many, many years.

Focused patching

The conventional wisdom is that all critical patches should be applied as soon as reasonably possible. Most guides say within one week, but anything within one month is acceptable. If you have limited resources (who doesn't?), then at least concentrate on patching the applications with the most exploits successfully used against the computers you manage. The Verizon Data Breach report says that 85 percent of successful hacks used the top 10 exploits; the other 15 percent were caused by more than 900 different exploits. Patch a few critical programs with vulnerabilities, and you'll eliminate most of the risk.

Patching is easier — so what's your excuse?

In the past, patching took a long time. Vendors might take weeks, months, or even years to create a patch for a public vulnerability, and customers might take months to apply it. Back in 2003, when the SQL Slammer worm infected almost every unpatched SQL Server on the Internet — more than 75,000 instances in less than 10 minutes — the Microsoft patch that closed the vulnerability had been available for almost six months.

Kudos to Google for accelerating its patching schedule, to the point where Google software vulnerabilities take a day or less to be patched. Yet even Google faces a significant percentage of users who either take forever to patch or never patch. The cloud is fixing that problem: the provider patches the application, and everyone who uses it is immediately patched — no stragglers. Microsoft was recently notified of a critical exploit in Office 365 and patched it within seven hours. Imagine, everyone protected quicker than they could read about it. That's a huge positive for cloud computing.

Meanwhile, however, most of the software you use remains installed on your own servers or clients. Patching demands vigilance, but patching a few applications can reduce most of your risk. You don't always need to patch in the first day or week. But don't take years.
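As a rough illustration of the "patch within a month" guidance above, here is a minimal Python sketch that flags systems whose last applied patch is older than a chosen policy window. The host names, dates, and the 30-day window are hypothetical; real inventory data would come from your patch-management or asset system.

```python
# Minimal sketch: flag hosts whose last patch exceeds the policy window.
# Host names, dates, and the window are hypothetical placeholders.
from datetime import date

POLICY_WINDOW_DAYS = 30  # "within the first month" guidance from the article

last_patched = {
    "web-01": date(2016, 7, 20),
    "db-02": date(2015, 1, 9),
    "hr-laptop-17": date(2014, 6, 2),
}

def overdue_hosts(inventory: dict, today: date, window_days: int = POLICY_WINDOW_DAYS):
    """Return (host, days past policy) pairs, worst offenders first."""
    overdue = []
    for host, patched_on in inventory.items():
        age = (today - patched_on).days
        if age > window_days:
            overdue.append((host, age - window_days))
    return sorted(overdue, key=lambda item: item[1], reverse=True)

for host, days_past in overdue_hosts(last_patched, today=date(2016, 8, 15)):
    print(f"{host}: {days_past} days past the {POLICY_WINDOW_DAYS}-day patch policy")
```

A report like this does nothing by itself, but it makes the years-long neglect the article describes visible enough to act on.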

Steps to take control

The ongoing Apple-versus-FBI debate has me thinking more about the implications of encryption. Whether or not national governments around the globe choose to go down the path of further regulating encryption key lengths, requiring backdoors to encryption algorithms, mandating key escrow for law enforcement purposes, or generally weakening the implementations of encrypted communications and data storage in consumer technologies, the use of encryption will increase – and in parallel – network visibility of threats will decrease.

While there are a handful of techniques available to enterprise network operators that allow data inspection of encrypted flows, for all practical purposes their appeal is dwindling. Host-based agents – while providing visibility of pre- and post-encryption communications – are easily bypassed by those with malicious or criminal intent. Meanwhile, in a world of increasingly diverse computing and mobile platforms, covering the expanding array of devices and operating systems makes such agents increasingly impractical to deploy.

Like a final front-line push prior to a cease-fire deadline, SSL terminator (or accelerator) technologies are being promoted as a solution. Hopes are certainly high that such technologies can "man-in-the-middle" SSL Internet-bound communications and provide the levels of deep packet inspection of an earlier age. However, the reality is that not only is enterprise-wide deployment and management of such devices (including the addition of appropriate certificate authority credentials on the client systems and devices) increasingly difficult, the scope of what can be detected and mitigated through such inspection is rapidly decreasing.

Social media sites and online search engines have led the way in moving from SSL-by-default to wholly encrypted communications. We can expect those sites not only to get smarter at detecting when SSL man-in-the-middle is being used to intercept traffic, but also that the adoption of various SSL certificate key-pinning standards will prevail for all other Internet services as well.

Today, many organizations hope that evil hackers and malware using SSL as a means to control compromised systems and as an evasion aid will stand out amongst the authorized (unencrypted) traffic of a closely monitored corporate network. It's an obviously flawed plan, but some true believers still feel it may be a viable technique for a few remaining years.

How CxOs can better prepare

Smart CxOs should be planning for the day when all of their network traffic is encrypted and deep packet inspection is no longer possible. In many networks, half of all Internet-bound traffic is already encrypted (mostly HTTPS), and it's likely more than three-quarters of network traffic will be encrypted within the next couple of years. With this increase, the prospect of inspecting the content layer of traffic will have mostly disappeared.

While the loss of content-level inspection will have a measurable effect on the network security technologies we've been dependent upon for the last decade or two (e.g. IDS, DLP, ADS, WAF, etc.), security teams will not be blind. While threats have advanced and encryption has covered the landscape like a perpetual pea-soup fog, there remain plenty of "signals" in the transport of encrypted data and the packets traversing the corporate network.

Just as law enforcement agencies around the globe have been revealed to consistently use the "metadata" associated with cellular traffic logs (e.g. from, to, time, and duration of a call) to identify and track threats without being able to listen to the actual conversation, a similar story can be told for network traffic – without relying on the content of the messages (which we can presume will be encrypted now and even more so in the future).

A new generation of network-based detection technologies – using machine learning and traffic-modeling intelligence – has entered the security market. These new technologies are proving to be just as accurate (if not more accurate) than the legacy detection technologies that required deep packet inspection to identify threats and determine response prioritization.

Network security architects – and those charged with protecting their Internet-connected systems – need to reassess their defensive strategies in the wake of increased encrypted communications traffic. A smart approach would be to architect defenses with the assumption that all traffic will soon be encrypted. By all means, continue to hope that some level of content-layer inspection will be available for critical business data handling, but plan for that to be an edge case.
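To make the metadata idea concrete, here is a minimal, self-contained sketch of anomaly detection on encrypted-flow metadata only (no payload inspection), using scikit-learn's IsolationForest. The flow records and feature choices are illustrative assumptions, not a production model; real telemetry would come from NetFlow/IPFIX or similar sources.

```python
# Minimal sketch: spot unusual encrypted flows from metadata only (no payloads).
# Flow records and features are illustrative; requires numpy and scikit-learn.
import numpy as np
from sklearn.ensemble import IsolationForest

# Each row: [duration_seconds, bytes_sent, bytes_received, destination_port]
baseline_flows = np.array([
    [12, 4_200, 55_000, 443],
    [30, 9_800, 120_000, 443],
    [8, 2_100, 30_500, 443],
    [25, 7_500, 90_000, 443],
])

new_flows = np.array([
    [15, 5_000, 60_000, 443],          # resembles normal HTTPS browsing
    [3_600, 900_000_000, 1_200, 443],  # long-lived, upload-heavy: worth a look
])

# Fit on known-good traffic, then score new flows; -1 marks an outlier.
model = IsolationForest(contamination=0.01, random_state=0).fit(baseline_flows)
for flow, label in zip(new_flows, model.predict(new_flows)):
    status = "anomalous" if label == -1 else "normal"
    print(flow.tolist(), "->", status)
```

The model never sees the encrypted payload, only transport-level "signals" such as duration, byte counts, and ports, which is exactly the class of detection the article argues will remain available.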

Setting the Record Straight on Cyber Threat Intelligence

Threat intelligence has achieved buzzword status. The good news is that people are talking about it – it is a critical component of a cyber risk management program. The bad news is that too many folks have distorted and confused the term, so much so that its meaning varies widely depending on whom you're speaking with. And that fact is taking away from the real value of legitimate cyber threat intelligence.

A perfect example is an article I read recently, which stated that "60% of organizations have had a threat intelligence program in place for more than 2 years." It's important to understand how "threat intelligence" is defined in this setting, because there's simply no way that a majority of organizations have a "threat intelligence program" established, let alone for the last 2 years. Let's look at some of the more common definitions of threat intelligence:

• Gartner defines threat intelligence as: "Evidence-based knowledge, including context, mechanisms, indicators, implications and actionable advice, about an existing or emerging menace or hazard to assets that can be used to inform decisions regarding the subject's response to that menace or hazard."

• Forrester defines it as: "The details of the motivations, intent, and capabilities of internal and external threat actors. Threat intelligence includes specifics on the tactics, techniques, and procedures of these adversaries. Threat intelligence's primary purpose is to inform business decisions regarding the risks and implications associated with threats."

• INSA says that "Threat intelligence is an analytic discipline relying on information collected from traditional intelligence sources intended to inform decision makers on issues pertaining to operations at all levels in the cyber domain. Relevant data to be analyzed may be about network data, ongoing cyber activity throughout the world or potentially relevant geopolitical events. What matters is that it is timely, actionable, and relevant, helping to reduce uncertainty for decision makers. The origin of the data or information is not important. When analyzed and placed in context, information becomes intelligence; and it is intelligence that reduces uncertainty and enables more timely, relevant and cost-effective policy, as well as high-quality operational and investment decisions."

The trap many vendors and cybersecurity professionals unknowingly fall into is treating information and intelligence as one and the same. They are not. There is more information out there than anyone can possibly distill, analyze and use to quickly make sound decisions. Information is:

• Unfiltered and unevaluated
• Widely available
• Accurate, false, misleading, and/or incomplete
• Relevant or irrelevant

Information overload can kill your intelligence efforts, because too much information is just a lot of outputs that require a lot of time, money and staff. How much can you accurately automate? How large a staff of qualified analysts can you afford to review everything that isn't automatically filtered out? I like to think of intelligence as driving outcomes as opposed to outputs. Think of it as information that can be acted upon to change outcomes for the better. Intelligence is:

• Organized, evaluated and interpreted by experts
• Available from reliable sources and checked for accuracy
• Accurate, timely, relevant and complete
• Aligned with your business

The world of cyber is infinite, and with that come many unknowns. Intelligence enables you to reduce your risk by moving from 'unknown unknowns' to 'known unknowns' through discovering the existence of threats, and then shifting 'known unknowns' to 'known knowns', where the threat is well understood and mitigated.

How To Measure Threat Intelligence

The KISS method is a good way to start. Good business managers run their business on a foundation of evaluated intelligence, or 'known knowns' – essentially the things you know with a level of certainty. The goal is to consistently look at the unknown and determine how to turn uncertainty into more certainty. What are the characteristics that make up your business? What are the corresponding risks? Who are the actors operating in your industry, and which tactics, techniques and procedures do they favor? What has been their target commodity? What organizations have they targeted? What was the outcome of those efforts?

Pull in data on who you are as a company, such as your products, employees, software and hardware, geographical locations, industry sector, the data you store and transact, and much more. Overlay this company data and compare your business traits against the cyber threats on the horizon. Now you can understand your business risk exposures based on the cyber threats relevant to you.

Analysis is another critical differentiator between information and intelligence. When you establish an intelligence program, you are establishing a capability, not just deploying a tool. Automation can play a role, but "all operations in cyber space begin with a human being" (INSA), and threat actors and adversaries are people; they have desires, motivations, and intent.

So What IS Cyber Threat Intelligence?

At the end of the day, cyber threat intelligence should focus your organization on making better decisions and taking the right actions. Every organization uses intelligence already, in the form of business intelligence that evaluates information on financials, customers, logistics, products, and any other areas where the business needs to make decisions and take action. The need for cyber threat intelligence is no different, as every organization relies on technology to deliver its products and services to the end user, and those cyber risks need to be evaluated.

As I wrote recently on how cyber threat intelligence helps the business, intelligence should give decision makers the insights to understand whether or not they are well positioned for cyber threats – and if not, why not.
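As a toy illustration of overlaying company traits against threat data, as described above, here is a minimal Python sketch that ranks hypothetical threat reports by how much they intersect a company profile. The attribute names, actors, and records are invented for illustration and are not drawn from any real feed or product.

```python
# Toy sketch: score hypothetical threat reports by overlap with company traits.
# All attribute names, actors, and records below are invented for illustration.
company = {
    "industry": "healthcare",
    "technologies": {"Windows", "SQL Server", "SaaS CRM"},
    "data_types": {"PHI", "payment cards"},
}

threat_reports = [
    {"actor": "Group A", "industries": {"healthcare", "insurance"},
     "targets": {"PHI"}, "tools": {"Windows"}},
    {"actor": "Group B", "industries": {"retail"},
     "targets": {"payment cards"}, "tools": {"POS malware"}},
]

def relevance(report: dict) -> int:
    """Count how many of the report's traits intersect the company profile."""
    score = 0
    if company["industry"] in report["industries"]:
        score += 1
    score += len(report["targets"] & company["data_types"])
    score += len(report["tools"] & company["technologies"])
    return score

# Rank reports so analysts review the most business-relevant threats first.
for report in sorted(threat_reports, key=relevance, reverse=True):
    print(report["actor"], "relevance score:", relevance(report))
```

The scoring itself is deliberately simplistic; the point is the shift from raw outputs to outcomes, prioritizing the threats that actually map to your business.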

Micro-segmentation Defined – NSX Securing “Anywhere”

The landscape of the modern data center is rapidly evolving. The migration from physical to virtualized workloads, the move toward software-defined data centers, the advent of a multi-cloud landscape, the proliferation of mobile devices accessing the corporate data center, and the adoption of new architectural and deployment models such as microservices and containers have ensured that the only constant in modern data center evolution is the quest for higher levels of agility and service efficiency. This march forward is not without peril, as security often ends up being an afterthought. The operational dexterity achieved through the ability to rapidly deploy new applications overtakes the ability of traditional networking and security controls to maintain an acceptable security posture for those application workloads. That is in addition to a fundamental problem: traditionally structured security does not work adequately even in more conventional and static data centers.

Without a flexible approach to risk management that adapts to the onset of new technology paradigms, security silos using disparate approaches are created. These silos act as control islands, making it difficult to apply risk-focused predictability to your corporate security posture and causing unforeseen risks to be realized. These actualized risks cause an organization's attack surface to grow as the adoption of new compute technology increases, leaving it susceptible to increasingly advanced threat actors.

A foundational aspect of solving this problem is the ability to implement micro-segmentation anywhere. NSX is a networking and security platform able to deliver micro-segmentation across all the evolving components comprising the modern data center. NSX-based micro-segmentation enables you to increase the agility and efficiency of your data center while maintaining an acceptable security posture. The following blog series will define the characteristics of micro-segmentation needed to provide effective security controls within the modern data center and demonstrate how NSX goes beyond the automation of legacy security paradigms in enabling security through micro-segmentation.

Acceptable Security in the Modern Data Center

It is no longer acceptable to take the traditional approach to data-center network security, built around a very strong perimeter defense but virtually no protection inside the perimeter. This model offers very little protection against the most common and costly attacks occurring against organizations today, which include attack vectors originating within the perimeter. These attacks infiltrate your perimeter, learn your internal infrastructure, and spread laterally through your data center.

The ideal solution for complete data center protection is to protect every traffic flow inside the data center with a firewall and allow only the flows required for applications to function. This is also known as the Zero Trust model. Achieving this level of protection and granularity with a traditional firewall is operationally unfeasible and cost prohibitive, as it would require traffic to be hair-pinned to a central firewall and virtual machines to be placed on individual VLANs (also known as pools of security). A typical 1-rack-unit top-of-rack data center switch performs at approximately 2 Tbps, while the most advanced physical firewall performs at 200 Gbps in a 19-rack-unit physical appliance, providing 10 percent of the usable bandwidth. Imagine the network resource utilization bottlenecks created by having to send all east-west communication from every VM to every other VM through a physical firewall, and how quickly you would run out of available VLANs (limited to 4096) to segment workloads into application-centric pools of security. This is a fundamental architectural constraint created by traditional security architecture that hampers the ability to maintain an adequate security posture within a modern data center.

Defining Micro-segmentation

Micro-segmentation decreases the level of risk and increases the security posture of the modern data center. So what exactly defines micro-segmentation? A solution that provides micro-segmentation requires a combination of the following capabilities and the outcomes they enable:

Distributed stateful firewalling for topology-agnostic segmentation – Reducing the attack surface within the data center perimeter through distributed stateful firewalling and ALGs (Application Level Gateways) at per-workload granularity, regardless of the underlying L2 network topology (i.e. possible on either logical network overlays or underlying VLANs).

Centralized ubiquitous policy control of distributed services – Enabling the ability to programmatically create and provision security policy through a RESTful API or integrated cloud management platform (CMP).

Granular unit-level controls implemented by high-level policy objects – Enabling the use of security groups for object-based policy application, creating granular application-level controls not dependent on network constructs (i.e. security groups can use dynamic constructs such as OS type or VM name, or static constructs such as Active Directory groups, logical switches, VMs, port groups, IP sets, etc.). Each application can now have its own security perimeter without relying on VLANs. See the DFW Policy Rules Whitepaper for more information.

Network overlay-based isolation and segmentation – Logical network overlay-based isolation and segmentation that can span racks or data centers regardless of the underlying network hardware, enabling centrally managed multi-data-center security policy with up to 16 million overlay-based segments per fabric.

Policy-driven unit-level service insertion and traffic steering – Enabling integration with third-party solutions for advanced IDS/IPS and guest introspection capabilities.

Alignment with Emerging Cybersecurity Standards

The National Institute of Standards and Technology (NIST) is the US federal technology agency that works with industry to develop and apply technology, measurements, and standards. NIST is working with standards bodies globally to drive forward the creation of international cybersecurity standards. NIST recently published Special Publication 800-125B, "Secure Virtual Network Configuration for Virtual Machine (VM) Protection," to provide recommendations for securing virtualized workloads. The capabilities of micro-segmentation provided by NSX map directly to the recommendations made by NIST.

Section 4.4 of NIST 800-125B makes four recommendations for protecting virtual machine workloads within modern data center architecture. These recommendations are as follows:

VM-FW-R1: In virtualized environments with VMs running delay-sensitive applications, virtual firewalls should be deployed for traffic flow control instead of physical firewalls, because in the latter case, there is latency involved in routing the virtual network traffic outside the virtualized host and back into the virtual network.

VM-FW-R2: In virtualized environments with VMs running I/O intensive applications, kernel-based virtual firewalls should be deployed instead of subnet-level virtual firewalls, since kernel-based virtual firewalls perform packet processing in the kernel of the hypervisor at native hardware speeds.

VM-FW-R3: For both subnet-level and kernel-based virtual firewalls, it is preferable if the firewall is integrated with a virtualization management platform rather than being accessible only through a standalone console. The former will enable easier provisioning of uniform firewall rules to multiple firewall instances, thus reducing the chances of configuration errors.

VM-FW-R4: For both subnet-level and kernel-based virtual firewalls, it is preferable that the firewall supports rules using higher-level components or abstractions (e.g., security group) in addition to the basic 5-tuple (source/destination IP address, source/destination ports, protocol).

NSX-based micro-segmentation meets the NIST VM-FW-R1, VM-FW-R2, and VM-FW-R3 recommendations by providing the ability to use network virtualization-based overlays for isolation and distributed kernel-based firewalling for segmentation, through ubiquitous, centrally managed policy control that can be fully API driven. Micro-segmentation through NSX also meets the NIST VM-FW-R4 recommendation to utilize higher-level components or abstractions (e.g., security groups) in addition to the basic 5-tuple for firewalling. NSX-based micro-segmentation can be defined as granularly as a single application or as broadly as a data center, with controls that can be implemented by attributes such as who you are or what device is accessing your data center.

Micro-segmentation with NSX as a Security Platform

Protection against advanced persistent threats that propagate via targeted users and application vulnerabilities requires more than network-layer segmentation to maintain an adequate security posture. These advanced threats call for application-level security controls, such as application-level intrusion protection or advanced malware protection, applied to chosen workloads. As a security platform, NSX-based micro-segmentation goes beyond the recommendations noted in the NIST publication and enables fine-grained service insertion (e.g. allowing IPS services to be applied to flows between assets that are part of a PCI zone). In a traditional network environment, traffic steering is an all-or-none proposition, requiring all traffic to be steered through additional devices. With micro-segmentation, advanced services are granularly applied where they are most effective: as close to the application as possible, in a distributed manner, while residing in a separate trust zone outside the application's attack surface.

Securing Physical Workloads

While new workload provisioning is dominated by agile compute technologies such as virtualization and cloud, the security posture of physical workloads still has to be maintained. NSX has the security of physical workloads covered, as physical-to-virtual or virtual-to-physical communication can be enforced using distributed firewall rules at ingress or egress. In addition, for physical-to-physical communication, NSX can tie automated security of physical workloads into micro-segmentation through centralized policy control of those workloads via the NSX Edge Services Gateway or integration with physical firewall appliances. This allows centralized policy management of your static physical environment in addition to your micro-segmented virtualized environment.

Conclusion

NSX provides micro-segmentation through centralized policy controls, distributed stateful firewalling, overlay-based isolation, and service chaining of partner services to address the security needs of the rapidly evolving information technology landscape. NSX meets and goes beyond the recommendations made by the National Institute of Standards and Technology for protecting virtualized workloads, secures physical workloads, and paves a path toward securing future workloads with a platform that meets your security needs today and is flexible enough to adapt to your needs tomorrow. As we continue this multi-part series on micro-segmentation, we will delve deeper into how NSX micro-segmentation increases the security posture of your organization, with the following upcoming topics:

Securing Physical Environments
Service Insertion
Operationalizing Micro-segmentation
Securing Virtual Desktop Infrastructure
Micro-segmentation for Mobile Endpoints
Micro-segmentation Benchmark
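The article notes that micro-segmentation policy can be created programmatically through a RESTful API. The sketch below shows only the general pattern of that approach in Python using the requests library; the manager address, endpoint path, payload fields, and credentials are purely hypothetical placeholders, not the documented NSX API, so consult the vendor's API guide for the real calls.

```python
# Hypothetical sketch of driving security policy through a REST API.
# The URL, endpoint, payload shape, and credentials are illustrative placeholders,
# NOT the documented NSX API.
import requests

MANAGER = "https://nsx-manager.example.local"   # placeholder address
AUTH = ("admin", "replace-with-real-credentials")

def create_security_group(name: str, dynamic_criteria: dict) -> str:
    """Create a security group whose membership is driven by dynamic criteria."""
    payload = {"displayName": name, "membershipCriteria": dynamic_criteria}
    response = requests.post(
        f"{MANAGER}/api/example/security-groups",   # illustrative endpoint only
        json=payload,
        auth=AUTH,
        verify=False,  # lab convenience; use proper certificates in production
    )
    response.raise_for_status()
    return response.json()["id"]

# Example: group every VM whose name starts with "web-" so one firewall policy
# follows those workloads regardless of VLAN or subnet.
group_id = create_security_group("web-tier", {"vmNamePrefix": "web-"})
print("created security group", group_id)
```

The design point is the one the article makes: policy attaches to high-level objects (a named group with dynamic membership) rather than to network constructs such as VLANs.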

Innovation Insights: Defining Open with the Fortinet Security Fabric

Securing networks has been a serious challenge ever since DEC salesman Gary Thuerk sent the first spam message to 400 unsuspecting users of the ARPANET back in 1978. Sure, security devices have become more sophisticated over time, and their evolution is a fascinating subject. But they all tend to suffer from a common problem: because they are siloed technologies, they can only solve the problem sitting right in front of them.

This is one of the reasons why, in spite of the millions being spent on security by today's organizations, the incidence of successful security breaches continues to grow. Cybercriminals have developed a set of very sophisticated capabilities designed to discover network vulnerabilities, circumvent security, evade detection, and then either cripple the network or retrieve valuable data. Or both.

Which is why Fortinet has developed the Security Fabric, an architectural framework innovation that counters those cyberthreat capabilities with a dynamic set of interoperable, collaborative, and adaptive security solutions and capabilities of its own. It is designed to stop the attack chain through a continuous security cycle:

1. Preparing the network for proactive threat defense through things like intelligent segmentation, establishing strong security processes, and proper training.

2. Preventing attacks through the integration of security technologies for the endpoint, access layer, network, applications, data center, and cloud into a single collaborative security architecture that can be orchestrated through a single management interface.

3. Detecting threats before they get into the network through a combination of shared threat intelligence and collaborative defenses designed to see and stop even sophisticated multi-vector attacks.

4. Responding to attacks with an automated response to identified threats that breaks the infection chain, immediately protects network resources, and actively identifies and isolates affected devices.

The cycle continues as protections from detected threats are implemented across the distributed network to improve the organization's preparation against future attacks.

Interoperability

A critical component of the success of an architectural approach to security is the purpose-built interoperability between its individual security solutions. The Fortinet Security Fabric is built around a series of tiered interconnectivity and open API strategies that allow Fortinet and third-party solutions from Alliance Partners to collect and share threat intelligence and coordinate a response when anomalous behavior or malware is detected.

Inner Core Network Security – The foundation of the Fortinet Security Fabric relies on the tight integration and dynamic interoperability between three foundational Fortinet security technologies: FortiGate, FortiManager, and FortiAnalyzer. These solutions are built on a common operating system and utilize centralized orchestration to harden the core of the network and actively monitor, analyze, and correlate threat activity.

Outer Core Network Security – The next tier of the Fortinet Security Fabric is focused on expanding the security implemented at the network's inner core out to the dynamic edges of the borderless network. This includes things like hardening wireless access points, seamlessly tracking and enforcing policy as it moves into the cloud, securing endpoint devices and BYOD strategies, and dynamically segmenting the network as organizations adopt IoT.

Extended Security – Security also needs to extend to common attack vectors, like email and the web, to proactively analyze data and traffic for unknown and zero-day threats. This extended protection is a critical function of the Security Fabric and includes the Fortinet Advanced Threat Protection (ATP) solution, including FortiSandbox, as well as FortiMail and FortiWeb, designed to close the gap on what are still the most common attack vectors for malware and data loss.

Global Threat Intelligence – While the Security Fabric generates and shares a great deal of local threat intelligence, it is essential that it is constantly tuned against the latest threats occurring in the wild. Fortinet's global threat research team actively monitors the world's networks to find, analyze, and develop protection against known and unknown security threats. The team then automatically delivers continuous threat updates to firewall, antivirus, intrusion prevention, web filtering, email, and antispam services.

Network & Security Operations – Fortinet's network security and analysis tools are designed to provide a more holistic approach to threat intelligence gathering by actively synthesizing and correlating threat data between security tools such as FortiSIEM and Fortinet's suite of hardened network devices, such as FortiAP-U and FortiSwitch. The Security Fabric can also extend the coordination of a threat response through our alliance of fabric-ready and fabric-compliant partners.

Visibility and Control

Intelligence plays a critical role in establishing broad visibility and granular, proactive control across the Security Fabric. On average, security breaches take nearly eight months to detect. Part of the reason for this delay is that enterprise security teams are trying to track more than a dozen different security monitoring and management consoles, and they still have to hand-correlate events and data to detect today's evasive advanced threats. If you can't see what's happening, threats will persist and proliferate, which can have devastating consequences for your business.

FortiSIEM, our latest security technology solution, is an all-in-one next-generation security information and event management platform that provides deep, coordinated insight into what's happening in the network. It enables organizations to rapidly find and fix security threats and manage compliance standards – all while reducing complexity, increasing critical application availability, and enhancing IT management efficiency. And its open design allows it to both collect and share critical threat intelligence from third-party solutions.

Summary

The evolving enterprise network and its transition to a digital business model is one of the most challenging aspects of network security today. As significant trends in computing and networking continue to drive changes across many critical business infrastructures, architectures, and practices, organizations require a new, innovative approach to network security that enables them to quickly embrace those changes. The Fortinet Security Fabric provides the integrated and collaborative security strategy your organization needs. It enables the protection, flexibility, scalability, adaptability, and manageability you demand across your distributed and highly dynamic network, from IoT to the cloud.

Best Of Black Hat Innovation Awards: And The Winners Are…

Three companies and leaders who think differently about security: Deep Instinct, most innovative startup; Vectra, most innovative emerging company; Paul Vixie, most innovative thought leader.

Dark Reading this year is launching a new annual awards program, the Best of Black Hat Awards, which recognizes innovative companies and business leaders on the conference's exhibit floor. The 2016 Dark Reading Best of Black Hat Awards recognize three categories of achievement: the Most Innovative Startup, which cites companies that have been in the industry for three years or less; the Most Innovative Emerging Company, which cites companies that have been operating for three to five years; and the Most Innovative Thought Leader, which recognizes individuals from exhibiting companies who are changing the way the industry thinks about security.

These new awards, chosen by the editors of Dark Reading, are not an endorsement of any product, but are designed to recognize innovative technology ideas and new thinking in the security arena. In future years, Dark Reading hopes to expand the awards program to recognize new products in different categories, as well as more individuals who are making a difference in the way we think about security.

Most Innovative Startup: Deep Instinct

The finalists for our Most Innovative Startup Award are Deep Instinct, which is driving past machine learning with an artificial intelligence concept called deep learning; Phantom, a security orchestration tool that provides a layer of connective tissue between existing security products; and SafeBreach, which provides a hacker's view of enterprise security posture.

The winner is: Deep Instinct. Here's what our judges wrote about Deep Instinct: "This was not an easy decision — each of the finalists, Phantom, Deep Instinct, and SafeBreach, brings really intriguing and useful technology to the security problem. In the end, we selected Deep Instinct as the Most Innovative Startup. Here's why: the concept of a cerebral system that detects malware and malicious activity at the point of entry in real time and quashes it then and there solves many of the other security problems down the line. If the tool can catch the malware when it hits the endpoint, a security pro theoretically wouldn't need to check out security alerts, correlate them among various security tools and threat intel feeds, and then take the appropriate action (sometimes too late). And unlike traditional antivirus, this technology looks at all types of threats, not just known malware, which of course is key today given the polymorphic nature of malware.

We considered Deep Instinct's approach of automatically stopping a threat at the endpoint, where it first comes in, using software that can on its own understand that it's a threat and continuously learn about threats, as unique and promising for security organizations. Deep learning is the next stage of machine learning, mimicking the brain's ability to learn and make decisions, and Deep Instinct is the first company to apply this type of artificial intelligence to cybersecurity, which also made it a top choice. In addition, benchmark tests of Deep Instinct's technology indicate a high degree of accuracy in detecting malware, at 99.2%. And unlike some endpoint security approaches, it occurs locally and there's no sandbox or kicking it to the cloud for additional analysis."

Most Innovative Emerging Company: Vectra

The three finalists for our Most Innovative Emerging Company are SentinelOne, which combines behavioral-based inspection of endpoint system security processes with machine learning; Vectra, which offers real-time detection of in-progress cyber attacks and helps prioritize the attacks based on business priority; and ZeroFOX, which monitors social media to help protect against phishing attacks and account compromise.

And the winner is: Vectra. Here's what our judges wrote about Vectra: "It was a tough choice, but in the end, we selected Vectra because it addressed several of security professionals' most persistent challenges with solutions that were both inventive and practical. Infosec pros are inundated with alerts about threats. Whether those warnings come from media reports, newsletters, or one of many pieces of security technology, it's often hard to prioritize them. Maybe it was declared "critical," but is it critical to me? Maybe it was "medium," but is it critical to me? Infosec pros have attackers dwelling on their networks for many, many months, largely because security teams cannot quickly make sense of all this threat data. And infosec pros try to solve problems faster by adding new security technology that can sometimes put a huge strain on the network.

We chose Vectra as the winner because its solution helps prioritize threats for your organization specifically, can reduce attacker dwell time, and does so with a lightweight solution. Vectra's tool tunes into all of an organization's internal network communications and then, using a combination of machine learning, behavior analysis, and data science, will identify threats, correlate them to the targeted endpoint, provide context, and prioritize threats accordingly — as they relate to your organization. Vectra can detect things like internal reconnaissance, lateral movement, botnet monetization, data exfiltration, and other malicious or potentially malicious activities throughout the kill chain. Most importantly, Vectra's tool allows security teams to identify their most important assets, so that the tool will know to push even a gentle nudge at those systems to the top of the priority list. With just a glance at the simple, elegant visualization used by Vectra's threat certainty index, an infosec pro will know in moments what precise endpoint needs their attention first."

Most Innovative Thought Leader: Paul Vixie

The three finalists for our Most Innovative Thought Leader are Krishna Narayanaswamy, Chief Scientist and Co-Founder of Netskope, Inc., a top specialist in cloud security; Dr. Paul Vixie, Chairman, CEO, and Co-Founder of Farsight Security Inc., a leader in DNS and Internet security; and Jeff Williams, Chief Technology Officer and Co-Founder of Contrast Security, who focuses on application security.

And the winner is: Paul Vixie, Farsight Security. Here's what our judges wrote about Paul: "This was perhaps the most difficult choice we had to make in the awards, because all three of these individuals are thought leaders and difference-makers in their own fields of security. Each of them is a contributor not only to innovation in his own company, but to the industry at large. In the end, we chose Paul Vixie, at least in part, because he likes to work and research and innovate in areas where few others are working. The world of the Domain Name System often seems impenetrable even to security experts, yet it is an essential element of the global Internet and, potentially, a huge set of vulnerabilities that could affect everyone who works and plays online.

In the last year or so, Paul has taken some of the lessons he's learned about DNS and the way the internet works and built Farsight Security, which collects and processes more than 200,000 observations per second to help security operations centers and incident response teams more quickly identify threats. It works by analyzing DNS, which is a fundamental technology that the bad guys have to use, just as the good guys do. And while Farsight is not the only company working in the DNS security space, it has developed new methods of analyzing and processing the data so that enterprises can make better use of relevant information.

Paul doesn't stop with the work he is doing at his own company. As a longtime contributor to internet standards on DNS and related issues, he continues to participate in a variety of efforts, including source address validation; the OpSec Trust initiative, which is building a trusted, vetted security community for sharing information; and internet governance, including the controversial discussion around root name service. While all three of our finalists are deserving of special recognition, we feel that Paul Vixie's contributions to innovation at his company, to enterprise security, and to internet security worldwide earn him this award."

Our congratulations to all of this year's Dark Reading Best of Black Hat Awards winners!

Gartner’s top cybersecurity ‘macro trends’ for 2017

Paying the security tax. Answering to Dr. No. Submitting to the control centre. If you've ever been responsible for running IT security at a business, these will all sound familiar – too familiar. But there's another way to look at security, says Earl Perkins, a research vice-president in the Internet of Things group at Gartner. Presenting at the research firm's symposium in October, he spoke of cybersecurity trends to look out for in the year ahead. He also had some helpful advice on how to frame cybersecurity as a benefit to your organization rather than a hindrance.

"We've been playing a poker game for decades," Perkins says. "We've been betting just enough chips on security and now we're hoping the hand we hold will be enough to win." Rather than hope the next card off the top turns a weak hand into a flush, security chiefs should take heed of these seven trends and plan accordingly:

1. Seeking the balance of risk and resilience

As organizations have a growing need to move quickly and adopt new technology, security has to stop managing risk and start building resilience, Perkins says. It's not about abandoning risk management, but balancing it with the business's need to create value. "Security doesn't have to be a Dr. No kind of thing," Perkins says. Rethinking security's approach in this way will require defining a new mission. You'll also have to develop a new risk formula capable of handling new variables and factors. Then communicate this new approach and mission to employees. Soon enough, you'll be seen in a different light.

2. Security disciplines converge while skills expand

A modern model for IT security takes into account new areas like operational technology and physical security. (Image courtesy of Gartner.)

The definition of cybersecurity is expanding, and chief security officers may find their job requirements are creeping up as a result. In addition to the legacy IT systems to protect, more operational technology (OT) is seeing IT systems embedded as part of the Internet of Things trend. Similarly, physical security systems such as video surveillance are connected and rely on IT systems. And Perkins has bad news for CSOs: "If it fails, it's already your fault."

You'll have to assess what new skill sets are needed on your security team to meet all these new demands. They'll likely include roles responsible for identity management, embedded security, and cyber-physical security automation. Don't hesitate to invest in training for your current team, or even build up security skills development within your company's lines of business. Know where the gaps are and how you plan to fill them – eventually.

3. Secure digital supply chain needs grow

Just because software as a service is now off-loading some application delivery on the IT department's behalf, that doesn't mean the job of the chief security officer is also done. Rather, a confusing mish-mash of considerations must be made about how to handle a user and the device before and after accessing these new cloud services. Once cloud apps start integrating with internal systems, it really gets interesting.

Managing security around cloud software has become a confusing matter. (Image courtesy of Gartner.)

The response to this problem so far has been developing management consoles that are multi-cloud and multifunction, Perkins says. As those consoles evolve, they will also help manage security based on a user's need and priority standing. "I want you to implement and enforce different types of policies based on use," Perkins says. CSOs should also have an enterprise-wide public cloud strategy, implement solutions that solve cloud complexity, and have a governance approach that matches the cloud life cycle.

4. Adaptive security architecture embraced

"Our hope is you'll reach a point where you create a security architecture where you prevent everything that you could reasonably be expected to prevent," Perkins says. After that, you'll need to respond to the incidents you missed in an effective way and catch the others you'll never detect with predictive security. "Detection and response is a lot like going to the barn and seeing the door open and realizing the horse has escaped," he says. "Predictive would allow us to know the horse is acting kind of funny and we need to be ready."

The technical version of keeping the horse in the barn involves a commitment to software-defined architectures, dividing a control plane of applications and APIs from your data plane. Your security team should be preventing attacks by isolating systems in this way, and when an incident is detected, the risk needs to be confirmed. From a budget point of view, shift spending from prevention to detection and response, as well as predictive capabilities. From a conceptual point of view, operate like a security operations centre that is in continuous response mode.

5. Security infrastructure adapts

The number of code libraries being used by your organization is only growing, and they are all aging. Security checks need to be run on these code sets often, not just when they are deployed, so security application testing has to be embedded into the lifecycle of these repositories (see the short dependency-check sketch after this article).

As organizations create a pervasive digital presence through always-connected devices, sensors, actuators, and other IoT gear, network security concerns will grow. "Wi-Fi is not the answer to doing the Internet of Things," Perkins says. While your gateways will still talk with IP and Wi-Fi devices, there will be strange new elements more familiar to those with OT (operational technology) skill sets. Make sure to talk with those experienced with OT in your organization. Many organizations will want to invest in discovery solutions just to find the IoT devices within their organization. Also key to managing network security will be setting up segmented network portions and designating trust zones.

6. Data security governance and flow arrives

"You're going to have introduced to you different kinds of data flows," Perkins says. "Some of it will look familiar and some won't look familiar at all." To continue to ensure that you can properly audit and protect your data, you'll have to profile it by its flow type. To start with – is it structured, semi-structured, or unstructured data? In line with your software-defined strategy, create a boundary between your data and its destinations. CSOs will want to incorporate big data plans into their security strategies to keep pace. Priorities should be placed on organization-wide data security governance and policy.

7. Digital business drives digital security

Thanks to IoT, "there is a pervasive digital presence," Perkins says. "Once you network this presence, it substantively alters the risk for your business." Digital security is the next wave in cybersecurity, and it involves getting a grip on this pervasive presence.
Risks include espionage and fraud, sabotage of automated devices, device impersonation and counterfeiting, and beyond.
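Relating to trend 5 above, here is a minimal sketch of running a recurring dependency check rather than a one-time, deploy-only scan. It shells out to the open-source pip-audit tool (assumed to be installed); the requirements-file paths are placeholders for your own repositories, and other ecosystems would swap in their own audit tooling.

```python
# Minimal sketch: re-check third-party dependencies on a schedule, not only at
# deploy time. Assumes the open-source "pip-audit" tool is installed; the
# requirements-file paths below are placeholders.
import subprocess
import sys

REQUIREMENTS_FILES = ["app/requirements.txt", "etl/requirements.txt"]  # placeholders

def audit(requirements_file: str) -> bool:
    """Return True if the dependency set has no known vulnerabilities."""
    result = subprocess.run(
        ["pip-audit", "-r", requirements_file],
        capture_output=True,
        text=True,
    )
    print(result.stdout)
    return result.returncode == 0

if __name__ == "__main__":
    results = [audit(path) for path in REQUIREMENTS_FILES]
    sys.exit(0 if all(results) else 1)  # fail the scheduled job when a library needs attention
```

Run under a scheduler or CI pipeline, a check like this keeps aging libraries on the radar between releases, which is the point of embedding testing into the repository lifecycle.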

SECURITY PARTNER SHOWCASES

Fortinet

Gemalto

KnowBe4

Vectra

REAL-TIME THREAT MAP

Knowledge of the threat landscape, combined with the ability to respond quickly at multiple levels, is the foundation of effective security. Since 2000, FortiGuard Labs has provided in-house, industry-leading security research, with over 200 zero-day virus discoveries, powering Fortinet's platform and suite of services.

Proactive Protection: FortiGuard takes information from global sources through its Security Services, using analytics and machine learning to turn big data into near real-time updates for Fortinet appliances, assuring some of the fastest response times in the industry to new vulnerabilities, attacks, viruses, botnets, and zero-day exploits.