
XIOLOGIX SECURES YOUR DATA AND INFORMATION

Today’s maze of security issues and technologies can be overwhelming. Enterprise security is a critical business requirement that needs to tightly integrate the right people, processes, and technology. Partnering with Xiologix gives you a trusted guide through the latest security challenges and proactive solutions.

Security – General

Today’s maze of security issues and technologies can be overwhelming. Enterprise security is a critical business requirement that needs to tightly integrate the right people, processes, and technology.

Fortinet Threat Landscape Report Q1 2017

The first few months of a new year are always a time of reflection, resolutions, and predictions. The cyber security industry is no exception. How will 2017 be remembered? What new threats will we face? What lessons will we learn? What improvements will we make?

Fortinet Security Fabric

The emerging digital economy drives business value by leveraging technology to connect users, devices, data, goods, and services. To compete successfully, organizations are required to adopt new models of connectivity and data sharing, including public and private clouds and the Internet of Things (IoT). These new approaches enable organizations to be more agile and more responsive to customer needs and market demands, to enhance competitive differentiation, and to expand their global market footprint.

Ransomware and the Need for Multi-Layer Security (Xiologix White Paper)

The sophistication of today’s malware is such that it is no longer sufficient to simply install anti-virus software on your systems and think you’re protected. A multi-layered approach to security is needed, and even the best case can’t give you a 100% guarantee against infection.

FortiSIEM

Security is no longer just about protecting information; it is critical to maintaining trust with customers and protecting the organization’s brand and reputation.

Fortinet Test Your Metal – NGFW, UTM, Web Security, & Endpoint Security

Attackers get past security measures by hiding malware deep within compressed files. Unfortunately, most network security solutions are regularly fooled by this technique because they can’t analyze a file compressed with any format other than ZIP. Beyond ZIP, there are a number of legitimate compression formats commonly used and easily opened by typical end users on most operating systems, such as:

• TAR.GZ – the compression format that dominates the world of Linux
• 7Z – a fast compression format growing in popularity
• CAB – a standard Windows installer package compression format

This is a simple test to see if your network security will catch malware hiding in a compressed file. Two files are put into a folder; one file is EICAR (a standard anti-virus test file) and the second is a screenshot of a website taken in the last 5 minutes. Then the test compresses this folder into a file using different types and multiple levels of compression to obscure the contents. Finally, you use this tool to send these files to yourself so you can see how well your security identifies the EICAR code within the different types of files.

http://metal.fortiguard.com
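The nesting trick the test relies on can be sketched in a few lines of Python. This is an illustrative stand-in only: it uses a harmless placeholder payload rather than the actual EICAR string that the real Test Your Metal service sends, and the file names are arbitrary.

```python
import io
import tarfile
import zipfile

# Placeholder payload standing in for the EICAR test string; the real
# test uses the actual EICAR anti-virus test file.
PAYLOAD = b"stand-in for the EICAR anti-virus test file"

def build_zip(data: bytes, name: str = "sample.txt") -> bytes:
    """Compress data into an in-memory ZIP archive."""
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as zf:
        zf.writestr(name, data)
    return buf.getvalue()

def build_tar_gz(data: bytes, name: str = "sample.txt") -> bytes:
    """Compress data into an in-memory TAR.GZ archive."""
    buf = io.BytesIO()
    with tarfile.open(fileobj=buf, mode="w:gz") as tf:
        info = tarfile.TarInfo(name)
        info.size = len(data)
        tf.addfile(info, io.BytesIO(data))
    return buf.getvalue()

# Nest the archives: a ZIP hidden inside a TAR.GZ, the kind of layered
# compression that defeats scanners which only unpack ZIP.
inner = build_zip(PAYLOAD)
nested = build_tar_gz(inner, name="inner.zip")
print(f"payload: {len(PAYLOAD)} bytes, nested archive: {len(nested)} bytes")
```

Sending a file like `nested` through your mail gateway shows immediately whether your inspection stack unpacks non-ZIP formats or waves the layered archive through.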


5 steps to stronger data security

Nearly everything we do in computer security is meant to protect data. After all, we don’t deploy antimalware software, tighten security configurations, or implement firewalls to protect users, per se. Job No. 1 is to protect the organization’s data — including employee and (especially) customer data.

But guess what? People need to work with that data or you wouldn’t store it in the first place — which is why most data security measures focus on ensuring only trusted, authorized parties get access to it. Follow these five recommendations and your mission-critical data will be well protected.

1. Identify the crown jewels

First, you need to identify your most precious data. The hard part is finding it. As one CIO told me years ago, “If you think you know where all your data is, you’re kidding yourself.”

Precious data is stored in databases, application data repositories, and now the cloud, as well as on backup media and removable media. Precious data also includes the critical subsystems that support delivering and securing actual data, including Active Directory domain controllers, credential databases, DNS, DHCP, network routers, and other services, all of which have their own security defenses.

All data should be categorized for its business value and sensitivity, so keep your crown jewels to the smallest size possible. Store the least amount of data you can, because nothing is as secure as data you didn’t store in the first place. All data should have an owner, to whom all questions about its condition, treatment, and validity can be addressed. All data should be marked with a useful life, and at the end of that useful life, it should be disposed of.

2. Clean up credentials

Practice good credential hygiene — that is, clean up your privileged account memberships, with the goal of minimizing permanent membership to zero or near zero. Administrative duties should be performed with the least amount of permissions and privileges necessary (sometimes called “just enough” permissions). Any permissions or privileges should be given only when needed, and only for the time actually needed (called “just in time” permissions).

Every organization should start by reviewing permanent memberships in each privileged group and removing members who do not need permanent, full-time access. If done with the appropriate rigor and analysis, this usually results in less than a handful of permanent members. In the best cases, only one or zero permanent members remain. The majority of admins should be assigned elevated permissions or privileges on a limited basis. Often this is done by having the admin “check out” the elevated credentials, with a preset expiration period.

Credential hygiene is essential to strong data security, because attackers often, if not nearly always, seek to compromise privileged accounts to gain access to confidential data. Minimizing permanent privileged accounts reduces the risk that one of those accounts will be compromised and used maliciously.

3. Set strict internal security boundaries

Long gone are the days when a network boundary firewall could be seen as sufficient security. The chewy inside of most corporate networks must be divided into separate, isolated security boundaries, which only predefined accounts can access. Strict internal security boundaries can be created by host-based firewalls, internal routers, VLANs, logical networks, VPNs, IPsec, and a myriad of other access control methodologies.

For example, although a large majority of users may be able to access the Web front end of a multitier application, very few people should be able to directly access the back-end database. Perhaps only assigned database admins and a few supporting servers and users should be able to access the database server, along with the front-end Web tier and any middle-tier services. That way, if attackers try to access the database directly without the necessary credentials, they can be prevented from doing so, or at least an auditing alert can be initiated.

4. Ensure encryption moves with the data

Traditional security defense touts two types of encryption: encryption for data in transport and encryption for data at rest. But this assumes the bad guys haven’t already stolen legitimate credentials to access the data in question, which is often the case.

If you want solid data protection, make sure your encrypted data remains encrypted no matter where it is — and especially if it is moved to illegitimate locations. Nothing is more frustrating to the data thief. Many solutions encrypt individual data components and keep them encrypted no matter where they move. Some are application services, like Microsoft’s Active Directory Rights Management Services, while others encrypt the data right within the database, such as Microsoft’s SQL Server Transparent Data Encryption. That’s the smart way to encrypt data: if someone steals it, it remains encrypted and useless.

5. Protect the client

Hackers rarely break into servers directly. It still happens — SQL injection attacks and remote buffer overflows, for example — but client-side attacks are far more common. If you want to protect your data, make sure you protect the people who access the data. This means that all critical patches are applied within a week or two, users are educated on social engineering, and workstations are securely configured.
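The credential “check out” pattern from step 2 can be sketched in a few lines. This is a hypothetical illustration, not the API of any real privileged-access-management product; the class and field names are invented for the example.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone

# Sketch of just-in-time elevation: membership in a privileged group is
# granted on request with a preset expiration, so permanent membership
# stays at or near zero.

@dataclass
class Checkout:
    admin: str
    group: str
    expires_at: datetime

@dataclass
class PrivilegedGroup:
    name: str
    permanent_members: set = field(default_factory=set)  # goal: empty
    checkouts: list = field(default_factory=list)

    def check_out(self, admin: str, minutes: int = 60) -> Checkout:
        """Grant time-boxed membership with a preset expiration period."""
        co = Checkout(admin, self.name,
                      datetime.now(timezone.utc) + timedelta(minutes=minutes))
        self.checkouts.append(co)
        return co

    def active_members(self) -> set:
        """Permanent members plus any unexpired checkouts."""
        now = datetime.now(timezone.utc)
        return self.permanent_members | {
            c.admin for c in self.checkouts if c.expires_at > now
        }

domain_admins = PrivilegedGroup("Domain Admins")
domain_admins.check_out("alice", minutes=30)   # just-in-time elevation
print(domain_admins.active_members())          # {'alice'} until expiry
```

The point of the sketch is the audit surface: at any moment, `active_members()` is the complete answer to “who is privileged right now,” and it shrinks on its own as checkouts expire.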

Zero-days aren’t the problem — patches are

There’s a widely held view that our world is full of uber hackers who are too brilliant to stop. Thus, we should fear zero-day attacks because they’re our biggest problem. Nothing could be further from the truth. Most hackers follow the path created by a very few smart ones, and zero-days make up a very small percentage of attacks. It turns out that patching vulnerable software, if implemented consistently, would stop most hackers cold and significantly reduce risk.

Fear of the zero-day exploit

Zero-days, where an attacker exploits a previously unknown vulnerability to attack a customer, aren’t even the majority of bugs found. According to the most recent Microsoft Security Intelligence Report, around 6,300 unique vulnerabilities appeared in 2015. Symantec says that only 54 of them were classified as zero-days, a little less than 1 percent. If you tracked total attacks from all exploited vulnerabilities, I’m absolutely positive the zero-day share would be orders of magnitude less.

Most zero-days aren’t used against many people, because as soon as they pop up with any frequency, they get “discovered,” reported to the software vendor, and added to antimalware updates. A major undiscovered zero-day is often worth tens of thousands of dollars — sometimes more than $100,000. Once it’s discovered, the value may drop to nothing. In other words, if a hacker uses a zero-day too much, it won’t stay a zero-day for long. Hackers need to be “slow and low” with zero-days, and even then, they know the vulnerability will be discovered and patched soon enough.

Zero-day? How about 365-days?

Most exploits involve vulnerabilities that were patched more than a year ago. So why does it take so many people so long to apply the patch? Every patch management guide recommends that critical patches be applied within one week of their release. Overall, you’ll be fine if you patch in the first month. Statistics show that the vast majority of organizations that suffer exploits are those that don’t patch in the first year — or never patch at all.

Microsoft has long written about how most of its customers are exploited by vulnerabilities that were patched years ago. Microsoft’s Security Intelligence Report lists the most popular exploits; you’ll be hard-pressed to find an exploit discovered as recently as 2015 on that list. Most successful exploits are old. This year, most exploits date back to 2010 through 2012 — and that’s not only a Microsoft software issue. The Verizon Data Breach Report 2016 revealed that out of all detected exploits, most came from vulnerabilities dating to 2007. Next was 2011. Vulnerabilities dating to 2003 still account for a large portion of hacks of Microsoft software. We’re not talking about being a little late with patching. We’re talking about persistent neglect.

Why people don’t patch quickly

Most operating systems and applications come with self-patching mechanisms that will do their job if you let them. So why do so many people fail to patch? I think it comes down to a few factors. First, a lot of people — mostly home users — ignore all those update warnings. Some simply don’t want to take the time to patch and keep putting it off. Others are probably unsure whether the patch update notification message is real. How are they supposed to tell the difference between a fake patch warning and a legitimate one? They chicken out and don’t patch.

Another huge component of unapplied patches stems from unlicensed software. Tens of millions of people use software illegally, and many fear that the latest patch will catch the unlicensed software and disable it. This is the reason why, years ago, Microsoft decided not to require a valid license in order to patch an operating system.

Yet another cause: a lot of computers are set up for computer neophytes by friends or hired professionals who never return — and the neophyte doesn’t know enough to do anything. Very likely the vast majority of mom-and-pop computer stores sell computers that will never be patched during their useful lifetimes.

Lastly, I’m sure some computers aren’t patched because the owners or users make the explicit decision not to patch. Most companies I’ve consulted for employ software programs they feel can’t be patched due to operational concerns. This article includes an interview that reveals the average organization takes 18 months to patch some critical vulnerabilities. I know many companies where that time lag stretches to many, many years.

Focused patching

The conventional wisdom is that all critical patches should be applied as soon as reasonably possible. Most guides say within one week, but I think anything within one month is acceptable. If you have limited resources (who doesn’t?), then at least concentrate on patching the applications with the most exploits successfully used against the computers you manage. The Verizon Data Breach Report says that 85 percent of successful hacks used the top 10 exploits. The other 15 percent were caused by more than 900 different exploits. Patch a few critical programs with known vulnerabilities, and you’ll eliminate most of the risk.

Patching is easier — so what’s your excuse?

In the past, patching took a long time. Vendors might take weeks, months, or even years to create a patch for a public vulnerability, and customers might take months to apply it. Back in 2003, when the SQL Slammer worm infected almost every unpatched SQL Server instance on the Internet — more than 75,000 instances in less than 10 minutes! — the Microsoft patch that closed the vulnerability had been available for almost six months.

Kudos to Google for accelerating its patching schedule, to the point where Google software vulnerabilities take a day or less to be patched. Yet even Google faces a significant percentage of users who either take forever to patch or never patch. The cloud is fixing that problem: the provider patches the application, and everyone who uses it is immediately patched — no stragglers. Microsoft was recently notified of a critical exploit in Office 365 and patched it within seven hours. Imagine: everyone protected quicker than they could read about it. That’s a huge positive for cloud computing.

Meanwhile, however, most of the software you use remains installed on your own servers or clients. Patching demands vigilance, but patching a few applications can reduce most of your risk. You don’t always need to patch in the first day or week. But don’t take years.
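The focused-patching triage described above can be sketched with stdlib Python. The host names, patch IDs, and inventory shape are invented for the example; the one-week and one-month thresholds come from the article.

```python
from datetime import date

# "Apply critical patches within one week" is the target; "you'll be fine
# if you patch in the first month" is the outer bound.
WARN_DAYS = 7
FAIL_DAYS = 30

# Hypothetical pending-patch inventory: (host, patch id, vendor release date)
pending_patches = [
    ("web01", "MS17-010", date(2017, 3, 14)),
    ("db02",  "MS12-020", date(2012, 3, 13)),   # years overdue
]

def triage(patches, today):
    """Bucket pending patches by how far past the release date they are."""
    report = {}
    for host, patch_id, released in patches:
        age = (today - released).days
        if age > FAIL_DAYS:
            status = "OVERDUE"
        elif age > WARN_DAYS:
            status = "WARN"
        else:
            status = "OK"
        report[(host, patch_id)] = (age, status)
    return report

for (host, pid), (age, status) in triage(pending_patches, date(2017, 4, 1)).items():
    print(f"{host} {pid}: {age} days since release -> {status}")
```

Sorting the OVERDUE bucket by age, largest first, gives exactly the “persistent neglect” list worth attacking before anything else.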

Steps to take control

The ongoing Apple versus the FBI debate has me thinking more about the implications of encryption. Whether or not national governments around the globe choose to go down the path of further regulating encryption key lengths, requiring backdoors to encryption algorithms, mandating key escrow for law enforcement purposes, or generally weakening the implementations of encrypted communications and data storage in consumer technologies, the use of encryption will increase – and in parallel, network visibility of threats will decrease.

While there are a handful of techniques available to enterprise network operators that allow inspection of encrypted flows, for all practical purposes they are of dwindling appeal and practicality. Host-based agents – while providing visibility of pre- and post-encryption communications – are easily bypassed by those with malicious or criminal intent. Meanwhile, in a world of increasingly diverse computing and mobile platforms, covering the expanding array of devices and operating systems makes them likewise increasingly impractical to deploy.

Like a final front-line push prior to a cease-fire deadline, SSL terminator (or accelerator) technologies are being promoted as a solution. Hopes are certainly high that such technologies can “man-in-the-middle” SSL Internet-bound communications and provide the levels of deep packet inspection of an earlier age. However, the reality is that not only is enterprise-wide deployment and management of such devices (including the addition of appropriate certificate authority credentials on the client systems and devices) increasingly difficult, but the scope of what can be detected and mitigated through such inspection is rapidly decreasing.

Social media sites and online search engines have led the way in moving from SSL-by-default to wholly encrypted communications. We can expect those sites to get smarter at detecting that an SSL man-in-the-middle is being used to intercept traffic, and the adoption of various SSL certificate key pinning standards will likely prevail for all other Internet services as well. Today, many organizations hope that evil hackers and malware using SSL as a means to control compromised systems and as an evasion aid will stand out amongst the authorized (unencrypted) traffic of a closely monitored corporate network. It’s an obviously flawed plan, but some true believers still feel it may be a viable technique for a few remaining years.

How CxOs can better prepare

Smart CxOs should be planning for the day when all of their network traffic is encrypted and deep packet inspection is no longer possible. In many networks, half of all Internet-bound traffic is already encrypted (mostly HTTPS), and it’s likely more than three-quarters of network traffic will be encrypted within the next couple of years. With this increase, the prospect of inspecting the content layer of traffic will have mostly disappeared.

While the loss of content-level inspection will have a measurable effect on the network security technologies we’ve depended upon for the last decade or two (e.g. IDS, DLP, ADS, WAF, etc.), security teams will not be blind. While threats have advanced and encryption has covered the landscape like a perpetual pea-soup fog, there remain plenty of “signals” in the transport of encrypted data and the packets traversing the corporate network. Just as law enforcement agencies around the globe consistently use the “metadata” associated with cellular traffic logs (e.g. the from, to, time, and duration of a call) to identify and track threats without being able to listen to the actual conversation, a similar story can be told for network traffic – without relying on the content of the messages (which we can presume will be encrypted now and even more so in the future).

A new generation of network-based detection technologies – using machine learning and traffic modeling intelligence – has entered the security market. These new technologies are proving to be just as accurate (if not more accurate) than the legacy detection technologies that required deep packet inspection to identify threats and determine response prioritization.

Network security architects – and those charged with protecting their Internet-connected systems – need to reassess their defensive strategies in the wake of increased encrypted communications traffic. A smart approach would be to architect defenses with the assumption that all traffic will soon be encrypted. By all means, continue to hope that some level of content-layer inspection will be available for critical business data handling, but plan for that to be an edge case.
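The metadata-only “signal” described above can be illustrated with a toy beacon detector: even with payloads encrypted, highly regular connection intervals to a single destination are a classic command-and-control tell. The flow records and jitter threshold below are invented for the sketch; a production system would model many more transport-layer features.

```python
from collections import defaultdict
from statistics import pstdev

# Synthetic flow log: (src, dst, start time in seconds). One host checks
# in with a destination every 5 minutes; the other browses irregularly.
flows = [
    ("10.0.0.5", "203.0.113.9", t) for t in range(0, 3600, 300)
] + [
    ("10.0.0.7", "198.51.100.4", t) for t in (12, 950, 1300, 3100)
]

def beacon_candidates(flows, jitter_limit=5.0):
    """Flag (src, dst) pairs whose inter-flow intervals are near-constant."""
    by_pair = defaultdict(list)
    for src, dst, start in flows:
        by_pair[(src, dst)].append(start)
    hits = []
    for pair, starts in by_pair.items():
        starts.sort()
        gaps = [b - a for a, b in zip(starts, starts[1:])]
        # Need several samples, and almost no variation between gaps
        if len(gaps) >= 3 and pstdev(gaps) <= jitter_limit:
            hits.append(pair)
    return hits

print(beacon_candidates(flows))  # the 5-minute beacon stands out
```

Note that nothing here reads a single payload byte: the from, to, and timing of the flows are enough to rank which conversations deserve a closer look.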

Setting the Record Straight on Cyber Threat Intelligence

Threat intelligence has achieved buzzword status. The good news is that people are talking about it – it is a critical component of a cyber risk management program. The bad news is that too many folks have distorted and confused the term, so much so that its meaning varies widely depending on whom you’re speaking with. And that fact is taking away from the real value of legitimate cyber threat intelligence.

A perfect example is an article I read recently that stated, “60% of organizations have had a threat intelligence program in place for more than 2 years.” It’s important to understand how “threat intelligence” is defined in this setting, because there’s simply no way that a majority of organizations have a “threat intelligence program” established, let alone for the last 2 years. Let’s look at some of the more common definitions of threat intelligence:

• Gartner defines threat intelligence as: “Evidence-based knowledge, including context, mechanisms, indicators, implications and actionable advice, about an existing or emerging menace or hazard to assets that can be used to inform decisions regarding the subject’s response to that menace or hazard.”

• Forrester defines it as: “The details of the motivations, intent, and capabilities of internal and external threat actors. Threat intelligence includes specifics on the tactics, techniques, and procedures of these adversaries. Threat intelligence’s primary purpose is to inform business decisions regarding the risks and implications associated with threats.”

• INSA says that “Threat intelligence is an analytic discipline relying on information collected from traditional intelligence sources intended to inform decision makers on issues pertaining to operations at all levels in the cyber domain. Relevant data to be analyzed may be about network data, ongoing cyber activity throughout the world or potentially relevant geopolitical events. What matters is that it is timely, actionable, and relevant, helping to reduce uncertainty for decision makers. The origin of the data or information is not important. When analyzed and placed in context, information becomes intelligence; and it is intelligence that reduces uncertainty and enables more timely, relevant and cost-effective policy, as well as high-quality operational and investment decisions.”

The trap many vendors and cybersecurity professionals unknowingly fall into is treating information and intelligence as one and the same – they are not. There is more information out there than anyone can possibly distill, analyze and use to quickly make sound decisions. Information is:

• Unfiltered and unevaluated
• Widely available
• Accurate, false, misleading, and/or incomplete
• Relevant or irrelevant

Information overload can kill your intelligence efforts, because too much information is just a lot of output that requires a lot of time, money and staff. How much can you accurately automate? How large a staff of qualified analysts can you afford to review everything that isn’t automatically filtered out? I like to think of intelligence as driving outcomes as opposed to outputs. Think of it as information that can be acted upon to change outcomes for the better. Intelligence is:

• Organized, evaluated and interpreted by experts
• Available from reliable sources and checked for accuracy
• Accurate, timely, relevant and complete
• Aligned with your business

The world of cyber is infinite, and with that comes many unknowns. Intelligence enables you to reduce your risk by moving from ‘unknown unknowns’ to ‘known unknowns’ through discovering the existence of threats, and then shifting ‘known unknowns’ to ‘known knowns’, where the threat is well understood and mitigated.

How To Measure Threat Intelligence

The KISS method is a good way to start. Good business managers run their business on a foundation of evaluated intelligence, or ‘known knowns’ – essentially the things you know with a level of certainty. The goal is to consistently look at the unknown and determine how to turn uncertainty into more certainty. What are the characteristics that make up your business? What are the corresponding risks? Who are the actors operating in your industry, and which tactics, techniques and procedures do they favor? What has been their target commodity? What organizations have they targeted? What was the outcome of those efforts?

Pull in data on who you are as a company, such as your products, employees, software and hardware, geographical locations, industry sector, the data you store and transact, and much more. Overlay this company data and compare your business traits against the cyber threats on the horizon. Now you can understand your business risk exposures based on the cyber threats relevant to you.

Analysis is another critical differentiator between information and intelligence. When you establish an intelligence program, you are establishing a capability, not just deploying a tool. Automation can play a role, but “all operations in cyber space begin with a human being” (INSA), and threat actors and adversaries are people; they have desires, motivations, and intent.

So What IS Cyber Threat Intelligence?

At the end of the day, cyber threat intelligence should focus your organization on making better decisions and taking the right actions. Every organization uses intelligence already, in the form of business intelligence that evaluates information on financials, customers, logistics, products, and any other areas where the business needs to make decisions and take action. The need for cyber threat intelligence is no different, as every organization relies on technology to deliver its products and services to the end user, and those cyber risks need to be evaluated. As I wrote recently on how cyber threat intelligence helps the business, intelligence should give decision makers the insight to understand whether they are well positioned for cyber threats – and if not, why not.
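The information-versus-intelligence distinction above can be made concrete with a small filter: raw indicator feeds are evaluated against a profile of who you are, and only reliable, relevant items reach an analyst. The field names, scores, and thresholds are illustrative, not drawn from any real feed format.

```python
# Hypothetical business profile: the traits the article says to "pull in"
company_profile = {
    "industry": "healthcare",
    "platforms": {"windows", "vmware"},
    "regions": {"us"},
}

# Hypothetical raw feed items: unfiltered, unevaluated information
raw_feed = [
    {"indicator": "evil.example", "industries": {"healthcare"},
     "platforms": {"windows"}, "source_reliability": 0.9},
    {"indicator": "noise.example", "industries": {"gaming"},
     "platforms": {"android"}, "source_reliability": 0.4},
]

def to_intelligence(feed, profile, min_reliability=0.7):
    """Keep only evaluated, relevant items: a reliable source AND an
    overlap with the business profile (industry or platform match)."""
    out = []
    for item in feed:
        relevant = (profile["industry"] in item["industries"]
                    or profile["platforms"] & item["platforms"])
        if relevant and item["source_reliability"] >= min_reliability:
            out.append(item["indicator"])
    return out

print(to_intelligence(raw_feed, company_profile))  # ['evil.example']
```

The filter is the easy part; the hard part the article stresses is the analysis behind the `source_reliability` score and the profile, which is a human capability, not a tool.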

Micro-segmentation Defined – NSX Securing “Anywhere”

The landscape of the modern data center is rapidly evolving. The migration from physical to virtualized workloads, move towards software-defined data centers, advent of a multi-cloud landscape, proliferation of mobile devices accessing the corporate data center, and adoption of new architectural and deployment models such as microservices and containers has assured the only constant in modern data center evolution is the quest for higher levels of agility and service efficiency. This march forward is not without peril as security often ends up being an afterthought. The operational dexterity achieved through the ability to rapidly deploy new applications overtakes the ability of traditional networking and security controls to maintain an acceptable security posture for those application workloads. That is in addition to a fundamental problem of traditionally structured security not working adequately in more conventional and static data centers.Without a flexible approach to risk management, which adapts to the onset of new technology paradigms, security silos using disparate approaches are created. These silos act as control islands, making it difficult to apply risk-focused predictability into your corporate security posture, causing unforeseen risks to be realized. These actualized risks cause an organization’s attack surface to grow as the adoption of new compute technology increases, causing susceptibility to increasing advanced threat actors.A foundational aspect of solving this problem is the ability to implement micro-segmentation anywhere. NSX is a networking and security platform able to deliver micro-segmentation across all the evolving components comprising the modern datacenter. NSX based micro-segmentation enables you to increase the agility and efficiency of your data center while maintaining an acceptable security posture. 
The following blog series will define the necessary characteristics of micro-segmentation as needed to provide effective security controls within the modern data center and demonstrate how NSX goes beyond the automation of legacy security paradigms in enabling security through micro-segmentation. Acceptable Security in the Modern Data CenterIt is no longer acceptable to utilize the traditional approach to data-center network security built around a very strong perimeter defense but virtually no protection inside the perimeter. This model offers very little protection against the most common and costly attacks occurring against organizations today, which include attack vectors originating within the perimeter. These attacks infiltrate your perimeter, learn your internal infrastructure, and laterally spread through your data center.The ideal solution to complete datacenter protection is to protect every traffic flow inside the data center with a firewall and only allow the flows required for applications to function.  This is also known as the Zero Trust model.  Achieving this level of protection and granularity with a traditional firewall is operationally unfeasible and cost prohibitive, as it would require traffic to be hair-pinned to a central firewall and virtual machines to be placed on individual VLANs (also known as pools of security).A typical 1 Rack-Unit top-of-rack data center switch performs at approximately 2Tbps while the most advanced physical firewall performs at 200Gbps in 19 Rack-Unit physical appliances, providing 10% the usable bandwidth. Imagine the network resource utilization bottlenecks created by having to send all east-to-west communication from every VM to every other VM through a physical firewall and how quickly you would run out of available VLANs (limited to 4096) to segment workloads into application-centric pools of security. 
This is a fundamental architectural constraint created by traditional security architecture that hampers the ability to maintain an adequate security posture within a modern datacenter.Defining Micro-segmentation Micro-segmentation decreases the level of risk and increases the security posture of the modern data center. So what exactly defines micro-segmentation? For a solution to provide micro-segmentation requires a combination of the following capabilities, enabling the ability to achieve the below-noted outcomes.Distributed stateful firewalling for topology agnostic segmentation – Reducing the attack surface within the data center perimeter through distributed stateful firewalling and ALGs (Application Level Gateway) on a per-workload granularity regardless of the underlying L2 network topology (i.e. possible on either logical network overlays or underlying VLANs).Centralized ubiquitous policy control of distributed services – Enabling the ability to programmatically create and provision security policy through a RESTful API or integrated cloud management platform (CMP).Granular unit-level controls implemented by high-level policy objects – Enabling the ability to utilize security groups for object-based policy application, creating granular application level controls not dependent on network constructs (i.e. security groups can use dynamic constructs such as OS type, VM name or static constructs such active directory groups, logical switches, VMs, port groups IPsets, etc.). Each applcation can now have its own security perimeter without relying on VLANs . 
See the DFW Policy Rules Whitepaper for more information.Network overlay based isolation and segmentation – Logical Network overlay-based isolation and segmentation that can span across racks or data centers regardless of the underlying network hardware, enabling centrally managed multi-datacenter security policy with up to 16 million overlay-based segments per fabric.Policy-driven unit-level service insertion and traffic steering – Enabling Integration with 3rd party solutions for advanced IDS/IPS and guest introspection capabilities.Alignment with emerging Cybersecurity StandardsNational Institute of Standards and Technology (NIST) is the US federal technology agency that works with industry to develop and apply technology, measurements, and standards. NIST is working with standards bodies globally in driving forward the creation of international cybersecurity standards. NIST recently published NIST Special Publication 800-125B, “Secure Virtual Network Configuration for Virtual Machine (VM) Protection” to provide recommendations for securing virtualized workloads. The capabilities of micro-segmentation provided by NSX map directly to the recommendations made by NIST.Section 4.4 of NIST 800-125b makes four recommendations for protecting virtual machine workloads within modern data center architecture. These recommendations are as followsVM-FW-R1: In virtualized environments with VMs running delay-sensitive applications, virtual firewalls should be deployed for traffic flow control instead of physical firewalls, because in the latter case, there is latency involved in routing the virtual network traffic outside the virtualized host and back into the virtual network. 
VM-FW-R2: In virtualized environments with VMs running I/O intensive applications, kernel-based virtual firewalls should be deployed instead of subnet-level virtual firewalls, since kernel-based virtual firewalls perform packet processing in the kernel of the hypervisor at native hardware speeds.

VM-FW-R3: For both subnet-level and kernel-based virtual firewalls, it is preferable if the firewall is integrated with a virtualization management platform rather than being accessible only through a standalone console. The former will enable easier provisioning of uniform firewall rules to multiple firewall instances, thus reducing the chances of configuration errors.

VM-FW-R4: For both subnet-level and kernel-based virtual firewalls, it is preferable that the firewall supports rules using higher-level components or abstractions (e.g., security groups) in addition to the basic 5-tuple (source/destination IP address, source/destination ports, protocol).

NSX-based micro-segmentation meets the NIST VM-FW-R1, VM-FW-R2, and VM-FW-R3 recommendations by providing the ability to utilize network virtualization-based overlays for isolation, and distributed kernel-based firewalling for segmentation, through ubiquitous, centrally managed policy control that can be fully API-driven. Micro-segmentation through NSX also meets the NIST VM-FW-R4 recommendation to utilize higher-level components or abstractions (e.g., security groups) in addition to the basic 5-tuple for firewalling.
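The distinction VM-FW-R4 draws between basic 5-tuple rules and higher-level abstractions can be sketched in a few lines of Python. This is an illustrative model only (not NSX code); the group names and matching logic are assumptions for demonstration:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Flow:
    src_ip: str
    dst_ip: str
    src_port: int
    dst_port: int
    protocol: str

# Illustrative group membership; a real platform would resolve this
# dynamically (OS type, VM name, AD group, logical switch, etc.).
SECURITY_GROUPS = {
    "web-tier": {"10.0.1.10", "10.0.1.11"},
    "db-tier": {"10.0.2.20"},
}

def match_5tuple(flow, rule):
    """Classic 5-tuple match: every field must equal the rule's value."""
    return (flow.src_ip, flow.dst_ip, flow.src_port,
            flow.dst_port, flow.protocol) == rule

def match_groups(flow, src_group, dst_group, dst_port, protocol):
    """Higher-level match: endpoints are checked by group membership,
    so the rule survives IP changes as workloads move or rescale."""
    return (flow.src_ip in SECURITY_GROUPS[src_group]
            and flow.dst_ip in SECURITY_GROUPS[dst_group]
            and flow.dst_port == dst_port
            and flow.protocol == protocol)

flow = Flow("10.0.1.10", "10.0.2.20", 43512, 5432, "tcp")
print(match_groups(flow, "web-tier", "db-tier", 5432, "tcp"))  # True
```

A 5-tuple rule breaks as soon as a workload's IP address changes; the group-based rule keeps matching because membership, not addressing, defines the policy perimeter.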
NSX-based micro-segmentation can be defined as granularly as a single application or as broadly as a data center, with controls that can be implemented by attributes such as who you are or what device is accessing your data center.

Micro-segmentation with NSX as a Security Platform

Protection against advanced persistent threats that propagate via targeted users and application vulnerabilities requires more than network-layer segmentation to maintain an adequate security posture. These advanced threats demand application-level security controls, such as application-level intrusion protection or advanced malware protection, to protect chosen workloads. As a security platform, NSX-based micro-segmentation goes beyond the recommendations noted in the NIST publication and enables fine-grained service insertion (e.g., allowing IPS services to be applied to flows between assets that are part of a PCI zone). In a traditional network environment, traffic steering is an all-or-nothing proposition, requiring all traffic to be steered through additional devices. With micro-segmentation, advanced services are granularly applied where they are most effective: as close to the application as possible, in a distributed manner, while residing in a separate trust zone outside the application’s attack surface.

Securing Physical Workloads

While new workload provisioning is dominated by agile compute technologies such as virtualization and cloud, the security posture of physical workloads still has to be maintained. NSX has the security of physical workloads covered, as physical-to-virtual or virtual-to-physical communication can be enforced using distributed firewall rules at ingress or egress.
In addition, for physical-to-physical communication, NSX can tie automated security of physical workloads into micro-segmentation through centralized policy control of those workloads via the NSX Edge Services Gateway or integration with physical firewall appliances. This allows centralized policy management of your static physical environment in addition to your micro-segmented virtualized environment.

Conclusion

NSX provides micro-segmentation through centralized policy controls, distributed stateful firewalling, overlay-based isolation, and service chaining of partner services to address the security needs of the rapidly evolving information technology landscape. NSX meets and goes beyond the recommendations made by the National Institute of Standards and Technology for protecting virtualized workloads, secures physical workloads, and paves a path toward securing future workloads with a platform that meets your security needs today and is flexible enough to adapt to your needs tomorrow. As we continue this multi-part series on micro-segmentation, we will delve into deeper aspects of how NSX micro-segmentation can increase the security posture of your organization, with the following upcoming topics:

Securing Physical Environments
Service Insertion
Operationalizing Micro-segmentation
Securing Virtual Desktop Infrastructure
Micro-segmentation for Mobile Endpoints
Micro-segmentation Benchmark
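The centralized, API-driven policy control described throughout this article can be illustrated with a short sketch of policy-as-code provisioning. The endpoint URL and JSON field names below are hypothetical placeholders, not the actual NSX API; consult the platform's published API reference for the real schema:

```python
import json

def build_dfw_rule(name, src_group, dst_group, service, action="ALLOW"):
    """Assemble a JSON payload for a hypothetical distributed-firewall
    policy endpoint. All field names here are illustrative only."""
    return {
        "displayName": name,
        "source": {"securityGroup": src_group},
        "destination": {"securityGroup": dst_group},
        "service": service,
        "action": action,
    }

rule = build_dfw_rule("web-to-db", "sg-web-tier", "sg-db-tier", "TCP/5432")
body = json.dumps(rule, indent=2)

# A client would then POST `body` to the (hypothetical) policy API, e.g.:
#   POST https://policy.example.local/api/v1/firewall/rules
#   Content-Type: application/json
print(body)
```

Because the whole policy lives behind an API, the same rule definition can be created by a CI pipeline or a cloud management platform instead of a console click, which is what makes centrally managed, per-application perimeters practical at scale.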

Innovation Insights: Defining Open with the Fortinet Security Fabric

Securing networks has been a serious challenge ever since DEC salesman Gary Thuerk sent the first spam message to 400 unsuspecting users of the ARPANET back in 1978. Sure, security devices have become more sophisticated over time, and their evolution is a fascinating subject. But they all tend to suffer from a common problem: because they are a siloed technology, they can only solve the problem sitting right in front of them.

This is one of the reasons why, in spite of the millions being spent on security by today’s organizations, the incidents of successful security breaches continue to grow. Cybercriminals have developed a set of very sophisticated capabilities designed to discover network vulnerabilities, circumvent security, evade detection, and then either cripple the network or retrieve valuable data. Or both.

Which is why Fortinet has developed the Security Fabric, an architectural framework innovation that addresses cyberthreat capabilities with a dynamic set of interoperable, collaborative, and adaptive security solutions and capabilities of its own. It is designed to stop the attack chain through a continuous security cycle:

1. Preparing the network for proactive threat defense through things like intelligent segmentation, establishing strong security processes, and proper training.

2. Preventing attacks through the integration of security technologies for the endpoint, access layer, network, applications, data center, and cloud into a single collaborative security architecture that can be orchestrated through a single management interface.

3. Detecting threats before they get into the network through a combination of shared threat intelligence and collaborative defenses designed to see and stop even sophisticated multi-vector attacks.

4. Responding to attacks with an automated response to identified threats that breaks the infection chain, immediately protects network resources, and actively identifies and isolates affected devices.

The cycle continues as protections from detected threats are implemented across the distributed network to improve the organization’s preparation against future attacks.

Interoperability

A critical component of the success of an architectural approach to security is the purpose-built interoperability between its individual security solutions. The Fortinet Security Fabric is built around a series of tiered interconnectivity and open API strategies that allow Fortinet and third-party solutions from Alliance Partners to collect and share threat intelligence, and coordinate a response when anomalous behavior or malware is detected.

Inner Core Network Security – The foundation of the Fortinet Security Fabric relies on the tight integration and dynamic interoperability between three foundational Fortinet security technologies: FortiGate, FortiManager, and FortiAnalyzer. These solutions are built on a common operating system and utilize centralized orchestration to harden the core of the network and actively monitor, analyze, and correlate threat activity.

Outer Core Network Security – The next tier of the Fortinet Security Fabric is focused on expanding the security implemented at the network’s inner core out to the dynamic edges of the borderless network. This includes things like hardening wireless access points, seamlessly tracking and enforcing policy as it moves into the cloud, securing endpoint devices and BYOD strategies, and dynamically segmenting the network as organizations adopt IoT.

Extended Security – Security also needs to extend to common attack vectors, like email and the web, to proactively analyze data and traffic for unknown and zero-day threats.
This extended protection is a critical function of the Security Fabric, and includes the Fortinet Advanced Threat Protection (ATP) solution, including FortiSandbox, as well as FortiMail and FortiWeb, designed to close the gap on what are still the most common attack vectors for malware and data loss.

Global Threat Intelligence – While the Security Fabric generates and shares a great deal of local threat intelligence, it is essential that it is constantly tuned against the latest threats occurring in the wild. Fortinet’s global threat research team actively monitors the world’s networks to find, analyze, and develop protection against known and unknown security threats. They then automatically deliver continuous threat updates to firewall, antivirus, intrusion prevention, web filtering, email, and antispam services.

Network & Security Operations – Fortinet’s network security and analysis tools are designed to provide a more holistic approach to threat intelligence gathering by actively synthesizing and correlating threat data between security tools such as FortiSIEM and Fortinet’s suite of hardened network devices, such as FortiAP-U and FortiSwitch. The Security Fabric can also extend the coordination of a threat response through our alliance of fabric-ready and fabric-compliant partners.

Visibility and Control

Intelligence plays a critical role in establishing broad visibility and granular, proactive control across the Security Fabric. On average, security breaches take nearly eight months to detect. Part of the reason for this delay is that enterprise security teams are trying to track more than a dozen different security monitoring and management consoles. And they still have to hand-correlate events and data to detect today’s evasive advanced threats.
If you can’t see what’s happening, threats will persist and proliferate, which can have devastating consequences for your business.

FortiSIEM, our latest security technology solution, is an all-in-one next-generation security information and event management platform that provides deep, coordinated insight into what’s happening in the network. It enables organizations to rapidly find and fix security threats and manage compliance standards – all while reducing complexity, increasing critical application availability, and enhancing IT management efficiency. And its open design allows it to both collect and share critical threat intelligence from third-party solutions.

Summary

The evolving enterprise network and its transition to a digital business model is one of the most challenging aspects of network security today. As significant trends in computing and networking continue to drive changes across many critical business infrastructures, architectures, and practices, organizations require a new, innovative approach to network security that enables them to quickly embrace those changes.

The Fortinet Security Fabric provides the integrated and collaborative security strategy your organization needs. It enables the protection, flexibility, scalability, adaptability, and manageability you demand across your distributed and highly dynamic network, from IoT to the cloud.

Best Of Black Hat Innovation Awards: And The Winners Are…

Three companies and leaders who think differently about security: Deep Instinct, most innovative startup; Vectra, most innovative emerging company; Paul Vixie, most innovative thought leader.

Dark Reading this year is launching a new annual awards program, the Best of Black Hat Awards, which recognizes innovative companies and business leaders on the conference’s exhibit floor. The 2016 Dark Reading Best of Black Hat Awards recognize three categories of achievement: the Most Innovative Startup, which cites companies that have been in the industry for three years or less; the Most Innovative Emerging Company, which cites companies that have been operating for three to five years; and the Most Innovative Thought Leader, which recognizes individuals from exhibiting companies who are changing the way the industry thinks about security.

These new awards, chosen by the editors of Dark Reading, are not an endorsement of any product, but are designed to recognize innovative technology ideas and new thinking in the security arena. In future years, Dark Reading hopes to expand the awards program to recognize new products in different categories, as well as more individuals who are making a difference in the way we think about security.

Most Innovative Startup: Deep Instinct

The finalists for our Most Innovative Startup Award are Deep Instinct, which is driving past machine learning with an artificial intelligence concept called deep learning; Phantom, a security orchestration tool that provides a layer of connective tissue between existing security products; and SafeBreach, which provides a hacker’s view of enterprise security posture.

The winner is: Deep Instinct. Here’s what our judges wrote about Deep Instinct: “This was not an easy decision—each of the finalists, Phantom, Deep Instinct, and SafeBreach, brings really intriguing and useful technology to the security problem. In the end, we selected Deep Instinct as the Most Innovative Startup.
Here’s why: the concept of a cerebral system that detects malware and malicious activity at the point of entry in real time and quashes it then and there solves many of the other security problems down the line. If the tool can catch the malware when it hits the endpoint, a security pro theoretically wouldn’t need to check out security alerts, correlate them among various security tools and threat intel feeds, and then take the appropriate action (sometimes too late). And unlike traditional antivirus, this technology looks at all types of threats, not just known malware, which of course is key today given the polymorphic nature of malware.

We considered Deep Instinct’s approach of automatically stopping a threat at the endpoint, where it first comes in, using software that can on its own understand that it’s a threat and continuously learn about threats, as unique and promising for security organizations. Deep learning is the next stage of machine learning, mimicking the brain’s ability to learn and make decisions, and Deep Instinct is the first company to apply this type of artificial intelligence to cybersecurity, which also made it a top choice.

In addition, benchmark tests of Deep Instinct’s technology indicate a high degree of accuracy in detecting malware, at 99.2%. And unlike some endpoint security approaches, detection occurs locally; there’s no sandbox or kicking it to the cloud for additional analysis.”

Most Innovative Emerging Company: Vectra

The three finalists for our Most Innovative Emerging Company are SentinelOne, which combines behavioral-based inspection of endpoint system security processes with machine learning; Vectra, which offers real-time detection of in-progress cyber attacks and helps prioritize the attacks based on business priority; and ZeroFOX, which monitors social media to help protect against phishing attacks and account compromise.

And the winner is: Vectra.
Here’s what our judges wrote about Vectra: “It was a tough choice, but in the end, we selected Vectra, because it addressed several of security professionals’ most persistent challenges, with solutions that were both inventive and practical.

Infosec pros are inundated with alerts about threats. Whether those warnings come from media reports, newsletters, or one of many pieces of security technology, it’s often hard to prioritize them. Maybe it was declared “critical,” but is it critical to me? Maybe it was “medium,” but is it critical to me? Infosec pros have attackers dwelling on their networks for many, many months, largely because security teams cannot quickly make sense of all this threat data. And infosec pros try to solve problems faster by adding new security technology that can sometimes put a huge strain on the network.

We chose Vectra as the winner because their solution helps prioritize threats for your organization specifically, can reduce attacker dwell time, and does so with a lightweight solution. Vectra’s tool tunes into all of an organization’s internal network communications and then, using a combination of machine learning, behavior analysis, and data science, will identify threats, correlate them to the targeted endpoint, provide context, and prioritize threats accordingly — as they relate to your organization.
Vectra can detect things like internal reconnaissance, lateral movement, botnet monetization, data exfiltration, and other malicious or potentially malicious activities throughout the kill chain. Most importantly, Vectra’s tool allows security teams to identify their most important assets, so that the tool will know to push even a gentle nudge at those systems to the top of the priority list. With just a glance at the simple, elegant visualization used by Vectra’s threat certainty index, an infosec pro will know in moments what precise endpoint needs their attention first.”

Most Innovative Thought Leader: Paul Vixie

The three finalists for our Most Innovative Thought Leader are Krishna Narayanaswamy, Chief Scientist and Co-Founder of Netskope, Inc., a top specialist in cloud security; Dr. Paul Vixie, Chairman, CEO, and Co-Founder of Farsight Security Inc., a leader in DNS and Internet security; and Jeff Williams, Chief Technology Officer and Co-Founder of Contrast Security, who focuses on application security.

And the winner is: Paul Vixie, Farsight Security. Here’s what our judges wrote about Paul: “This was perhaps the most difficult choice we had to make in the awards, because all three of these individuals are thought leaders and difference-makers in their own fields of security. Each of them is a contributor not only to innovation in his own company, but to the industry at large. In the end, we chose Paul Vixie, at least in part, because he likes to work and research and innovate in areas where few others are working.
The world of Domain Name Systems often seems impenetrable even to security experts, yet it is an essential element of the global Internet and, potentially, a huge set of vulnerabilities that could affect everyone who works and plays online. In the last year or so, Paul has taken some of the lessons he’s learned about DNS and the way the internet works and built Farsight Security, which collects and processes more than 200,000 observations per second to help security operations centers and incident response teams more quickly identify threats. It works by analyzing DNS, which is a fundamental technology that the bad guys have to use, just as the good guys do. And while Farsight is not the only company working in the DNS security space, it has developed new methods of analyzing and processing the data so that enterprises can make better use of relevant information.

Paul doesn’t stop with the work he is doing at his own company. As a longtime contributor to internet standards on DNS and related issues, he continues to participate in a variety of efforts, including source address validation; the OpSec Trust initiative, which is building a trusted, vetted security community for sharing information; and internet governance, including the controversial discussion around root name service.

While all three of our finalists are deserving of special recognition, we feel that Paul Vixie’s contributions to innovation at his company, to enterprise security, and to internet security worldwide earn him this award.”

Our congratulations to all of this year’s Dark Reading Best of Black Hat Awards winners!

Tim Wilson is Editor in Chief and co-founder of Dark Reading.com, UBM Tech’s online community for information security professionals. He is responsible for managing the site, assigning and editing content, and writing breaking news stories.

Gartner’s top cybersecurity ‘macro trends’ for 2017

Paying the security tax. Answering to Dr. No. Submitting to the control centre. If you’ve ever been responsible for running IT security at a business, these will all sound familiar – too familiar.

But there’s another way to look at security, says Earl Perkins, a research vice-president in the Internet of Things group at Gartner. Presenting at the research firm’s symposium in October, he spoke of cybersecurity trends to look out for in the year ahead. He also had some helpful advice on how to frame cybersecurity as a benefit to your organization, rather than have it be viewed as a hindrance.

“We’ve been playing a poker game for decades,” Perkins says. “We’ve been betting just enough chips on security and now we’re hoping the hand we hold will be enough to win.”

Rather than hope the next card off the top turns a weak hand into a flush, security chiefs should take heed of these seven trends and plan accordingly:

1. Seeking the balance of risk and resilience

As organizations have a growing need to move quickly and adopt new technology, security has to stop managing risk and start building resilience, Perkins says. It’s not about abandoning risk management, but balancing it with the needs the business has to create value. “Security doesn’t have to be a Dr. No kind of thing,” Perkins says.

Rethinking security’s approach in this way will require defining a new mission. You’ll also have to develop a new risk formula capable of handling new variables and factors. Then communicate this new approach and mission to employees. Soon enough, you’ll be seen in a different light.

2. Security disciplines converge while skills expand

A modern model for IT security takes into account new areas like operational technology and physical security. (Image courtesy of Gartner.)

The definition of cybersecurity is expanding, and chief security officers may find their job requirements are creeping up as a result.
In addition to the legacy IT systems to protect, more operational technology (OT) is seeing IT systems embedded with the Internet of Things trend. Similarly, physical security systems such as video surveillance are connected and rely on IT systems. And Perkins has bad news for CSOs: “If it fails, it’s already your fault.”

You’ll have to assess what new skill sets are needed on your security team to meet all these new demands. They’ll likely include roles responsible for identity management, embedded security, and cyber-physical security automation. Don’t hesitate to invest in training for your current team, or even build up security skills development within your company’s lines of business. Know where the gaps are and how you plan to fill them – eventually.

3. Secure digital supply chain needs grow

Just because software as a service is now off-loading some application delivery on the IT department’s behalf, that doesn’t mean the job of the chief security officer is also done. Rather, a confusing mish-mash of considerations must be made about how to handle a user and the device before and after accessing these new cloud services. Once cloud apps start integrating with internal systems, it really gets interesting.

Managing security around cloud software has become a confusing matter. (Image courtesy of Gartner.)

The response to this problem so far has been developing management consoles that are multi-cloud and multifunction, Perkins says. As those consoles evolve, they will also help manage security based on a user’s need and priority standing. “I want you to implement and enforce different types of policies based on use,” Perkins says. CSOs should also have an enterprise-wide public cloud strategy, implement solutions that solve cloud complexity, and have a governance approach that matches the cloud life cycle.

4. Adaptive security architecture embraced

“Our hope is you’ll reach a point where you create a security architecture where you prevent everything that you could reasonably be expected to prevent,” Perkins says. After that, you’ll need to respond to the ones you missed in an effective way and catch the others you’ll never detect with predictive security. “Detection and response is a lot like going to the barn and seeing the door open and realizing the horse has escaped,” he says. “Predictive would allow us to know the horse is acting kind of funny and we need to be ready.”

The technical version of keeping the horse in the barn involves a commitment to software-defined architectures, dividing a control plane of applications and APIs from your data plane. Your security team should be preventing attacks by isolating systems in this way, and when an incident is detected, the risk needs to be confirmed. From a budget point of view, shift spending from prevention to detection and response, as well as predictive capabilities. From a conceptual point of view, operate like a security operations centre that is in continuous response mode.

5. Security infrastructure adapts

The number of code libraries being used by your organization is only growing, and they are all aging. Security checks need to be run on these code sets often, not just when they are deployed. So security application testing has to be embedded into the lifecycle of these repositories.

As organizations create a pervasive digital presence through always-connected devices, sensors, actuators, and other IoT gear, network security concerns will grow. “Wi-Fi is not the answer to doing the Internet of Things,” Perkins says. While your gateways will still talk with IP and Wi-Fi devices, there will be strange new elements more familiar to those with OT (operational technology) skill sets.
Make sure to talk with those experienced with OT in your organization. Many organizations will want to invest in discovery solutions just to find IoT devices within their organization. Also key to managing network security will be setting up segmented network portions and designating trust zones.

6. Data security governance and flow arrives

“You’re going to have introduced to you different kinds of data flows,” Perkins says. “Some of it will look familiar and some won’t look familiar at all.”

To continue to ensure that you can properly audit and protect your data, you’ll have to profile it by its flow type. To start with – is it structured, semi-structured, or unstructured data? In line with your software-defined strategy, create a boundary between your data and its destinations. CSOs will want to incorporate big data plans into their security strategies to keep pace. Priorities should be placed on organization-wide data security governance and policy.

7. Digital business drives digital security

Thanks to IoT, “there is a pervasive digital presence,” Perkins says. “Once you network this presence, it substantively alters the risk for your business.” Digital security is the next wave in cybersecurity and it involves getting a grip on this pervasive presence. Risks include espionage and fraud, sabotage of automated devices, device impersonation and counterfeiting, and beyond.
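Perkins' advice to profile data by flow type (structured, semi-structured, or unstructured) can be sketched with a deliberately naive classifier. The heuristics below are illustrative assumptions only; real data-governance tooling would use far richer detection:

```python
import csv
import io
import json

def profile_payload(text):
    """Naive flow-type profiler: JSON or XML-like payloads count as
    semi-structured, consistent delimited columns as structured,
    anything else as unstructured."""
    try:
        json.loads(text)
        return "semi-structured"
    except ValueError:
        pass
    if text.lstrip().startswith("<"):
        return "semi-structured"
    rows = list(csv.reader(io.StringIO(text)))
    # Structured if multiple rows share one column count greater than 1.
    if len(rows) > 1 and len({len(r) for r in rows if r}) == 1 and len(rows[0]) > 1:
        return "structured"
    return "unstructured"

print(profile_payload('{"user": "alice"}'))        # semi-structured
print(profile_payload("id,name\n1,alice\n2,bob"))  # structured
print(profile_payload("free-form incident notes")) # unstructured
```

Even a crude profile like this gives each flow a label that downstream policy (audit level, encryption, destination boundary) can key off, which is the point of the governance step above.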

20 Endpoint Security Questions You Never Thought to Ask

The endpoint detection and response market is exploding! Here’s how to make sense of the options, dig deeper, and separate vendor fact from fiction.

There is a lot of buzz around the endpoint detection and response (EDR) market of late. The legacy endpoint market, traditionally dominated by large anti-virus (AV) vendors, has always been one that security professionals love to hate. Recently, however, several new players have entered the market with a variety of different approaches. These new entrants have shaken up the market and reinvigorated it with hope and cautious optimism for the future.

Perhaps not surprisingly, with the endpoint market estimated to be somewhere between a $5B and $20B market (depending on the source of research), hype and noise around it have quickly filled the air. Every potential buyer is bombarded by a long list of vendors, each one of which uses nearly the same marketing language as the others. So how can a security manager make sense of the options, dig deeper, and separate fact from fiction? You guessed it – by playing a game of twenty questions, or in some cases show and tell.

Conceptually, viable EDR solutions need to provide three broad buckets of functionality:

Prevent/Detect to block malicious code and prevent infection with a high rate of detection (true positives) and a low rate of both false positives and false negatives. This has long been the bailiwick of legacy anti-virus vendors, though detection rates and overall product efficacy have fallen sharply in the last few years due to a number of different factors.
Among these factors are the ability for attackers to morph their malicious code to avoid signature-based detection approaches, as well as the gradual move by attackers away from malicious code and more towards theft of stolen credentials and other techniques involving no malicious code at all.

Analysis that provides the capability to analyze, investigate, and perform forensics on the endpoint and across multiple different endpoints seamlessly.

Response that gives you the ability to contain and remediate endpoints remotely.

As you might have guessed, every EDR vendor will say they cover all three of these categories better than their competitors. Let’s play that game of 20 questions to understand how to find truth amidst the hype and noise:

1. How easy is your solution to deploy? Whether I have hundreds of thousands of endpoints within my enterprise or far fewer, I need a painless deployment process.

2. How easy is your solution to manage? With the number of agents I’m deploying, I can’t afford sloppy or immature management.

3. How easy is it to configure rulesets and tune the solution once deployed? Aside from the fact that threats are continually evolving, if there are activities that appear malicious elsewhere but are benign in my environment, I need a way to filter those out.

4. How easy is it to update your solution’s knowledge base or take advantage of the latest knowledge around attacker activity? If you can’t make it easy for me to operationalize what you’re selling me on, then your solution isn’t going to work for me.

5. What additional load on the endpoint does your agent introduce? I can’t impact business productivity.

6. You want me to install yet another agent? I would be willing to do that if you can articulate how you can consolidate functionality that I currently get from multiple different agents into one agent.

7. How does your solution integrate with my existing security infrastructure?
I have a complex ecosystem of products deployed, and yours needs to play nice with it.

8. Not all intrusions involve malware. What is your strategy to detect intrusions that use no malware at all?

9. Is your solution part of an overall platform, or is it just another point product that I need to figure out how to integrate into my operational workflow?

10. Does your solution leverage and facilitate correlation with other data? I have a lot of great data elsewhere in my enterprise. Do you know how to take full advantage of it to improve your efficacy?

11. Is your solution based on knowledge of attacker tactics, techniques, and procedures (TTPs)? If not, how do you identify that type of activity?

12. How does all the knowledge you’re selling me on make its way into the product to help me mitigate risk?

13. Do you really have behavioral analysis and machine learning built into your solution, or is it just signatures and rulesets behind the scenes?

14. Do you provide the ability to remotely contain and remediate endpoints?

15. How efficient and powerful is your enterprise-wide search? If I have an incident, or even if I don’t, I need to be able to slice and dice the data collected by my endpoint solution in an instant.

16. How effective is your solution in a real enterprise against binaries you’ve never seen before?

17. What is your true positive detection rate in the wild? Results from your lab don’t interest me here.

18. What percentage of events and alerts that you fire are false positives? Again, results from your lab don’t interest me here.

19. What is the upgrade path for your solution? It should be a smooth and straightforward transition from one version to the next.

20. How does your solution facilitate my information sharing initiatives?

It’s not surprising that the endpoint market is a hot one.
Changing attacker behaviors, historical disappointment with legacy endpoint products, the move to the cloud and the resulting loss of network visibility all combine to make endpoints a more critical target than ever before. Playing a good game of 20 questions with prospective EDR vendors will lead you to an educated decision that meet the specific requirements of your organization.Related Content: Black Hat Europe 2016 is coming to London’s Business Design Centre November 1 through 4. Click for information on the briefing schedule and to register.Josh is an experienced information security analyst with over a decade of experience building, operating, and running Security Operations Centers (SOCs). Josh currently serves as VP and CTO – Emerging Technologies at FireEye. Until its acquisition by FireEye, Josh served as … View Full BioMore Insights

Q&A: Securing the Move to the Cloud

In the past decade, cloud computing has become increasingly popular among enterprises, with Gartner Research projecting IT spending on public cloud-based infrastructure services to surpass $24 billion in 2016, and associated management and security to surpass $8 billion. This evolution of our IT infrastructure brings with it concerns about the safety of our data, applications, and end users. We talked to Chad Whalen about the move to the cloud, the related security concerns, and how Fortinet is protecting this rapidly evolving IT infrastructure.

How are organizations using the cloud today?

As cloud technologies rapidly mature, they offer enterprises a number of different deployment options.

Public cloud – Also commonly referred to as infrastructure as a service (IaaS), public cloud services from Amazon Web Services (AWS), Microsoft Azure, and other telcos and service providers are perhaps the most visible type of cloud computing.

Private cloud – Enterprises have been adopting server virtualization for IT efficiency and data center consolidation for a number of years, but the notion of private clouds is more than just a virtualized data center. Instead, internal data centers are being transformed by successive waves of technology, from software-defined networking (SDN) to SD-WAN, tiered storage, and other so-called software-defined data center (SDDC) technologies. These converged and orchestrated layers of logical infrastructure enable internal IT teams, or rather IT as a service (ITaaS), to deliver internal infrastructure with the same flexibility, and often the same economies, as those offered by public cloud providers.

Hybrid cloud – Today, most organizations are moving to long-term strategies of deploying servers and applications on a combination of both private and public cloud infrastructure. The persistence of both internally and externally hosted platforms additionally dictates the migration of large volumes of data and applications, persistent site-to-site connectivity, and the stretching of network topologies across the WAN.

Software as a service (SaaS) – As an alternative to deploying applications on cloud-based infrastructure, IT organizations can instead choose to procure web-based applications designed from the ground up to be delivered from the cloud, including popular applications like Salesforce.com, Office 365, and Dropbox. While appearing to be very different from IaaS cloud services like AWS, SaaS really represents another fundamental cloud computing approach, where the underlying infrastructure, from compute to security, is the responsibility of the SaaS vendor (who may in turn deploy on another provider’s IaaS/PaaS platform).

Why are enterprises moving to the cloud at an unprecedented pace?

Speed and agility: Solutions take too long to secure and deploy on premises. Virtualization provides enterprises the agility to provision, configure, and deploy infrastructure and applications nimbly and quickly for different organizations, business units, or projects.

Scalability: Cloud environments are designed to scale elastically to much larger capacities than traditional IT environments, and they give you the ability to ramp capacity up and down quickly on demand.

Cost: No large, upfront capital expenditures on hardware and software to run your network, fewer expensive software upgrades, reduced IT support costs, and predictable IT expenditures are just a few of the potential cost benefits of letting someone else manage your servers and applications. In addition, organizations increasingly expect to be able to consume infrastructure on demand, starting and stopping instantly, and paying for only the capacity they need at any given time on a metered (i.e., utility-based) model.

Service innovation velocity: The public cloud offers providers a great opportunity to easily innovate and deliver new services that extend their current business offerings to customers. These services can often be competitive differentiators for their business.

What are the security challenges that customers face as they move to the cloud?

As an organization’s IT infrastructure stretches and evolves, the attack surface expands as well. If your security can’t keep up with the agile public, private, and hybrid cloud environments of today, gaps in protection will occur. The biggest challenge is the growing concern of exposing sensitive corporate data to advanced malware and threats in this new, fast-evolving cloud environment. Customers worry about the ability to present a consistent security posture across physical, virtual, private, and public cloud platforms. Another concern is the loss of visibility and manageability across all traffic and environments, which means internal breaches can go undetected and spread. Client-side threats, where malware infects a cloud infrastructure through an authorized yet infected or compromised endpoint device, are also a real concern.

How is Fortinet helping enterprises secure their networks across private and public clouds?

To keep pace with the rapid transition to the cloud and provide increased security effectiveness across any cloud environment, cloud security solutions need to be agile and scalable to meet changing needs. They also need to be segmented, to minimize the impact of an advanced threat by isolating workloads, applications, data, and traffic, and consistent, to ensure the seamless management, distribution, and enforcement of policy, as well as the collection of valuable threat intelligence.

The Fortinet Security Fabric provides market-leading security solutions for any virtualization and cloud environment, including the most widely adopted service provider security solutions in the market. Using a cloud-based management tool (FortiManager), a common operating system (FortiOS), and a single threat intelligence source for consistent enforcement (FortiGuard), organizations can weave together a single, integrated security fabric for complete visibility and control across their entire distributed network environment.

Read more on how to solve security challenges in hybrid cloud with Fortinet.

Byline: Is it Finally Time for Open Security?

One of the distinct advantages of working in the IT industry for over 35 years is all of the direct and indirect experience that brings, as well as the hindsight that comes with it. One of the more personally interesting experiences for me has been watching the growth and ultimate success of the Open Source Software (OSS) movement from a fringe effort (what business would ever run on OSS?) to what has now become a significant component behind the overall success of the Internet.

I was reminded of the significance of the Open Source Software movement, and how long it has actually been around, when the technology press recognized the 25th anniversary of the Linux kernel. That, and the decision in January of 1998 by Netscape Communications Corp. to release the complete source code for the Communicator web browser, are two of the top reasons for the Internet taking off. Well, the first specification for HTTP helped a little as well, I suppose. There are, of course, many other examples of OSS software that power the Internet, from the numerous Apache Foundation projects to relational and other database management systems like Postgres, MySQL, MongoDB, and Cassandra. The list of markets and technologies for which there are OSS resources is essentially endless.

This all leads me to the title of this article. Perhaps it’s time to look at Open Security as the next necessary iteration of deploying security technology. Over the last thirty years we have gone through a slow (and often painful) evolution of security deployment models, including:

– Why do I need security? (the DARPA days, pre-Mitnick)
– Various iterations of basic firewalls, from packet filter to proxy to stateful
– Best-of-breed stacked implementations (dedicated IPS, dedicated web filtering, dedicated caching/optimization)
– Security function consolidation (UTM/NGFW)
– Open security architecture

These few examples of change all came with various degrees of pain, gain, and consequences. These were traditionally also very proprietary solutions, with limited abilities to interact. The most common method used to try to collect and correlate information across these isolated devices was the implementation of a SIEM or similar system. This has worked reasonably well until recently, but in an increasing number of environments the scale of the information generated by the security infrastructure is putting ever-increasing pressure on the SIEM, with an end result that is really not much better than an IDS, since manual intervention is still normally required to address a detected threat. In addition, ever-changing, complicated attack vectors and an increasingly diverse range of end devices have also driven some of these evolutionary changes.

I’m not advocating something as radical as a security vendor providing all their software as source code, as Netscape Communications did. After all, the R&D side of being a security vendor is incredibly expensive and resource intensive, and without constant, ongoing research even the best implemented security products become fairly useless rather quickly (unless the product is an OFF button). Instead, I’m advocating for security companies to design products that have open and flexible interfaces, so that as an industry we have a better ability to adapt to the continuously changing threat landscape. For example, wouldn’t it be cool if your perimeter security solution, which has the ability to detect suspicious malware activity – such as a connection attempt to command and control servers – could instruct your L2/L3 internal switch infrastructure to migrate a particular interface from, say, the regular user L2 network to a different forwarding domain that only contains equally compromised systems? And do this without requiring the client system’s IP address to change, or their existing established network sessions to be interrupted?
And then, wouldn’t it be even cooler if the security solution protecting your data center or cloud-located services could then also know to apply more scrutiny to activity from this same, quite likely well-compromised client? All with no SOC interaction? Actually, it sounds a bit like the late-90s migration from IDS (detection only, with manual intervention) to IPS (actual prevention) systems, doesn’t it?

Do we need to go as far as trying to define these communications and interoperability interfaces via a standards body such as the IETF? Frankly, I don’t think that the current rate of change in the security industry meshes very well with the pace of a traditional standards body. Ultimately, of course, once these sorts of interfaces stabilize and common denominators are determined, a standards-body-based approach might work. Some might say this has been tried previously via various approaches, but with the exception of limited-use technologies like WCCP and ICAP, all of the approaches that I have seen have had some amount of proprietary technology involved. OPSEC, anyone?

What’s clear is that the isolated, proprietary security devices most organizations are using are simply not solving today’s cybersecurity challenges. Companies need something different. It seems to me that what they need are open security solutions that can be integrated together to share threat intelligence in order to provide actual protection, and that would allow them to seamlessly interoperate across the distributed network infrastructure, from IoT to the cloud. With such an open, end-to-end fabric of security solutions woven together to scale and adapt as business demands, organizations could finally address the full spectrum of challenges they currently face across the attack lifecycle.
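The quarantine scenario described above (a perimeter detection driving a switch-level response through an open interface) can be sketched in a few lines. Everything here is illustrative: the event schema, the SwitchController class, and its move_port method are hypothetical stand-ins for an open management API, not any vendor’s actual product.

```python
# Hypothetical sketch of an "open security" integration: a perimeter
# device reports a detection event, and a handler instructs a switch
# controller, via an assumed open API, to move the offending port into
# a quarantine forwarding domain.

from dataclasses import dataclass, field


@dataclass
class SwitchController:
    """Stand-in for an L2/L3 switch with an open management interface."""
    port_domains: dict = field(default_factory=dict)  # port -> forwarding domain

    def move_port(self, port: str, domain: str) -> None:
        # In a real integration this would be a REST or NETCONF call;
        # here we simply record the new forwarding domain.
        self.port_domains[port] = domain


def handle_detection(event: dict, switch: SwitchController) -> str:
    """Quarantine the source port of a command-and-control detection."""
    if event.get("type") == "c2_connection":
        switch.move_port(event["src_port"], "quarantine")
        return "quarantined"
    return "ignored"


if __name__ == "__main__":
    switch = SwitchController(port_domains={"ge-0/0/7": "users"})
    event = {"type": "c2_connection", "src_ip": "10.1.2.3", "src_port": "ge-0/0/7"}
    print(handle_detection(event, switch))   # quarantined
    print(switch.port_domains["ge-0/0/7"])   # quarantine
```

The point of the sketch is the shape of the interaction, not the implementation: because the detection side and the enforcement side speak through an open interface, either component could be swapped for another vendor’s product.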
It’s safe to say that open is officially a critical cybersecurity requirement for today’s digital business, and should not only be a requirement for every security solution you consider, but part of your foundational security policy and strategy as well.

Ken McAlpine is VP, Network Security Solutions at Fortinet.

Note: Originally published by SecurityWeek.

Byline: Meeting The Challenge of Securing the Cloud

The cloud has been a powerfully disruptive technology, transforming traditional network architectures that have been in place for decades and allowing businesses to be more agile, responsive, and available than ever before. In fact, networking experts predict that by 2020 cloud data centers will house as much as 92 percent of all workloads. The challenge is that while cloud service providers certainly offer compelling new services, they also create isolated data silos that have to be managed separately, and impose unique security requirements on organizations. Unfortunately, many traditional security solutions were not designed to protect the agile and highly distributed cloud environments being adopted today, or the expanding attack surface they create. When corporate data no longer sits in isolated data centers, and users, devices, and applications can access virtually any information from any device or location, traditional security models and technologies simply can’t keep up. And as we see every day, cybercriminals are ready to exploit these security gaps and weaknesses.

So, while organizations are re-engineering their networks, they have also begun to retool their security models and solutions. For example, some organizations have begun to move many of their traditional enterprise edge security tools into the cloud to protect critical workloads there, adopting on-demand public cloud security, virtualized security tools designed for private clouds, and cloud-based tools like cloud access security brokers (CASBs) designed to protect hosted SaaS applications and corporate data. Meanwhile, security budgets for existing traditional networks are being reassigned to the adoption of specialized security tools, such as data center protection, web application firewalls, security for mobile devices, thin clients, secure email gateways, advanced threat protection, and sandboxes.
The result, in many cases, is that today’s hybrid cloud environments are recreating the same data center security sprawl that organizations have spent years trying to streamline and consolidate. Implementing dozens of isolated security tools and platforms, regardless of how relevant they are to new cloud-based networks, creates its own problems. IT teams are already overburdened with managing their network transformation. The lack of additional resources, combined with the growing security skills gap, means that security technicians now need to learn how to deploy, configure, monitor, and manage dozens of additional cloud security tools, with no good way to establish consistent policy enforcement or correlate the threat intelligence each of these devices produces.

But what if the data and security elements across an organization’s various cloud environments were well integrated, cohesive, and coherent, like a seamlessly woven fabric? Such an approach would allow companies to see, control, integrate, and manage the security of their data across the hybrid cloud, thereby enabling them to take better advantage of the economics and elasticity provided by a highly distributed cloud environment. This type of approach would also allow security to dynamically expand and adapt as more and more workloads and data move into the cloud, and to seamlessly follow and protect data, users, and applications as they move back and forth from IoT and smart devices, across borderless networks, and into cloud-based environments.
An approach like that addresses the three fundamental requirements necessary to meet today’s advanced networking and security demands:

Integration: Security, network, and cloud-based tools need to work together as a single system to enhance visibility and to correlate and share threat intelligence.

Synchronization: Security solutions need to work as a unified system for simplified single-pane-of-glass management and analysis, and to enable a coordinated response to threats through such methods as isolating affected devices, dynamically partitioning network segments, updating rules, and removing malware.

Automation: For security solutions to adapt to dynamically changing network configurations and respond in real time to detected threats, security measures and countermeasures need to be applied automatically, regardless of where a threat originates, from remote devices to the cloud.

Unfortunately, for many organizations their cloud-based infrastructure and services have become a blind spot in their security strategy. And cybercriminals are prepared to take advantage of that. As we all know, a critical lapse in visibility or control in any part of the distributed network, especially in the cloud, can spell disaster for a digital business and have repercussions across the emerging global digital economy. To securely meet today’s digital business requirements, organizations need to be able to cut through the cloud security hype and intentionally select security solutions designed to be part of an interconnected, end-to-end security framework that can solve evolving physical and virtual IT challenges regardless of the deployment option. Security needs to be designed to meet this new challenge not only now, but into tomorrow, as organizations continue to evolve toward a fully digital business model.

Michael Xie is Founder, President, and CTO, Fortinet. This byline originally appeared in American Security Today.

A Guide to Security for Today’s Cloud Environment

Enterprises have rapidly incorporated cloud computing over the last decade, and that trend only seems to be accelerating. Private cloud infrastructure, including virtualization and software-defined networking (SDN), is in the process of transforming on-premise data centers, which host the majority of enterprise server workloads around the world. Enterprises are also embracing public clouds at an unprecedented rate, with most connecting back to on-premise environments to create a true hybrid cloud environment. For all their advantages, these accelerated infrastructure changes also raise major concerns about security and the ability to protect end users and sensitive data from ever-evolving cyber threats.

As today’s enterprise data centers evolve from static internal environments to a mix of private, public, and hybrid clouds, organizations need to augment traditional firewalls and security appliances (deployed for north-south traffic at the network edge) with expanded protection for east-west traffic moving laterally across the network, both within internal networks and across clouds. To maintain a strong security posture in private, public, and hybrid clouds, organizations need to increase, and perhaps even reallocate, security to keep pace with these more dynamic, distributed, and fast-paced environments. Here are some specific areas to consider:

Scalability

Cloud computing enables the rapid development and delivery of highly scalable applications. Security needs to be equally elastic to scale with the cloud infrastructure itself and to provide transparent protection without slowing down the business. Today’s cloud environments require ultra-fast physical firewalls that provide highly scalable north-south data center firewall and network security protection at the edge of the private cloud. They also need virtual firewalls that provide north-south protection for public clouds, and virtual firewalls that provide east-west protection for data and transactions moving between devices in the cloud. High-performance firewalls and network security appliances need to scale vertically to meet volume and performance demands, and laterally to seamlessly track and secure data from IoT and endpoints, across the distributed network and data center, and into the cloud.

Segmentation

With the IT efficiencies gained by pooling resources (e.g., compute, storage, network) through technologies such as virtualization and SDN, cloud environments have become increasingly aggregated, to the point where entire data centers can be consolidated. If a hacker or advanced threat breaches the cloud perimeter via a single vulnerable application, there’s typically little to protect critical assets within the flat and open internal network. To minimize that serious potential for damage and loss, organizations need to isolate business units and applications. Networks need to be intelligently segmented into functional security zones to control east-west traffic. End-to-end segmentation provides deep visibility into traffic that moves east-west across the distributed network, limits the spread of malware, and allows for the identification and quarantining of infected devices. A robust end-to-end segmentation strategy includes internal segmentation firewalling across data centers, campuses, and branch offices, and secure microsegmentation for SDN and cloud environments.

Awareness

In addition to scalability and segmentation, your underlying security infrastructure should offer automatic awareness of dynamic changes in the cloud environment to provide seamless protection. It’s not enough to detect bad traffic or block malware using discrete security devices. Security should be integrated with a SIEM and other analytic tools in private and public clouds that can collect and correlate data and automatically orchestrate changes to security policy and posture in response to detected incidents and events. The individual elements need to work together as an integrated and synchronized security system with true visibility and control.

Extensibility

Solutions should also be built on an extensible platform with programmatic APIs (for example, REST and JSON) and other interfaces to dynamically integrate with the wide array of deployed hypervisors, SDN controllers, cloud management consoles, orchestration tools, and software-defined data centers and clouds. This enables security that can automatically adapt to the evolving network architecture and changing threat landscape.

Choosing a cloud security solution

When evaluating a security solution, there are a few general questions that organizations ought to start with.

Is it scalable? A comprehensive security strategy must be elastic in both depth (performance and deep inspection) and breadth (end-to-end).

Is it aware? You need to track not only how data flows in and out of your network, but also how it moves within the perimeter and who has access to it.

Is it really secure? The different tools that protect your network need to work together as an integrated system to provide unified visibility and control.

Is it actionable? You need a common set of threat intelligence combined with centralized orchestration that enables security to dynamically adapt as new threats are discovered, and to automatically deliver a synchronized response anywhere across the distributed network.

How open is it? Well-defined, open APIs allow technology partners to become part of the fabric, helping to maximize investments while dynamically adapting to changes.

Other specific features to look for include:

Software-defined security: Look for a unified security platform with a single OS to enable orchestration and automation across physical, virtual, and cloud-based security.

Integration: Solutions should integrate with VMware vSphere and NSX environments, as well as public cloud environments like AWS and Azure, to provide on-demand provisioning, pay-as-you-go pricing, elastic auto-scaling, and unified analytics that enhance protection and visibility.

Single-pane-of-glass visibility and control: Your security solution should include centralized management with a consolidated view of policies and events, regardless of physical, virtual, or cloud infrastructure.

Conclusion

The evolving enterprise network, combined with the transition to a digital business model, presents some of the biggest challenges currently facing traditional network security. At the same time, the rapid adoption of private, public, and hybrid clouds is driving the evolution of cloud security. The next generation of agile and elastic security solutions must transcend the static nature of their forebears to fundamentally scale protection while providing segmentation within and across cloud environments, helping organizations embrace the benefits of an evolving infrastructure while anticipating the attack vectors of current and emerging threats.
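The extensibility requirement above can be made concrete with a short policy-as-code sketch: building a firewall-policy object and serializing it to JSON for a programmatic API. The payload schema, field names, and the example endpoint URL are hypothetical; real controllers and firewall managers each define their own.

```python
# Illustrative sketch of driving security policy through a JSON API.
# The schema below is invented for this example; consult your
# controller's actual API reference for real field names.

import json


def build_policy(name: str, src: str, dst: str, action: str = "deny") -> dict:
    """Build a firewall-policy object ready to be serialized and POSTed."""
    if action not in ("allow", "deny"):
        raise ValueError("action must be 'allow' or 'deny'")
    return {
        "policy": {
            "name": name,
            "source": src,        # e.g. a segment or address-group name
            "destination": dst,
            "action": action,
        }
    }


payload = json.dumps(build_policy("quarantine-east-west",
                                  "segment:users", "segment:finance"))
# A client would POST `payload` to a hypothetical endpoint such as
# https://controller.example/api/v1/policies
print(payload)
```

Because the policy is ordinary structured data, the same object can be version-controlled, reviewed, and pushed automatically by orchestration tools, which is precisely what makes open APIs valuable for keeping security in step with a changing cloud topology.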

Strengthening the Security Fabric of Blockchain

Blockchain is a shared and continuously reconciled database used to maintain a list of digital records, called blocks. It is quickly becoming an important tool not just for financial information, but also for managing and recording virtually all types of data, such as medical and other records, identity management, and transaction processing.

Because a blockchain database is distributed and interconnected, it provides several essential services. The first is transparency: because data is embedded within the network as a whole, it is by definition public. The second is that it is difficult to corrupt, because altering any unit of information on the blockchain would also modify all subsequent blocks unless huge amounts of computing power are used to override the entire network. Next, because it is distributed, it cannot be controlled by any single entity. And for that same reason, it also has no single point of failure.

While blockchain was first adopted by Bitcoin to manage and secure transactions, mainstream organizations were skeptical and slow to adopt it. But according to the recent PwC Global Fintech Survey 2017, blockchain is now moving out of the lab: 77% of organizations surveyed now expect to adopt blockchain as part of an in-production system or process by 2020.

Key capabilities of blockchain include:

Mutual: A blockchain is shared across organizations, owned equally by all, and dominated by no one.

Distributed: Blockchain inherently uses a multi-locational data structure, and any user can keep his or her own copy.

Ledger-based: Blockchain units are immutable, meaning that once a transaction is written it cannot be erased, and because the ledger is public, its integrity can easily be proven.

Blockchain technology by its nature establishes assurance, and significantly reduces the need for processes and controls for reconciliations, confirmations, and identity. As a result, a blockchain infrastructure is essentially a permanent timestamping engine for computer records. These timestamps can be used for such things as proving that data elements were entered at or before a certain time, and that they have not been altered.

Attack Surface of a Private, Permissioned, or Consortium Blockchain

Blockchain technology does not include built-in functionality for user roles or access controls. Because everyone has the ledger, everyone can read it. Roles and access controls are something that can always be added at the application layer. In an un-permissioned blockchain, like those used for cryptocurrencies such as Bitcoin, anyone can access and update the blockchain; everyone has permission. New transactions are added to the ledger and inconsistencies resolved by a scheme in which users with the most resources win. For permissioned or consortium-based blockchains, however, organizations will need to run them within a secure environment, such as a security fabric architecture, that can provide essential services across the entire distributed environment, such as access control, privacy, key management, and protection against attacks such as denial of service.

Security Fabric Component 1: Access Control and Privacy

When used by a consortium or private entity, most enterprise blockchains will be permissioned. In such blockchains, a governance structure has to be defined. This structure determines which users can view or update the blockchain, and how they can do it, establishing a consensus process that is controlled by a pre-selected set of nodes and predefined rules of governance. For example, if you have a financial consortium of 25 institutions, you may want to establish a rule requiring that at least 15 of them sign a block in order for the block to be valid. While blockchain technology guarantees integrity, security components such as access control and privacy need to be overlaid. It is important that all participants be protected from unauthorized access, so in a permissioned blockchain, outsiders should not be able to tamper with the ledger. Therefore, the administrator of the permissioned blockchain must minimize its attack surface. In practical terms, this means that every participant is a target, and that traffic to and from participating entities must be protected using policies.

Security Fabric Component 2: Secure Key Management

A secure blockchain application requires the secure management of user private keys. Insecure keys can severely impact the confidentiality and integrity of data. Therefore, the same technologies that are typically put in place to address such concerns elsewhere should be used to secure these keys. Blockchain by itself doesn’t make establishing this sort of control any easier or harder than with other technologies. The protection of these keys can be ensured using a variety of methods, including physical access control, network access control, and a key management solution that covers generation, distribution, storage, escrow, and backup.

Security Fabric Component 3: Distributed Denial of Service (DDoS)

Blockchain transactions can easily be denied if participating entities are prevented from sending transactions. A DDoS attack on an entity or set of entities, for example, can totally cripple the blockchain organization and the attendant infrastructure. Such attacks can also introduce integrity risks to the blockchain by affecting such things as consensus. Therefore, blockchain architects must work with their security counterparts to ensure the availability of the infrastructure via such methods as building strong DDoS attack mitigation directly into the network.

Conclusion

Blockchain is a critical component of the digitalization of the economy. When adopted, it will certainly revolutionize a variety of businesses. But the success of blockchain will greatly depend on how robust its cybersecurity is to ward off threats from all directions.
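The tamper-evidence property described above (altering any block invalidates every subsequent block) can be demonstrated with a toy hash chain. This is a minimal sketch for illustration only; real blockchains layer consensus, digital signatures, and Merkle trees on top of this basic idea.

```python
# A toy hash chain: each block commits to the previous block's hash,
# so changing any record breaks verification of the rest of the chain.

import hashlib


def block_hash(prev_hash: str, data: str) -> str:
    """Hash a block's data together with its predecessor's hash."""
    return hashlib.sha256((prev_hash + data).encode()).hexdigest()


def build_chain(records):
    """Build a chain of blocks starting from an all-zero genesis hash."""
    chain, prev = [], "0" * 64
    for data in records:
        h = block_hash(prev, data)
        chain.append({"data": data, "prev": prev, "hash": h})
        prev = h
    return chain


def verify(chain) -> bool:
    """Recompute every hash; any altered block makes this return False."""
    prev = "0" * 64
    for blk in chain:
        if blk["prev"] != prev or blk["hash"] != block_hash(prev, blk["data"]):
            return False
        prev = blk["hash"]
    return True


chain = build_chain(["tx1: A->B 5", "tx2: B->C 2", "tx3: C->A 1"])
print(verify(chain))                # True
chain[1]["data"] = "tx2: B->C 200"  # tamper with a middle block
print(verify(chain))                # False
```

Note what the sketch does and does not show: integrity violations are detectable by anyone holding the ledger, but nothing here enforces who may append blocks, which is exactly why permissioned blockchains need the overlaid access control discussed above.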

SECURITY PARTNER SHOWCASES

Fortinet

Gemalto

KnowBe4

Vectra

REAL-TIME THREAT MAP

Knowledge of the threat landscape, combined with the ability to respond quickly at multiple levels, is the foundation of effective security. Since 2000, FortiGuard Labs has provided in-house, industry-leading security research, including over 200 zero-day virus discoveries, powering Fortinet’s platform and suite of services.

Proactive Protection: FortiGuard takes information from global sources through its Security Services, using analytics and machine learning to turn big data into near real-time updates for Fortinet appliances, assuring some of the fastest response times in the industry to new vulnerabilities, attacks, viruses, botnets, and zero-day exploits.