
Zero-day thesis

Author: Janet Peter
Posted: Dec 03, 2018

Little is known about the duration and prevalence of zero-day attacks, which exploit vulnerabilities not yet disclosed publicly. Cyber criminals continually discover new vulnerabilities, giving them a free pass to attack any target of interest. Many of these threats are hard to analyze because data is unavailable until the attack is discovered. Zero-day attacks are also rare and difficult to observe in lab experiments or honeypots. Response teams and incident handlers in organizations struggle to identify and respond to threats about which they have no prior knowledge, an issue that plagues any organization relying on signature-based detection mechanisms. Attempting to handle unknown threats without a systematic plan will not work. The main purpose of this paper is to explore effective ways of handling zero-day attacks so as to secure organizational assets. It examines the vulnerabilities that exist in an organization and the ways attackers can exploit them, so that the issue can be handled appropriately.


A zero-day attack is a cyber attack that exploits vulnerabilities not yet discovered or disclosed publicly. Almost no defense exists against zero-day attacks: as long as the vulnerability remains unknown, the affected software cannot be patched and antivirus software cannot detect the attack by any known means. Cyber criminals take advantage of unpatched software such as Microsoft Office and Adobe Flash and use it as a free pass to attack their targets. That is why the market value of new vulnerabilities remains high, currently ranging between $5,000 and $250,000. A notable example is the Aurora attack of 2010, whose aim was to steal information from many companies. Incident handlers have little information about zero-day attacks at their disposal because data is unavailable until after the attacks are discovered.

Post-mortem analysis of vulnerabilities shows that zero-day attacks are used primarily for targeted attacks. The zero-day threat presents a new front against which incident handlers must fight. The challenge it poses to information security teams is the large gap between detection and identification capability. Even vendors lack prior knowledge of the threat, so signature-based systems such as intrusion detection systems and antivirus software cannot identify it. While incident response teams trust their signature-based systems, it is almost impossible to identify a zero-day attack with them. That is why it is prudent to establish a solid, phased response plan that can efficiently detect and discover a zero-day so that mitigation can take place as quickly as possible. A zero-day vulnerability is any hacker's dream because it guarantees instant fame. Certain governments also use them to sabotage foreign systems or enterprises. Protection against this type of attack is indispensable, and for that reason many organizations maintain in-house teams of hackers who compete against cyber criminals to detect and locate zero-day vulnerabilities ahead of their exploitation.

Life Cycle of Zero-day Vulnerabilities

Fig 1: A life cycle of zero day vulnerability. Source: Bilge and Dumitras (2012). Retrieved from

The Common Vulnerabilities and Exposures (CVE) consortium maintains a database containing detailed information on vulnerabilities. That database is paramount for governments, firms, academia, and the cyber security industry, as it helps them prepare adequately to address zero-day attacks. The consortium defines a vulnerability as a software mistake that allows attackers to execute commands as other users and consequently gain access to data with access restrictions. Attackers can behave as the actual users of those accounts and, in turn, launch denial-of-service attacks. As defined earlier, a zero-day attack takes advantage of security breaches not yet disclosed to the public. A zero-day vulnerability, like any other vulnerability, starts as a programming bug that escapes testing. Attackers may discover the security breach, exploit it, and package the exploit with a malicious payload to carry out zero-day attacks against their chosen targets.

After the security community notes a vulnerability, it is described in a public advisory so that the vendor of the affected software can release a patch to counteract it. Security vendors, in turn, release updates to their antivirus signatures so that they can detect and perhaps prevent those specific attacks. After the release of the patch, the exploits may be reused, and additional exploits are often created based on the patch itself. Their purpose is to exploit Internet hosts where the patches and the new antivirus updates have not been applied. That battle between remediation measures and attacks can go on for a long time, until security experts completely seal the vulnerabilities. The following events mark this vulnerability cycle.

  • Introduction of the vulnerability: a bug is introduced into software that is later deployed and released onto many hosts across the globe. The bug can be either a memory mismanagement issue or a programming mistake.
  • Release of the exploit in the wild: attackers discover the vulnerability and develop a working exploit that they can use to conduct stealth attacks on selected targets.
  • Discovery of the vulnerability by the vendor: the vendor learns about the vulnerability from a third party or through testing, assesses the threat's severity, plans a fix, and begins to work on a patch.
  • Public disclosure of the vulnerability: the vendor, a public forum, or a mailing list discloses the vulnerability to the general public, and a Common Vulnerabilities and Exposures identifier is assigned so that the vulnerability can be uniquely identified.
  • Release of antivirus signatures: after any of the parties named above discloses the vulnerability, antivirus vendors release new signatures and heuristic detections for ongoing attacks using the exploit. That makes it possible to discover the attacks on end hosts with updated antivirus signatures.
  • Patch release: the software vendor releases the patch on or soon after the disclosure date. Hosts that apply the patch are freed from exploitation of the security vulnerability.
  • Completion of patch deployment: the patch only removes the vulnerability's impact once all susceptible hosts worldwide have patched the software.

Characteristics of a Zero-Day Vulnerability

There are three characteristics of zero-day vulnerabilities; without any one of them, a vulnerability does not qualify as zero-day. The first is that the vulnerability must be unfixed, or that no public information exists regarding how to fix it. History shows that some zero-days were known to vendors even though testing or release schedules prevented them from shipping fixed software before the vulnerability became infamous as a zero-day. Nevertheless, once official or public information about fixed software is available, the vulnerability is no longer a zero-day. As a matter of practicality, once a vulnerability is labeled zero-day it bears that name until it is no longer relevant.

Another characteristic of a zero-day vulnerability is external knowledge of the vulnerability: it is not a zero-day if the only people who know about it are those at the software's source. The existence of a working exploit code is the third characteristic. The most common example of near-working exploit code is proof-of-concept code that causes a crash without code execution on the vulnerable device. Any exploit that executes code is certainly a working exploit, as are exploits that crash a client's physical security system or crash remote routers in a way that allows physical theft.

Detecting the Zero-day Exploits

As technology usage proliferates and business IT environments become increasingly complex, exploits are becoming more ominous than in the past. Most companies are well prepared to address known threats through specific security tools such as IDS devices, anti-malware and antivirus products, and vulnerability assessment tools. With zero-day exploits, however, security personnel have no knowledge of the source of the exploits because they manifest in ways that are undetectable by traditional means. Many organizations lack the equipment to detect and respond to these initial threats. Because exploits can emanate from anywhere, prevention and mitigation need a true, global window into both security-specific events and operations. Detection should take place by automatically recognizing aberrant behavior and alerting the administrators forthwith.

Zero-day attacks resemble polymorphic worms, Trojans, viruses, and other malware. Kaur and Singh (2014) found that the most prevalent attacks are polymorphic worms that display distinct behaviors. Those behaviors may include complex mutations to avoid detection, targeted exploitation that directs attacks against vulnerable hosts, multi-vulnerability scanning to identify potential targets, and remote shells capable of opening arbitrary ports on compromised hosts. According to the Symantec security report, more zero-day vulnerability attacks were detected in 2013 than in previous years. The research community classifies detection mechanisms into four categories: statistical, signature-based, behavioral, and hybrid zero-day exploit detection techniques (Symantec Corporation, 2014). The primary goal of these techniques is to detect zero-day exploits in real time and then quarantine the specific attack to minimize or eliminate the damage it causes. The challenge with these methods is that it is hard to avoid exceeding the victim's tolerance for analysis and quarantine delay; if they exceed the threshold, the victim machine becomes destabilized (Ting et al., 2009).

The Statistical Approach to Zero-day Exploit Detection

The statistical-based method detects exploits in real time or near real time based on attack profiles built from historical data. When data patterns change over time, this technique becomes sterile because of its reliance on historical data; a new profile would be required for any new zero-day exploit pattern.
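As a minimal illustration of the statistical idea, the sketch below flags a metric whose current value strays too far from a profile built on historical observations. The metric (requests per minute), the function name, and the 3-sigma threshold are illustrative assumptions, not part of any specific product described in the paper.

```python
from statistics import mean, stdev

def zscore_alert(history, current, threshold=3.0):
    """Flag a metric whose current value deviates too far from its history."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return False  # no variation in the profile; cannot score deviation
    return abs(current - mu) / sigma > threshold

# Hypothetical requests-per-minute profile from the last hour vs. right now.
baseline = [120, 118, 125, 130, 122, 119, 124, 121]
print(zscore_alert(baseline, 123))   # within the learned profile: no alert
print(zscore_alert(baseline, 900))   # sudden spike worth investigating
```

This also illustrates the limitation the paragraph names: if the traffic pattern legitimately shifts, the stale `baseline` keeps raising alerts until a new profile is learned.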

Signature Detection Technique

The primary focus of this type of zero-day exploit detection is polymorphic worm detection. The technique depends on exploit signatures that are publicly available. It helps defend against variations of exploits derived from the original signature, or against exploits that depend on the process attackers use to hide the original exploit signature. Kaur and Singh break this technique down further into content-based, vulnerability-based, and semantic-based detection.

Behavior-based Detection model

The behavior-based technique has its basis in the analysis of the exploit's interaction with the target. The method is very effective at capturing the interactions between the exploit and the target; it also supports the learning of normal interactions and the prediction of future activity. It groups interactions into behavior groups, and any interaction that deviates from normal behavior is quarantined. The technique has the potential to detect and analyze zero-day exploits on a near real-time basis (Alosefer & Rana, 2011).
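A toy sketch of the learn-then-deviate idea: a profile is trained on normal interactions, and anything outside the learned set is treated as a deviation to quarantine. The class name, the event strings, and the "never seen before" rule are simplifying assumptions; real behavior-based systems score deviation far more subtly.

```python
from collections import Counter

class BehaviorProfile:
    """Learn the normal interaction mix, then flag deviations for quarantine."""
    def __init__(self):
        self.normal = Counter()

    def learn(self, events):
        self.normal.update(events)

    def is_anomalous(self, event):
        # Anything never observed during the learning phase deviates from normal.
        return event not in self.normal

profile = BehaviorProfile()
profile.learn(["GET /index", "GET /login", "POST /login", "GET /index"])
print(profile.is_anomalous("GET /index"))          # learned interaction: normal
print(profile.is_anomalous("POST /admin;rm -rf"))  # never seen: quarantine
```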

The Hybrid Detection Technique

The hybrid model combines the previous techniques using a heuristic approach. Many researchers claim that this method is the strongest of the four, particularly in detecting polymorphic worms and other obfuscations (Ting et al., 2009). The method's effectiveness depends on which of the previous techniques are combined.

Analyzing the Detection Techniques

Any company connected to the Internet, independent of its size, is susceptible to zero-day exploits. The goals of those exploits include monitoring the target's operations, stealing secrets, and disrupting the target's production. Buyers of such exploits, including governments and organized crime, purchase them for the purpose of attacking. Zero-day exploits are in greater demand in the market nowadays than in previous years, and the business of selling them is becoming lucrative (Bilge & Dumitras, 2012).

For malicious zero-day exploits to remain valuable, they should remain undetected by enterprises' detection and defense strategies until after the attackers achieve their goals; the longer an exploit goes undetected, the more lucrative it is. According to research by Bilge and Dumitras (2012), an average exploit goes undetected for 312 days, allowing attackers to accomplish their purpose on their target victims. Often the software vendor either ignores the vulnerability or has no knowledge of the zero-day vulnerability in its software. An attacker may also alter the original code of the application, exposing the software to attack. Both the vendor and the attacker may use code obfuscation to achieve vulnerability cloaking.

Code obfuscation is the process of making code unintelligible or hard to understand. It applies transformations to the code that change its physical appearance while preserving its black-box specifications (Balakrishnan & Schulze, 2005). When the attacker's code is well obfuscated, a company's defense-in-depth strategies will be unable to detect the exploit. Code obfuscation is what programmers use to conceal intellectual property and hinder reverse engineering, and it is the same technique attackers use to conceal malicious code from organizations' detection systems (Balakrishnan & Schulze, 2005). Irrespective of code obfuscation, given enough time and resources an application can undergo reverse engineering (Collberg et al., 1997). All zero-day exploits have a finite life span, and the closer that span tends toward zero, the less time the exploit has to cause damage across enterprises. When the zero-day exploit becomes public and patches are available, the exploit is preventable because the patches correct the vulnerabilities.
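The "changed appearance, preserved behavior" property can be illustrated with the simplest possible transformation: XOR-masking a string so a byte-pattern scanner no longer sees it, while the running program can trivially recover it. This is a toy illustration of the principle, not a real obfuscator; the key and the string are invented.

```python
def obfuscate(data: bytes, key: int = 0x5A) -> bytes:
    """XOR each byte with a key; applying the same transform twice restores the data."""
    return bytes(b ^ key for b in data)

plain = b"connect-to-c2.example"
hidden = obfuscate(plain)      # what a static scanner sees stored in the binary
restored = obfuscate(hidden)   # what the code recovers at execution time

print(hidden != plain)         # physical appearance changed
print(restored == plain)       # black-box behavior preserved
```

A signature written against `plain` misses `hidden` entirely, which is exactly why well-obfuscated payloads defeat byte-pattern defenses.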

Intrusion detection/prevention systems make use of the four previously mentioned techniques to defend company assets from zero-day exploits. Their signatures should meet two conditions: first, their detection rate should be high, meaning they should not miss commonly available attacks; second, they should generate few false alarms (Yegneswaran et al., 2005).

The Behavior-based Defense Method

Behavior-based techniques search for the specific behaviors of worms without needing to examine payload byte patterns (Kaur & Singh, 2014). This kind of technique is very helpful because it predicts the future behavior patterns of the web server or victim machine so as to prevent unexpected behaviors. The defense technique learns those behavior patterns by examining past and current interactions with the victim machine or web server (Alosefer & Rana, 2011). The technique relies on the ability to predict the traffic flow of the network.

One method that can be useful for detecting malicious activity is the Hidden Markov Model, which can be used together with an organization's honeypot system. In a Markov model, the end user cannot observe the state, although they can see the outcome. The algorithm developed by Alosefer and Rana can detect a system's behavior based on present and past interactions. Honeypot systems keep records of those interactions and consist of state machines. The future states of those state machines are analyzed, after which any variations from expectations are examined using the Hidden Markov Model (HMM) and the Baum-Welch algorithm.
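To make the HMM idea concrete, the sketch below scores an observation sequence with the forward algorithm: sequences that are unlikely under a model trained on normal interactions get a low likelihood and can be flagged. The two-state model, its probabilities, and the observation encoding are invented for illustration (a real deployment would fit the parameters with Baum-Welch, which is omitted here).

```python
def forward_likelihood(obs, start, trans, emit):
    """Probability of an observation sequence under a discrete HMM (forward algorithm)."""
    n = len(start)
    alpha = [start[s] * emit[s][obs[0]] for s in range(n)]
    for o in obs[1:]:
        alpha = [
            sum(alpha[p] * trans[p][s] for p in range(n)) * emit[s][o]
            for s in range(n)
        ]
    return sum(alpha)

# Hypothetical model: hidden states benign=0 / hostile=1, observables normal=0 / odd=1.
start = [0.9, 0.1]
trans = [[0.95, 0.05], [0.10, 0.90]]
emit  = [[0.90, 0.10], [0.20, 0.80]]

print(forward_likelihood([0, 0, 0], start, trans, emit))  # typical interaction: high likelihood
print(forward_likelihood([1, 1, 1], start, trans, emit))  # odd sequence: low likelihood, flag it
```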

The Statistical-based Defense Mechanisms

The statistical-based techniques for detecting zero-day exploits rely on attack profiles from previous exploits that are now public. The technique can adjust the profile of these known past exploits so that it can detect new attacks. Even though this technique is effective in detecting exploits, the quality of detection is limited by the threshold set by the vendor or the security personnel (Kaur & Singh, 2014). The technique isolates normal activities from abnormal ones, blocking or flagging the activities that fall outside the normal. The longer the detection system remains online, the more accurate it becomes in determining what is or is not normal. The available techniques in this approach conduct statistical analysis on packet payloads so that they can detect invariant characteristics of a polymorphic worm's decryption method. That makes the technique vital for detecting the exploit ahead of the execution of the actual code (Kong et al., 2011). The statistical-based technique can produce false positives or negatives depending on the thresholds chosen.

The semantic-aware statistical (SAS) algorithm is an example of a mechanism that uses the statistical-based technique. It couples semantic analysis with statistical analysis in generating the signature. The first phase of this algorithm is signature extraction, followed by the phase of matching that signature. The first phase is divided into modules for payload extraction and disassembly, instruction distilling, clustering, and signature generation. The modules in the second phase are payload extraction followed by matching. Payload extraction isolates the malicious intent from a flow, and the signature matching module then detects the worm packets. Signature matching takes place by matching state-transition-graph signatures with the input packets (Kong et al., 2011).

The technique does some things well; for instance, it can filter noise often injected into packets. Filtering the noise helps the algorithm generate a cleaner signature rather than a learned one that has noise. The statistical signature is more complex than the previous ones, which makes it difficult for hackers to craft packets that interrupt the signature generation process. The technique is very intelligent, and since it has its basis in semantic patterns, it can easily identify packets even if the hackers modify them. Another characteristic of this technique is its low overhead, which makes it easy to detect exploits in real time (Kong et al., 2011).

However, the SAS algorithm has weaknesses, including the inability to handle some state-of-the-art obfuscations such as branch function obfuscation. A branch function does not return to the instruction after the call instruction; instead it branches to a different location in the program that depends on the source of the call (Linn & Debray, 2003). Another weakness is that the use of complex encryption makes it possible for attackers to evade detection. The algorithm therefore has to be used carefully, or in combination with the other zero-day exploit defense techniques.

The Signature-based Defense Method

The signature-based defense technique is used by various antivirus software vendors as they compile signatures of malware. The vendors cross-reference those signatures against network files, web downloads, email downloads, and local files, depending on the settings the user applies. The signatures in the library are continually updated to represent newly exploited vulnerabilities. As mentioned before, the signature-based technique is a step behind zero-day exploits because a signature must exist in the library before an exploit can be detected. For that reason, software vendors continually update their virus definitions to include the signatures of newly identified exploited vulnerabilities.
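At its core, signature matching is a lookup of known-bad byte patterns in scanned content, which is why anything absent from the library slips through. The sketch below shows that core step; the two signatures and their names are invented stand-ins, far simpler than real antivirus definitions.

```python
# Known-bad byte patterns, as a signature library might hold them (invented examples).
SIGNATURES = {
    "nop-sled":    b"\x90" * 8,
    "evil-marker": b"EVIL_PAYLOAD",
}

def scan(blob: bytes):
    """Return the names of every known signature found in the blob."""
    return [name for name, pattern in SIGNATURES.items() if pattern in blob]

print(scan(b"GET /index.html"))                # clean traffic: nothing matches
print(scan(b"\x90" * 16 + b"shellcode here"))  # the nop-sled signature fires
```

A true zero-day payload matches no entry in `SIGNATURES` by definition, which restates the paragraph's point about this technique being a step behind.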

The signature-based technique is also divided into content-based, vulnerability-based, and semantic-based variants. These signatures are somewhat effective in the detection of and defense against polymorphic worms. Polymorphic worms are often difficult to detect because their payloads change from time to time, posing a challenge to security professionals and to intrusion detection and prevention systems (Mohammed et al., 2013). When polymorphic worms change like that, they can attack with signatures different from their previous attack signatures, making them hard to detect and allowing them to cause more damage over a long period. Antivirus software packages utilize signature-based techniques, which are very useful in defending organizational assets against malware and worms.

I mentioned three types of signature-based techniques, the first being content-based signatures. These signatures compare packet contents with publicly known signatures. Content-based signatures can be further classified into content and image attributes that act as inputs to the signature algorithm (Dittmann et al., 1999). Content-based signature techniques capture characteristics specific to a given worm implementation, meaning they are not generic enough, and other exploits can easily evade them. Attackers can also defeat content-based signatures by misleading the signature generation process, injecting crafted packets into normal traffic. Any change in the structure of a malicious packet results in a false negative.

A good example of the content-based signature technique is Polygraph, which produces signatures to detect and match polymorphic worms. The creators of Polygraph, Newsome and colleagues (2005), argue that it is possible to automatically create signatures that match the variants of polymorphic worms with low false positives and negatives. A real-world exploit contains multiple invariant substrings across all variants of the payload, and those substrings correspond to return addresses and protocol framing (Newsome et al., 2005). Polygraph generates signatures based on these invariant substrings.
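The invariant-substring idea can be sketched directly: take substrings of one payload variant and keep only those present in every other variant. The three payload variants below are fabricated, and this brute-force extraction is a simplification of what Polygraph actually does.

```python
def common_substrings(payloads, min_len=4):
    """Substrings of min_len present in every payload variant (candidate invariants)."""
    first = payloads[0]
    candidates = {first[i:i + min_len] for i in range(len(first) - min_len + 1)}
    return {c for c in candidates if all(c in p for p in payloads)}

# Fabricated variants of one worm: random padding, shared return address + URL framing.
variants = [
    b"AAAA\x7f\xffRETN/vuln?x=1",
    b"BBBBBB\x7f\xffRETN/vuln?x=99",
    b"CC\x7f\xffRETN/vuln?x=xyz",
]
invariants = common_substrings(variants)
print(sorted(invariants))  # only fragments of the shared region survive
```

The padding (`AAAA`, `BBBBBB`, ...) drops out, while fragments of the shared return-address and protocol-framing region remain as signature material.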

The other variant of the signature-based technique is the semantic-based signature. Semantics is the study of meaning, useful in uncovering the meaning of a whole expression. Linguists who study semantics look for rules governing the relationship between the form, or arrangement of words in a sentence, and their meaning. In our case, semantic-based signatures are relatively expensive to generate because of the number of computations involved. It is also impossible to implement them in existing intrusion detection systems such as Snort (Kaur & Singh, 2014).

A technique that makes use of semantic-based signatures is Nemean, a system that automatically generates intrusion signatures from honeynet packet traces. A honeynet is a group of honeypots implemented as part of a network IDS. A honeypot has the ability to simulate a production environment, and security teams use it to monitor an attacker's logged activities. Nemean creates signatures that lead to lower false alarm rates because it balances generality and specificity (Yegneswaran et al., 2005). The authors claim that those capabilities are indispensable for systems concerned with the automatic generation of signatures.

The other type of signature under the signature-based technique is vulnerability-based. These signatures identify the vulnerability condition as well as the vulnerability point reachability predicate, a condition that signifies whether an input message causes program execution to reach the vulnerability point. These signatures are based on publicly known vulnerabilities rather than on the actual exploit (Caballero et al., 2009). They produce very few false positives because they depend heavily on known vulnerabilities. Their limitation is the restricted library of known signatures, and their major challenge is achieving accuracy and automatic signature generation in real time so as to yield low false negatives and positives.

Hybrid-based Defense Methods

These techniques consist of heuristics that combine the previously mentioned defense techniques: statistical-based, behavior-based, and signature-based. The essence of the hybrid model is to overcome the limitations of any single technique (Kaur & Singh, 2014). These authors used a hybrid technique known as a suspicious traffic filter to detect zero-day polymorphic worms. The hybrid technique has four advantages, as follows.

  • It proposes the most appropriate technique, the one likely to offer better sensitivity, by identifying a zero-day attack based on data gathered from high-interaction honeypots.
  • It strengthens the existing techniques by combining their strengths and minimizing their weaknesses.
  • It uses honeypots to detect anomalies and does not rely on prior knowledge of zero-day attacks.
  • It detects zero-day attacks early enough to thwart the attack before it escalates and causes serious damage to assets.

How to Prepare for Zero Day Incidents (internal response)

Preparing to respond to zero-day attacks follows the steps of identifying, correlating, analyzing, and mitigating the incident. Preparation is imperative for any organization that wants to contain zero-day incidents effectively. One thing incident response teams should do is have an incident response toolkit at their disposal. That toolkit should be a bootable, read-only disc consisting of known trusted binaries. Some configuration is also required to ensure an effective identification process, which should take place via system logging, host monitoring, and network monitoring. The incident response team and the incident handler have the mandate to continually update whatever can augment their ability to identify zero-day exploits.

1. Internal log monitoring

Log monitoring mechanisms are very important in securing a network from any form of attack or intrusion. Organizations need intelligent security monitoring systems capable of correlating information from those logs. Enterprises can leverage open source security information management tools; they should then appropriately configure all devices that can send logs to the remote system. That gives the response team a single view of the entire system at any given point in time. The security team should configure all systems to use a centralized time source, such as the Network Time Protocol, so that there is consistency across all disparate systems, because synchronized time is imperative to incident response.
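The value of a synchronized, centralized log store can be sketched as a simple correlation: events from different devices that land in the same short time window are grouped for the responder. The hostnames, timestamps, and five-second window below are invented; real SIEM correlation rules are much richer.

```python
from datetime import datetime, timedelta

# Hypothetical events forwarded from several devices to one central log store.
events = [
    ("fw01",  "2018-12-03T10:15:02", "blocked outbound 203.0.113.9:4444"),
    ("web01", "2018-12-03T10:15:03", "unexpected child process spawned"),
    ("db01",  "2018-12-03T11:42:00", "routine backup finished"),
]

def correlate(events, window_seconds=5):
    """Group events from different hosts that land within one time window."""
    parsed = sorted((datetime.fromisoformat(t), host, msg) for host, t, msg in events)
    groups, current = [], [parsed[0]]
    for e in parsed[1:]:
        if e[0] - current[-1][0] <= timedelta(seconds=window_seconds):
            current.append(e)
        else:
            groups.append(current)
            current = [e]
    groups.append(current)
    return groups

for g in correlate(events):
    print([host for _, host, _ in g])
```

The firewall block and the web server's odd process line up in one group, which is exactly the cross-device view the paragraph argues for; without synchronized clocks the window comparison is meaningless.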

2. Monitor Suspicious Network Activity

Identifying a zero-day depends on system as well as network visibility. A malicious process penetrates the corporate network to reach its intended target; therefore, logging the network activity can offer crucial information. The network activities the security team should watch for include malware propagation, target proliferation, and command and control. Several systems and tools can help monitor suspicious network activity. Ourmon is one such system; it utilizes flow-based data gathering and analysis to detect anomalies, functioning as a sniffer and collecting traffic flows between client and server. Another tool is NetFlow, which utilizes statistical data about client/server IP flows to identify anomalies. BotHunter is an application that monitors communication between intranet hosts and the Web to identify compromised machines. Other usable systems and tools include darknets as well as the honeynets explained earlier.
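One flow-based check those tools perform can be sketched in a few lines: a source that touches many distinct destination ports looks like the multi-vulnerability scanning mentioned earlier, while a host repeating the same connection is ordinary traffic. The IP addresses, flow tuples, and the threshold of 10 distinct targets are illustrative assumptions.

```python
from collections import defaultdict

def find_scanners(flows, target_threshold=10):
    """Flag sources touching many distinct (host, port) targets: a scan-like pattern."""
    targets_by_src = defaultdict(set)
    for src, dst, dport in flows:
        targets_by_src[src].add((dst, dport))
    return [s for s, targets in targets_by_src.items() if len(targets) > target_threshold]

flows = [("10.0.0.5", "10.0.0.9", p) for p in range(1, 25)]  # one host probing 24 ports
flows += [("10.0.0.7", "10.0.0.9", 443)] * 50                # normal repeated HTTPS traffic
print(find_scanners(flows))                                  # only the prober is flagged
```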

3. Monitor Host Activities

Apart from monitoring the network, monitoring individual systems is crucial in identifying a zero-day. Monitoring hosts in the internal network is very helpful in detecting and identifying zero-day exploits because without it, attacks can go unnoticed. Several products and technologies for identifying anomalous activity exist, including host intrusion detection/prevention, system logging, and file-level monitoring. Tripwire is a technology designed to provide numerous system monitoring features such as rules, policies, and customization. Those rules and policies allow the baselining of a system against known good states and raise alarms in case of any violations. AIDE (Advanced Intrusion Detection Environment) is another system for file integrity monitoring; it works like Tripwire. OSSEC performs log analysis, rootkit detection, file integrity checking, real-time alerting, and active response.
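The baselining idea behind tools like Tripwire and AIDE can be sketched as hashing monitored files into a known-good state and later reporting any file whose hash has drifted. This is a minimal sketch of the principle, not how those tools are implemented; the throwaway file stands in for a monitored binary or config.

```python
import hashlib
import os
import tempfile

def sha256_of(path):
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def build_baseline(paths):
    """Record a known-good hash for every monitored file (Tripwire/AIDE style)."""
    return {p: sha256_of(p) for p in paths}

def check(baseline):
    """Return the files whose contents no longer match the baseline."""
    return [p for p, digest in baseline.items() if sha256_of(p) != digest]

with tempfile.TemporaryDirectory() as d:
    target = os.path.join(d, "config")
    with open(target, "w") as f:
        f.write("trusted contents\n")
    baseline = build_baseline([target])
    before = check(baseline)      # no drift yet: empty list
    with open(target, "w") as f:
        f.write("tampered!\n")    # simulate an attacker's change
    after = check(baseline)       # the modified file is flagged
    print(before, after)
```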

4. Malware Collection and Analysis

An effective response to any malware requires a method of collecting it. It is the obligation of the incident response team to ensure they have the ability to capture malware and analyze it. Honeypots are useful mechanisms for securing servers from malicious users and software, since servers are frequent targets for hackers. Honeypots track hackers, collect malware, and identify new types of attacks. Computers whose sole purpose is to act as honeypots are high-interaction honeypots; they have an installed operating system and the applications required to fulfill their role. Low-interaction honeypot systems, on the other hand, are the ones used for malware analysis. Dionaea is another solution for malware detection and analysis. It is a system designed to trap malware exploiting vulnerabilities exposed by services offered over a network, with the ultimate goal of gaining a copy of the malware. Dionaea uses a method that enables it to identify shellcode in an exploit, after which it lets the exploit run in a chrooted environment, thus revealing its true actions.

5. Application Whitelisting

Application whitelisting is another zero-day mitigation strategy that has been gaining popularity in recent years. It allows organizations to permit only known-safe applications to run and blocks all others. Whitelisting thwarts remote code execution, although it requires up-front work to ensure business continuity. Its main advantage is that it only allows the running of applications that do not pose a threat to corporate systems. The enforcement system maintains a list of trusted applications; if an application appears on that list, it is considered safe and should not affect the system adversely.


Conclusion

Zero-day threats are a challenge that incident response teams in organizations face. Zero-day threats are those not yet known to the public, and because they continue to exist and challenge security professionals, it is vital to have a solid plan for handling them. The response must include a means of responding to the threats and preventing them where possible. Most defense mechanisms and techniques are available off-the-shelf, so organizations can obtain them easily, although zero-day vulnerabilities still exist in almost all known systems. If an organization is to mount a solid defense against zero-day exploits, it should understand the techniques on which its defense strategies depend.

IT staff are obligated to carry out penetration testing on their systems from time to time so as to identify and close any security loopholes. Organizations that lack the in-house expertise to conduct penetration testing hire external experts to do it on their behalf.

This paper identified the life cycle that a zero-day attack passes through and explained how to respond to a threat once it is made public; in most cases, the software vendor releases a patch to counteract the exploit and close the vulnerability. It also discussed ways of detecting zero-day attacks and defending organizational assets against them, analyzing each detection technique in detail. It further showed that zero-day vulnerabilities sometimes arise from intentional mistakes by software vendors, while at other times the vendors are unaware of them. The bottom line is that organizations must have strong security teams that are vigilant about the security of their systems; that will help them identify and address any zero-day exploits as quickly as possible.


