Monday, May 11, 2009

Midterm Question #3

The Internet, if properly leveraged, can be used as a medium to the company's advantage. However, it also brings risks and threats. Thus, research the following:

1. Identify the possible risks and threats (e.g., viruses) that can potentially attack a company with an Internet connection.

2. Case research and analysis:
2.a Identify one company that has experienced an attack from the Internet.
2.b Describe the attack.
2.c Identify the damages done and the solutions adopted to reverse the damages and to protect the company from future threats.

Answer:

Although you’ve gathered a considerable amount of data to this point, you will need to analyze this information to determine the probability of a risk occurring, what is affected, and the costs involved with each risk. Assets will have different risks associated with them, and you will need to correlate different risks with each of the assets inventoried in a company. Some risks will impact all of the assets of a company, such as the risk of a massive fire destroying a building and everything in it, while in other cases, groups of assets will be affected by specific risks.

Assets of a company will generally have multiple risks associated with them. Equipment failure, theft, or misuse can affect hardware, while viruses, upgrade problems, or bugs in the code may affect software. By looking at the weight of importance associated with each asset, you should then prioritize which assets will be analyzed first, and then determine what risks are associated with each.

Once you’ve determined what assets may be affected by different risks, you then need to determine the probability of a risk occurring. While there may be numerous threats that could affect a company, not all of them are probable. For example, a tornado is highly probable for a business located in Oklahoma City, but not highly probable in New York City. For this reason, a realistic assessment of the risks must be performed.

Historical data can provide information on how likely it is that a risk will become reality within a specific period of time. Research must be performed to determine the likelihood of risks within a locality or with certain resources. By determining the likelihood of a risk occurring within a year, you can determine what is known as the Annualized Rate of Occurrence (ARO).
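The ARO calculation itself is straightforward. The sketch below is a minimal illustration of the idea; the function name and the sample figures (3 break-ins over 10 years of local crime data) are hypothetical, not taken from any real dataset:

```python
def annualized_rate_of_occurrence(incidents: int, years: int) -> float:
    """Estimate the ARO: how many times per year a risk is expected to occur,
    based on historical records."""
    return incidents / years

# Hypothetical example: police records show 3 break-ins in the area
# over the past 10 years.
aro = annualized_rate_of_occurrence(incidents=3, years=10)
print(aro)  # 0.3 -- i.e., a 30 percent chance in any given year
```

The quality of the estimate depends entirely on how representative the historical data is for your locality and your resources.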

Information for risk assessment can be acquired through a variety of sources. Police departments may be able to provide crime statistics on the area your facilities are located, allowing you to determine the probability of vandalism, break-ins, or dangers potentially encountered by personnel. Insurance companies will also provide information on risks faced by other companies, and the amounts paid out when these risks became reality. Other sources may include news agencies, computer incident monitoring organizations, and online resources.

Once the ARO has been calculated for a risk, you can then compare it to the monetary loss associated with an asset. This is the dollar value that represents how much money would be lost if the risk occurred. You can calculate this by looking at the cost of fixing or replacing the asset. For example, if a router failed on a network, you would need to purchase a new router, and pay to have the new one installed. In addition to this, the company would also have to pay for employees who aren’t able to perform their jobs because they can’t access the network. This means that the monetary loss would include the price of new equipment, the hourly wage of the person replacing the equipment, and the cost of employees unable to perform their work. When the dollar value of the loss is calculated, this provides total cost of the risk, or the Single Loss Expectancy (SLE).
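The router example above can be sketched as a simple sum of the loss components. All dollar figures below are hypothetical placeholders, chosen only to show the structure of an SLE calculation:

```python
def single_loss_expectancy(equipment_cost, labor_cost, productivity_loss):
    """SLE: total dollar loss from a single occurrence of the risk."""
    return equipment_cost + labor_cost + productivity_loss

# Hypothetical router-failure figures (all amounts in dollars)
sle = single_loss_expectancy(
    equipment_cost=1500,     # price of the replacement router
    labor_cost=200,          # technician's time to install it
    productivity_loss=3000,  # employees idle while the network is down
)
print(sle)  # 4700
```

In practice the productivity term is often the hardest to pin down, since it requires estimating how many employees are affected and for how long.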

To plan for the probable risk, you would need to budget for the possibility that the risk will happen. To do this, you need to use the ARO and the SLE to find the Annual Loss Expectancy (ALE). To illustrate how this works, let’s say that the probability of a Web server failing is 30 percent. This would be the ARO of the risk. If the e-commerce site hosted on this server generates $10,000 an hour and the site would be estimated to be down two hours while the system is repaired, then the cost of this risk is $20,000. In addition to this, there would also be the cost of replacing the server itself. If the server cost $6,000, this would increase the cost to $26,000. This would be the SLE of the risk. By multiplying the ARO and the SLE, you would find how much money would need to be budgeted to deal with this risk. This formula provides the ALE:

ARO x SLE = ALE

When looking at the example of the failed server hosting an e-commerce site, this means the ALE would be:

0.3 x $26,000 = $7,800

To deal with the risk, you need to assess how much needs to be budgeted to deal with the probability of the event occurring. The ALE provides this information, leaving you in a better position to recover from the incident when it occurs.
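The whole worked example above fits in a few lines. The figures come directly from the text; only the variable names are added for clarity:

```python
# Figures from the failed Web server example in the text
aro = 0.3                    # 30 percent chance of failure per year
downtime_cost = 10_000 * 2   # $10,000/hour of lost sales, down two hours
replacement_cost = 6_000     # cost of a new server
sle = downtime_cost + replacement_cost   # Single Loss Expectancy: $26,000

ale = aro * sle              # ARO x SLE = ALE
print(ale)  # 7800.0 -- the amount to budget per year for this risk
```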

Once you’ve identified the risks that can pose a probable threat to your company, and determined how much loss can be expected from an incident, you are then prepared to make decisions on how to protect your company. After performing a risk assessment, you may find a considerable number of probable threats that can affect your company. These may include intrusions, vandalism, theft, or other incidents and situations that may vary from business to business. This may make any further actions dealing with risk management seem impossible.

The first thing to realize is that there is no way to eliminate every threat that may affect your business. There is no such thing as absolute security. Making a facility absolutely secure would be prohibitively expensive, and it would be so locked down that no one would be able to enter and do any work. The goal is to manage risks, so that the problems resulting from them are minimized.

The other important issue to remember is that some threats are too costly to prevent. For example, there are a number of threats that can impact a server. Viruses, hackers, fire, vibrations, and other risks are only a few. To protect the server, it is possible to install security software (such as anti-virus software and firewalls) and make the room fireproof, earthquake proof, and secure against any number of threats. The cost of doing so, however, will eventually exceed the value of the asset. It would be wiser to back up the data, install a firewall and anti-virus software, and accept the remaining risks. The rule of thumb is to decide which risks are acceptable.
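One common way to make this "acceptable risk" decision concrete is to compare a safeguard's annual cost against the reduction in ALE it buys. The sketch below assumes this standard cost-benefit framing; the dollar figures are hypothetical:

```python
def safeguard_value(ale_before, ale_after, annual_safeguard_cost):
    """Positive result: the safeguard pays for itself.
    Negative result: it costs more than the risk it removes."""
    return ale_before - ale_after - annual_safeguard_cost

# Hypothetical: a firewall cuts the server's ALE from $7,800 to $1,800
# and costs $2,000 a year to license and maintain.
value = safeguard_value(ale_before=7800, ale_after=1800,
                        annual_safeguard_cost=2000)
print(value)  # 4000 -- cost-effective

# The same safeguard applied to a low-value asset may not pay off.
print(safeguard_value(ale_before=1500, ale_after=500,
                      annual_safeguard_cost=2000))  # -1000 -- accept the risk
```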

After calculating the loss that may be experienced from a threat, you will need to find cost-effective measures of protecting yourself. To do this, you will need to identify which threats will be dealt with and how. Decisions will need to be made by management as to how to proceed, based on the data you’ve collected on risks. In most cases, this will involve devising methods of protecting the asset from threats. This may involve installing security software, implementing policies and procedures, or adding additional security measures to protect the asset.

You may decide that the risks involved with an asset are too high, and the costs to protect it are too high, as well. In such cases, the asset should be moved to another location, or eliminated completely. For example, if there is a concern about a Web server affected by vibrations from earthquakes in California, then moving the Web server to the branch office in New York nullifies the threat. By removing the asset, you subsequently eliminate the threat of it being damaged or destroyed.

Another option is to transfer the potential loss associated with a threat to another party. Insurance policies can be taken out insuring the asset, so that if any loss occurs the company can be reimbursed through the policy. Leasing equipment or services through another company can also transfer the risk. If a problem occurs, the leasing company will be responsible for fixing or replacing the assets involved.

Finally, the other option is to do nothing about the potential threat, and live with the consequences (if they occur). This happens more often than you’d expect, especially when you consider that security is a tradeoff. For every security measure put in place, it makes it more difficult to access resources and requires more steps for people to do their jobs. A company may have broadband Internet connectivity through a T1 line for employees working from computers inside the company, and live with the risk that they may download malicious programs. While this is only one possible situation where a company will live with a potential threat (and gamble that it stays “potential” only), it does show that in some situations, it is preferable to have the threat rather than to lose a particular service.


Suddenly your Web server becomes unavailable. When you investigate, you realize that a flood of packets is surging into your network. You have just become one of the hundreds of thousands of victims of a denial-of-service attack, a pervasive and growing threat to the Internet. What do you do?

Internet Denial of Service sheds light on a complex and fascinating form of computer attack that impacts the confidentiality, integrity, and availability of millions of computers worldwide. It tells the network administrator, corporate CTO, incident responder, and student how DDoS attacks are prepared and executed, how to think about DDoS, and how to arrange computer and network defenses. It also provides a suite of actions that can be taken before, during, and after an attack.

I. SECURITY SOLUTIONS.

1. Securing Windows Server.

Improved Default Security in Windows. Securing the Hatches. Know Who Is Connected Using Two-factor Authentication. Using Templates to Improve Usage and Management. Patrolling the Configuration. Securing the File System. Securing Web Services. Keeping Files Confidential with EFS. Bulletproof Scenario. Summary.
2. Implementing Secured Wireless Technologies.

Working Through Walls. Managing Spectrums to Avoid Denial of Service. Implementing Support for Secure 802.1x Technologies. Taking Advantage of Windows Server 2003 Security Features. Configuring the Wireless Client. Maximizing Wireless Security through Tunneling. Maintaining Knowledge of Your Wireless Networks. Summary.
3. Integrating Smartcard and Secured Access Technologies.

Maximizing Certificate Services Implementations. Securing Certificate Services. Getting the Most Out of Smartcards. Tips and Tricks for Securing Access to the Network. Creating a Single Sign-on Environment. Securing Access to Web Servers and Services. Protecting Certificate-based Services from Disaster. Integrating Smartcards with Personal Devices. Summary.

II. MANAGEMENT AND ADMINISTRATION SOLUTIONS.

4. Distributing Administration.

Choosing the Best Administrative Model for Your Organization. Using Role-based Administration for Optimal Delegation. Leveraging the Delegation of Control Wizard. Enhancing Administration with Functional Levels. Managing Domain and Enterprise Administration. Developing Group Policies that Affect Administration. Testing Level of Administrative Access. Auditing Administrative Activities. Summary.
5. Managing User Rights and Permissions.

Leveraging Domain Local, Global, and Universal Groups. Using NTFS and AD Integrated File Shares. Using Group Policy to Administer Rights and Permissions. Maximizing Security, Functionality, and Lowering Total Cost of Ownership (TCO) with User Profiles. Managing Rights and Permissions for Specific User Types. Summary.
6. Implementing Group Policies.

Leveraging Group Policies. Group Policy Deployment. Understanding GP Inheritance and Application Order. Understanding the Effects of Slow Links on Group Policy. Using Tools to Make Things Go Faster. Automating Software Installations. Enhancing Manageability with Group Policy Management Console. Using Resultant Set of Policies in GPMC. Maximizing Security with Group Policy. Increasing Fault Tolerance with Intellimirror. Leveraging Other Useful Tools for Managing Group Policies. Using Administrative Templates. Finding Additional Resources about Group Policy. Summary.
7. Managing Desktops.

Automating Backup of Desktop Data. Accelerating Deployments with Workstation Images. Creating Windows XP Images. Automating Software Installation. Slow Link Detection. Ensuring a Secured Managed Configuration. Managing Systems and Configurations. Leveraging Useful Tools for Managing Desktops. Summary.
8. Administering Windows Server 2003 Remotely.

Using Remote Desktop for Administration. Taking Advantage of Windows Server 2003 Administration Tools. Using Out-Of-Band Remote Administration Tools for Emergency Administration. Using and Configuring Remote Assistance. Securing and Monitoring Remote Administration. Delegating Remote Administration. Administering IIS in Windows Server 2003 Remotely. Summary.
9. Maintenance Practices and Procedures.

Maintenance Is Not As Interesting as Implementing New Technology. What to Do Every Day. What to Do Every Week. What to Do Every Month. Consolidating Servers as a Maintenance Task. Backup Tips and Tricks. Making Automated System Recovery Work for You. Leveraging Scripting for Maintenance Practices. Why Five-9s Might Be a Bad Idea. Automating Updates. Summary.

III. DESIGN AND IMPLEMENTATION SOLUTIONS.

10. Advanced Active Directory Design.

Implementations Small and Large. Configuring and Reconfiguring Domains and Organizational Units. Sites and the New Knowledge Consistency Checker. Using Cross-Forest Trusts Effectively. Interforest Synchronization. Active Directory Migration Tool Best Practices. Using Microsoft Metadirectory Services Effectively. Domain Controller Placement. Global Catalog Placement. Taking Advantage of Replication Improvements. Active Directory Functional Levels. Summary.
11. Implementing Microsoft Windows Server.

Best Practices for Successful Server Deployments. Licensing and Activating Windows Server. Automating Deployment with Remote Installation Service. Using Sysprep for Servers to Maximize Consistency. Customizing Setup Using Unattend and Setup Manager. Creating Custom Bootable CDs for Rapid Deployment. Optimizing Standard Server Configurations. Customizing Servers with Setup Wizards. Controlling the Back-end with the Windows Registry. Summary.
12. Implementing Microsoft Active Directory.

Taking Advantage of Functional Levels. Improving Domain Controller Installation. Getting the Most Out of Global Catalog Servers. Maximizing Flexible Single Master Operation (FSMO) Roles. Expanding the Enterprise by Interconnecting Forests and Domains. Enhancing Flexibility with Renaming Domains. Managing the Active Directory Schema. Improving Replication—with Application Partitions. Summary.
13. Establishing a Solid Infrastructure Foundation.

Focusing on the Windows Server 2003 Infrastructure Components. DNS in an Active Directory Environment. The Domain Name System (DNS) In Depth. Installing DNS Using the Configure Your Server Wizard. Configuring DNS to Point to Itself. Using Resource Records in a Windows 2003 Environment. Establishing and Implementing DNS Zones. Creating Zone Transfers in DNS. Understanding the Importance of DNS Queries. Other DNS Components. DNS Maintenance, Updates, and Scavenging. Troubleshooting DNS. The Dynamic Host Configuration Protocol (DHCP) In Depth. DHCP Changes in Windows Server. Installing DHCP and Creating New Scopes. Creating DHCP Redundancy. Advanced DHCP Concepts. Optimizing DHCP through Proper Maintenance. Securing a DHCP Implementation. Continuing Usage of Windows Internet Naming Service (WINS). Installing and Configuring WINS. WINS Planning, Migrating, and Maintenance. Global Catalog Domain Controllers (GC/DCs) Placement. The Need to Strategically Place GCs and DCs. Summary.

IV. MIGRATION AND INTEGRATION SOLUTIONS.

14. Migrating from Windows NT 4.0

Migrating to a Scalable Windows 2003 Server Environment. Fallback Plans and Failover Procedures. Tips to Minimize Network Downtime. Planning and Implementing Name Resolution When Migrating. Planning and Upgrading File Systems and Disk Partitions. Avoiding Failures and Disruptions During Server Upgrades. Keeping Windows Servers Current with Windows Updates. Finalizing Server Upgrades with Windows Update. Supporting Windows Clients During Coexistence. Implementing and Securing Password Migrations. Addressing Permissions Issues When Migrating Desktops. Best Practices for Maintaining and Managing Coexistence. Common Mistakes When Decommissioning Domains and Servers. Summary.
15. Migrating from Windows.

Preparing the Migration. Windows Server 2003 Applications Compatibility. Using the Application Compatibility Tool Kit. Upgrading and Installing Windows Server. Migrating Network Services. Migrating Active Directory Objects. FailOver Best Practices. Supporting Clients with Windows Server. Decommissioning Windows. Raising Windows 2003 Functional Levels. Summary.
16. Integration with Unix/LDAP-Based Systems.

Designing and Planning Platform Integration. Creating an Integrated Infrastructure. Integrating Directories Across Environments. Using Password Synchronization. Centralizing the Management of Cross-Platform Resources. Accessing Unix from a Windows Perspective. Accessing Windows from a Unix Perspective. Migrating Resources from One Platform to the Other. Summary.
17. Integrating Windows 2003 with Novell Networks.

Leveraging Services for NetWare. Creative Ways of Bridging the Gap Between Novell and Windows. Installing the Microsoft Services for NetWare Tool. Creating a Single Sign-on Environment. Synchronizing eDirectory/NDS with Active Directory. Replacing NetWare Servers with Windows Servers. Summary.

V. REMOTE AND MOBILE USER SOLUTIONS.

18. VPN and Dial-up Solutions.

Choosing the Right VPN Solution. Best Practices for Securing L2TP. Best Practices for Securing PPTP. Taking Advantage of Internet Authentication Service. Using VPN for Wireless. Deploying VPN and Dial-up Services. Using Site-to-Site VPNs. Using Load Balancing to Add Scalability and Resiliency. Summary.
19. Web Access to Windows Server 2003 Resources.

Best Practices for Publishing Web Shares to the Internet. Securing Access to Resources with SSL. Enabling SSL on a Web Server Directory. Enabling and Securing Internet Printing. Best Practices for Securing FTP Services. Accessing Resources with Terminal Services and Remote Desktops. Monitoring IIS Access Through Auditing and Logging. Using Windows Tools and Scripts to Manage IIS. Summary.
20. Leveraging Terminal Services.

Advantages of Using Terminal Services. Keeping Users Connected with Session Directory. Adding Redundancy to Session Directory. Optimizing Terminal Service Performance. Managing Terminal Service Users with Group Policy. Keeping Terminal Service Secure. Leveraging Local Resources. Summary.

VI. BUSINESS CONTINUITY SOLUTIONS.

21. Proactive Monitoring and Alerting.

Leveraging Windows Management Instrumentation. Leveraging Scripts for Improved System Management. Deciding What to Monitor. Determining What to Monitor and Alert Upon. Responding to Problems Automatically. Using Microsoft Operations Manager for Advanced Automation. Summary.
22. Creating a Fault-Tolerant Environment.

Optimizing Disk Management for Fault Tolerance. Maximizing Redundancy and Flexibility with Distributed File System. Simplifying Fault Tolerance with Volume Shadow Copy. Optimizing Disk Utilization with Remote Storage. Optimizing Clusters to Simplify Administrative Overhead. Leveraging Network Load Balancing for Improved Availability. Realizing Rapid Recovery Using Automated System Recovery (ASR). Summary.

VII. PERFORMANCE OPTIMIZATION SOLUTIONS.

23. Tuning and Optimization Techniques.

Understanding Capacity Analysis. Best Practices for Establishing Policy and Metric Baselines. Leveraging Capacity-Analysis Tools. Identifying and Analyzing Core Analysis and Monitoring Elements. Optimizing Performance by Server Roles. Summary.
24. Scaling Up and Scaling Out Strategies.

Size Does Matter. Building Bigger Servers. Building Server Farms. Avoiding the Pitfalls. Making It Perform. Scaling the Active Directory. Scaling for the File System. Scaling for RAS. Scaling Web Services. Scaling for Terminal Services. Summary.
25. Utilizing Storage Area Networks.

Defining the Technologies. When Is the Right Time to Implement NAS and SAN Devices? Designing the Right Data Storage Structure. Adding in Fault Tolerance for External Storage Systems. Combining Hardware Fault Tolerance with Windows Server 2003 Technologies. Best Practices for SAN and NAS. Recovering from a System Failure. Leveraging NAS and SAN Solutions for Server Consolidation. Summary.

VIII. BUSINESS PRODUCTIVITY SOLUTIONS.

26. User File Management and Information Look-up.

Enabling Collaboration with Windows SharePoint Services. Expanding on the File and Data Management Capabilities of Windows. Simplifying File Sharing with Office. Improving Data Lookup with Indexing. Taking Advantage of Revision Control Management. Implementing Information, Communication, and Collaboration Security. Summary.

The goal of a DoS attack is to disrupt some legitimate activity, such as browsing Web pages, listening to an online radio station, transferring money from your bank account, or even docking ships communicating with a naval port. This denial-of-service effect is achieved by sending messages to the target that interfere with its operation, and make it hang, crash, reboot, or do useless work.

One way to interfere with a legitimate operation is to exploit a vulnerability present on the target machine or inside the target application. The attacker sends a few messages crafted in a specific manner that take advantage of the given vulnerability. Another way is to send a vast number of messages that consume some key resource at the target such as bandwidth, CPU time, memory, etc. The target application, machine, or network spends all of its critical resources on handling the attack traffic and cannot attend to its legitimate clients.

Of course, to generate such a vast number of messages the attacker must control a very powerful machine--with a sufficiently fast processor and a lot of available network bandwidth. For the attack to be successful, it has to overload the target's resources. This means that an attacker's machine must be able to generate more traffic than a target, or its network infrastructure, can handle.

Now let us assume that an attacker would like to launch a DoS attack on example.com by bombarding it with numerous messages. Also assuming that example.com has abundant resources, it is then difficult for the attacker to generate a sufficient number of messages from a single machine to overload those resources. However, suppose he gains control over 100,000 machines and engages them in generating messages to example.com simultaneously. Each of the attacking machines now may be only moderately provisioned (e.g., have a slow processor and be on a modem link) but together they form a formidable attack network and, with proper use, will be able to overload a well-provisioned victim. This is a distributed denial-of-service--DDoS.
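A back-of-the-envelope calculation shows why the aggregate is formidable even when each machine is weak. The figures below assume modem-class uplinks of roughly 56 kbps per bot, purely for illustration:

```python
# Hypothetical aggregate: 100,000 bots, each on a dial-up-class link
bots = 100_000
per_bot_kbps = 56                       # roughly a modem uplink
aggregate_kbps = bots * per_bot_kbps
aggregate_gbps = aggregate_kbps / 1_000_000
print(aggregate_gbps)  # 5.6 -- Gbps of combined traffic toward the victim
```

Even at this conservative per-bot rate, the combined flood far exceeds what most single servers or access links could absorb.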

Both DoS and DDoS are a huge threat to the operation of Internet sites, but the DDoS problem is more complex and harder to solve. First, it uses a very large number of machines. This yields a powerful weapon. Any target, regardless of how well provisioned it is, can be taken offline. Gathering and engaging a large army of machines has become trivially simple, because many automated tools for DDoS can be found on hacker Web pages and in chat rooms. Such tools do not require sophistication to be used and can inflict very effective damage. A large number of machines gives another advantage to an attacker. Even if the target were able to identify attacking machines (and there are effective ways of hiding this information), what action can be taken against a network of 100,000 hosts? The second characteristic of some DDoS attacks that increases their complexity is the use of seemingly legitimate traffic. Resources are consumed by a large number of legitimate-looking messages; when comparing the attack message with a legitimate one, there are frequently no telltale features to distinguish them. Since the attack misuses a legitimate activity, it is extremely hard to respond to the attack without also disturbing this legitimate activity.

Take a tangible example from the real world. (While not a perfect analogy to Internet DDoS, it does share some important characteristics that might help you understand why DDoS attacks are hard to handle.) Imagine that you are an important politician and that a group of people that oppose your views recruit all their friends and relatives around the world to send you hate letters. Soon you will be getting so many letters each day that your mailbox will overflow and some letters will be dropped in the street and blown away. If your supporters send you donations through the mail, their letters will either be lost or stuffed in the mailbox among the copious hate mail. To find these donations, you will have to open and sort all the mail received, wasting lots of time. If the mail you receive daily is greater than what you can process during one day, some letters will be lost or ignored. Presumably, hate letters are much more numerous than those carrying donations, so unless you can quickly and surely tell which envelopes contain donations and which contain hate mail, you stand a good chance of losing most of the donations. Your opponents have just performed a real-world distributed denial of service attack on you, depriving you of support that may be crucial to your campaign.

What could you do to defend yourself? Well, you could buy a bigger mailbox, but your opponents can simply increase the number of letters they send, or recruit more helpers. You must still identify the donations in the even larger pool of letters. You could hire more people to go through letters--a costly solution since you have to pay them from diminishing donations. If your opponents can recruit more helpers for free, they can make your processing costs as high as they like. You could also try to make the job of processing mail easier by asking your supporters to use specially colored envelopes. Your processing staff can then simply discard all envelopes that are not of the specified color, without opening them. Of course, as soon as your opponents learn of this tactic they will purchase the same colored envelopes and you are back where you started. You could try to contact post offices around the country asking them to keep an eye on people sending loads of letters to you. This will only work if your opponents are not widely spread and must therefore send many letters each day from the same post office. Further, it depends on cooperation that post offices may be unwilling or unable to provide. Their job is delivering letters, not monitoring or filtering out letters people do not want to get. If many of those sending hate mail (and some sending donations) are in different countries, your chances of getting post office cooperation are even smaller. You could also try to use the postmark on the letters to track where they were sent from, then pay special attention to post offices that your supporters use or to post offices that handle suspiciously large amounts of your mail. This means that you will have to keep a list of all postmarks you have seen and classify each letter according to its postmark, to look for anomalous amounts of mail carrying a certain postmark. If your opponents are numerous and well spread all over the world this tactic will fail. 
Further, postmarks are fairly nonspecific locators, so you are likely to lose some donations while discarding the hate letters coming to you from a specific postmark.
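The "watch the postmarks" tactic has a direct Internet analogue: counting messages per source address and flagging anomalously heavy senders. The sketch below is a toy illustration of that idea, with made-up addresses from documentation IP ranges; as the analogy warns, it fails when attackers are numerous, widely spread, or spoofing their sources:

```python
from collections import Counter

def flag_heavy_sources(sources, threshold):
    """Return the sources ("postmarks") that sent suspiciously many messages."""
    counts = Counter(sources)
    return {src for src, n in counts.items() if n >= threshold}

# Hypothetical log of source addresses observed during a flood
log = ["203.0.113.5"] * 50 + ["198.51.100.7"] * 2 + ["192.0.2.9"]
print(flag_heavy_sources(log, threshold=10))  # {'203.0.113.5'}
```

Just as discarding all mail from one postmark loses some donations, filtering a flagged address drops any legitimate clients behind it, which is why per-source counting is at best one signal among several.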

As stated before, the analogy is not perfect, but there are important similarities. In particular, solutions similar to those above, as well as numerous other approaches specific to the Internet world, have been proposed to deal with DDoS. Like the solutions listed above that try to solve the postal problem, the Internet DDoS solutions often have limitations or do not work well in the real world. This book will survey those approaches, presenting their good and bad sides, and provide pointers for further reference. It will also talk about ways to secure and strengthen your network so it cannot be easily taken offline, steps to take once you are under attack (or an unwitting source of the attack), and what law enforcement can do to help you with a DDoS problem.
