Chapter 2
Hacking Ethically and Legally

Unfortunately, the term hacker has negative connotations for many people, who automatically assume that hacking is an illegal activity. Like any other job title, however, be it doctor, lawyer, or teacher, the title hacker is neutral; we can have inept doctors, dishonest lawyers, and poor teachers, yet we tend to assume that these roles are inherently “good”.

The following definition from Wikipedia outlines the term “hacker” as it has come to be understood in technical communities:


A computer hacker is any skilled computer expert who uses their technical knowledge to overcome a problem. While hacker can refer to any skilled computer programmer, the term has become associated in popular culture with a security hacker, someone who, with their technical knowledge, uses bugs or exploits to break into computer systems.

—Wikipedia, November 2018


Using bugs and exploits to break into computer systems is something you'll be doing a lot of in this book; breaking into computer systems is legal provided you have written permission to do so from the owner of the system. Using your skills and knowledge to gain unauthorized access—that is, access where you do not have permission—is most likely illegal where you live. Breaking the law is something that every ethical hacker and penetration tester needs to avoid. The goal of this chapter is to give you some guidelines for avoiding this predicament, as well as a basic understanding of the legal, ethical, and moral obligations expected of you.

Laws That Affect Your Work


The law is complicated, and it varies (sometimes significantly) from country to country. We cannot provide a complete, one-size-fits-all solution; rather than try, we will outline some basic pointers. As we tell students at the beginning of each Hands-on Hacking training course at Hacker House (hacker.house), we are not a team of lawyers, but we do use lawyers when necessary. If you need legal advice, consult a suitably qualified professional. Before undertaking any work, you should become familiar with the laws where you live. If you are living and working in the United States, for example, you should be aware of several acts and laws, including the following:

  • Computer Fraud and Abuse Act 1986
  • Digital Millennium Copyright Act 1998
  • Electronic Communications Privacy Act 1986
  • Trade secrets law
  • Contract law

Each country has its own set of laws, many of which are similar to one another. In the United Kingdom, for example, the relevant acts include the following:

  • Computer Misuse Act 1990
  • Human Rights Act 1998
  • Data Protection Act 1998
  • Wireless Telegraphy Act 2006
  • Police & Justice Act 2006
  • Serious Crime Act 2015
  • Data Protection Act 2018


Criminal Hacking


The penalties for illegal hacking attacks are often severe, so make sure you're aware of what is and isn't legal before undertaking any work.

As an example of one such severe hacking penalty, take the case of Albert Gonzalez, who on March 25, 2010, was sentenced to 20 years in federal prison in the United States. Gonzalez stole a large amount of credit card information (some 170 million numbers) from various sources. One of his earliest known “hacks” was his unauthorized access to NASA at the age of 14.

Consider the case of Lauri Love, a British hacker whose extradition was sought by the United States. He faced a possible 99 years in prison for his alleged role in a protest by Anonymous (an international hacktivist collective) against the treatment of Aaron Swartz, an American entrepreneur and activist who hanged himself not long after being prosecuted for multiple violations of the Computer Fraud and Abuse Act in the United States. There are countless examples of similarly lengthy prison sentences being handed out, or sought, especially in the United States.

Hacking Neighborly


Generally speaking, testing your own desktop or laptop computers is lawful. This is not the case for equipment belonging to a third party, such as a smart meter or set-top box, even if it resides in your home. If you're testing computer systems at your place of work, or a neighbor's computer system, then you must obtain written permission from the system owner before starting any hacking activity. Asking a colleague at work whom you believe to be responsible for a particular system may not be enough, especially if it turns out they are not responsible. Without proper, written permission, you're almost guaranteed to be in violation of some law.

You should also consider the implications of running tools while connected to your Internet service provider (ISP). Do they allow such activity as part of their user agreement?

Legally Gray


Scanning Internet-connected equipment with a tool like Nmap (a network probing tool that we'll be demonstrating throughout this book) is not in itself illegal in most places, but it is frowned upon by many system owners. You can scan the Internet for common vulnerabilities (and there are services such as Shodan at www.shodan.io that do this for you), but if you start scanning from your own machine, you may receive complaints. This is especially likely if you start scanning the U.S. Department of Defense, for instance. You may receive emails indicating that your behavior is not welcome, or a follow-up from your ISP alerting you that such activity is not permitted.

Caution should be exercised when scanning systems without permission. Imagine if, by scanning a system, you inadvertently caused a problem such as a denial-of-service condition (preventing legitimate users from accessing the service). Whether or not the damage was intended may not be relevant in the eyes of the law and could land you in trouble. You also have to consider intent: what reason do you have for scanning government computer systems? A minimal port-probing sketch against a host that explicitly permits scanning follows.
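If you want to experiment with probing before you have an engagement that permits it, do so only against a host whose owner explicitly allows scanning; the Nmap project runs scanme.nmap.org for exactly this purpose. The following is a minimal sketch, not a substitute for Nmap, showing a simple TCP connect check against a few common ports using only Python's standard library. The port list is purely illustrative.

import socket

TARGET = "scanme.nmap.org"   # the Nmap project explicitly permits scanning this host
PORTS = [22, 80, 443]        # a few common service ports, purely illustrative

for port in PORTS:
    # A simple TCP connect check: if the three-way handshake completes,
    # something is listening on that port.
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.settimeout(3)
    try:
        s.connect((TARGET, port))
        print(f"{TARGET}:{port} appears open")
    except OSError:
        # Refused, timed out, or otherwise unreachable.
        print(f"{TARGET}:{port} appears closed or filtered")
    finally:
        s.close()

A full TCP connect like this is noisier than the half-open (SYN) scans Nmap can perform, which is one reason dedicated tools are preferred in practice.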

Using default passwords or accessing services without permission, even if they are unprotected, is another gray area. There is an argument for accessing systems that have no real security features: if a resource can be accessed by anyone, is it not therefore a publicly available resource, with authorization implied? An example of this is a website containing documents where a URL parameter can be altered to view different documents. For example, you might change govsite.gov/?docid=1 to govsite.gov/?docid=500 in your web browser's address bar. The website might show you a new document when you make this change, but do you really have the authority to view it? Such websites may contain sensitive information that was not intended to be made public but that was left exposed, perhaps by an inexperienced employee who was simply unaware of the problem. It is advised that you steer clear of such situations, and of those where default passwords allow access to resources.

In 2005, a security consultant named Daniel Cuthbert was convicted under the United Kingdom's Computer Misuse Act for altering the URL of a donations page set up for victims of a tsunami. He did not have permission to do this, making his actions illegal. Cuthbert was fined by the court and dismissed by his employer, despite wide criticism of the conviction by IT security professionals.

Penetration Testing Methodologies


When you engage with a client as a penetration tester or hacker-for-hire, you should work to an established methodology. Many open standards, guidelines, and frameworks have emerged over the years, including the following:

  • Information Systems Security Assessment Framework
  • Penetration Testing Execution Standard (PTES)
  • Penetration Testing Guidance (part of the Payment Card Industry Data Security Standard)
  • Open Source Security Testing Methodology Manual (OSSTMM)
  • The Open Web Application Security Project (OWASP) Testing Framework
  • MITRE Adversarial Tactics, Techniques, and Common Knowledge (ATT&CK)

Methodologies help you move through a number of tasks in a systematic manner, ensuring that nothing is missed. They may also help you comply with legislation and industry best practices. Hacker House recommends checking out the Penetration Testing Execution Standard, which can be found at www.pentest-standard.org/index.php/Main_Page. The PTES covers the entire engagement, from how to engage with clients in the first place through to issuing a final report. It provides overall guidance on how to conduct a penetration test, and it includes details on how to execute a number of specific tasks.

The Open Source Security Testing Methodology Manual is also full of useful information, and it can be obtained from www.isecom.org. Version 3 of this manual is a little dated, as it refers to technologies like private branch exchange (PBX), voice mailboxes, fax, and Integrated Services Digital Network (ISDN). Nonetheless, it's useful if you come across one of these legacy technologies for the first time.

This book borrows elements from various methodologies and incorporates extensive personal experience to bring you a guide to hacking and conducting penetration tests that can itself be thought of as a methodology of sorts. However, the book seeks to be more accessible and entertaining than the formal documents listed previously. We will be focusing on specific tools, technologies, and exploits, which generally isn't a feature of these methodologies. At some point, you may want to delve further into a particular area, such as web application hacking, in which case finding resources that specialize in that area is recommended. You may even end up writing your own methodology because nothing suitable exists for the particular area in which you are working! The testing techniques and strategies in this book often follow the same common steps outlined in such methodologies.

When approaching a system or technology for our hacking purposes, we abide by the following logical process steps:

  1. Reconnaissance
  2. Passive and active probes
  3. Enumeration
  4. Vulnerability analysis
  5. Exploitation
  6. Cleanup

Authorization


If you're undertaking a penetration test for a client, it is imperative that you have written permission to carry out the activities required to complete the test. During testing, you may be able to gain access to an area containing sensitive data, such as personally identifiable information (PII). Your client needs to understand and authorize this. Even if you have agreed on the systems to be tested and have authority to conduct certain activities on them, finding a vulnerability and using it to gain access to a system that the client has not agreed to would mean you're breaking the law.

Even though you're working for a client who is paying you for a service, you need to protect yourself from any potential legal repercussions. It is also beneficial to set out everything clearly, in unambiguous terms, for the client's benefit. This is achieved with an authorization for testing contract (usually a form) that both the tester and the client agree upon and sign. It should clearly state that the client will not seek to prosecute you under the Computer Misuse Act (and/or any other relevant laws). The form will reference the scope that has been agreed on with your client. The scope lists all the systems that are to be tested, usually as a list of Internet Protocol (IP) addresses; sometimes domain names will also be given. Any areas that are off-limits should also be outlined in this document, as in the simple sketch that follows.
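How you record the agreed scope is up to you and your client, but it helps to keep it in a form you can check targets against automatically before pointing any tool at them. The following is a minimal sketch using only Python's standard library; the network ranges, excluded host, and targets shown are purely hypothetical (they use documentation address ranges).

import ipaddress

# Hypothetical scope agreed on with the client (illustrative values only).
IN_SCOPE_NETWORKS = [ipaddress.ip_network("203.0.113.0/24")]
OUT_OF_SCOPE_HOSTS = {ipaddress.ip_address("203.0.113.10")}  # e.g., a fragile production host

def ip_in_scope(address: str) -> bool:
    """Return True only if the address is inside an agreed range and not excluded."""
    ip = ipaddress.ip_address(address)
    if ip in OUT_OF_SCOPE_HOSTS:
        return False
    return any(ip in network for network in IN_SCOPE_NETWORKS)

# Check every target before testing it.
for target in ("203.0.113.25", "203.0.113.10", "198.51.100.7"):
    print(target, "in scope" if ip_in_scope(target) else "OUT OF SCOPE - do not test")

A check like this is no substitute for reading the signed scope document, but it is a cheap safeguard against typing the wrong address into a scanner.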

Even with a disclaimer in place, it is best to consult with your client before running any exploits that might cause harm. Ideally, you will be testing a development or staging environment, but even so, transparency is key. If you find a vulnerability that, when exploited, could take entire systems offline, check with your client first to ensure that it is appropriate for you to test it. When conducting a dangerous activity, such as exploiting a remote vulnerability that could disrupt a system, it's important to let the system owner know. Clear communication and transparency are key to avoiding misunderstandings and complications with your clients.

Always remember that you're a guest in the client's computer environment, and it's in your interest to be invited back in the future!

Responsible Disclosure


Responsible disclosure is the practice of first informing, and then working with, product vendors to resolve a vulnerability. It is a process designed to protect consumers and users of the software or product, with public disclosure of the vulnerability reserved as an eventual (or last-resort) step.

Consider this situation: You're working on a penetration test for a client, and you find a new way to access sensitive information that should not be possible. This bug, flaw, or vulnerability doesn't just affect your client but any user of that particular piece of software. During your testing, you work out how to exploit the weakness and conduct some research, ultimately determining that the vulnerability is undocumented and that there is no public information on exploiting it. Congratulations, you've found a zero-day vulnerability: a flaw that has not yet been patched by the vendor and may not be widely known beyond its discoverer, and one that puts regular users at risk until the vendor fixes it. What do you do next?

First, as you are working for a client, they should be informed of the problem immediately. Measures should always be taken to secure communications when discussing sensitive matters; use email encrypted with OpenPGP, an open standard derived from PGP (Pretty Good Privacy) and considered appropriate security practice (see www.openpgp.org). You and the client may agree that they will contact the software vendor with details of the bug so that (ideally) work can start on a patch as soon as possible. If your client does not want to disclose the vulnerability, as a security professional you should advise them of the ramifications for other companies using the software. Ultimately, however, your responsibility is to report the vulnerability to your client, and if they do not want the information disclosed to a third party (such as the vendor), you should respect their wishes.
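As a concrete illustration, here is a minimal sketch of encrypting a finding before emailing it. It assumes the third-party python-gnupg package (installed with pip install python-gnupg) and a working GnuPG installation; the key file name and recipient address are hypothetical.

import gnupg

gpg = gnupg.GPG()  # uses the default ~/.gnupg keyring

# Import the client's public key (hypothetical file name).
with open("client_public_key.asc") as key_file:
    gpg.import_keys(key_file.read())

report = "Summary of the vulnerability, affected hosts, and reproduction steps."

# Encrypt so that only the holder of the matching private key can read it.
# always_trust=True skips the keyring trust check for this illustration only.
encrypted = gpg.encrypt(report, recipients=["security@client.example"],
                        always_trust=True)

if encrypted.ok:
    print(str(encrypted))  # ASCII-armored ciphertext, safe to paste into an email
else:
    print("Encryption failed:", encrypted.status)

Whether you use a library like this or the gpg command-line tool directly matters less than the habit: sensitive findings should never travel in plaintext.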

You may have found the flaw while hacking around with one of your own computer systems, in which case you should contact the vendor directly. De facto industry guidelines suggest giving the vendor a 90-day window to prepare a patch; you're effectively giving the vendor 90 days to implement some kind of fix for the flaw you found. After this time, if no patch has been prepared, and depending on the nature of the flaw, it is often considered responsible to alert everyday users of the affected software to the problem. Up until that point, nobody should know about the bug other than you (and your client, if you're working with one) and the vendor.

Software vendors should not be “held to ransom” or forced into a difficult situation under this responsible disclosure practice; however, as history has shown, some vendors will do nothing until the problem is made public. Some vendors may not want to work with you on fixing the problem or may refuse to acknowledge the problem exists at all. These vendors will almost certainly be aggrieved when you go public, but as long as you're following industry best practice, you don't really have anything to worry about. By doing this, you're forcing the vendor to acknowledge the problem where otherwise they were content to ignore it. Ultimately, you're helping to protect consumers, as the vendor will now be forced to make a fix or risk losing customers. As hackers, we have a moral obligation to inform the public and affected parties in such situations. However, you should always consider the risk of going public versus not doing so.

Once the vendor has developed a patch (or fix) for the product, a further 30 days should be allowed before disclosure in order to allow affected customers to obtain and apply the patch. There are no laws that state this is the case, but it has generally become the accepted de facto vulnerability disclosure timeline to which most hackers adhere.

When disclosing weaknesses to vendors, you may find that some are hostile while others are open and engaging. It has become common practice for vendors to reward public-spirited disclosures by adding your name to their hall of fame (for example, see www.mozilla.org/en-US/security/bug-bounty/hall-of-fame), perhaps even sending a token gesture or monetary reward for your efforts. However, such rewards should not be expected; they are the exception rather than the norm.

Bug Bounty Programs


One approach that some companies and organizations take to improving their information security is to open their applications or products to testing by the public. These arrangements are known as bug bounty programs. Two well-known bug bounty platforms are www.bugcrowd.com and www.hackerone.com.

In a sanctioned bug bounty program, anyone is allowed to carry out certain activities against designated systems and services; the idea is that when a vulnerability or bug is found, it is reported to the company. In return, the finder of the bug is given a monetary reward—a bounty, so to speak—and the company is able to patch the hole in its defenses, rather than having it exploited maliciously.

This is great for hackers and anyone new to the industry, as it gives you genuine commercial experience: you are effectively carrying out a penetration test and reporting any issues you find in a way that allows the company to re-create the bug and ultimately fix or patch it.

Many hackers undertake bug hunting not only for the bounties but for the fun and thrill of the hunt! It is also a fantastic way to build up experience, and it is a good talking point in any security job interview. Just make sure you're operating under a legitimate program and staying within the scope of the project at all times.

Legal Advice and Support


Cybercrime lawyers are expensive, really expensive, although of course this depends on the type of advice required and the particular country or legal system. You'll no doubt want to avoid engaging a lawyer as much as possible for this reason, but if you do find yourself in trouble, you'll need to make sure that you get someone who is well versed in cyber law. There are often miscarriages of justice in this space because nonspecialist lawyers lack the necessary understanding. The authors of this book are not lawyers, and we do not recommend following our advice in place of sound legal advice.

Fortunately, there are organizations that look out for the “little person.” One such organization is the Electronic Frontier Foundation (EFF), which has helped individuals and small companies defend themselves in cases against huge corporations. Take, for example, the case where 28 of the world's largest entertainment companies, led by MGM Studios, attempted to sue distributors of peer-to-peer file-sharing software, blaming them for piracy of copyrighted works. Another example is when the EFF held Sony BMG accountable for infecting customers' computers with a type of malware that could spy on the user's listening habits.

The EFF can be found at www.eff.org, and it may be able to offer legal support or refer you to one of its trusted attorneys. The organization also has a Coders' Rights project (www.eff.org/issues/coders) that addresses a number of common legal issues that reverse-engineers, hackers, and security researchers in the United States may face. The EFF is an American organization, but European Digital Rights (edri.org) is a similar association made up of member organizations across several European countries.

These organizations can't help you with day-to-day legal matters; contacting a trusted local professional is always best. Nevertheless, such organizations can most certainly recommend experts to contact. If you are unfortunate enough to be arrested or you experience legal complications because of your ethical hacking activities, seek professional legal advice immediately.

Hacker House Code of Conduct


When students attend our Hands-on Hacking training course, the first thing we ask is that they agree to and sign our Hacker House Code of Conduct. While we cannot force this upon you as a reader of this book, we do hope that you take what you learn from it and apply it legally, morally, and ethically.

Throughout this book, we will be probing for weaknesses and vulnerabilities and exploiting them—just as you would in the real world. The approaches and techniques covered could be used to commit criminal acts—in the same way that a book on accounting could help a rogue accountant commit money laundering offenses or a book on medicine could be used by an unethical doctor to harm patients. It is our hope that you will use the tools and techniques in this book to contribute proactively to information security and to defend systems more effectively from those who would do them harm.

Summary


Here are the key points to remember as you set off on your hacking journey:

  • Always get written permission from the system owner before attempting any hacking.
  • Agree to a well-defined scope when working with a client. What is and isn't allowed? What systems will be tested?
  • Have your client sign an authorization for testing contract that refers to the scope and negates your liability if things go wrong.
  • Always remain within the project's scope.
  • Be open and transparent from the outset to avoid complications that may arise.
  • Seek professional legal help when required. This is highly recommended for creating your own authorization for testing contract.
  • Follow responsible disclosure best practices where relevant.

Have fun and enjoy hacking!
