Nytro Posted January 16, 2013

Ethics In Security Research

Description: Recently, several research papers in the area of computer security were published that may or may not be considered unethical. Looking at these borderline cases is relevant because today's research papers will influence how young researchers conduct their own research. In our talk we address various cases and papers and highlight emerging issues for ethics committees, institutional review boards (IRBs) and senior researchers who evaluate research proposals and must finally decide where they see a line that should not be crossed.

For researchers in computer security, the recent success of papers such as [KKL+09] is an incentive to follow a line of research where ethical questions become an issue. While some actions might not be illegal, they may still seem unethical. In the talk we will discuss such borderline cases and provide possible guidelines for evaluating them.

Key phrases that will be addressed in the discussion: (1) Do not harm users actively, (2) Watching bad things happening, (3) Control groups, (4) Undercover work. In the following, we introduce some lines of thought that should be discussed throughout the talk.

A first and seemingly straightforward principle is that researchers should not actively harm others. Deploying malware, or writing and releasing new viruses, is obviously a bad idea. Is it, however, acceptable to modify existing malware? Following the arguments of [KKL+08], one would not create more harm if, for instance, one instrumented a virus so that it sends back statistical data about its host. Such a modification could be made by the ISP or by the network administrators of a university network. If the modification makes the virus less likely to be detected by anti-virus software, however, the case changes: it then becomes analogous to distributing a new virus. A few quick lab experiments have shown that malware that is detected by virus scanners is very often no longer picked up after it has been modified.
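The talk does not detail these experiments, but the observation is consistent with how exact-match signatures behave. As a minimal, benign sketch (our own illustration in Python, not code from the talk, using a placeholder byte string instead of any real sample; hash blocklists are only one of several signature techniques scanners use):

import hashlib

# Benign stand-in for a scanned file; the actual bytes are irrelevant here.
original = b"example-sample-bytes-0123456789"

# Flip a single bit in the first byte to simulate a trivial modification.
modified = bytearray(original)
modified[0] ^= 0x01

print("original:", hashlib.sha256(original).hexdigest())
print("modified:", hashlib.sha256(bytes(modified)).hexdigest())

# The two digests share no structure, so an exact hash-based signature
# for the original file no longer matches the modified one.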
Stealing a user's computing and networking resources may harm her; however, if some other malware already steals those resources, one could argue that the damage is smaller because the researcher's software does "less bad things". This is essentially what the authors of [KKL+08] argue. So when taking over a botnet, generating additional traffic would not be permissible, whereas changing existing traffic would be. The real-world analogue: you see someone breaking into a house, you scare the person away, and then you go in and only look around, for instance to understand how the burglar selected the target and what he was planning to steal; this is "less bad" than the theft the burglar was probably planning to commit.

There is a line of research in which researchers only passively observe malware and phishing without modifying any content or receivers. When thinking about the research ethics of "watching what happens", the Tuskegee Study of Syphilis [W1] comes to mind. Patients were not informed about available treatments, no precautions were taken to prevent patients from infecting others, and they were actively given false information regarding treatment. Today it is obvious that the study was unethical. As done in [BSBK09], the best way is to ask people for their consent prior to the experiment.

In other studies, involving, for instance, botnets, this procedure may be impossible, as a host computer can only be contacted after sending modified messages. In a botnet study such as [SGCC+09], it seems both feasible and responsible to inform a user that her computer is part of a botnet. However obvious this may seem, there might be multiple users on an infected machine, and informing an arbitrary user could cause additional harm. For instance, the infection of an office computer may have been caused by someone deactivating the anti-virus software, surfing to Web pages not related to work, etc. Thus informing one person could cause another person to lose his job. While this is not as extreme as the "Craigslist experiment" [W2], similar impacts are conceivable.

Disclaimer: We are an infosec video aggregator and this video is linked from an external website. The original author may be different from the user re-posting/linking it here. Please do not assume the authors to be the same without verifying.

Original Source: Ethics In Security Research