Informed consent is a procedure designed to assure which of the following ethical principles?

The Belmont Report (Ethical Principles and Guidelines for the Protection of Human Subjects of Research, 1978) resulted in a dramatic change in human subjects-based research in response to several scandals. It seems that the notion of autonomy was suddenly injected into the conversation and became the dominant driver of ethical behavior in human subjects-based research. Indeed, the history of human subjects-based research clearly demonstrated that obligations to beneficence and nonmaleficence, as well as appeals to the virtue of scientists, were insufficient. Thus, the surest way to enforce justice was to enforce the principle of autonomy. The principle of autonomy was proceduralized in the formality of informed consent for research participation and rapidly found adoption in medical and clinical ethics, formally or informally.

The rise of obligations to autonomy would seem to demand a patient-centric approach to the ethics of everyday medicine. But one has to ask what was the status of patient-centric ethics prior to the 1978 Belmont Report? One might think that the altruism typically attributed to physicians and healthcare professionals through the ages bespeaks patient-centric ethics. But was it ever the case that the ethics of everyday medicine was patient-centric and is it now?

A historical analysis argues that until the late 1960s and early 1970s, at least in North America, the ethics of everyday medicine was largely physician-centric. The transition that occurred was not to a patient-centric ethic but rather to a business- and government-centric ethics. This is not to say that the consequences are necessarily unjust. However, this transition does require examination of how such business- and government-centric ethics comport with the assumption of a pluralistic modern liberal democracy that ethics (principles, theories, laws, and precedent) are derived from a consensus of those governed (for example, the citizen as a patient or potential patient) sufficient to be enforced universally. Recently, particularly with the emergence of evidence-based medicine, the ethics of everyday medicine has become scientist/academic-centric, enabled by business- and government-centric ethics.


URL: https://www.sciencedirect.com/science/article/pii/B9780128228296000291

Relevance of Ethics in Biotechnology

Padma Nambisan, in An Introduction to Ethical, Safety and Intellectual Property Rights Issues in Biotechnology, 2017

The Belmont Report prepared by the National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research is a statement of basic ethical principles and guidelines that provide an analytical framework to guide the resolution of ethical problems that arise from research with human subjects. The basic ethical principles delineated in the report include:

Respect for Persons:

It entails treating individuals as autonomous persons capable of choosing for themselves. In the case of persons with limited autonomy, additional protection, even to the extent of excluding them from activities that may harm them, should be advocated. The extent of protection depends on the nature of the potential risk of harm and the likelihood of benefit. The application of this principle involves an informed consent process during which subjects are provided all the information (in a comprehensible form) necessary to make a voluntary decision about whether to participate in a study.

Beneficence:

This requires an assessment of the potential risks (probable harm) against the anticipated benefits (promotion of health, well-being, or welfare). Investigators are required to devise mechanisms that maximize the benefits and reduce the risks that may be involved in the research. The public, too, needs to take cognizance of the risks and benefits that may result from novel medical, psychological, and social processes and procedures.

Justice:

This principle advocates fair treatment for all and a fair distribution of the risks and benefits of the research. It forbids the exploitation of vulnerable people (for instance, the economically disadvantaged or those with limited cognitive capacity) or of those who are easily manipulated as a result of their situation. It also requires that the researcher verify that the potential subject pool is appropriate for the research and that the recruitment of volunteers is fair and impartial.

Although never officially adopted by the US Congress or the Department of Health, Education, and Welfare (now the Department of Health and Human Services), the Belmont Report has served as an ethical framework for protecting human subjects, and its recommendations have been incorporated into other guidelines. It is an essential reference document for Institutional Review Boards (IRBs), which review research proposals involving human subjects conducted or supported by the Department of Health and Human Services (HHS) to ensure that they meet the ethical standards of the regulations.


URL: https://www.sciencedirect.com/science/article/pii/B9780128092316000053

Major Issues in Ethics of Aging Research

Michael D. Smith, in Handbook of Models for Human Aging, 2006

JUSTICE

The Belmont Report uses the term “justice” to refer to “fairness in distribution.” This is different from the word's common association with enforceable rights and penalties within a legal system but consistent with general usage in the field of bioethics. Justice in this context involves the ethical allocation of a fair share of risks or possible harms incurred in research and the allocation of benefits expected to result from the research. Research risks can range from minor inconvenience, to discomfort, to actual harm. For an individual or group to carry a large share of risks of research without getting a proportionate share of the benefits seems unfair, even if it is difficult to say exactly what constitutes a fair and equitable distribution. The early history of human subject research is replete with examples of disadvantaged or vulnerable populations (prisoners, disabled elderly, developmentally disabled persons, etc.) serving as research subjects and risking harm in the quest for knowledge expected to benefit some other population. Study design and subject selection distribute potential harm among subjects. The knowledge acquired in research and the research activity itself (particularly in therapeutic clinical research) can be a source of tremendous benefits; the nature of the study itself and its subject recruitment and selection determine who will get those benefits.


URL: https://www.sciencedirect.com/science/article/pii/B9780123693914500072

Mary Jane Kagarise, George F. Sheldon, in Surgical Research, 2001

2. Ethical Principle Two: Beneficence

This principle requires that the risks and anticipated benefits of the research be accurately identified, evaluated, and described. Furthermore, in clinical research, the risks and benefits of the research interventions must be evaluated separately from those of the therapeutic interventions. Though the risks posed by the performance of investigational interventions and procedures may be more intuitively notable, many of the risks of research reside in the risks inherent in the methodologies of gathering and analyzing data (35).

Risk is “the probability of harm or injury (physical, psychological, social, or economic) occurring as a result of participation in a research study.” Both the probability and magnitude of possible harm may vary from minimal to significant. Federal regulations define only minimal risk (36): “A risk is minimal where the probability and magnitude of harm or discomfort anticipated in the proposed research are not greater, in and of themselves, than those ordinarily encountered in daily life or during the performance of routine physical or psychological examinations or tests” [Federal Policy §__.102(1)] (37). There are strict limitations on research presenting more than minimal risk for research involving fetuses and pregnant women (45 CFR 46 Subpart B), research involving children (45 CFR 46 Subpart D), and research involving prisoners (45 CFR 46 Subpart C). The concepts of risk and benefit, then—having been classified as physical, psychological, social, and economic—incorporate all possible harms and advantages, not just the physical or psychological ones to an individual. For example, the societal benefits that might be gained from the research are to be considered.

The Belmont Report is concerned with the magnitudes and probabilities of possible risks and anticipated benefits in terms of defining their nature and scope, systematically assessing each one, assessing information on all aspects of the research, and systematically considering the alternatives. Five basic principles in making the risk–benefit analysis are cited (38):

1. Brutal or inhumane treatment of human subjects is never morally justified.

2. Risks should be minimized, including the avoidance of using human subjects if at all possible.

3. IRBs must be scrupulous in insisting upon sufficient justification for research involving “significant risk of serious impairment.”

4. The appropriateness of involving vulnerable populations must be demonstrated.

5. The proposed informed consent process must thoroughly and completely disclose relevant risks and benefits.

The IRB performs six fundamental steps in risk-benefit analysis (39):

1. Identification of the risks associated with the research, as distinguished from the risks of therapies the subjects would receive even if not participating in research.

2. Determination that the risks will be minimized to the extent possible.

3. Identification of the probable benefits to be derived from the research, both to subjects and to society.

4. Determination that the risks are reasonable in relation to the anticipated benefits to subjects and the importance of the knowledge to be gained.

5. Assurance that potential subjects will be provided with an accurate and fair description of the risks or discomforts and the anticipated benefits.

6. Determination of the intervals of periodic review and, where appropriate, determination that adequate provisions are in place for monitoring the data collected.

The process of distinguishing between the risks for potential human subjects associated with research and the risks associated with therapy requires that human subjects be defined and that research and practice be differentiated. The distinction between research and practice is often blurred in patient care situations as well as in some educational settings. Research and therapy may occur simultaneously, and experimental procedures do not necessarily constitute research (40).

Therapeutic practice consists of “interventions that are designed solely to enhance the well-being of an individual patient or client and that have a reasonable expectation of success. The purpose of medical or behavioral practice is to provide diagnosis, preventive treatment, or therapy to particular individuals” (41). Research is “an activity designed to test an hypothesis, permit conclusions to be drawn, and thereby to develop or contribute to generalizable knowledge (expressed, for example, in theories, principles, and statements of relationships). Research is usually described in a formal protocol that sets forth an objective and a set of procedures designed to reach that objective” (42). “Research itself is not therapeutic; for ill patients, research interventions may or may not be beneficial. Indeed, the purpose of evaluative research is to determine whether the test intervention is in fact therapeutic” (43). The federal regulations define research as “a systematic investigation, including research development, testing and evaluation, designed to develop or contribute to generalizable knowledge [Federal Policy §__.102(d)]” (44). Human subjects are “living individual(s) about whom an investigator (whether professional or student) conducting research obtains (1) data through intervention or interaction with the individual, or (2) identifiable private information [Federal Policy §__.102(f)]” (45).

Treatment of a single patient can constitute “research” if there is “a clear intent before treating the patient to use systematically collected data that would not ordinarily be collected in the course of clinical practice in reporting and publishing a case study. Treating with a research intent must be distinguished from the use of innovative treatment practices” (46).

Investigators are required to exercise due care to reduce and manage risks, including incorporating risk-reducing precautions, safeguards, and alternatives into the research protocol. The risk–benefit evaluation is the major ethical judgment required of the IRB (47). The IRB takes into consideration in its assessment the prevailing community standards, currently available information about the risks and benefits, the degree of confidence in this information, and whether the protocol involves the use of interventions that have the intent and reasonable probability of providing benefit for the individual patient or whether its procedures are performed only for research purposes (48).

Subjects always retain the right to withdraw from a research project, so continuing consent is important. Investigators must inform subjects of any important new information that might affect their willingness to continue participating (Federal Policy §__.116) (49). Thus it is necessary for the IRB to monitor whether the risk–benefit ratio has shifted, whether there are unanticipated findings involving risks to subjects, and whether any new information regarding the risks and benefits should be provided to subjects. IRBs are required [Federal Policy §__.108(e)] to reevaluate research projects on an annual basis plus at any additional intervals indicated by degree of risk (50).


URL: https://www.sciencedirect.com/science/article/pii/B9780126553307500083

Ethical Issues in Translational Research and Clinical Investigation

Greg Koski, in Clinical and Translational Science, 2009

Ethics and Translational Research

In its Belmont Report, the National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research (1979) reviewed and reaffirmed the ethical principles that should guide everyone engaged in research involving human subjects. These three principles, respect for persons, beneficence, and justice, are the subject of extensive writings regarding their origin, interpretation, and application, but none states them as succinctly or with greater wisdom and clarity than the original report.

Many ethicists say that the Belmont Report should be required reading for every scientist involved in human research. Simply put, this recommendation is true but inadequate. Everyone involved in research with human subjects must do more than just read the Belmont Report; they must have, at a minimum, an effective working knowledge of the principles identified and explained in the Belmont Report as a prerequisite for engaging in this endeavor. Even more importantly, these principles must be internalized. It is not sufficient to know them; one must live by them. They provide the normative basis for the responsible scientist engaged in human subjects research, and any scientist unwilling or unable to be guided by them should not be permitted by society or his peers to participate in human research.

As mentioned earlier, one might well add to these traditional principles that of caring. The ethics of care remind us that it is often necessary to subjugate one’s own interests to those of another for whose interests and well-being one bears responsibility (Noddings, 1984).

Responsibility for the well-being of another individual is assumed in many types of care-giving relationships, including parenting, fire-fighting, nursing, medicine and other professions. In these types of relationships, caring can be characterized as a social contract established by societal norms. Caring is a form of altruism, a personal character trait greatly prized when observed in others, but often difficult to achieve personally, particularly in situations where strong competing interests create ambivalence about the proper course of action.

Reconciling the entrepreneurial spirit so common in science today with a spirit of altruism is one of the great challenges facing scientists in both industry and academia, as evidenced by the vigorous discussions of conflicts of interest at every level of the scientific endeavor.

While the principles referenced above are certainly applicable to all clinical research, and while one might reasonably presume that they would also be appropriate for translational research, it is likely that they are necessary but insufficient. Translational research, those critical studies in which advances made in the laboratory are first brought to bear in experiments performed on human beings, requires even more zealous attention to ethics than most clinical research, primarily because of uncertainty.

The recent death of Sir Edmund Hillary reminds us that while climbing Mt Everest will always be a monumental accomplishment accompanied by great risk, he who did it first faced far greater risk because of the uncertainty about whether it could even be done. The translational scientist, whether exploring normal physiology, the pathophysiology of disease, its diagnosis, prevention or treatment, is akin to that first climber in some respects, but rarely is he the one actually subject to the associated risks; the risk is borne primarily by others: individuals, populations, or, in the extreme, all of humankind.

Nuclear physicists like Robert Oppenheimer and Hans Bethe, instrumental figures in the development of the first atomic bomb, acknowledged the vexing uncertainty that accompanied the first detonation of a nuclear device in the atmosphere, including the prospect of actually igniting the atmosphere, starting combustion of nitrogen with oxygen, with potentially devastating immediate consequences, not to mention the long-term consequences for humanity (Broad, 2005). While not biomedical in nature, this was certainly an example of translational research, some would say of the very worst kind, because it translated scientific knowledge of the atom to the power of destruction. Although Oppenheimer and Bethe admitted to ‘no regrets’ about having helped to achieve the technical success of creating the atomic bomb, they and some of their colleagues, as they watched the events of the Cold War unfold, expressed a sense of concern about the consequences of what they had done, collectively and individually, even if it was for what they believed at the time to be a good and necessary cause.

The translational biomedical scientist should heed and learn from this lesson. Fortunately, some have, as demonstrated by the Asilomar Conference on Recombinant DNA in 1975, during which leading geneticists and molecular biologists voluntarily developed and adopted recommendations to forego certain types of genetic manipulation research until the potential risks, biohazards and benefits were better understood (Berg et al., 1981). Today’s ongoing debate within the scientific community, and outright arguments among scientists, ethicists, religious leaders, governments and others about human cloning, illustrates the ongoing need for both dialogue and restraint.

The recent scandal in South Korea, in which a renowned cellular biologist seemed so anxious to claim priority for the first successful cloning of a human that he would actually fabricate data for publication, is probably the most egregious example of scientific misconduct, irresponsibility and unethical behavior ever observed in the history of science (Hwang et al., 2005). That any scientist could so willingly disregard the norms of scientific and ethical conduct is most disturbing and gives everyone in science good cause to reevaluate the cultural and environmental factors that would drive a scientist to such lengths, and permit him to succeed, even if that ‘success’ was fraudulent and fleeting.

The extraordinarily powerful tools of cell biology, genomics, bioinformatics, nanotechnology, cybernetics and functional brain imaging have opened some of the most important frontiers of biology to detailed inquiry and manipulation once believed to be the stuff of science fiction. Concurrently, society seems increasingly concerned that our readiness to deal with the consequences of exploration in these domains, be they environmental, social or moral in nature, has not kept pace with our ability to ask questions. Albert Einstein once said that ‘Science without ethics is lame, and ethics without science is blind’. To avoid being either blind or lame, science and ethics must walk hand-in-hand.

The rapidity of scientific and technological advancement since the Enlightenment has made it very difficult for ethics to keep pace, and the current public outcry to ban human cloning is just one modern-day example of the public anxiety and even fear that is bred of misunderstanding and uncertainty. The message here is that science must take care not to get too far out in front of public expectation and concern, even if that means slowing down in some areas of inquiry until a proper ethical framework and, where appropriate, guidelines, regulations, and oversight mechanisms are in place to ensure safety and accountability.

Carol Levine’s observation that our system for protection of human subjects of research was ‘born of abuse and reared in protectionism’ underscores the reactive nature of a concerned public and the likely actions of policy makers, a message that all translational scientists should listen to very carefully as the age of genomics and nanotechnology rolls on. One cannot doubt that failure of scientists to be sensitive to societal concerns about what they are doing will be met with not only resistance, but also with restrictions by law and regulation, neither of which is in the interests of either science or society.


URL: https://www.sciencedirect.com/science/article/pii/B9780123736390000285

Ethical Issues in Translational Research and Clinical Investigation

Greg Koski, in Clinical and Translational Science (Second Edition), 2017

Ethics and Translational Research

In its Belmont Report, the National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research (1979) reviewed and reaffirmed the ethical principles that should guide everyone engaged in research involving human subjects. These three principles, respect for persons, beneficence, and justice, are the subjects of extensive writings regarding their origin, interpretation, and application, but none states them as succinctly or with greater wisdom and clarity than the original report.

Many ethicists say that the Belmont Report should be required reading for every scientist involved in human research. Simply put, this recommendation is true but inadequate. Everyone involved in research with human subjects must do more than just read the Belmont Report; they must have, at a minimum, an effective working knowledge of the principles identified and explained in it as a prerequisite for engaging in this endeavor. Even more importantly, these principles must be internalized. It is not sufficient to know them; one must live by them. They provide the normative basis for the responsible scientist engaged in human subjects research, and any scientist unwilling or unable to be guided by them should not be permitted by society or his peers to participate in human research.

As mentioned earlier, one might well add to these traditional principles another principle, that of caring. The ethics of care remind us that it is often necessary to subjugate one's own interests to those of another for whose interests and well-being one bears responsibility (Noddings, 1984).

Responsibility for the well-being of another individual is assumed in many types of care-giving relationships, including parenting, firefighting, nursing, medicine, and other professions. In these types of relationships, caring can be characterized as a social contract established by societal norms. Caring is a form of altruism, a personal character trait greatly prized when observed in others, but often difficult to achieve personally, particularly in situations where strong competing interests create ambivalence about the proper course of action.

Reconciling the entrepreneurial spirit so common in science today with a spirit of altruism is one of the great challenges facing scientists in both industry and academia, as evidenced by the vigorous discussions of conflicts of interest at every level of the scientific endeavor.

While the principles referenced above are certainly applicable to all clinical research, and while one might reasonably presume that they would also be appropriate for translational research, it is likely that they too are necessary but insufficient. Translational research, those critical studies in which advances made in the laboratory are first brought to bear in experiments performed on human beings, requires even more zealous attention to ethics than most clinical research, primarily because of uncertainty.

The death of the renowned explorer Sir Edmund Hillary almost a decade ago reminds us that while climbing Mt Everest will always be a monumental accomplishment accompanied by great risk, he who did it first faced far greater risk because of the uncertainty about whether it could even be done. The translational scientist, whether exploring normal physiology, the pathophysiology of disease, its diagnosis, prevention, or treatment, is akin to that first climber in some respects, but rarely is he the one actually subject to the associated risks; the risk is borne primarily by others: individuals, populations, or, in the extreme, all of humankind.

Nuclear physicists Robert Oppenheimer and Hans Bethe, instrumental figures in development of the first atomic bomb, acknowledged the vexing uncertainty that accompanied the first detonation of a nuclear device in the atmosphere, including the prospect of actually igniting the atmosphere, starting combustion of nitrogen with oxygen, with potentially devastating immediate consequences, not to mention the long-term consequences for humanity (Broad, 2005). While not biomedical in nature, this was certainly an example of translational research, some would say of the very worst kind, because it translated scientific knowledge of the atom to the power of destruction. Although Oppenheimer and Bethe admitted to “no regrets” about having helped to achieve the technical success of creating the atomic bomb, they and some of their colleagues, as they watched the events of the Cold War unfold, expressed a sense of concern about the consequences of what they had done, collectively and individually, even if it was for what they believed at the time to be a good and necessary cause.

The translational biomedical scientist should heed and learn from this lesson. Fortunately, some have, as demonstrated by the Asilomar Conference on Recombinant DNA in 1975 (http://en.wikipedia.org/wiki/Asilomar_conference_on_recombinant_DNA), during which leading geneticists and molecular biologists voluntarily developed and adopted recommendations to forego certain types of genetic manipulation research until the potential risks, biohazards, and benefits were better understood (Berg et al., 1975). Today's ongoing debate within the scientific community and outright arguments among scientists, ethicists, religious leaders, governments, and others about human cloning illustrates the ongoing need for both dialog and restraint.

The recent scandal in South Korea, in which a renowned cellular biologist seemed so anxious to claim priority for the first successful cloning of a human that he would actually fabricate data for publication, is one of the more egregious examples of scientific misconduct, irresponsibility, and unethical behavior ever observed in the history of science (Hwang et al., 2005). That any scientist could so willingly disregard the norms of scientific and ethical conduct is most disturbing and gives everyone in science good cause to reevaluate the cultural and environmental factors that would drive a scientist to such lengths and permit him to succeed, even if that “success” was fraudulent and fleeting.

The extraordinarily powerful tools of cell biology, genomics, bioinformatics, nanotechnology, cybernetics, and functional brain imaging have opened some of the most important frontiers of biology to detailed inquiry and manipulation once believed to be the stuff of science fiction. Concurrently, society seems increasingly concerned that our readiness to deal with the consequences of exploration in these domains, be they environmental, social, or moral in nature, has not kept pace with our ability to ask questions. Albert Einstein once said that “Science without ethics is lame, and ethics without science is blind.” To avoid being either blind or lame, science and ethics must walk hand-in-hand.

The rapidity of scientific and technological advancement since the Enlightenment has made it very difficult for ethics to keep pace, and the current public outcry to ban human cloning is just one modern-day example of the public anxiety and even fear that is bred of misunderstanding and uncertainty. The message here is that science must take care not to get too far out in front of public expectation and concern, even if that means slowing down in some areas of inquiry until a proper ethical framework and, where appropriate, guidelines, regulations, and oversight mechanisms are in place to ensure safety and accountability.

Carol Levine's observation that our system for protection of human subjects of research was “born of abuse and reared in protectionism” underscores the reactive nature of a concerned public and the likely actions of policy-makers, a message that all translational scientists should listen to very carefully as the age of genomics and nanotechnology rolls on. One cannot doubt that failure of scientists to be sensitive to societal concerns about what they are doing will be met with not only resistance, but also with restrictions by law and regulation, neither of which is in the interests of either science or society.


URL: https://www.sciencedirect.com/science/article/pii/B9780128021019000247

Medical Ethics, History of

R. Baker, in Encyclopedia of Applied Ethics (Second Edition), 2012

An Institutional Framework for American Bioethics

In its 1978 Belmont Report, the Commission stipulated that in reviewing research proposals, IRBs should be guided by three “basic ethical principles”: respect for persons, beneficence, and justice. To interpret these ethical principles, IRBs would ultimately look to an unlikely amalgam of concerned healthcare professionals, scientists, theologians, and philosophers. Two institutes, or ‘think tanks,’ marshaled this amalgam into a new field: bioethics. Both institutes were founded by Roman Catholic intellectuals who had been alienated by the Vatican’s decision not to end the Church’s opposition to birth control. In 1969 Daniel Callahan, former executive editor of the Catholic journal Commonweal, cofounded the Hastings Center; in 1971, the Dutch Roman Catholic scientist André Hellegers founded the Kennedy Center for Bioethics at Georgetown University, the oldest Catholic university in the United States.


URL: https://www.sciencedirect.com/science/article/pii/B9780123739322001526

Clinical research in India

Umakanta Sahoo, in Clinical Research in Asia, 2012

3.8.3 Socio-economic-cultural factors

In its ‘Belmont Report’ of 1979, the US National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research highlighted the ethical principles and guidelines for the protection of human research subjects. In the context of this report, three basic principles (respect for persons, beneficence and justice) are among those generally accepted in the Indian cultural tradition and are particularly relevant to the ethics of research involving human subjects. Yet research involving human subjects remains somewhat contentious, and many fear that the impoverishment, illiteracy and social ills in Indian society may have an impact on the ethical conduct of a clinical trial.

It is a common belief that impoverished people with little education cannot decide of their own free will and may, through economic compulsion, end up participating in a clinical trial. Numerous observers argue against this belief, however, and point out the danger of such generalisations. Indeed, poverty and illiteracy should not be mistaken for a lack of common sense or intelligence, and potential subjects should be considered capable of making decisions on their own. Potential subjects may not comprehend the complicated statistical design of a clinical trial, but if the investigator engages them through proper coaching and guidance, they will be exposed to adequate information. In this manner they can weigh up the facts and decide whether or not to participate – if they are not convinced, they simply will not enrol in the trial. It must be remembered that the poor and uneducated are no less committed to their lives and their families than the rich and literate. Just because people are in the lower social strata, one cannot take their willingness to volunteer for granted; indeed, gaining their participation can be very hard.

Many consider impoverishment to be a compelling factor in potential subjects' decisions to become involved in clinical trials in India. But if one examines the existing healthcare system, the majority of the impoverished population already depend on free or subsidised treatment from government-run hospitals and dispensaries, as there is no universal healthcare in India. Participating in a clinical trial may thus be seen by subjects as a means to ease some of their additional economic burdens in terms of medication and treatment. In some studies, the trial design demands additional visits for tests and procedures. In such instances, the sponsor provides compensation towards conveyance, accommodation, and loss of wages for the subject and other earning members of the family, as well as incidental expenses. This acts as a motivation and improves subject compliance and retention. Similarly, because few patients from the lower economic strata have storage infrastructure, if the investigational product is very temperature-sensitive the sponsor may have to provide pooled refrigerators for patient use, either through a facility attached to the investigator’s site or through the patients’ local physicians. These represent genuine compensation and logistical support for trial subjects and cannot be construed as an inducement or compulsion to enrol. At first glance, some of these costs represent additional burdens for the sponsor if the trial is undertaken in India. However, they rarely represent a major expense and should be balanced against the rapid recruitment potential in India and the increased subject retention. Poor and illiterate subjects are generally more compliant, as they are very sincere and follow protocol tests and procedures as per the instruction and advice of the physician.

To eliminate any concern regarding potential exploitation in developing countries on account of illiteracy and poverty, it is in the interest of every pharmaceutical sponsor to maintain high ethical standards for clinical trials. Where sponsors act with fairness and respect in equal measure, there can be no grounds for accusations of exploitation.


URL: https://www.sciencedirect.com/science/article/pii/B9781907568008500036

Ethics and Experiments

Karen A. Hegtvedt, in Laboratory Experiments in the Social Sciences (Second Edition), 2014

II Defining ethics in research

Ethics broadly refers to the moral principles governing behavior—the rules that dictate what is right or wrong. Yet as the previously noted examples illustrate, what constitutes right or wrong is subjective, defined by groups with particular aims. Such aims underlie the fundamental conflict between (social) scientists’ pursuit of knowledge that they hope may benefit society and the rights of research participants (McBurney & White, 2012; Neuman, 2011). In the absence of moral absolutes, professional associations and others craft rules for what is proper and improper in scientific inquiry to ameliorate this conflict. The resulting ethics codes reflect philosophical ideas and attempt to bridge them to regulatory requirements. The ethical conduct of research pertains to more than data collection involving human participants and encompasses more than simply complying with specific federal regulations protecting such participants.

Discussions of scientific misconduct (e.g., Altman & Hernon, 1997; Neuman, 2011) focus on unethical behavior that often stems from the pressures researchers feel to make their arguments and build their careers. Failing to acknowledge the shortcomings of one’s research or suppressing findings of “no difference” may be mildly unethical practices. Taking shortcuts that involve falsifying or distorting data or research methods, or actually hiding negative findings, however, constitutes a more egregious violation. Such fraud delays scientific advances and undermines the public’s trust in scientific endeavors. Plagiarism, another form of research misconduct, occurs when a researcher claims as his or her own work that was done or written by others (e.g., colleagues and students) without adequate citation. Although not technically illegal if the “stolen” materials are not copyrighted (e.g., presenting Ibn Khaldun’s words as one’s own), plagiarism compromises research integrity, which charges scholars to be honest, fair, and respectful of others and to act in ways that do not jeopardize their own or others’ professional welfare.

Classification of these behaviors as forms of scientific misconduct derives in part from philosophical principles similar to those underlying the concern for the protection of the welfare of human research participants. Israel and Hay (2006) analyze philosophical approaches to how people might decide what is morally right—what should be done—in certain circumstances. One approach, focusing on the consequences of a behavior, comes from the writings of utilitarian philosopher John Stuart Mill and invokes a cost–benefit analysis. Essentially, if the benefits that arise from a behavior outweigh the risks or harm associated with that behavior, then it is morally acceptable. This approach, however, raises the question of what constitutes a benefit or harm. In contrast, nonconsequential approaches, originating in the works of Immanuel Kant, suggest that what is right is consistent with human dignity and worth. This perspective also emphasizes duties, irrespective of the consequences per se.

Social psychologist Herbert Kelman (1982) emphasizes consistency with human dignity in his evaluation of ethical issues in different social science methods. Kelman notes two components of human dignity: identity and community. The former refers to individuals’ capacity to take autonomous actions and to distinguish themselves from others, whereas the latter regards the interconnections among individuals to care for each other and to protect each other’s interests. Thus, to promote human dignity requires people to accord respect to others, to foster their autonomy, and to care actively for their well-being. In so conceptualizing human dignity, however, Kelman also draws attention to nonutilitarian consequences: “Respect for others’ dignity is important precisely because it has consequences for their capacity and opportunity to fulfill their potentialities” (p. 43). For example, lying to colleagues about scientific results or deceiving subjects about the purpose or procedures of an experiment violates human dignity by creating distrust within a community and/or by depriving individuals of information to meet their needs or to protect their interests. The principle of human dignity, a “master rule” according to Kelman, may be useful in resolving conflicts that arise in the development of a research project by weighing the costs and benefits of taking various courses of action and then choosing the actions that are most consistent with the preservation of human dignity.

Kelman’s abstract approach to human dignity substantively undergirds the three more accessible principles promulgated in the Belmont Report (National Commission, 1979), which serves as the cornerstone of the federal requirements for the protection of human research participants. First, respect for persons captures the notion that individuals are autonomous agents and allows for the protection of those with diminished capacity (i.e., members of vulnerable populations with limited autonomy due to legal status, age, health, subordination, etc.). Second, beneficence refers to an obligation to maximize possible benefits and to avoid or minimize potential harms. This principle is consistent with Kelman’s emphasis on the means to resolve conflicts between rules by opting for the best means to preserve human dignity. Third, the principle of justice concerns who ought to receive the benefits of research and bear its burdens. In this sense, justice pertains to the selection of research participants insofar as those who bear the burden of research should also be the ones to benefit from it. In addition, the justice principle requires reasonable, nonexploitative procedures. These elements of justice highlight Kelman’s emphasis on the community in protecting human dignity.

The principles of the Belmont Report encapsulate the extensive work of bioethicists such as Beauchamp and Childress (2008), who offer “calculability and simplicity in ethical decision making” (Israel & Hay, 2006, p. 18), especially in comparison to the lofty abstraction of other philosophical traditions. The abstract moral principles provide the larger framework for considering what is right and wrong in the pursuit of a scientific understanding of social behavior. In other words, it is important not to lose the fundamental concern with protecting human dignity—both for the individual and for the community—when designing a study, interacting with study participants, and communicating the study’s results.3 Researchers must consider the ethics of their research and take steps to protect study participants even when they are not strictly required to do so by federal regulations.


URL: https://www.sciencedirect.com/science/article/pii/B9780124046818000029

Adolescent Participation in Research

Lorah D. Dorn PhD, in Adolescent Medicine, 2008

Rules and Regulations

Following publication of The Belmont Report, the U.S. Department of Health and Human Services (DHHS) developed regulations for research with human subjects that were included in Title 45, Part 46 of the Code of Federal Regulations. Subsequent changes to Part 46 included the addition of subparts addressing specific concerns for vulnerable populations. Subpart D pertaining to children and adolescents (see below) was added in 1983 and revised in 1991.

Consent and Assent: Federal regulations specify the elements of informed consent that are required for DHHS-related research (Box 9-5). However, investigators should follow institution-specific guidelines when preparing consent forms because the content and language of each element can be specified by the individual IRB.

Although parents typically provide consent for their adolescents' research participation, many are poorly informed about what participation entails and do not understand terms such as randomization or placebo. Understanding has been shown to improve with the use of simple, brief documents; understandable language written at an appropriate reading level; the involvement of another family member in the consent process; sending the consent document home prior to initiation of the research protocol; provision of videotaped information; and face-to-face time.

If a parent consents but the adolescent objects to participation, the objection should be binding unless the research intervention directly benefits the adolescent and is unavailable outside the research context. A 1983 modification to the regulations requires the IRB to assure that provisions for child/adolescent assent are in place, unless the child/adolescent is incapable of providing it or the intervention offers the prospect of direct benefit available only in the research context. A more recent recommendation regarding child/adolescent assent, issued by the Institute of Medicine, is shown in Box 9-6.

Additional federal guidelines exist for obtaining informed consent when the research involves minors who are incarcerated, wards of the court, subject to shared parental custody, or in foster care.

Schools: Research conducted within a school often allows passive parental consent, in which a letter is sent home describing the study and informing the parent that the adolescent will participate unless the study personnel receive a written parental response stating otherwise. According to the Protection of Pupil Rights Amendment (the Hatch Amendment), federally funded research requires active parental permission if questions focus on political affiliation; mental and psychological problems; sexual behavior and attitudes; illegal, antisocial, self-incriminating, or demeaning behavior; critical appraisals of other individuals with whom respondents have close family relationships; legally recognized privileged or analogous relationships, such as those with lawyers, physicians, and ministers; religious practices, affiliations, or beliefs of the student or the student's parent; or income. The No Child Left Behind Act allows parental notification and inspection of surveys that are created by third parties and intended for student completion.

Level of Risk: A study is considered minimal risk if the likelihood and magnitude of the possible harm or discomfort is no greater than that encountered in routine life. The IRB can only approve research with children if the risk/benefit category is assigned at levels 1 through 3 of 4 (see Box 9-7). If it cannot be approved by the IRB, the study may be approved by an expert panel convened by DHHS, followed by an opportunity for public review and comments.

Waiver of Parental Consent: Section 46.408(c) of the Code of Federal Regulations allows an IRB to waive parental consent if the following conditions are met: (1) the research involves no more than minimal risk; (2) the waiver or alteration will not adversely affect the rights and welfare of the subjects; (3) the research could not be conducted without the waiver or alteration; and (4) whenever appropriate, the subjects will be provided with additional pertinent information after participation.

What is informed consent? Informed consent is one of the founding principles of research ethics. Its intent is that human participants can enter research freely (voluntarily), with full information about what it means for them to take part, and that they give consent before they enter the research.

Subsequently, two additional cases, Rolater v Strain and Schloendorff v Society of New York Hospital,15,16 established and solidified the principle of patient autonomy that ultimately formed the basis of the requirement for informed consent in medicine and research.

Informed consent upholds the ethical principle of autonomy. It involves telling the patient the diagnosis, the treatment alternatives, and the risks involved in each treatment.

The ethical doctrine of informed consent holds that disclosure is adequate if it allows patients to weigh intelligently the risks and benefits of available choices.