
I INTRODUCTION

Medical Ethics or Bioethics, the study and application of moral values, rights, and duties
in the fields of medical treatment and research. Medical decisions involving moral
issues are made every day in diverse situations such as the relationship between
patient and physician, the treatment of human and animal subjects in biomedical
experimentation, the allocation of scarce medical resources, the complex questions
that surround the beginning and the end of a human life, and the conduct of clinical
medicine and life-sciences research.

Issues in Medical Ethics

The advent of new medical and reproductive technologies in recent years has complicated
how ethical decisions are made in medical research and practice.
Medical ethics traces its roots back as far as ancient Greece, but the field gained particular
prominence in the late 20th century. Many of the current issues in medical ethics are the product
of advances in scientific knowledge and biomedical technology. These advances have presented
humanity not only with great progress in treating and preventing disease but also with new
questions and uncertainties about the basic nature of life and death. As people have grappled with
issues on the frontier of medical science and research, medical ethics has grown into a separate
profession and field of study. Professional medical ethicists bring expertise from fields such as
philosophy, social sciences, medicine, research science, law, and theology.

Medical ethicists serve as advisors to hospitals and other health-care institutions. They have also
served as advisors to government at various levels. For example, experts in medical ethics
assisted the United States government from 1974 to 1978 as members of the National
Commission for the Protection of Human Subjects of Biomedical and Behavioral Research. The
commission was formed in response to several large-scale experiments that used human subjects who were deceived
into participating. In the late 1990s the National Bioethics Advisory Commission, at the direction
of President Bill Clinton, studied issues related to the cloning of human beings. Ethicists also
serve as advisors to state legislatures in the writing of laws concerning the decision to end life
support, the use of genetic testing, physician-assisted suicide, and other matters. Medical ethics
has even become part of the landscape in the commercial world of science. An increasing
number of firms involved in biotechnology (the business of applying biological and genetic
research to the development of new drugs and other products) regularly consult with medical
ethicists about business and research practices.

The field of medical ethics is also an international discipline. The World Health Organization
founded the Council for International Organizations of Medical Sciences in 1949 to collect
worldwide data on the use of human subjects in research. In 1993 the United Nations
Educational, Scientific, and Cultural Organization (UNESCO) established an International
Bioethics Committee to examine and monitor worldwide issues in medicine and life-sciences
research. The UNESCO directory lists more than 500 centers outside the United States. The
International Association of Bioethics was founded in 1997 to facilitate the exchange of
information in medical ethics issues and to encourage research and teaching in the field.

In the United States and Canada more than 25 universities offer degrees in medical ethics. In
many instances, the subject is also part of the curriculum in the education of physicians and other
health-care professionals. Many medical schools include ethics courses that examine topics such
as theories of moral decision-making and the responsible conduct of medical research.

II HISTORY

The examination of moral issues in medicine largely began with the Greeks in the 4th century BC. The
Greek physician Hippocrates is associated with more than 70 works pertaining to medicine. However,
modern scholars are not certain how many of these works can be attributed to Hippocrates himself, as
some may have been written by his followers. One work that is generally credited to Hippocrates
contains one of the first statements on medical ethics. In Epidemics I, in the midst of instructions on
how to diagnose various illnesses, Hippocrates offers the following, “As to diseases, make a habit of
two things—to help and not to harm.”

The most famous ethical work—although the exact origin of the text is unknown—is the Hippocratic
Oath. In eight paragraphs, those swearing the oath pledge to “keep [patients] from harm and
injustice.” The oath also requires physicians to give their loyalty and support to their fellow physicians,
promise to apply dietetic measures for the benefit of the sick, refuse to provide abortion or euthanasia
(the act of ending the life of a person suffering from an incurable or painful illness), and swear not to make improper
sexual advances toward any members of a patient’s household. “In purity and holiness I will guard my life and my art,”
concludes one section of the oath. For most of the 20th century, it was common for modified versions
of the Hippocratic Oath to be recited by medical students upon the awarding of their degrees. For
many people, the oath still symbolizes a physician’s duties and obligations.

The idea of ethical conduct is common in many early texts, including those from ancient India and
China—cultures in which medical knowledge was viewed as divine or magical in origin. Echoing the
Hippocratic Oath, the Caraka Samhita, a Sanskrit text written in India roughly 2,000 years ago, gives
physicians the following commandment: “Day and night, however you may be engaged, you shall
strive for the relief of the patient with all your heart and soul. You shall not desert the patient even for
the sake of your life or living.” Similar sentiments can be found in the Chinese text Nei Jing (The Yellow
Emperor's Classic of Inner Medicine), dating from the 2nd century BC. This work stressed the connection
between virtue and health. Many centuries later, in the 7th century AD, the work of the Chinese physician Sun Simiao
emphasized compassion and humility: “...a Great Physician should not pay attention to status, wealth,
or age.... He should meet everyone on equal ground....”

In Europe during the Middle Ages, the ethical standards of physicians were put to the test by the
bubonic plague, the highly contagious Black Death that arrived around the mid-1300s and remained a
threat for centuries. When plague broke out, physicians had a choice: They could stay and treat the
sick—risking death in the process—or flee. The bubonic plague and other epidemics provide an early
example of the challenges that still exist today when doctors must decide whether they are willing to
face personal risks when caring for their patients.

By the 18th century, particularly in Britain, the emphasis in medical ethics centered on proper,
honorable behavior. One of the best-known works from the period is Medical Ethics; or, a Code of
Institutes and Precepts, Adapted to the Professional Conduct of Physicians and Surgeons, published in
1803 by the British physician Thomas Percival. In his 72 precepts, Percival urged a level of care and
attention such that doctors would “inspire the minds of their patients with gratitude, respect, and
confidence.” His ethics, however, also permitted withholding the truth from a patient if the truth might
be “deeply injurious to himself, to his family, and to the public.” At roughly the same time American
physician Benjamin Rush, a signer of the Declaration of Independence, was promoting American
medical ethics. His lectures to medical students at the University of Pennsylvania in Philadelphia
spoke of the virtues of generosity, honesty, piety, and service to the poor.
By the early 19th century, it seemed that such virtues were in short supply, and the public generally
held physicians in North America in low esteem. Complicating the problem was the existence of a
variety of faith healers and other unconventional practitioners who flourished in an almost entirely
unregulated medical marketplace. In part to remedy this situation, physicians convened in 1847 to
form a national association devoted to the improvement of standards in medical education and
practice. The American Medical Association (AMA), as the group called itself, issued its own code of
ethics, stating, “A physician shall be dedicated to providing competent medical service with
compassion and respect for human dignity. A physician shall recognize a responsibility to participate in
activities contributing to an improved community.” This text was largely modeled on the British code
written by Percival, but it added the idea of mutually shared responsibilities and obligations among
doctor, patient, and society. Since its creation, the AMA Code has been updated as challenging ethical
issues have arisen in science and medicine. The code now consists of seven principles centered on
compassionate service along with respect for patients, colleagues, and the law. The Canadian Medical
Association (CMA), established in 1867, also developed a Code of Ethics as a guide for physicians.
Today the CMA code provides over 40 guidelines about physician responsibilities to patients, society,
and the medical profession.

In recent years, however, the field of medical ethics has struggled to keep pace with the many
complex issues raised by new technologies for creating and sustaining life. Artificial-respiration
devices, kidney dialysis, and other machines can keep patients alive who previously would have
succumbed to their illnesses or injuries. Advances in organ transplantation have brought new hope to
those afflicted with diseased organs. New techniques have enabled prospective parents to conquer
infertility. Progress in molecular biology and genetics has placed scientists in control of the most basic
biochemical processes of life. With the advent of these new technologies, codes of medical ethics have
become inadequate or obsolete as new questions and issues continue to confront medical ethicists.

III HOW ARE ETHICAL DECISIONS MADE IN MEDICINE?

Throughout history the practice of medical ethics has drawn on a variety of philosophical concepts.
One such concept is deontology, a branch of ethical teaching centered on the idea that actions must
be guided above all by adherence to clear principles, such as respect for free will. In contemporary
bioethics, the idea of autonomy has been of central importance in this tradition. Autonomy is the right
of individuals to determine their own fates and live their lives the way they choose, as long as they do
not interfere with the rights of others. Other medical ethicists have championed a principle known as
utilitarianism, a moral framework in which actions are judged primarily by their results. Utilitarianism
holds that actions or policies that achieve good results—particularly the greatest good for the greatest
number of people—are judged to be moral. Still another philosophical idea that has been central to
medical ethics is virtue ethics, which holds that those who are taught to be good will do what is right.

Many medical ethicists find that these general philosophical principles are abstract and difficult to
apply to complex ethical issues in medicine. To better evaluate medical cases and make decisions,
medical ethicists have tried to establish specific ethical frameworks and procedures. One system,
developed in the late 1970s by the American philosopher Tom Beauchamp and the American
theologian James Childress, is known as principlism, or the Four Principles Approach. In this system
ethical decisions pertaining to biomedicine are made by weighing the importance of four separate
elements: respecting each person’s autonomy and their right to their own decisions and beliefs; the
principle of beneficence, helping people as the primary goal; the related principle of nonmaleficence,
refraining from harming people; and justice, distributing burdens and benefits fairly.

Medical ethicists must often weigh these four principles against one another. For example, all four
principles would come into play in the case of a patient who falls into an irreversible coma without
expectation of recovery and who is kept alive by a mechanical device that artificially maintains basic
life functions such as heartbeat and respiration. The patient’s family members might argue that the
patient, if able to make the decision, would never want to be sustained on a life-support machine. They
would argue from the viewpoint of patient autonomy—that the patient should be disconnected from
the machine and allowed to die with dignity. Doctors and hospital staff, meanwhile, would likely be
concerned with the principles of beneficence and nonmaleficence—the fundamental desire to help the
patient or to refrain from harmful actions, such as terminating life support. Consulting on such a case,
the medical ethicist would help decide which of these conflicting principles should carry the most
weight. An ethicist using principlism might work toward a solution that addresses both sides of the
conflict. Perhaps the family and medical staff could agree to set a time limit during which doctors
would have the opportunity to exhaust every possibility of cure or recovery, thus promoting
beneficence. But at the end of the designated period, doctors would agree to terminate life support in
ultimate accordance with the patient’s autonomy.

Although some medical ethicists follow principlism, others employ a system known as casuistry, a
case-based approach. When faced with a complex bioethical case, casuists attempt to envision a
similar yet clearer case in which virtually anyone could agree on a solution. By weighing solutions to
the hypothetical case, casuists work their way toward a solution to the real case at hand.

Casuists might confront a case that involves deciding how much to explain to a gravely ill patient
about his or her condition, given that the truth might be so upsetting as to actually interfere with
treatment. In one such case cited by American ethicist Mark Kuczewski from the Center for the Study
of Bioethics at the Medical College of Wisconsin in Milwaukee, a 55-year-old man was diagnosed with
the same form of cancer that had killed his father. After a surgical procedure to remove the tumor, the
patient’s family members privately told his doctors that if the patient knew the full truth about his
condition, he would be devastated. In weighing this matter, a casuist might envision a clear-cut case in
which a patient explicitly instructs doctors or caregivers not to share any negative information about
prospects for cure or survival. The opposite scenario would be a case in which the patient clearly
wishes to know every bit of diagnostic information, even if the news is bad. The challenge for the
casuist is to determine which scenario, or paradigm, most closely resembles the dilemma at hand, and,
with careful consideration of the case, try to proceed from the hypothetical to a practical solution. In
this particular case, the cancer patient was informed that his tumor had not been successfully
removed and that more curative measures were called for. His treatments continued. In the end,
however, he died of the disease without ever being told of a terminal diagnosis.
IV CURRENT MEDICAL ETHICS ISSUES

Casuistry and principlism are just two of many bioethical frameworks. Each approach has its
proponents, and volleys of disagreement and debate frequently fly among the various schools of
thought. Yet each approach represents an attempt to deal with thorny, conflicting issues that
commonly arise in the complex and contentious arena of medicine. These issues can include the rights
and needs of the patient, who may, for example, decide to discontinue treatment for a life-threatening
illness, preferring to die with dignity while still mentally competent to make that choice. There is the
obligation of the doctor, whose duty it is to save and prolong life. There is the hospital or health-care
system, whose administrators must weigh the obligation to sustain life against the often-enormous
expense of modern medical methods. And there is the law, which seeks to protect citizens from harm
while at the same time respecting autonomy. The remainder of this article discusses some of the most
prominent dilemmas and decisions faced by modern medical ethicists.

A Physician-Patient Issues

Since the time of Hippocrates more than 2,000 years ago, a central concern of medical ethics has been
the relationship between physician and patient. Aspects of this relationship continue to be the source
of ethical dilemmas. For example, what is the extent of the doctor’s duty to a patient if treating the
patient places the doctor at risk? This issue was brought to the forefront in recent years by the advent
of the AIDS crisis. HIV, the virus that causes AIDS, can be spread by contact with blood and other
bodily fluids of an infected person. This poses a potential hazard for doctors and other health-care
workers. In the 1980s, during the early days of the AIDS epidemic, some doctors refused to treat
persons in high-risk groups for AIDS, such as homosexual men and users of intravenous drugs—even
though these patients were not known to be infected with HIV.

Is there an ethical obligation for doctors to treat patients with communicable and potentially fatal
diseases? In a statement in 1988, the AMA’s Council on Ethical and Judicial Affairs declared that no
patient should suffer discrimination or be denied care because of infection with HIV. Many states and
cities have passed laws barring health-care discrimination against persons with HIV and AIDS.
Nevertheless, the licensing boards that oversee the practice of medicine in each state have taken
varied approaches. The boards in some states have passed regulations against any refusal to treat
persons with HIV infection; other state boards specify that doctors may refuse to treat such patients
provided that they make a reasonable effort to secure alternate care. In 1998 the U.S. Supreme Court
ruled that denying care to an HIV-infected person violated the federal Americans with Disabilities Act.
AIDS advocates hope that this ruling will protect the rights of many people with AIDS.

B Human Experimentation

Ethical issues arise not only in the clinical setting of a hospital or doctor’s office, but in the laboratory
as well. A main concern of medical ethicists is monitoring the design of clinical trials and other
experiments involving human subjects. Medical ethicists are particularly interested in confirming that
all the subjects have voluntarily given their consent and have been fully informed of the nature of the
study and its potential consequences. In this particular area of medical ethics, one infamous period in
history has echoed loudly for more than half a century: the experiments conducted by Nazi doctors on
captive, unwilling human subjects during World War II (1939-1945). Under the guise of science,
thousands of Jews and other prisoners were subjected to grotesque and horrifying procedures. Some
were frozen to death, or slowly and fatally deprived of oxygen in experiments that simulated the
effects of high altitude. Others were deliberately infected with cholera and other infectious agents or
subjected to bizarre experiments involving transfusions of blood or transplants of organs. Many
underwent sterilization, as Nazi doctors investigated the most efficient means of sterilizing what they
considered inferior populations. These inhumane acts so outraged the world that, after the war,
trials were held in Nürnberg, Germany, where many of the responsible Nazi physicians were convicted
as war criminals and some were executed.

These trials essentially marked the beginning of modern medical ethics. The international tribunal that
prosecuted the Nazi doctors at Nürnberg drew up a list of conditions necessary to ensure ethical
experimentation involving humans. This document, which came to be called the Nuremberg Code,
stressed the importance of voluntary, informed consent of subjects in well-designed experimental
procedures that would aid society without causing undue suffering or injury.

Unfortunately, not all scientists adhered to the Nuremberg Code. In the United States, the decades
following World War II saw several incidents of experiments on unwitting subjects who had not given
informed consent. During the 1940s and 1950s, for example, hundreds of pregnant women were given
a radioactive solution that enabled doctors to measure the amounts of iron in their blood. In the mid-
1950s scientists infected developmentally disabled children at a New York state hospital with hepatitis
in order to test a vaccine for the disease. In the early 1960s doctors injected cancer cells into the skin
of elderly, debilitated patients in a hospital in Brooklyn, New York, to study the patients’ immune
responses. Perhaps the most shameful episode in American medical history was the federal
government’s Tuskegee syphilis experiment. This 40-year study began in 1932 in Tuskegee, Alabama,
and tracked the health of approximately 600 African-American men, two-thirds of whom suffered from
the sexually transmitted disease syphilis. Most of the subjects were poor and illiterate, and the
researchers deliberately kept the syphilis victims uninformed of their condition. Worse yet, the
researchers did not treat the disease, even though a cure for syphilis was readily available during the
last 30 years of the study. Instead, the Public Health Service tracked the men, using them to study the
physiological effects of untreated syphilis. When the press broke the story of the Tuskegee experiments
in 1972, the revelations provided yet another spur to the development of modern bioethics standards.
(In 1997 President Clinton issued a formal apology to the survivors of the Tuskegee Study and their
families.)

Today clinical studies continue to present bioethical challenges. Designing safe clinical experiments
and balancing the need for scientific objectivity against concern for the human subjects can be a
difficult proposition. An ethical dilemma is often presented by the standard practice of using a placebo
in a trial for a new drug or other medical innovation. A placebo is an inactive substance that is given to
some subjects in a study in order to help researchers judge the real effects of the compound being
tested. But is it ethical in the trial of an AIDS drug, for example, to give a useless placebo to persons
suffering from a potentially fatal condition when other persons in the study are receiving what may be
a beneficial drug? That is just one question that medical ethicists weigh in the design of experiments
involving humans.

V UNRESOLVED ISSUES FOR THE 21ST CENTURY

A variety of issues will face medical ethicists in the 21st century, such as advances in cloning
technology, new knowledge of the human brain, and the wealth of genetic data from the Human
Genome Project. Population changes worldwide will also affect the course of medicine and will raise
issues of medical ethics. By roughly the year 2020, the number of Americans over the age of 65 is
expected to double. This aging of the population seems certain to increase the demand on the U.S.
health-care system—and to increase health-care costs. Issues concerning equitable access to medical
care will likely come to the fore, as resources for senior citizens compete with other costs that must be
borne by taxpayers. And, with an increase in the number of elderly citizens, ethical dilemmas
surrounding end-of-life issues seem certain to become more prevalent. Determining the quality of life
for aged patients sustained by artificial means, deciding when treatment has run its course for the
aged—these will be issues that medical ethicists will need to address. As they have for centuries,
medical ethicists will continue to ponder, debate, and advise on the most basic and profound questions
of life and death.

Medical ethics
From Wikipedia, the free encyclopedia


Medical ethics is primarily a field of applied ethics, the study of moral values and judgments as
they apply to medicine. As a scholarly discipline, medical ethics encompasses its practical
application in clinical settings as well as work on its history, philosophy, theology, and sociology.

Medical ethics tends to be understood narrowly as an applied professional ethics, whereas
bioethics has embraced more expansive concerns, touching upon the philosophy of science and
the critique of biotechnology. Still, the two fields often overlap, and the distinction is more a
matter of style than professional consensus. Medical ethics shares many principles with other
branches of healthcare ethics, such as nursing ethics.
There are various ethical guidelines. The Declaration of Helsinki is regarded as one of the most
authoritative.[1]

By the 18th and 19th centuries, medical ethics emerged as a more self-conscious discourse. For
instance, authors such as the British doctor Thomas Percival (1740-1804) of Manchester wrote
about "medical jurisprudence" and reportedly coined the phrase "medical ethics." Percival's
guidelines related to physician consultations have been criticized as being excessively protective
of the home physician's reputation. Jeffrey Berlant is one such critic who considers Percival's
codes of physician consultations as being an early example of the anti-competitive, "guild"-like
nature of the physician community.[2][3] In 1847, the American Medical Association adopted its
first code of ethics, based in large part upon Percival's work.[2] While the
secularized field borrowed largely from Catholic medical ethics, in the 20th century a
distinctively liberal Protestant approach was articulated by thinkers such as Joseph Fletcher. In
the 1960s and 1970s, building upon liberal theory and procedural justice, much of the discourse
of medical ethics went through a dramatic shift and largely reconfigured itself into bioethics.[4]

Since the 1970s, the growing influence of ethics in contemporary medicine can be seen in the
increasing use of Institutional Review Boards to evaluate experiments on human subjects, the
establishment of hospital ethics committees, the expansion of the role of clinician ethicists, and
the integration of ethics into many medical school curricula.

Values in medical ethics

In the United Kingdom, the General Medical Council provides clear overall modern guidance in the
form of its 'Good Medical Practice' statement. Other organisations, such as the Medical
Protection Society and a number of university departments, are often consulted by British
doctors regarding issues relating to ethics.

How does one ensure that appropriate ethical values are being applied within hospitals? Effective
hospital accreditation requires that ethical considerations are taken into account, for example
with respect to physician integrity, conflicts of interest, research ethics and organ transplantation
ethics.

Autonomy

The principle of autonomy recognizes the right of patients to make informed decisions about their
own care. Autonomy is also a general indicator of health: many diseases are characterised by a
loss of autonomy in various ways. This makes autonomy an indicator both of personal well-being
and of the well-being of the profession, and it has implications for medical ethics: is the aim of
health care to do good and benefit from it, or is it to do good to others and have them, and
society, benefit from this? (Ethics, by definition, tries to find a beneficial balance between the
activities of the individual and their effects on the collective.)

By considering autonomy as a gauge of (self) health care, the medical and the ethical perspective
both benefit from the implied reference to health.

Beneficence

James Childress and Tom Beauchamp in Principles of Biomedical Ethics (1978) identify beneficence as
one of the core values of health care ethics. Some scholars, such as Edmund Pellegrino, argue
that beneficence is the only fundamental principle of medical ethics. They argue that healing
should be the sole purpose of medicine, and that endeavors like cosmetic surgery, contraception
and euthanasia fall beyond its purview.

Non-maleficence

The principle of non-maleficence is embodied in the phrase "first, do no harm," or the Latin primum
non nocere. In practice, however, many treatments carry some risk of harm. In some circumstances, e.g. in
desperate situations where the outcome without treatment will be grave, risky treatments that
stand a high chance of harming the patient will be justified, as the risk of not treating is also very
likely to do harm. So the principle of non-maleficence is not absolute, and must be balanced
against the principle of beneficence (doing good).

Some American physicians interpret this principle to exclude the practice of euthanasia, though
not all concur. Probably the most extreme recent example of the violation of the non-maleficence
dictum is the case of Dr. Jack Kevorkian, who was convicted of second-degree murder in Michigan
in 1999 after demonstrating active euthanasia on the television news program 60 Minutes in 1998.

In some countries euthanasia is accepted as standard medical practice, and legal regulations
assign it to the medical profession. In such nations the aim is to alleviate the suffering of patients
whose diseases are known to be incurable by the methods available in that culture. In that sense,
primum non nocere rests on the recognition that the medical expert's inability to offer help leaves
the patient in known, great, and ongoing suffering; "not acting" in those cases is believed to be
more damaging than actively relieving the suffering of the patient. The ability to offer help
naturally depends on the limits of what the practitioner can do, limits that are characteristic of
each form of healing and of the legal system of the specific culture, but the aim of "doing no
harm" remains the same: it gives the medical practitioner a responsibility to help the patient
through the intentional and active relief of suffering in those cases where no cure can be offered.

"Non-maleficence" is defined by its cultural context. Every culture has its own cultural collective
definitions of 'good' and 'evil'. Their definitions depend on the degree to which the culture sets its
cultural values apart from nature. In some cultures the terms "good" and "evil" are absent: for
them these words lack meaning as their experience of nature does not set them apart from nature.
Other cultures place the humans in interaction with nature, some even place humans in a position
of dominance over nature. The religions are the main means of expression of these
considerations.

Depending on this cultural consensus, as expressed through a society's religious, political, and
legal systems, the legal definition of non-maleficence differs. Violation of non-maleficence is the
subject of medical malpractice litigation, and the relevant regulations differ over time and from
nation to nation.

Double effect

Some interventions undertaken by physicians can create a positive outcome while also
potentially doing harm. The combination of these two circumstances is known as the "double
effect." The most applicable example of this phenomenon is the use of morphine in the dying
patient. Such use of morphine can ease the pain and suffering of the patient, while
simultaneously hastening the demise of the patient through suppression of the respiratory drive.

Informed consent


Main article: Informed consent

Informed consent in ethics usually refers to the idea that a person must be fully informed about
and understand the potential benefits and risks of their choice of treatment. An uninformed
person is at risk of mistakenly making a choice not reflective of his or her values or wishes. It
does not specifically mean the process of obtaining consent, nor the specific legal requirements,
which vary from place to place, for capacity to consent. Patients can elect to make their own
medical decisions, or can delegate decision-making authority to another party. If the patient is
incapacitated, laws around the world designate different processes for obtaining informed
consent, typically by having a person appointed by the patient or their next-of-kin make
decisions for them. The value of informed consent is closely related to the values of autonomy
and truth telling.

A correlate to "informed consent" is the concept of informed refusal.

Confidentiality
Main article: Confidentiality

Confidentiality is commonly applied to conversations between doctors and patients. This concept
is commonly known as patient-physician privilege.

Legal protections prevent physicians from revealing their discussions with patients, even under
oath in court.

Confidentiality is mandated in America by HIPAA laws, specifically the Privacy Rule, and
various state laws, some more rigorous than HIPAA. However, numerous exceptions to the rules
have been carved out over the years. For example, many states require physicians to report
gunshot wounds to the police and impaired drivers to the Department of Motor Vehicles.
Confidentiality is also challenged in cases involving the diagnosis of a sexually transmitted
disease in a patient who refuses to reveal the diagnosis to a spouse, and in the termination of a
pregnancy in an underage patient, without the knowledge of the patient's parents. Many states in
the U.S. have laws governing parental notification in underage abortion.[3]

Traditionally, medical ethics has viewed the duty of confidentiality as a relatively non-negotiable
tenet of medical practice. More recently, critics like Jacob Appel have argued for a more nuanced
approach to the duty that acknowledges the need for flexibility in many cases. [5]

Criticisms of orthodox medical ethics

It has been argued that mainstream medical ethics is biased by the assumption of a framework in
which individuals are not simply free to contract with one another to provide whatever medical
treatment is demanded, subject to the ability to pay. Because a high proportion of medical care is
typically provided via the welfare state, and because there are legal restrictions on what
treatment may be provided and by whom, an automatic divergence may exist between the wishes
of patients and the preferences of medical practitioners and other parties. Tassano[6] has
questioned the idea that Beneficence might in some cases have priority over Autonomy. He
argues that violations of Autonomy more often reflect the interests of the state or of the supplier
group than those of the patient.

Recourse to professional regulatory bodies or to the courts of law remains a valid social remedy in such cases.

Importance of communication

Many so-called "ethical conflicts" in medical ethics are traceable back to a lack of
communication. Communication breakdowns between patients and their healthcare team,
between family members, or between members of the medical community, can all lead to
disagreements and strong feelings. These breakdowns should be remedied, and many apparently
insurmountable "ethics" problems can be solved with open lines of communication.

Ethics committees

Many times, simple communication is not enough to resolve a conflict, and a hospital ethics
committee must convene to decide a complex matter. These bodies are composed primarily of
health care professionals, but may also include philosophers, lay people, and clergy.

The inclusion of philosophers or clergy reflects the importance a society attaches to the basic
values involved. In Sweden, for example, the presence of the philosopher Torbjörn Tännsjö on
several such committees suggests that secular trends are gaining influence.

Cultural concerns

Cultural differences can create difficult medical ethics problems. Some cultures have spiritual or
magical theories about the origins of disease, for example, and reconciling these beliefs with the
tenets of Western medicine can be difficult.
Truth-telling

Some cultures do not place a great emphasis on informing the patient of the diagnosis, especially
when cancer is the diagnosis. Even American medicine did not emphasize truth-telling in cancer
cases until the 1970s. In American medicine today, the principle of informed consent takes
precedence over other ethical values, and patients are usually at least asked whether they want to
know the diagnosis.

Online business practices

The delivery of diagnoses online can lead patients to believe that doctors in some parts of the
country are at the direct service of drug companies, offering a diagnosis only as convenient as
whatever drug still holds patent rights. Physicians and drug companies have been found competing
for top-ten search-engine rankings to lower the cost of selling these drugs, with little to no
patient involvement.[7]

Conflicts of interest

Physicians should not allow a conflict of interest to influence medical judgment. In some cases,
conflicts are hard to avoid entirely, but doctors have a responsibility to avoid entering into such
situations. Unfortunately, research has shown that conflicts of interest are very common among both
academic physicians[8] and physicians in practice[9]. The Pew Charitable Trusts has
announced the Prescription Project for "academic medical centers, professional medical societies
and public and private payers to end conflicts of interest resulting from the $12 billion spent
annually on pharmaceutical marketing".

Referral

For example, doctors who receive income from referring patients for medical tests have been
shown to refer more patients for medical tests [10]. This practice is proscribed by the American
College of Physicians Ethics Manual [11].

Fee splitting and the payment of commissions to attract referrals of patients are considered
unethical and unacceptable in most parts of the world. The practice is nevertheless becoming
routine in some countries, such as India, where many urban practitioners pay a percentage of
office-visit charges, laboratory tests, and hospital fees to unaccredited "quacks" or semi-accredited
"practitioners of alternative medicine" who refer the patients to them. It is tolerated in some areas
of US medical care as well.

Vendor relationships

Studies show that doctors can be influenced by drug company inducements, including gifts and
food. [12] Industry-sponsored Continuing Medical Education (CME) programs influence
prescribing patterns. [13] Many patients surveyed in one study agreed that physician gifts from
drug companies influence prescribing practices. [14] A growing movement among physicians is
attempting to diminish the influence of pharmaceutical industry marketing upon medical
practice, as evidenced by Stanford University's ban on drug company-sponsored lunches and
gifts. Other academic institutions that have banned pharmaceutical industry-sponsored gifts and
food include the University of Pennsylvania, and Yale University. [15]

Treatment of family members

Many doctors treat their family members. Doctors who do so must be vigilant not to create
conflicts of interest or treat inappropriately.[16][17]

Sexual relationships

Sexual relationships between doctors and patients can create ethical conflicts, since sexual
consent may conflict with the fiduciary responsibility of the physician. Doctors who enter into
sexual relationships with patients face the threats of deregistration and prosecution. In the early
1990s it was estimated that 2-9% of doctors had violated this rule[18]. Sexual relationships
between physicians and patients' relatives may also be prohibited in some jurisdictions, although
this prohibition is highly controversial.[19]

Futility

The concept of medical futility has been an important topic in discussions of medical ethics.
What should be done if there is no chance that a patient will survive but the family members
insist on advanced care? Previously, some articles defined futility as the patient having less than a
one percent chance of surviving. Some of these cases wind up in the courts. Advance directives
include living wills and durable powers of attorney for health care. (See also Do Not Resuscitate
and cardiopulmonary resuscitation) In many cases, the "expressed wishes" of the patient are
documented in these directives, and this provides a framework to guide family members and
health care professionals in the decision making process when the patient is incapacitated.
Undocumented expressed wishes can also help guide decisions in the absence of advance
directives, as in the Quinlan case in New Jersey.

"Substituted judgment" is the concept that a family member can give consent for treatment if the
patient is unable (or unwilling) to give consent himself. The key question for the decision
making surrogate is not, "What would you like to do?", but instead, "What do you think the
patient would want in this situation?".

Courts have sometimes supported a family's definition of futility that extends to simple biological
survival, as in the Baby K case, in which the courts ordered continued life support for an
anencephalic infant at the mother's insistence. (The earlier Baby Doe case, by contrast, involved an
infant with Down syndrome and a correctable tracheo-esophageal fistula whose parents declined
the repair based on their view of the child's expected quality of life; the child died days later.)

A more in-depth discussion of futility is available at futile medical care. In some hospitals,
medical futility is referred to as "non-beneficial care."

• Baby Doe Law: establishes state protection for a disabled child's right to life,
ensuring that this right is protected even over the wishes of parents or
guardians in cases where they want to withhold treatment.
Critics claim that this is how the State, and perhaps the Church, through its adherents in the
executive and the judiciary, interferes in order to further its own agenda at the expense of the
patient's. The Reagan administration's "Baby Doe" regulations, later codified in the 1984 Child
Abuse Amendments, were a direct response to the Baby Doe case, in an effort to prop up "Right to
Life" philosophies.

Medical research

• Animal research
• CIOMS Guidelines
• Declaration of Geneva
• Declaration of Helsinki
• Declaration of Tokyo
• Ethical problems using children in clinical trials
• First-in-man study
• Good clinical practice
• Health Insurance Portability and Accountability Act
• Institutional Review Board
• Nuremberg Code
• Clinical Equipoise
• Patients' Bill of Rights
• Universal Declaration of Human Rights

Famous cases in medical ethics

Many famous cases in medical ethics illustrate and helped define important issues.

• Willowbrook Study
• Tuskegee Study
• Terri Schiavo case
• Jack Kevorkian
• Nancy Cruzan
• Karen Ann Quinlan
• Jana Van Voorhis
• Doctors' Trial
• Sue Rodriguez
• Shefer case[20]
• Baby K
• Sun Hudson
• Jesse Koochin
• Tony Bland
• HeLa
• TGN1412
• Dax Cowart

Distribution and utilization of research and care

• Accessibility of health care
• Basis of priority for organ transplantation
• Institutionalization of care access through HMOs and medical insurance companies

When I lecture to audiences about the Jewish approach to medical ethics, one issue always lurks below the
surface: Why medical ethics? And why particularly Jewish medical ethics? Isn't it sufficient to allow the medical
field to police itself? Aren't physicians moral people?
Today the world is filled with institutional review boards and hospital ethics committees. Most, if not all, medical
schools in the United States now have curricula in medical ethics, a phenomenon that was not true even 15
years ago. Why is this so and what events brought about this increased interest in teaching ethics to doctors?

When we look back at the past hundred years, we face the uncomfortable reality that the scientific community,
and the medical community in particular, have been the impetus for some of the most barbaric and immoral
programs of the 20th century. It becomes painfully apparent that secular, scientific and medical credentials do
not imply moral rectitude. In fact, an immoral person with credentials is granted latitude that would not
otherwise be bestowed upon the average person.1

If we exclude those individuals who consciously choose to act unethically, we must still ask what motivates
people who are ostensibly dedicated to helping mankind to perpetrate injustice. The answer is often simpler
than we might imagine. Most major medical ethical lapses this century can be traced back to a single flawed
philosophy. The ability to trample the rights of fellow human beings without compunction is rooted in a belief
that the needs of society outweigh the needs of the individual.

While at face value, the concept of the needs of the many outweighing the needs of the individual may sound
reasonable, this idea is fraught with danger. Let me illustrate this point by examining some vignettes from the
20th century that deal with ethical lapses.

The following description of a lecture appeared in a certain magazine in 1968:

"[The speaker] inferred that 'we cannot continue to regard all human life as sacred'. The idea that every person
has a soul and that his life must be saved at all costs should not be allowed; instead the status of life and death
should be reconsidered. If, for example, a child were to be considered legally born when two days old, it could
be examined to see whether it was an 'acceptable member of human society'. It might also be desirable to
define a person as legally dead when he was past the age of 80 or 85, and then expensive medical equipment
should be forbidden to him . . .

"If new biological advances demand a continuous readjustment of ethical ideas, how are people to be
persuaded to adapt to the situation? Clearly by education, and [the speaker] did not think it right that religious
instruction should be given to young children. Instead they should be taught the modern scientific view of man's
place in the universe, in the world and in society, and the nature of scientific truth. Not only traditional religious
values must be re-examined, but also what might be called liberal views about society. It is obvious that not all
men are born equal and it is by no means clear that all races are equally gifted."

While the magazine states that the speaker "made it clear that he was not advocating them as such, but was
merely concerned to indicate the kind of ways in which society may be forced to reconsider conventional
ethics," the speaker clearly feels that his suggestions are among the acceptable alternatives to "conventional
ethics."

Who might the speaker be? What is he advocating?


First, he believes that ethics are completely relative. We can redefine life and death. Life begins at two days if
we say that it does. Life ends at 80 or 85 if we say that it does. It is not murder to kill someone, so long as we
have redefined him or her as no longer human.

His suggestions are also clearly racist. In fact, his theoretical approach rejects many of the common beliefs that
modern Western society has accepted as core values. Interestingly, he also wisely recognizes that the
impediment to instituting his plan is religious education, a force that traditionally advocates for morality. He
frames his battle well, as Judaism in particular has always been a voice for morality even in a world gone mad.
One of Adolf Hitler's accusations against the Jews was that they corrupted society by introducing the concept of
conscience.2

The ideas of the speaker may resemble those of Adolf Hitler or David Duke, but the speaker is neither of them.
If the speaker were an avowed racist, part of a fringe movement, we could more easily dismiss him. While this
passage may sound like a quote from Mein Kampf, it is really a quote from the November 2, 1968 issue of
"Nature," the prestigious science periodical.3

Why was there no outcry from the scientific world? Even if the speaker himself did not necessarily advocate
these views, why was there no outcry that such ideas should even be considered in an ethical society?

Because the speaker, while clearly espousing immoral and racist ideas, was also a Nobel laureate. Who was
he? Francis Crick, the English scientist who shared the 1962 Nobel Prize in physiology or medicine for
establishing the function and double helix structure of DNA, the key substance in the transmission of hereditary
characteristics. The aura of the messenger's scientific credentials blinds us to the insidious nature of the
message. So much for scientific credentials bestowing moral rectitude!

What drives Dr. Crick to advocate such a horrendous model for the future? Before answering, let us examine
another quote, from Dr. Michael Thaler's article,4 "The Art and Science of Killing Children and Other
Undesirable Beings," published in the journal California Pediatrician, Fall 1992:

"Physicians in Hitler's Reich participated in programs designed by other physicians to kill their chronically ill
patients, to destroy infants with inherited disorders and congenital malformations and to sterilize thousands of
victims against their will. By the time the Third Reich lay in ruins, German doctors had sterilized at least
460,000 men and women diagnosed as unfit or disturbed . . . dispatched 250,000 to 300,000 chronically ill
patients by means of starvation, gas inhalation, prolonged sedation and toxic injections; gassed and cremated
more than 10,000 infants and children with disorders ranging from congenital heart disease to epilepsy." In his
book, Hitler's Doctors, Kater has shown that nearly 50% of all German physicians were active in the Nazi party
by 1939. The true extent of their commitment to Nazi doctrine is seen particularly clearly in the context of
German society at large: by 1937, more than 7% of all German doctors had been inducted into the SS, a rate
14 times above that of the general population."

It is important to understand that this passage is dealing with German physicians killing their own Aryan
countrymen. The eugenics program of the German medical community, dating back to the 19th century, formed
the underpinning of its assault on its own people, and was the forerunner to the much greater Holocaust that
followed. The Nazi Party merely borrowed the philosophy of the physicians and used their own totalitarian
tactics to carry them to their logical conclusion. How could physicians have been so willing to participate in the
killing of their own neighbors, not to mention their participation in the Holocaust?
It is because they were willing to subordinate the rights of the individual to the needs of the state. The needs of the
Reich were made paramount, with all individuals deriving value only with respect to their contribution to society.
The continued support of those who provided no benefit to the Fatherland, or even worse those who were a
"drain on society," could not be justified from an "ethical" standpoint. Is this not a familiar theme in the world
today? (See "Should Terri Schiavo live or die?")

One may object that the events in Nazi Germany were an aberration, not indicative of anything. But were they?

In 1924, Adolf Hitler wrote the first volume of his manifesto, Mein Kampf, clearly laying out his world view, a
view that only had room for the master race. In 1927, six years before Hitler became chancellor, and
almost 15 years before the formulation of the Final Solution, the following words were penned in the United
States:

"We have seen more than once that the public welfare may call upon the best citizens for their lives. It would be
strange if it could not call upon those who already sap the strength of the state for these lesser sacrifices . . . It
is better for the world, if instead of waiting to execute degenerate offspring for crime, or let them starve for their
imbecility, society can prevent those who are manifestly unfit from continuing their kind. The principle that
sustains compulsory vaccination is broad enough to cover cutting the Fallopian tubes. Three generations of
imbeciles are enough."

Like the quote from Dr. Crick, this quote closely mirrors Nazi doctrine. Just as Hitler claimed that the Jews and
other undesirables at home in Germany had sapped the strength of the great Aryan army and caused their
defeat in WWI, this author claimed that "degenerate offspring" were sapping the strength of the United States.
But the author is not a Nazi. The author is not a totalitarian dictator. The author is Oliver Wendell Holmes, Jr.,
associate justice of the United States Supreme Court, from his majority opinion in the 1927 case of Buck
versus Bell.5 This United States Supreme Court decision was based on the testimony of medical professionals.
Mr. Holmes' opinion was not an aberration, but reflected the mainstream view of eugenics in the early 20th
century.

The noted evolutionary biologist Stephen Jay Gould researched this unbelievable case and discovered a
terrible tale.6 A perfectly normal girl was involuntarily sterilized merely because she became an unwed mother
after being abused. To guarantee that the court would uphold the right of the state to involuntarily sterilize those
citizens whom it found unfit, doctors and nurses were utilized to create false documentation of retardation in the
baby, the baby's mother, and the baby's grandmother.

Mr. Holmes opened his Supreme Court decision with the words: "Carrie Buck is a feeble-minded white woman
who was committed to the State Colony… She is the daughter of a feeble-minded mother in the same
institution, and the mother of an illegitimate feeble-minded child."

Hence, at the end of the opinion we arrive at the famous statement of one of our greatest jurists: "Three
generations of imbeciles are enough."

It would be a grave error to assume that the medical professionals who testified in that landmark court case
were just "bad apples." The United States, and the medical profession in particular, had a long history of
extensive involvement in eugenics programs. In fact, eugenics was widely practiced in the United States prior
to WWII. Eugenics, as a concept, was very popular until the Nazis just carried it "too far" and gave it a "bad
name."

To understand the full depth of depravity of the United States eugenics movement (a movement equally
embraced by other "civilized" nations), you need only contemplate the following model sterilization bill, written
by Harry Laughlin, superintendent of the Eugenics Record Office, in 1922,7 and used as the basis for the laws of
most of the more than 30 states that passed eugenics bills by the 1930s.
"….to prevent the procreation of persons socially inadequate from defective inheritance, by authorizing and
providing for eugenical sterilization of certain potential parents carrying degenerate hereditary qualities."

What were the "degenerate hereditary qualities" for which the US Eugenics Office recommended involuntary
sterilization? Mr. Laughlin's proposed list included the "blind, including those with seriously impaired vision;
deaf, including those with seriously impaired hearing; and dependent, including orphans, ne'er-do-wells, the
homeless, tramps, and paupers." While most states did not go as far as Mr. Laughlin suggested, as Dr. Gould
describes in his article, by 1935, approximately 20,000 forced "eugenic" sterilizations had been performed in
the United States, nearly half in California.

Unfortunately, while America only sterilized tens of thousands of people, the Nazis, led by the medical
profession that developed and supported the Nazi eugenics program, carried out Mr. Laughlin's program with a
vengeance, sterilizing hundreds of thousands of people, most for "congenital feeble-mindedness" and nearly
4,000 merely for blindness and deafness.

One of the most egregious examples of the mentality that leads otherwise normal people to create laws
permitting involuntary sterilization and eugenics can be seen from the United States Public Health Service
Tuskegee Syphilis Experiment, which began in the 1930s. As James Jones describes in his book Bad Blood,8
poor, uneducated Black men were observed to see the effects of untreated tertiary syphilis and were barred
from receiving treatment even after penicillin was discovered to be efficacious for curing syphilis. The so-called
experiment ran until 1972, with multiple articles published along the way presenting the findings in prestigious
medical journals. The damage that this one experiment has caused with respect to erosion of faith in the
medical profession within the Black community cannot be measured.9

So much for the medical profession policing itself.

What is the common thread in all of the historical cases that I have described? What shared philosophy binds
Francis Crick's lecture, Nazi Germany's physicians, Oliver Wendell Holmes' Supreme Court decision, American
eugenics programs, and the infamous Tuskegee Syphilis experiment?

The responsible parties were all individuals dedicated to helping mankind. The common denominator was the
belief that caring for society takes precedence over caring for the individual. In every case, society became the
patient and the individual became unimportant. It is exactly the desire to help mankind at the expense of each
man that leads to moral bankruptcy.

It is for this reason that ethics must be taught to medical students and doctors, that hospitals need institutional
review boards and ethics committees. When the medical profession strays from its mandate to protect health
and life and enters into the realm of social policy, it risks aligning itself with immoral forces that may pervert it
beyond recognition.

However, various ethical perspectives vie for acceptance in our society. This is why the Jewish approach to
medical ethics is so important. We have a great deal to contribute to the societal debate.

In my next article, I will explain the structure of the Jewish approach to medical ethics, how it differs from the
secular approach, and how it might prevent such abuses as I have described above.
