
Lexie Lehmann

Nicole Callahan
March 28, 2018

Groundwork of the Metaphysics of Cambridge Analytica, the Wealth of Facebook, and On Big Data

In 2014, the British political data firm Cambridge Analytica harvested the profiles of more than 50 million Facebook users without their permission. The firm was subsequently hired to consult for Donald Trump’s 2016 presidential campaign. The information collected by Cambridge Analytica included “details on users’ identities, friend networks, and ‘likes’”, which were then used to map personality traits and target audiences with digital advertisements. According to the
New York Times’s first coverage of the event on March 17, 2018, this was “one of the largest
data leaks in [Facebook’s] history”. Since the information was released, several American
politicians have criticized Facebook and its chief executive, Mark Zuckerberg, for knowingly
exploiting the private information of its users to “target political advertising and manipulate
voters” (New York Times).
The data harvesting initially began with a technique developed by a research center at
Cambridge University. One of the professors in the center, Dr. Aleksandr Kogan, built his own
app for data collection and began sourcing profiles for Cambridge Analytica (New York Times).
Around the same time, Cambridge Analytica received a 15-million-dollar donation from Robert
Mercer, a wealthy Republican donor. In response to the recent controversy, Facebook admitted
that it routinely allows researchers to access user data for academic purposes, but that the selling
of user data to ad networks and data brokers is prohibited. Additionally, Cambridge Analytica
officials blame Dr. Kogan for illegally acquiring the data and passing it on to the consulting firm.
This data debacle has prompted questions about Facebook’s ethical obligation to protect user
data (New York Times). Further, it seems our democratic ideals have proved unsustainable in the
new age of rapid technological development. The purpose of this paper will be to ground these
concerns in the works of three critical philosophers of the Western canon: Adam Smith,
Immanuel Kant, and John Stuart Mill.

Adam Smith, Wealth of Nations

Adam Smith’s Wealth of Nations situates Facebook’s actions within the logic of a profit-seeking business model. Facebook as a corporation operates according to the same principles as Smith’s homo economicus: the idea that man is a rational being who always acts to maximize his profit as a producer and his preferences as a consumer.

Division of labor
The first topic of the Wealth of Nations is the “Division of Labor”, which Smith asserts is
the “greatest improvement in the productive powers of labor” (Smith 3). Division of labor allows
different operations of a task to be divided up and later combined, which ultimately increases the
productivity of labor. Consequently, the division of labor necessitates that human beings with
different jobs “truck, barter, and exchange one thing for another” (Smith 14). This system of
exchanges is based on mutual need: “give me that which I want”, Smith writes, “and you shall
have this which you want” (Smith 15). This transactional dynamic is central to the relationship
between Facebook and its users because, as Smith writes, the transactions are based on mutual
need rather than mutual obligation. In this way, Facebook’s users are its laborers. Like traditional
laborers, Facebook’s users produce a valuable product for the corporation in the form of their
data. Instead of wages, users receive free access to Facebook’s services, as well as a small
promise that the data they produce will not be used to harm them. In Smith’s terms, it makes sense for Facebook to focus on exchanging what it has for what it needs: users have data and want services, Facebook has services and needs data, and so the exchange follows naturally.
Adam Smith also writes that the goals of the “masters” and the “workmen” will forever
be at odds. The workmen desire to get as much as they can, while the masters intend to give as
little as possible. The former are disposed to organize in order to raise the wages of labor, the
latter in order to lower them. In this relationship, however, there is not much room for dispute
because the masters still have the advantage and hold most of the power to set the terms of the
agreement. This can be seen in the way Facebook constructs its own licensing and terms-of-service agreements, to which the user blindly agrees. The power dynamic between who
constructs the terms of agreement and who is compelled to follow them also supports the
argument that this is more of a transactional agreement than a fair, contractual exchange.

Accordingly, while Smith’s words help to contextualize Facebook’s control over its users, he is
careful not to entirely justify such exploitative behavior.

Productive and unproductive labor


Adam Smith also outlines a division between productive and unproductive labor, and
frugal and prodigal behavior. Productive labor adds to the value of the subject upon which it is
bestowed, whereas unproductive labor does not. For example, the labor of a manufacturer adds
to the value of his materials, whereas the labor of menial servants adds none. Under this classification, Smith would place Facebook users in the productive labor category, for their contributions to the platform directly increase the platform’s profitability. The greater the number of people using the service, the more desirable Facebook is to companies that rent
advertising space on the site. As Smith advises, “a man grows rich by employing a multitude of
manufacturers: he grows poor, by maintaining a multitude of menial servants” (360).
Additionally, “the value of the manufacturer fixes and realizes itself in some particular subject or
vendible commodity, which lasts for some time at least after that labor is passed” (360-61). This is exactly why information gathered from 50 million Facebook users in 2014 could remain just as relevant, and just as scandalous, four years later.

The role of the government

Smith also argues that governments are not the best parties to regulate individual and
corporate activity, because “every individual… in his local situation… [can] judge much better
than any statesman or lawgiver can do for him” (485). Smith asserts that a statesman who attempts to direct such decisions not only loads himself with “a most unnecessary attention, but assume[s] an authority which could safely be trusted, not only to no single person, but to no council or senate whatever, and which would nowhere be so dangerous as in the hands of a man who had folly and presumption enough to fancy himself fit to exercise it” (485). On the contrary, Smith believes that every man should
instead be left free to pursue his own interests, and that the sovereign should be effectively
discharged from duty. Accordingly, Smith limits the duty of the sovereign to three pillars: firstly,
to protect the society from the violence and invasion of other independent societies; secondly,
the duty of protecting every member of the society and establishing an exact administration of
justice; and thirdly, the duty of erecting and maintaining certain public works and institutions. Of

these duties, only the second provides any indication that the government would be justified in intervening between Facebook and its users, and even that allowance is quite vague. In my opinion, Adam Smith would not fault Facebook for harvesting the data of its users, nor for using that data to meet its own profit-maximizing ends. Instead, Smith would defer to the market’s invisible hand to judge whether a corporation’s actions are right or wrong.

The invisible hand


Because the government cannot provide the proper regulation for its citizens, Adam Smith believes that the “invisible hand” is the true governing force by which individuals in a market are regulated (485). As evidence, Smith might cite the fact that Facebook has lost $80 billion in market value since the data scandal was announced (CNN Money). Capitalism, it seems, provides its own moral force that incentivizes moral behavior and disincentivizes immoral behavior. Accordingly, corporations must find a balance between profit maximization and respect for the user, for a social network that freely distributed user information to political campaigns and advertising agencies would not be favored by the market.

Immanuel Kant, Groundwork of the Metaphysics of Morals

In his Groundwork of the Metaphysics of Morals, Immanuel Kant critiques the


reductionist approach of Smith’s profit-seeking economic model, because it disregards human
dignity and sees laborers and consumers as cogs in the machine of the global economy.
Additionally, the entire concept of “profit-seeking” behavior is driven by a self-interest that is not grounded in any genuine moral framework. In contrast, Kant’s framework of the categorical imperative evaluates the morality of actions by deriving it from a priori, self-evident rational principles, rather than from a posteriori principles based on experience.

From duty
In the first section of the Groundwork, Kant asserts that good actions are driven by good
will and that moral actions have inherent value. He begins the discussion by focusing on moral
actions that are done under “certain subjective limitations and hindrances”, otherwise described
as actions done “from duty” (Kant 12-13). One example he provides to illuminate actions “from duty” is that of the shopkeeper. The shopkeeper does not overcharge his inexperienced customer; likewise, the prudent merchant refrains from overcharging and keeps a fixed general price for all customers, “so that a child may buy from him just as well as everyone else” (Kant
13). The relationship between Facebook and its users is similarly transactional, as with the
relationship between sellers and buyers. Facebook’s users utilize the social network’s free
service in exchange for giving up some personal data. All users are subject to the same terms and
conditions, and in this way, Facebook is offering a fixed general price. By Kant’s metric, this is honest behavior; however, acting this way is also advantageous to the merchant, so the action is ultimately done from self-interest rather than from the merchant’s duty to be transparent and honest in its transactions. For this reason, Kant would argue that Facebook, as a for-profit corporation, may prioritize tenets like accountability, honesty, and transparency, but that these values are rooted in self-interest rather than a genuine belief in their goodness.

False promises and deontological ethics


Kant claims that morality is based upon the following categorical imperative: “act only
according to that maxim through which you can at the same time will that it become a universal
law” (Kant 34). Kant’s conception of the categorical imperative falls into a broader category of
ethics known as deontological ethics, which uses static rules to evaluate whether or not a
behavior is moral (Stanford Encyclopedia of Philosophy). A recurring example that Kant uses
to illustrate the categorical imperative is the idea of a false promise. In the case of Facebook, the
company falsely promised it would protect the privacy of its users’ data, yet willingly passed the information on to unsupervised and unregulated parties. With an a posteriori consideration of false promises, an individual might conceivably come up with a variety of situations—experiences—in which Facebook’s false promise is justified: for example, that the information was originally given for an academic purpose, or that what users don’t know about how their data is being used can’t hurt them. On the contrary, Kant’s a priori analysis
would argue that Facebook’s false promise de-legitimizes the entire convention of promise-
making. If everyone were to make false promises as if it were a universal law, we would end up
in a situation in which “no one would believe he was being promised anything, but would laugh
about any such utterance, as a vain pretense” (Kant 35). Regarding Facebook’s obligation to
protect user privacy, Kant’s categorical imperative suggests that Facebook should protect and
preserve the privacy of individual user data, for to corrupt that responsibility would be to corrupt the entire convention of privacy, in the same way that a false promise corrupts the convention of promising itself.
Another problem with false promises is that they manipulate users as a means to an
individual—or a corporation’s—own end. Kant writes:
As far as necessary or owed duty to others is concerned, someone who has it in mind to make a lying
promise to others will see at once that he wants to make use of another human being merely as a means,
who does not at the same time contain in himself the end (42).

Facebook’s distribution of data harvested from user profiles is a clear example of one party using human beings merely as a means to its own ends. The data collected from Facebook users is a means to profit maximization, because user data can be used to better optimize advertising campaigns and marketing dollars. This is what Facebook desires, which is precisely why Kant would be so displeased with the corporation’s actions. Indeed, Kant believes that an action’s moral worth is clearest when the action is unpleasant and inconvenient for the individual, done from duty rather than inclination. He adds,
...If adversities and hopeless grief have entirely taken away the taste for life; if the unfortunate man, strong
of soul, more indignant about his fate than despondent or dejected, wishes for death, and yet preserves his
life, without loving it, not from inclination, or fear, but from duty; then his maxim has moral content (13).

In terms of Facebook’s obligations, Kant would hold that the proper action may well be the most unpleasant and inconvenient one, because it is grounded in a genuine moral framework rather than a self-interested, profit-seeking one.

John Stuart Mill, On Liberty

The previous two philosophers have articulated contrasting perspectives on the extent to
which Facebook is obligated to protect the data of its users. John Stuart Mill’s essay On Liberty
occupies a more moderate space in this debate. If we understand Kant’s perspective as one that uses the categorical imperative to determine what is best for society as a whole (i.e., the public), and Smith’s as one that justifies profit-seeking behavior because it ultimately adds to the wealth of the nation (i.e., private economic development), then perhaps we can insert Mill as one who complicates the binary between public
and private stakeholders. Additionally, Mill brings up social concerns about privacy and
autonomy that are not raised by the previous two philosophers.

Privacy and autonomy

Mill states that the object of his essay is to examine the extent to which power can be
legitimately exercised by society over the individual. Mill concludes that:
the only part of the conduct of any one, for which he is amenable to society, is that which concerns others. In the part which merely concerns himself, his independence is, of right, absolute. Over himself, over his own body and mind, the individual is sovereign (13).

Accordingly, Mill prioritizes each user’s individuality, arguing that individuality should assert
itself, so long as the expression of individuality does not constitute a positive instigation to some
mischievous act against another (73). By understanding the benefit of each individual user, Mill
complicates the conflict between what’s best for the general public and what’s best for the
private corporation by inserting the rights of the private individual as a third stakeholder in the
conversation. Moreover, Mill believes that every individual should act as their own sovereign,
which fits into the broader argument in favor of individual privacy and self-autonomy.
Mill states that the strongest argument against public interference in purely personal conduct is that, when it does interfere, the public is likely to interfere wrongly, because its opinion is merely that of an overruling majority. By contrast, on questions of social morality and duty to others, public opinion is more often right, because on “such questions they are only required to judge of their own interests; of the manner in which some mode of conduct, if allowed to be practiced, would affect themselves” (Mill 81). Furthermore, the individual knows best how to govern their
own actions, and therefore should be able to advocate for their own interests. In this way, the
individual user both has a right to the protection of their individual data, but also a right to
concede ownership of their data to a service like Facebook. The corporation, however, is not
entitled to absorb sovereignty over the individual entirely. Just as the state must not act as an overruling majority, Facebook cannot act as the sole arbiter of authority over its users.

Innovation
Another latent idea within Mill’s conception of individuality is his promotion of
innovation. This can be seen in Mill’s conception of a political state in which there is always a party of order and stability opposed by a party of progress or reform, both of which are “necessary elements of a healthy state of political life” (47). That is, until “the one or the other shall have so enlarged its mental grasp as to be a party equally of order and of progress, knowing and distinguishing what is fit to be preserved from what ought to be swept away” (47). In this way,
Mill might defend Facebook’s leniency towards Cambridge Analytica by saying that Facebook,

itself a platform that has revolutionized communication among people, would be hypocritical to
stifle data innovation. As for the threats that come with treading into those uncharted waters,
Mill argues that:
It is important to give the freest scope possible to uncustomary things, in order that it may in time appear which of these are fit to be converted into customs. But independence of action, and disregard of custom, are not solely deserving of encouragement for the chance they afford that better modes of action, and customs worthier of general adoption, may be struck out. (66)

Additionally, Mill believes that there is always a need to discover new truths and to identify
when old truths are no longer true, in order to continually “commence new practices and set the
example of more enlightened conduct” (63). In the same way that diversity of opinion benefits
the progress of society, so does novel and untraditional innovation, such as technological
advancement.

Conclusion

While conversations on morality and responsibility have been ongoing for centuries, many place the quandaries raised by technological advancement in a new category of philosophy called “Computer Ethics” (Stanford Encyclopedia of Philosophy). A leader of the movement is James Moor, who holds a named professorship in Intellectual and Moral Philosophy at Dartmouth College. As Moor writes, “the potential applications of computer technology appear limitless”, and therefore “the computer is the nearest thing we have to a universal tool. Indeed, the limits of computers are largely the limits of our own creativity” (Moor). Additionally, the reasoning of Computer Ethics most resembles utilitarian consequentialist philosophy: if computers have ability X, then Y could happen; would Y justify the use of X?
Most of the discourse in this sphere is centered around two topics. Firstly, many scholars
of Computer Ethics have identified that there is a “policy vacuum” due to a lack of clarity in how
technology should be used given consistent, recurring developments in what computers can do.
To combat the vacuum, one scholar in the computer ethics space demands that the same “core
values” that individuals use to evaluate communities—such as life, health, happiness, security,
resources, opportunities, and knowledge—must also be applied to computer ethical topics like
privacy and security. These core values may help revise existing policies so as to “eliminate the policy vacuum and resolve the original ethical dilemma”. Having considered the three
philosophical perspectives detailed in this paper, I assert that Kant’s deontological approach is
the best framework with which to approach future conversations about big data’s regulation, and
about how corporations themselves should legislate their approaches to data usage.
Secondly, there is the question of whether the ethical dilemmas posed by technology are completely novel, or whether comparisons can be made to non-technological analogues. My intention in this paper was to demonstrate that the philosophical frameworks of the past remain deeply relevant to contemporary issues. I personally would
advocate for interdisciplinary, humanities-based thinking to be a required component of every
public and private decision. It is my opinion that all stakeholders would benefit from hearing
philosophical considerations, both old and new, in order to have the intervention of an ethical
advocate in all decision-making. Only then can the implications of each new development be
considered in a way that places human dignity at the center.

Works Cited

Bynum, Terrell. “Computer and Information Ethics.” The Stanford Encyclopedia of Philosophy, edited by Edward N. Zalta, Winter 2016, Metaphysics Research Lab, Stanford University, 2016, https://plato.stanford.edu/archives/win2016/entries/ethics-computer/.

DeCew, Judith. “Privacy.” The Stanford Encyclopedia of Philosophy, edited by Edward N. Zalta, Spring 2018, Metaphysics Research Lab, Stanford University, 2018, https://plato.stanford.edu/archives/spr2018/entries/privacy/.

“Facebook’s Role in Data Misuse Sets Off Storms on Two Continents.” The New York Times, https://www.nytimes.com/2018/03/18/us/cambridge-analytica-facebook-privacy-data.html. Accessed 28 Mar. 2018.

Granville, Kevin. “Facebook and Cambridge Analytica: What You Need to Know as Fallout Widens.” The New York Times, 19 Mar. 2018, https://www.nytimes.com/2018/03/19/technology/facebook-cambridge-analytica-explained.html.

“How Trump Consultants Exploited the Facebook Data of Millions.” The New York Times, https://www.nytimes.com/2018/03/17/us/politics/cambridge-analytica-trump-campaign.html. Accessed 28 Mar. 2018.

Hursthouse, Rosalind, and Glen Pettigrove. “Virtue Ethics.” The Stanford Encyclopedia of Philosophy, edited by Edward N. Zalta, Winter 2016, Metaphysics Research Lab, Stanford University, 2016, https://plato.stanford.edu/archives/win2016/entries/ethics-virtue/.

Korsgaard, Christine M., editor. Kant: Groundwork of the Metaphysics of Morals. Translated by Mary Gregor and Jens Timmermann, 2nd edition, Cambridge University Press, 2012.

La Monica, Paul R. “Facebook Has Lost $80 Billion in Market Value since Its Data Scandal.” CNNMoney, 27 Mar. 2018, http://money.cnn.com/2018/03/27/news/companies/facebook-stock-zuckerberg/index.html.

Mill, John Stuart. On Liberty, Utilitarianism and Other Essays. Edited by Mark Philp and Frederick Rosen, new edition, Oxford University Press, 2015.

Moor, James. “What Is Computer Ethics?” http://web.cs.ucdavis.edu/~rogaway/classes/188/spring06/papers/moor.html. Accessed 28 Mar. 2018.

Smith, Adam. The Wealth of Nations. Edited by Edwin Cannan, Modern Library, 1994.

Waldron, Jeremy. “Property and Ownership.” The Stanford Encyclopedia of Philosophy, edited by Edward N. Zalta, Winter 2016, Metaphysics Research Lab, Stanford University, 2016, https://plato.stanford.edu/archives/win2016/entries/property/.
