
Version 1.3

02/05/2012

## On Brand, Values and Choice

## 1. Introduction

The main topic of this paper is choice: the process by which a set of decision makers selects, from a set of choices, the best/most suited option. We aim to achieve a threefold purpose:

a. To gain a better understanding of the mechanics of the choice process
b. To be able to predict the choices the decision makers will make
c. To find the drivers by which the choice can be changed/influenced.

## 2. Definitions/Notations

## 2.1 Decision Makers and Choice Sets

Let's use the following notations:

- A: the set of decision makers
- B: the set of choices

A few comments here:

a. It is important to realize that very often the set of decision makers A is a subset of a larger set A_tot (in the context of the larger set, A is sometimes referred to as the target set). It is important to always keep an eye on what the target set really is and not be misled by the size of the total set A_tot. This comment will be particularly useful in the context of determining the dimensionality of the values space (see below for details). For instance, the electoral body in a country is a subset of the total population of the country (but for this particular example, do note that electors deciding not to vote are still part of the decision-makers set, except their choice is NO candidate).

b. A particular case is the one where the set of choices is a subset of the set of decision makers, B ⊆ A. For larger A sets this doesn't have major implications, but for smaller sets (dozens to hundreds of items) there can be a significant alteration of the choice process, especially when the items are somehow inter-related and can reciprocally influence each other's choices.

For instance, when electing the leader of a class, the set of choices consists of some of the students, who are at the same time elements of the decision-makers set. An issue of some importance here is self-voting (specifically prohibited in certain choice procedures, but allowed in others).

c. In practical terms, the set of decision makers is much larger than the set of possible choices (typically by orders of magnitude). This has implications for the effective computational possibilities when analyzing the two sets: we can very often analyze the elements of the set of choices one by one, and in most cases we will do so, while doing the same for the set of decision makers is highly impractical, and statistical aggregation or clustering techniques need to be employed for effective handling. For instance, the set of buyers of a certain automotive brand is in the range of millions (or tens of millions, depending on the market), while the set of possible choices is in the range of dozens. While we can obviously analyze the automotive brands one by one, we need to find a suitable way of aggregating the set of buyers in order to allow practical handling of the decision-making process.

## 2.2 Choosing
By choosing we understand that to each x in A we associate a y in B. In other words, choosing is a function c : A → B, c(x) = y. Related to this, a few more comments:

a. In effect, the choice of a decision maker is not forever; it quite obviously changes in time. Therefore, a more accurate form of the function is c : A × T → B, c(x, t) = y, where t is the temporal moment of the choice. For instance, I might like a certain brand of jeans today, but this might or might not be the case in a year's time (or even in a few weeks' time).
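As a minimal sketch, the time-dependent choice function c : A × T → B can be represented in Python as a plain mapping (all names and data here are hypothetical illustrations, not from the paper):

```python
# Sketch (hypothetical names/data): the choice function c : A x T -> B,
# stored as a mapping from (decision maker, time) to a choice.
choices = {
    ("anna", 2011): "brand_x",
    ("anna", 2012): "brand_y",   # preferences change over time
    ("bob", 2012): "brand_x",
}

def c(x, t):
    """Return the choice of decision maker x at time t."""
    return choices[(x, t)]

print(c("anna", 2011))  # brand_x
print(c("anna", 2012))  # brand_y
```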

b. In practical terms, we have to distinguish between the potential choice that any decision maker makes at any given moment t in time and those moments which have a particular importance, which we'll call choice materializations. For example, a buyer might at all times have a preference for a particular brand of cars, but this becomes materialized only when the buyer actually pays for a certain car. Similarly, the members of the electoral body have their voting preferences at all times, but these become materialized only at voting time, when the elector expresses his/her choice by voting for one of the candidates.


robert.komartin@gmail.com


c. Very often, the actual outcome of the choosing process is more complicated than the simple assignment of a particular choice to each decision maker. In some cases, this simple assignment is indeed what we analyze (as when we study buying patterns in a certain market: here we are interested in the percentage of materialized choices for each element of the choice set, or the market share of each choice, as it is sometimes referred to), but in other cases the end result of the process is one single choice (as when one single candidate is chosen, namely the one gathering the most votes). Furthermore, the process can be complicated by the introduction of multi-stage choices (as with two-round voting procedures), by the introduction of mediated choices (as with electoral processes where the voters choose a set of electors who in turn select the final candidate), or can even go through intermediate steps of choice aggregation and redistribution (as with some European countries' parliamentary voting procedures). In any case, however, we must be interested in the very basic outcome (the capability of forecasting, for each x ∈ A, at a given moment t, the association with the corresponding y ∈ B), the rest of the complications being a relatively straightforward application of the selection rules. Interestingly enough, the process is not so simple in the reverse case where, knowing the desired final outcome, we want to determine the influences on the initial process, as some of these rules are functional transformations for which it is very difficult to determine the corresponding inverse transformation.

## 2.3 Further Assumptions

Attempts to find analytical expressions for the choices of the decision makers have been at the core of economic research throughout the last two centuries. However, no matter how appealingly these might have been theorized and mathematically conceptualized, most of the attempts have failed to pass the test of reality, being rather used as mathematical constructs in microeconomic theories. We can mention here two main classes of attempts:

a. Those which have attempted to map the choices to real numbers, an index-type construct referred to as the utility index. Simply put, we find a way to calculate the magical number, and then the choice will simply be the option holding the highest calculated number. While in theory this has allowed for spectacular developments, in practical terms we still have no formal description of the way such an analytical expression (mapping choices to real numbers) can be achieved, except for very particular cases. Also, as a side note, we firmly believe that all the standard selection procedures followed in public (or sometimes private) tenders based on weighted selection criteria are intrinsically flawed, as the attempt to give the criteria (economic, functional, etc.) various importance factors is rather a post-factum attempt to rationalize choices which have already been made by the key decision makers based on the mechanisms we are going to describe further.


b. Those which have attempted to infer the actual choice from the ranking of the buyers'/consumers' preferences, the so-called preference theory (i.e. Anna prefers apples to bananas and cherries to apples, hence she will choose cherries). This theory fails the reality test because of the intransitivity of choices (i.e. the above sentence can be continued: but Anna prefers bananas to cherries). This has led to interesting developments in what is called behavioral economics, exploiting the so-called consumer irrationality, but unfortunately has provided no operational manner to infer the actual choices from the knowledge we have regarding the decision makers and their options.

We believe the problem is rather related to the attempt to find a direct form of correlation between the set of decision makers and the set of choices, when in fact the correlation is mediated by the space of values the set of decision makers believe in, and the set of choices exhibit (manifest). Please note that although the word value is highly charged with ethical and moral connotations, we assign none of these to the above-mentioned space of values. We are rather talking about an intermediate real vector space, inspired by the concept of semiotic spaces (following the sense given to the concept by semiotic algebra, which I have approached in more detail in a couple of my earlier articles). The method in itself is not new: the attempt to influence buyers' choices by carefully crafting and manipulating the core values and the identity of the product is the quasi-standard approach/road traveled by marketing/branding firms; we believe the novelty comes rather from the capability to formalize the method and to use this formalism for forecasting and influencing purposes.

## 3. The Proposed Approach

## 3.1 Core Concept

The method we propose is the decomposition of the choice function c into two compounded functions, as follows (we assume t constant): c(x) = f(g(x)), where g : A → S and f : S → B, with S a multi-dimensional real vector space which we will refer to as the values space.

In plain English, the function g associates to each decision maker x a multi-dimensional point (or a vector), which we will call the personal preferences point (jokingly, this is the decision maker's point of view), while the function f associates to each multi-dimensional point/vector in the values space a unique point in the set of choices. This is easier said than done, as the natural connection really is the reverse mapping from B to S, associating to each choice its point in the values space; therefore, we will need to assume that this mapping can be inverted, which can be somewhat more difficult to demonstrate.


## 3.2 Multi-Dimensional Values Space

The first step we need to take is to define the multi-dimensional values space. Basically, for each value (okay, let's call it attribute to make things more neutral) that can be identified in relationship with our set of choices, we will define a dimension/coordinate measuring the degree to which that attribute is or is not met. In order to make things simpler, we will need to norm the space (not from 0 to 1, but rather from -1 to 1, to capture the negative preferences too). Taking an example, if one value for the choice of jeans is to be cool, the dimension will measure this attribute (the degree of coolness) from +1 (total coolness) to -1 (total lack of coolness). Therefore S is in fact [-1, 1]^n, a real vector space.

We recommend using positive (in meaning) attributes for the dimensions; where an attribute is negative in meaning, replace it with its positive version (e.g. if the attribute corruption appears, it is to be replaced with correctness or another close synonym for the context).

In order to determine the dimensionality of our values space, a classic type of marketing research can be performed with the set (or a random representative subset) of the decision makers, designed to extract all the keywords (i.e. attributes, values, characteristics) to be followed as dimensions in our space. It is to be noted that although the resulting spaces are domain-specific, some of the dimensions could be seen as belonging to multiple research areas (and therefore they could be stored in choice databases and reused when useful). A major issue to be solved is how the actual numbers will be inferred from questionnaires, without introducing implicit correlations between the attributes.

The resulting space will look as follows (we assume a number of three key values/attributes we track):
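As a sketch of how such normed coordinates might be produced, assuming questionnaire answers on a 1-to-5 scale (both the scale and the attribute names are assumptions for illustration):

```python
# Sketch, assuming survey scores on a 1..5 scale (hypothetical data):
# map each attribute score linearly into [-1, 1] to build a point in S.
ATTRIBUTES = ["cool", "durable", "affordable"]  # the n dimensions of S

def to_values_point(scores_1_to_5):
    """Rescale raw 1..5 scores to the normed [-1, 1] range."""
    return [(s - 3) / 2 for s in scores_1_to_5]

anna_raw = [5, 2, 4]                 # one respondent's questionnaire
print(to_values_point(anna_raw))     # [1.0, -0.5, 0.5]
```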


As a note, all figures throughout this paper are represented as 3D images, as this is the highest level of complexity the human eye/mind can directly visualize. However, one must remember that practical applications will routinely have (much) more than 3 dimensions, meaning the approach will have to be either purely abstract or based on visualizing sections of the values space.

It is obvious that in order to calculate numerical values for the distance between two points in this space (or to measure the length of the vector from the origin to a given point) we will use the Euclidean metric, d(p, q) = sqrt( sum_i (p_i - q_i)^2 ), rather than the naive attempt to get to one number by compounding the factors through addition of each factor weighted by importance.
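A minimal illustration of the Euclidean metric over points of S (the contrasting weighted sum is included only to show the construction the paper argues against; all data is hypothetical):

```python
import math

def euclidean(p, q):
    """Euclidean distance between two points of the values space S."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

# Contrast: the naive weighted-sum compounding criticized above.
def weighted_sum(p, weights):
    return sum(w * a for w, a in zip(weights, p))

p, q = [1.0, -0.5, 0.5], [0.0, 0.5, 0.5]
print(round(euclidean(p, q), 4))  # 1.4142
```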

It is important to make all possible efforts to have an orthogonal space (to simplify, this means that the attributes chosen have to have negligible correlation coefficients). In case the dimensions are not orthogonal, we will either have to eliminate the redundant dimensions (the ones having correlation coefficients close to 1) or apply calculation corrections to straighten them. Furthermore, for spaces with a large number of dimensions, we can mentally represent the sets of correlated dimensions as beams of related values. Geometrically, having such dimensions would look something like the figure below:

The obvious idea would be to assimilate the angle between two dimensions to a transformation/function of the correlation coefficient between the values/attributes considered, and hence, for instance, cos(theta_ij) = rho_ij, where rho_ij is the correlation coefficient between attributes i and j.
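Under that identification (an assumption on our part: taking the cosine of the inter-dimension angle to be the correlation coefficient), checking two attribute columns for redundancy might look like this (the survey columns are hypothetical):

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two attribute columns."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical survey columns for two attributes:
cool    = [1.0, 0.5, -0.5, -1.0]
durable = [0.9, 0.4, -0.6, -0.9]
rho = pearson(cool, durable)
angle = math.degrees(math.acos(rho))  # angle between the two dimensions
print(rho > 0.95)   # True: nearly redundant dimensions
```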


However, due to the current lack of practical attempts to build an actual values space, one has to take into consideration that we might simply be unable to avoid the issue of an oblique coordinate system: it might be that correlated dimensions prove to be the typical rather than the exceptional situation. One also has to keep in mind one more factor which we mentioned in the beginning and then took out of the analysis, namely time: it might be the case not only that the dimensions are oblique in this space but also that the angle between them varies in time. In this case, we anticipate the need to use tensor analysis techniques (oblique coordinate systems/tensors in rectilinear multidimensional coordinates) in order to cope with the non-trivial calculations of distances and coordinate transformations. For instance, the simplest starting step would be the transformation from contravariant to covariant coordinates, following the formula x_i = sum_j g_ij x^j, where g_ij is the metric tensor of the coordinate system.
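A small sketch of that index-lowering step, assuming (as above) that the metric tensor g_ij is built from the cosines of the inter-dimension angles; the 60-degree angle in this toy example is hypothetical:

```python
# Oblique (non-orthogonal) dimensions: g_ij has cos(angle) off-diagonal.
def lower_index(g, x_contra):
    """Covariant components: x_i = sum_j g_ij * x^j."""
    n = len(x_contra)
    return [sum(g[i][j] * x_contra[j] for j in range(n)) for i in range(n)]

def norm_sq(g, x_contra):
    """Squared length: sum_i x^i * x_i (contravariant times covariant)."""
    x_co = lower_index(g, x_contra)
    return sum(a * b for a, b in zip(x_contra, x_co))

g = [[1.0, 0.5], [0.5, 1.0]]      # cos(60 deg) = 0.5 between the two axes
x = [1.0, 1.0]
print(lower_index(g, x))           # [1.5, 1.5]
print(norm_sq(g, x))               # 3.0
```

Note how the squared length (3.0) exceeds the orthogonal value (2.0): correlated dimensions double-count shared meaning, which is exactly why the correction matters.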

## 3.3 Theoretical Choices

In theory (by which we mean that a. we can analyze the set of decision makers one by one, and b. for each point in the values space S we can find a choice embodying that particular set of values/attributes), the process of determining the choice goes as follows:

1. We determine for each decision maker the mix of attributes which best describes their choice pattern (the way to extract this is quite well known from the traditional marketing toolset).
2. We identify the choice corresponding to the above-mentioned mix.

It is however easy to see that the above process is utopian (as a matter of fact, we introduced the concept of a space of values precisely because the straightforward selection of choices does not work).

## 3.4 Practical Considerations

As one can easily see, the main issues with the approach sketched at 3.3 above are related to the assumptions we made (that we can actually analyze the set of decision makers one by one, and that we can find a choice corresponding to the exact multi-dimensional values space point). In order to get around these issues, we suggest the following approach:

a. Regarding the first issue, we propose the clustering of the decision-makers set as a solution, taking the center of weight as the most representative point in the space of values. This solution roughly corresponds to the concept of persona from traditional marketing research (the target decision makers are grouped into groups of like-minded people, and oftentimes demographic characteristics are listed for each cluster). The difference in this case is, however, that we can actually apply quantitative clustering techniques (single-linkage clustering, complete-linkage clustering, UPGMA, etc.) starting from the statistical sets of


random representative samples analysis. The result would look somewhat like the following figure (we keep the assumption of three dimensions/values in our space):

Quite obviously, the actual number of resulting clusters depends on the chosen sensitivity threshold (under what distance are different items classified as part of one cluster): while we can have any number of clusters, from 1 to the very number of items, a useful selection would probably lead to 2-7 clusters. Further, we will assume that the entire cluster's decision is replaced by the decision of its center (and once the cluster's percentage of the total population is known, the calculations will be pretty straightforward). As a note, although this may seem an artificial construct, it is in fact quite similar to the method by which humans operate classifications. Moreover, by using data mining techniques, one could infer extended demographic characteristics of these clusters/personae.
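A threshold-based, single-linkage style grouping of decision makers, as discussed above, can be sketched as follows (the sample points and the threshold value are hypothetical):

```python
import math

def euclidean(p, q):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def threshold_clusters(points, threshold):
    """Single-linkage style grouping: points closer than `threshold`
    end up in the same cluster (union-find over all close pairs)."""
    parent = list(range(len(points)))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            if euclidean(points[i], points[j]) < threshold:
                parent[find(i)] = find(j)
    groups = {}
    for i in range(len(points)):
        groups.setdefault(find(i), []).append(points[i])
    return list(groups.values())

def centroid(cluster):
    """Center of weight: the cluster's representative point."""
    n = len(cluster)
    return [sum(p[k] for p in cluster) / n for k in range(len(cluster[0]))]

# Hypothetical decision makers in a 3-dimensional values space:
makers = [[0.9, 0.8, 0.1], [1.0, 0.9, 0.0], [-0.8, -0.9, 0.2], [-0.9, -0.8, 0.1]]
clusters = threshold_clusters(makers, threshold=0.5)
print(len(clusters))                                # 2
print([round(v, 2) for v in centroid(clusters[0])])  # [0.95, 0.85, 0.05]
```

Raising or lowering `threshold` reproduces the granularity effect discussed above: one big cluster at large thresholds, one cluster per decision maker at very small ones.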

b. Regarding the second issue, we will replace the calculation of the reverse of function f by a decision function, defined as the selection of the closest option to the cluster's center of weight, again based on the Euclidean distance metric: f(s) = y for which d(s, y) ≤ d(s, b) for every b in B. In geometrical terms, this would look as follows:
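A direct sketch of this nearest-choice decision function (brand names and positions are hypothetical):

```python
import math

def euclidean(p, q):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

# Hypothetical brand positions in the values space S:
choices = {"brand_x": [0.8, 0.7, 0.0], "brand_y": [-0.7, -0.8, 0.3]}

def f(s):
    """Decision function: pick the choice closest to the point s."""
    return min(choices, key=lambda y: euclidean(s, choices[y]))

cluster_center = [0.95, 0.85, 0.05]
print(f(cluster_center))  # brand_x
```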


What the above is basically trying to convey is a quantitative method for identifying the closest brand (from the choice set) to the typical preferences of a given cluster of decision makers.

An interesting comment here is that the multi-dimensional representation of the clusters can lead to already well-known results when projected onto lower-dimensional (n-k) spaces. For instance, the famous inverted-U shape of the preferences along an attribute is in fact a representation of the density of the choices along a 1-dimensional projection through a cluster. In plain English, imagine an arrow entering a cluster: it will first hit (or be near) a few choices, then more, until reaching a maximum, and then fewer choices until it finally exits the cluster. This, of course, would also explain the multiplicity of U-shapes depending on the different target groups/clusters.

There are obviously several massive simplifications in the above method, and further work is needed to reach better/more accurate results. The most important problem which the method can raise is that in reality the cluster never makes 100% the same choice: its members can reach different decisions (and the reason for that is that while the center of a cluster can be at minimal distance from a certain choice point, some of the cluster members can be at minimal distance from another choice point). One way to circumvent that (while still keeping the aggregated analysis) is to treat the distance function as a probability distribution (hence, the cluster has a certain probability to make choice A, another probability to make choice B, and so on and so forth).
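One possible instantiation of the distance-as-probability idea (the softmax form and the sharpness parameter `beta` are our assumptions, not prescribed by the method):

```python
import math

def euclidean(p, q):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def choice_probabilities(s, choices, beta=2.0):
    """Soft version of the decision function: nearer choices get higher
    probability (softmax over negative distances; beta is a hypothetical
    sharpness parameter, not from the paper)."""
    weights = {y: math.exp(-beta * euclidean(s, p)) for y, p in choices.items()}
    total = sum(weights.values())
    return {y: w / total for y, w in weights.items()}

choices = {"brand_x": [0.8, 0.7], "brand_y": [-0.7, -0.8]}
probs = choice_probabilities([0.9, 0.8], choices)
print(max(probs, key=probs.get))  # brand_x
```

As beta grows, the distribution sharpens back toward the hard nearest-choice rule; small beta models a more undecided cluster.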
Another way which may be considered is to count the votes starting from the choices (each a single point in the multi-dimensional space), by looking at which decision makers fall into a choice's sphere of influence as the distance from the choice is gradually increased (actually, the word sphere here is more than a figure of speech: the practical method would be to model spheres centered on the respective choices and to inflate them by increasing their radius, then observing how they incorporate parts of the decision-making clusters).
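The sphere-inflation counting could be sketched as follows (sample points and radii are hypothetical):

```python
import math

def euclidean(p, q):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def inflate_sphere(choice_point, makers, radii):
    """Count how many decision makers fall inside the choice's sphere
    of influence as its radius is gradually increased."""
    return {r: sum(1 for m in makers if euclidean(m, choice_point) <= r)
            for r in radii}

makers = [[0.9, 0.8], [0.95, 0.75], [0.2, 0.1], [-0.8, -0.9]]
counts = inflate_sphere([0.8, 0.7], makers, radii=[0.25, 1.0, 3.0])
print(counts)  # {0.25: 2, 1.0: 3, 3.0: 4}
```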


Another issue (related to the one above in its implications) is the actual shape of the cluster: while we assumed the clusters are more or less spherical, in practice much more asymmetrical patterns can be encountered, which can significantly reduce the predictive power of the analysis. For instance:

Again, we are not to be fooled by the above: a cluster is not something having a real existence (it is not a thing); it is merely a hidden convention the brain uses to classify things. Therefore, the first figure above can be seen as one cluster or as two clusters, depending on the level of granularity at which we look at the group of decision makers. More precisely, the key is the threshold distance which allows one to see a point as being in or out of the cluster.

## 4. Influencing the Selection Process

Just as a brief enumeration, based on the above-mentioned techniques, one can gain additional insight into the possible ways of influencing the outcomes of the decision process. Obviously, when we discuss this, we place ourselves in the partisan position where there are one (or more) choices we prefer and all the other (alternate) choices whose importance we aim to decrease. The main categories of influences can be grouped as follows:

a. Influences on the position or structure of the choices in the values space:
- moving the position of a choice closer toward massive clusters (in electoral terms, this corresponds to changes in the candidate's speech in order to please a major electoral group)


- moving the position of an alternate choice further from massive clusters (negative campaigns determining the move of the competing choices away from the major groups)
- adding supplemental choices which could act as attractors of decision makers, either with the purpose of increasing market share (for its own sake or for later rounds of decision making) or with the purpose of simply decreasing the basin of attraction of a competing choice.
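A toy illustration of the first kind of influence: moving a choice a fraction of the way toward a massive cluster and observing the change in the nearest-choice outcome (all positions and the step size are hypothetical):

```python
import math

def euclidean(p, q):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def move_toward(choice_point, cluster_center, step):
    """Shift a choice a fraction `step` of the way toward a cluster
    center (e.g. a candidate adjusting the speech toward a major group)."""
    return [c + step * (t - c) for c, t in zip(choice_point, cluster_center)]

cluster = [0.9, 0.8]                 # a massive cluster's center of weight
ours, rival = [-0.2, 0.0], [0.4, 0.5]

def winner(a, b):
    """Which of two choice positions is closer to the cluster center."""
    return "ours" if euclidean(a, cluster) < euclidean(b, cluster) else "rival"

print(winner(ours, rival))              # rival
moved = move_toward(ours, cluster, step=0.8)
print(winner(moved, rival))             # ours
```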

b. Influences on the structure of the clusters in the values space: this refers mostly to cluster-targeted messages which could steadily move a cluster closer to the desired position of a given choice, by influencing its:
- Overall shape: by provoking splits within clusters (secession moves) or by joining clusters together (unionist moves), and/or
- Position: targeted marketing in order to move the overall cluster position toward the position of the choice (something like Pepsi is the choice of the new generation).

c. Influences on the structure of the values space itself: this is perhaps the most subtle (but at the same time most difficult) approach, consisting in the alteration of the values defining the frame of reference, by:
- Adding supplemental dimensions: add into the discourse a completely novel element (to which our preferred choice is/could be closer), which would in turn force an element of differentiation not present before (leading, in the limit, even to the fragmentation of major decision-making clusters). In a geometrical fashion, the addition of supplemental dimensions would look something like:


- Deleting dimensions: this would basically consist of the deprecation/elimination from public speech of one of the elements (by placing it into derision mode, for instance), in order to void a key differentiator of an alternate choice.
- Changing the angle between dimensions: by changing the correlation coefficient between the considered attributes (communication implying that two categories are more alike, e.g. free economy is liberty, or the opposite, that two categories are in fact more and more different, e.g. capitalism does not mean freedom).

## 5. Conclusions

We will conclude by saying that we believe the above-sketched method provides the basics of a novel approach, rooted in classical statistical methods coupled with concepts taken from semiotic algebra, which could provide a better understanding of decision-making/selection processes, improving the forecasting capabilities for these processes, as well as providing insights into the possible ways to influence/change/alter them. To a large extent, we believe that the process described is, simply put, an attempt to model the natural way the brain performs its choices (a succession of neural nets fired up in the following sequence: concept -> associations/attributes -> choices), rather than to introduce a new way of thinking about the topic. Therefore, most of the technical complexities are rather related to the fact that we are trying to formalize the process in a mathematical way and to make it accessible for simulation by sequential computing machines, while the brain processes the flow as a natural, massively parallel computing instrument.

Of course, the author is the first to acknowledge that significant additional work is needed, mainly in the following directions:

a. The refining of the data-gathering methods (first and foremost to decide what techniques are to be used for data gathering, and then the detailed description of the ways of quantifying those). A key issue here might also prove to be how the actual dimensions are selected.
b. The improvement of the actual calculation methods (pilot-testing the various clustering algorithms, verifying the options available for turning the calculations from simple real numbers into probability distributions, and, perhaps even more importantly, the application of tensor analysis to the values spaces).
c. The development of a step-by-step/cookbook minimal methodology, which would allow the application of the method in practical circumstances.
d. The detailed analysis of the possible ways of altering/changing the outcomes of the decision-making processes.
e. The development of automated software tools which would allow for fast calculation of each of the steps of the method, as well as for additional what-if analysis of alternate decision scenarios (although large parts of the analysis can be covered with traditional math and statistical packages).


robert.komartin@gmail.com