Conformity and Peer Pressure — Part 3


Peer Pressure

A topic closely related to conformity, and one that overlaps with developmental psychology, is peer pressure. Because peer pressure is a specific manifestation of conformity, it was essential that we begin by covering the latter. Peer pressure is something with which we have to deal all our lives, from childhood to adulthood, although I will be focusing primarily on the middle, the transition point between immaturity and maturity—adolescence. For it is during this time that we struggle most with our identities, who our friends are, and who our social self will be. The socialization process begins in early childhood, when we interact with peers our own age and befriend them by virtue of similar interests. As we realize that people have intentions similar to our own, we grow closer to them and become more intimate. Then early adolescence kicks in, and peer groups come under the spotlight. A peer group is a close-knit system of friends, classmates, and other people around our age. As such, it extends beyond our close circles, even to classmates with whom we are less familiar. These people, even if we do not think about them, exert a considerable influence on us without our knowing it. But it is also these people from whom we derive emotional support, feedback, and insight. We depend on others for how we perceive ourselves. Since we do not always have a mirror handy, it is our peers who reflect our image back at us. They tell us about ourselves, based on our actions and words, and this lets us know about our personality—what it is, how it can be improved, and so on. Like I said, this peer group really flourishes starting in middle childhood, although its effects come to the fore throughout our teen years. It is worth noting at the outset that age and self-blame correlate with conformity rates, as this will set the stage for the remainder of this post (Costanzo, 1970).
Children aged 7-8 and young adults aged 19-21, who experience low feelings of self-blame, report conforming only about 19% of the time, whereas those in early adolescence (ages 12-13) experience high levels of self-blame and report conformity rates of 63%, a formidable increase. We can conclude from these figures that it is during the onset of adolescence that kids focus on their image as perceived by others, for it is a time of vulnerability. Adolescence, as the midpoint between childhood and adulthood, is the most stressful period, because younger kids are less self-conscious and adults are more self-assured, which explains the self-blame correlates. Yet another distinguishing feature of adolescence is the preeminence of the peer group in relation to parental figures. Many think the parents fade into the background while the friends come into the foreground; however, this is not quite the case. The parents are not done away with entirely; rather, the peers become as important as the adults. Parents do not lose importance; friends simply gain it. There is no replacement, then, only change. The adult is no longer the sole confidant. Teens look to their friends and other peers for support, but the parents are still there, no doubt.

Given this context, it is understandable why conformity plays such a big role in adolescence. Surrounded by friends, curious about who they are, freer than they have ever been before, adolescents look for acceptance and belonging in their complex peer groups, which form extensive networks. This need to belong, to be “one of the guys,” to be cool—this is the leading cause of risky behavior, behavior which poses a risk not only to oneself but to others and, in more dangerous circumstances, to one’s community. One thing that parents, in particular, misunderstand is how teens get involved in these groups. Parents tend to think their kid is being pressured by old friends, when such is actually not the case. In reality, kids tend to choose their friends knowing what kind of people they are. It is more about teens choosing such people than about existing friends pressuring them. Take vaping, for example. From a parent’s perspective, it may seem like their child is being pressured by friends from elementary school to vape; really, though, it is the child who chooses a new group of friends they already know vapes. Looked at this way, more power is in the teen’s hands than in their peers’. If I want to smoke, then I am more likely to conform to a group of smokers than to be pressured by an old friend of mine. Cliques, accordingly, gain their appeal.

The conformity dynamic varies among adolescents, of course. To summarize one study, there are two main personality types, or “trajectories,” that develop through high school: the “steady conformist” trajectory and the “accelerated ego development” trajectory (Hauser & Follansbee, 1989). As can be gathered from the name, the steady conformist personality is defined by its tendency to conform. These people are more concerned with their self-image than others are, and they want more than anyone else to belong. On their own, these wants are not negative, but carried to extremes, they can be detrimental to one’s development, as in the case of the steady conformist, who, in endlessly seeking to be accepted, follows unquestioningly and obediently, believing in stereotypes, prejudices, and clichés. Compare this to someone with accelerated ego development. They follow their own inner beliefs and have a heightened moral compass to which they are attuned, unlike others their age, which allows them to develop self-reliance, a skill that is extremely important later on. People with accelerated ego development show greater levels of emotion, logic, intelligence, self-control, and adaptability. These two types are thus at opposite ends of the spectrum. The more one conforms, the lower one perceives one’s own social skills to be (Costanzo, 1970). David Riesman, in The Lonely Crowd, identified two personality types in America that almost perfectly match those just outlined. He observed in the American populace the other-directed man, who relies on others and openly conforms whenever possible, and the inner-directed man, who is the paragon of independence and autonomy and who does what he wants, not what others want of him. As an American myself, I will admit that I am biased toward individualism, and I value independence over interdependence (although by now, you could probably already tell).

We have explored the nature of conformity in adolescence, but a question remains: what, exactly, is the nature of peer pressure in adolescence? You might be surprised to learn that, contrary to popular conception, peer pressure is actually more indirect than direct, more implicit than explicit. By this, I mean that the pressure is internally generated. Picture the following: an unassertive, young, naïve teen is invited to his very first party, and he gladly accepts, eager to see what all the excitement is about and to meet new people outside his normal friend group. When he gets there, however, he finds a lot of older classmates, people he does not even know, people who look like they are not even in high school. Feeling nervous, he hangs out by the refreshment table, frequented by drunk party-goers, who, noticing his unease, offer him a drink, which he refuses, but which is insisted upon by the others, now gathered in a large group. This might be vivid for some, yet it is not an accurate image, for peer pressure does not happen when others coerce us or when we force each other to do things; no, peer pressure is self-generated. What really happens at that party is this: the teen looks around and thinks to himself, Everyone else is drinking, so I guess I should, too. The behavior to which one conforms is a response to a misperceived threat. At no point does anyone come up to him and pressure him to drink; he pressures himself to drink. Noticing a predominant behavior, he feels left out, so to compensate, he joins in (Sherif & Sherif, 1964). How pressured one feels to conform is inversely proportional to the strength of one’s relationship with one’s parents (Kandel & Andrews, 1987). Moreover, disagreeing with adults in general, not just parents, makes teens more disposed to conformism.
Of those who smoke marijuana, 68% said they did it because it was “pleasurable,” while only 1% claimed it helped with their personal issues (Mizner, Barter, & Werner, 1970). These statistics are worrying and show little promise, to say the least. We live in a material-hedonistic century, when pleasurable products dominate our lives, depriving us of proper coping mechanisms. While there are no statistics I can find at the moment, I can say with confidence that similar figures apply to vaping. Nowadays, teens vape “for the sake of vaping,” because the flavors taste good, or the tricks are cool, or because—alas—everyone else is doing it! Conformity, indubitably, plays an enormous role in drug use. I personally have yet to hear a valid reason for teens to vape. As Goldstein writes in his Handbook of Drug Abuse, “The heavier the involvement with marijuana, the more likely that one is embedded in a friendship network in which marijuana use is a characteristic of behavior” (342). Still, we cannot neglect the 1% of people who found solace in drugs for their problems, although the percentage, in truth, is negligible.

The Other Side of Peer Pressure

So far, though, we have only been looking at peer pressure from one side; peer pressure, like conformity, is bidirectional. In Sherif’s famous Robbers Cave experiment, boys were split into factions that hated each other, but they united through superordinate tasks—problems that required that they put aside their differences and team up to achieve something greater than either group could alone. Teamwork dissolved their prejudices (Sherif, Harvey, White, Hood, & Sherif, 1961). Through interactive modeling, peers can have a positive influence (Berndt, 1989). Someone who acts rather than speaks, who walks the talk, is understandably more beneficial than someone who merely says things. This effect is buttressed if the person is also a source of support. If I have a friend who is always encouraging me and giving me positive feedback, and he practices what he preaches, then I will follow him 10 times out of 10. As such, a beneficial friend or peer is someone who says not, “Do this!” but, “Watch this.” A good influence does not verbalize norms; they create them. Compare this to the distinction between descriptive and injunctive norms discussed earlier. This leads further to the notion that peer groups do not usually create novel behaviors; rather, they reinforce existing ones (Mosbach & Leventhal, 1988).

Now to address some biases and problems with studying peer pressure. The biggest difficulty facing researchers is how to obtain data. On the one hand, there are primary sources—the teens themselves. On the other, there are secondary sources—the parents. However, both sources are biased. Teens naturally undervalue the role their peers play in pressuring them because they want to perceive themselves as autonomous and independent, while parents naturally overvalue the role of peer pressure because they take a protective standpoint, with the result that they focus only on the negative aspects of peer pressure while neglecting its possible benefits. Even if one decides to go ahead with studying teens, another problem arises: “The adolescent may perceive his or her behavior as highly individual because it differs markedly from that of parents and other adults; in contrast, the observer’s attention is likely to focus on the similarities of the adolescent’s behavior to that of immediate peers.”[1] Once again, perspective plays a massive role, as does the observer effect. In evaluating nonconformity, teens judge themselves in relation to adults and authority figures, while spectators judge them in relation to their peers, around whom they spend much of their time. What this means for us is that, again, peer pressure is neither good nor bad. It is entirely correct to say that teens are independent inasmuch as they separate themselves from their parents, but it is also correct to say that they are dependent inasmuch as they connect themselves to their peers. It is an interplay. Finally, a few words on expectations and popularity:

Other things being equal, social acceptance by peers is desirable—particularly if it is based on mutual helpfulness and shared interests. In our opinion, however, there is currently an overemphasis, particularly among upper- and middle-class parents, on the pursuit of popularity for their children. A greater emphasis by parents and other significant adults on the importance of being oneself and of remaining faithful to individual values and goals, and a downgrading of the importance of popularity and superficial appearances—of fitting in at all costs—is certainly to be strongly encouraged. But as we have already noted, to expect the average developing adolescent—unsure of his or her own identity and unclear about the demands that will be made in a confused, rapidly changing society—to be immune to the favor of peers would be unrealistic and inappropriate. Most adolescents, at one time or another, feel that they do not belong, and the pain, however temporary, can be very real; parents’ overdetermined insistence on popularity can only further compound the adolescent’s difficulties.[2]

In conclusion, “The question, then, is not whether peer pressure and peer group affiliations are basically positive or negative forces in teenagers’ lives, because they are obviously a mixed blessing. The challenge… is to clarify how and in what circumstances peer groups aid in or detract from healthy development.”[3] If there is one thing we have learned in this post, it is that conformity and peer pressure are morally ambiguous; there are two sides to both of them. Both, ultimately, are resources for survival, and we are, to put it bluntly, stuck with them. We cannot escape from conformity. We cannot run from peer pressure. They are simple facts of our existence, and without them, we would not exist as a society. But while the total concepts of conformity and peer pressure are inherently ambiguous, their subtypes are not. By looking at different forms of both social influences, we have seen that there are concrete situations in which they can be used for good or for bad. Using injunctive norms, good habits like recycling can be promoted, whereas unquestioning pressure can result in widespread obedience, as in the people of Germany during World War II, who conformed en masse, just doing as they were told. When we publicly comply with norms without accepting them privately, we do ourselves and others an injustice. We must stay true to ourselves and do the same for others, encouraging them, supporting them, lending them a hand when it is needed to keep them from falling victim to negative influences, and guiding them on the right path, away from risky behavior, toward self-actualization and self-fulfillment. To conform is to give up one’s individuality. Yet to conform is to stay alive. Hence, we are condemned to conformity. But at the same time, we can be ourselves and dissent. It is all a matter of choice.



[1] Conger, Adolescence and Youth, p. 329
[2] Id., p. 344
[3] Feldman, At the Threshold, p. 194


For further reading:
At the Threshold: The Developing Adolescent by Shirley S. Feldman (1990)
Adolescence and Youth, 3rd ed., by John Janeway Conger (1984)


Conformity — Part 2


Social psychology is the study of human behavior in and with other people. Of the many fascinating concepts within this area of study, social influence is easily one of the most important, especially regarding our need to conform, both on a small scale, as when we encounter our peers, and on a large scale, as when entire populations are influenced, for better or worse. Built into our nature is the desire to belong with others, to be accepted by them, and not to be left out. Throughout history and across cultures, the necessity of conformism has changed drastically, yet no matter how strong the compulsion, the choice is still there, and each of us, on an individual scale, has to decide whether or not to go along with others. As I noted in the previous post, the real question concerning conformity is not “Should we conform?”—to answer “No” as such is simply impossible, for human life as we know it would not exist without conformity—but “How much should we conform?” Taken this way, it is a question of quantity; the question is really asking, “Under what conditions, where and when, should we conform?” This is a more nuanced perspective, because any ethical analysis of conformity must be nuanced. Therefore, still building off part one, I will be describing the process of conformity and its subtler functions in more detail, in conjunction with the phenomenon of “peer pressure,” a specific type of social influence to which teens are most vulnerable, as well as how the two can be analyzed, together with possible defenses against them.


First of all, what is conformity? To put it simply, conformity is changing one’s behavior in order to fit in with a group. For example, walking through the park one day, minding my own business, I spot a pile of trash in the middle of the path, and after watching another person pick up one of the pieces and put it into recycling, I feel inclined to do the same thing, so I help out by also picking up a piece of trash, which might, in turn, cause someone else to join in. If I see someone do something, then I have the choice of consciously adopting their behavior for myself; and on this occasion, I did it for a good cause. On the other hand, I might witness someone eating a bag of chips, only for them to carelessly drop it in the middle of the path. Because they did it, I feel like I can do it, too, so I follow their example, dropping my trash. In either case, there is a change in my behavior, a change oriented toward what others are doing. The thing is, most of us, if asked, will deny that we ever conform. Especially in places like the U.S., where individualism is cherished as a value, we are tempted to fall victim to what might be called a “nonconformist bias” or “nonconformist fallacy”: we tend to attribute to others a willingness to conform, thereby setting up the idea that we never conform ourselves. Indeed, through rationalization, we deny we conform at all (Jetten, 2004). “I only did it because…,” we say, trying to explain our behavior, when, in reality, the simple explanation is that we were conforming. We look at others and point out that they do what others do, but when asked about ourselves, we project conformity onto others, absolving ourselves of our guilt and putting ourselves in denial. This subject-object view of things becomes clearer when it is noted that we judge others by their outward actions, whereas we judge ourselves through introspection (Pronin, 2007).
In other words, if we are asked to explain why someone did something, we judge them by the consequences of their actions; and if asked why we did something, we judge ourselves by our intentions or internal motivations, sort of like consequentialism versus deontology (consequence vs. intention). Such is the nature of conformity.

In the 21st century, when nearly everyone has access to the Internet, people are prone to conformity online as well (McKenna & Bargh, 1998). On websites like Reddit, where threads are made for like-minded communities, users can create discussions. The anonymity provided by the Internet allows us to do things without taking responsibility for them. As such, threads are sites (no pun intended) on which conformity can happen. People with similar opinions band together, while those with differing opinions are cast out, ignored, or talked over. Users are encouraged to adopt the viewpoints of their peers if they want approval or if they want to stay on the thread. In fact, not being a part of a thread, or, more generally, not being added to a group chat, implies to the person left out that they have been rejected, leading to depressive symptoms (Smith & Williams, 2004). Many a person experiences FOMO (fear of missing out) because they have been left out on their phones. They begin thinking about what their friends are saying about them, or what parties their friends have gone to without them. Anxiety builds up. This sort of ostracism causes dejection, apathy, and loneliness (Williams et al., 2002). One study found that social pain correlates with physical pain (Eisenberger et al., 2007). The same parts of the brain activated during physical harm are activated when we are left out. Outside of technology, conformity affects us in a much more fundamental way: one study reports that conformity literally changes our perception of the world. Placed in an fMRI scanner alongside confederates who were in on the experiment, subjects were asked to mentally rotate two 3D objects and determine whether they were identical. Nearly half of the subjects, 41%, conformed to the majority opinion, basing their decision on what the others said. The fMRI showed that when they made their decision, the brain was active not in decision-making regions, but in perceptual ones.
What this means is that when the subjects were looking at the shapes and forming their judgments, they were not doing so with logic; they actually saw the shapes differently, in accordance with the others’ conclusions (Berns et al., 2005). Therefore, when we conform, we see the world differently.

Like a drug addiction, conformity can be a bad condition to be in, and trying to get out of it can be even more detrimental. Consider the process of trying to resist conformity: peers are initially confused and will try to convince you to see things their way again (Garfinkel, 1967), then team up and tease you until you give in (Festinger & Thibaut, 1951), then straight up ignore you. It can be summarized like this: first, they will try to understand your behavior, what is wrong with it; second, they will try to change your behavior to fit theirs; third, they will bully you to make you give in; and fourth, as a final resort, having lost you, they will avoid you altogether. Such behavior is childish, at best, which is why it is so prevalent in adolescence, when teens may want to try new things, only to be shot down by their friends, who tolerate no divergences in an attempt to preserve unity and homogeneity, causing rifts in friendships, with both parties ending up with hurt feelings. It is no wonder that betrayals between friends during high school can be so painful. One question that may arise is whether conformity differs between genders. Are men or women more likely to conform than their counterpart? According to studies, the likelihood of conforming boils down not to gender but to familiarity. In other words, it is not intrinsic to who you are or with what you identify. Conformism is based more on what we know culturally. Men, for instance, are more likely to be knowledgeable about sports, and women about fashion. When faced with fashion, men will conform more; with sports, women. Furthermore, in public, we are pressured by societal expectations and how we are perceived by others, so we conform to stereotypes and gender roles. Put in a pressured situation, men will act more aggressive, and women more polite (Eagly, 1981, 1987).

Types of Conformity

So far, we have taken a rather broad look at conformity. It is time to consider it in its more nuanced forms. Conformity can be divided into two types: informational and normative. Informational conformity is the tendency to seek knowledge from others when we are unsure. Test-taking is a prime example of this type of conformity because students will look at each other’s tests and copy them when they do not know the answers to the questions. It has been found that the stronger the motivation to get the answer correct, the more likely a person is to conform for information. When an offer of money is made, one’s reputation is at stake, or the difficulty is raised, one is more likely to conform, and even more so when all three conditions are met (Baron et al., 1996). Going back to the test-taking example: during high school, students are under a lot of pressure to get good grades, lest they fail their class, fail out of school, and/or fail their parents, so the motivation to obtain information is very high, meaning they will be tempted to look to others when they are unsure, when the pressure proves too much. If doubt kicks in and they are under pressure to get the questions right, youths will conform to their peers, resulting in cheating, all to gain confidence in their answers. Results from Baron’s study show that when subjects were unmotivated, 39% of them conformed, whereas those who were highly motivated conformed 51% of the time. The appeal of cheating, then, is understandable, if the conditions prove right. Another study shows the influence of informational conformity. Discussing music is the most popular form of small talk in bonding because it lets people learn about one another’s personality, music giving us insights into who we are (Rentfrow & Gosling, 2006). Using this as a starting point, some researchers gathered 14,000 participants, of whom half formed a control group and the rest an experimental group, and showed them 38 songs.
The control group encountered the songs fresh, while the experimental group was able to see the number of downloads each song had received. Naturally, the experimental group downloaded the most-downloaded songs and ignored the least-downloaded ones. Compare this to the control group, which listened to and downloaded the songs indiscriminately (Salganik, Dodds, & Watts, 2006). This illustrates informational conformity to the extent that the experimental group relied on what previous listeners had done to guide their actions. Whereas the control group was ignorant of the number of downloads, given that they were the first to be exposed to the songs, the experimental group had this information to work with, and they used it to decide what and what not to listen to. Consequently, their behavior changed to match the information given to them by those who preceded them. It is easy to see the ambiguity of informational conformity, how it can be both positive and negative, beneficial and harmful. Where one person may be tempted to cheat, another may simply be seeking certainty.

Normative conformity is perhaps the type with which we are most familiar, and it most resembles the definition of conformity as a whole: normative conformity is changing our behavior to belong with a group, to avoid exclusion. One may ask, “Why do we conform?” The answer is that, without conformity, we would most likely be dead. Without conformity, rules and laws would not be possible, and nothing would be common to us as a people. Civilization needs conformity to maintain its cohesion and stability. But this is not what we are really aiming at when we ask, “Why do we conform?” Instead, we are asking, “For what reason did we come to conform? For what reason is conformity essential to our nature?” The evolutionary psychologist will answer by saying that superficial instances of conformity, say fashion, evolved from a more primitive need for conformity, and that it has simply stayed with us since, despite being less important now. On this viewpoint, conformity was needed for survival. During our hunter-gatherer days, we traveled day and night in packs, watching each other’s backs, hunting together, abiding by social contracts. If someone wandered from the group, it was certain they would die alone. It was essential that everyone stay together and abide by the same rules. Being alone led to death. Evidently, in today’s world, we are no longer hunter-gatherers, and being left behind obviously does not mean we will die; yet the natural instinct is still preserved in us—the need to belong to a group. For our ancestors, ostracism meant death; for us moderns, ostracism means social death. As we are more rational than our predecessors, we are more calculating when it comes to conforming normatively: we evaluate the need to conform based on both the quantity and quality of a group (Wilder, 1977).
At school, seeing a group of friends, I may think to myself, Well, there are five of them, which is quite a lot, and they are all important to me, their being my best friends, so I decide to conform. If I decide that there are not enough people, or that these people are not significant to me, then I will not conform, seeing as it does not matter much. Often, we rather inaccurately estimate in our minds the amount of normative influence at work. Calling to mind a specific behavior, we think about how much our peers engage in it, in most cases overestimating it. This is known as “pluralistic ignorance.” To illustrate this concept, we can look at a high school where students overestimated how accepting of drinking their peers were, leading to increased amounts of drinking. Put another way, when students were asked if their peers approved of drinking alcohol, they incorrectly put the figure high, higher than it actually was, and this encouraged them to drink, since “everyone else was okay with it”; in reality, few students were actually okay with drinking, but the belief was unstated and therefore presumed (Prentice & Miller, 1996). Pluralistic ignorance, if you think about it, is rather like a self-fulfilling prophecy: if a large enough group of people thinks a certain norm is accepted, they will engage in it, too, even if the norm is really rejected; the group’s unstated belief is still powerful enough to get the ball rolling, and to get people conforming.

We have still not arrived at a complete picture of conformity, though. The division as it stands, namely conformity as either informational or normative, can be supplemented with the mutually inclusive terms private acceptance and public compliance (Allen, 1965; Kelman, 1961). Beginning with the former: private acceptance is changing one’s inner beliefs or opinions. This involves a change in one’s point of view. A person may be a theist one day but undergo a change in opinion, making him an atheist, or vice versa. The point is, this change takes place within us, and it involves our incorporating a new idea. When someone experiences private acceptance, it is known as conversion, for the person has switched their ideas. In moral terms, private acceptance is authentic because it comes from the person themselves; no one can change their mind but them—they may be convinced, but the choice to change is theirs alone. In contrast, public compliance is a change in one’s outward, open, public behavior, which is often superficial, as it stems from pretending, from putting up a façade, so it is not what one really believes, deep down. The process of publicly complying is compliance, the act of bending to what others want of us. In moral terms, public compliance is inauthentic because the person may do something of which they do not actually approve. The difference between private acceptance and public compliance and their moral worth can be summarized like this: “In contrast to informational social influence, normative pressures usually result in public compliance without private acceptance—people go along with a group even if they do not believe in what they are doing or think it is wrong.”[1] This quote also illustrates that the two types of conformity—informational and normative—each align with a respective change—private acceptance and public compliance.
In class, a student may not have been listening to the teacher, causing them to miss the instructions, such that, when the teacher lets the students loose, they do not know what they are supposed to do, so they ask a classmate what the teacher said. When their classmate tells them what they are supposed to do, the student experiences informational conformity, and they then internalize this through conversion, whereby they privately accept the information. Seeking information and acting on it is the essence of informational conformity. While at camp, some kids come across a fork in the woods, and they wonder which path to take, having forgotten the route. One of them, who actually remembers the way, says to go left, but nobody listens to them, for they are not well-liked, so the rest go down the right path. However, seeing their peers go the other way, the kid who knew the left path was correct feels left out and does not want to be left alone, so, abandoning what they know to be the truth, they go right. Despite knowing the objectively right answer, the kid publicly complies, contrary to what they privately accept. Such are two vignettes of the two types of conformity.

Social impact theory is one framework psychologists use to explain why and how we conform. According to the theory, conformity depends on three conditions: Strength, immediacy, and number. It reflects the fact that we judge whether or not to conform based on the quality and quantity of a group. The strength of a group depends on how much we want to impress its members, or how impactful they are as a group. A more powerful group will elicit more conformism. Immediacy is how close a group is to oneself, both physically and socially. In other words, if I have close relationships with the people involved, or if the group is near to me in any other way, then I will conform to them. Lastly, number refers to how many people compose the group. Psychologists have found that a medium number is most potent, as too few people will yield too little influence, whereas too many will dilute the effect. Based on this information, a group of five or so friends with whom one is comfortable will most likely cause us to conform; a large party with lots of people whom we do not know will not cause us to conform; and a one-on-one or possibly three-way meeting will not move us in any way.
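Bibb Latané, who proposed social impact theory, also expressed these conditions mathematically: total impact is a multiplicative function of strength, immediacy, and number, and each additional group member adds less influence than the one before. A rough sketch of his formalization (the scaling constant s and exponent t are empirical parameters, not values discussed in this post):

```latex
% Impact as a multiplicative function of the three conditions
% (S = strength, Im = immediacy, N = number):
I = f(S \times Im \times N)

% Latané's "psychosocial law": impact grows with group size N,
% but with diminishing returns, since the exponent t < 1.
I = sN^{t}, \qquad 0 < t < 1
```

This captures the observation above that a medium-sized group is already near peak influence: going from three friends to five adds noticeably to the pressure, while going from fifty strangers to fifty-two adds almost nothing.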

As stated in the introduction, my goal is to objectively describe conformity while also subjectively evaluating it. Whether conformity itself is good or bad cannot be determined, since it is an objective, neutral tendency; what can be judged is how conformity is used. Regarding the use of normative conformity, there are two kinds of norms at play: Injunctive and descriptive (Reno, Cialdini, & Kallgren, 1993). The two can be explained in two sets of philosophical terms. In philosophy, description refers to how things are, and prescription refers to how things should be. Similarly, in ethics, there is the is-ought gap, which states that how things ought to be cannot be derived from how things are. Taken this way, injunctive norms are prescriptive and involve “oughts,” and descriptive norms are descriptive (obviously) and form the “is.” Studies find that injunctive norms are more compelling than descriptive norms. In other words, a norm that asks something of us is more effective than one that simply describes how things are. Conformity, then, can be put to good use: people litter less when told they ought not litter than when merely told how much others litter. People go along with what their peers do. Therefore, it is best we enact injunctive norms.

Resisting Conformity

How else might negative conformity be counteracted? When faced with informational pressure, it is sensible to take one’s time, first and foremost, to think things through, which involves not acting rashly. Then, you can ask yourself two questions: Where can I learn more? and Is our behavior moral right now? If you have no idea what is going on at a given moment, seek information immediately, and always find out whether it is credible before you act on it. If what people are doing seems unethical, do not follow them. Likewise, for normative pressures, first acknowledge that such pressures exist before doing anything else. Two options present themselves next. One can either stop doing what everyone else is doing, or one can find like-minded people for support, because banding together is incredibly effective, as shall be explored now.

Oftentimes, in a given situation, there will be a majority and a minority. The large group will think one thing, the lone individual another. Any differing judgment by itself greatly reduces conformity rates. In Asch’s famous experiments, if just one person supported the subject, then conformity dropped by 80%. Whether this dissenter shares the subject’s opinion or not does not matter. “Any dissent—whether it validates an individual’s opinion or not—can break the spell cast by a unanimous majority and reduce the normative pressures to conform.”[2] An important part of this quote is that conformity is likened to a “spell cast by a unanimous majority,” and dissenting opinions are able to “break the spell.” This reveals the enchanting nature of conformity, seeing as it binds us, as it were, in a spell, making us believe certain things from a one-sided perspective. When another side is presented, however, we are made to see things anew. In reality, though, being a lone wolf is extremely difficult. Through dissent, a minority may be able to sway a crowd with a “consistent style,” by which it is meant that the minority must stay true to their cause in a distinctive way. People who are passionate, for example, and who stay that way, will impact a majority more. Persistence and open-mindedness are key when it comes to minority dissent, as a lone wolf needs to be resilient, confident, and passionate while also being friendly, open, and tolerant. By grabbing a group’s attention, the dissenter can stand out and make their point to open ears. Why the dissenter is successful comes down to three theories: First, through repetition and emphasis, the speaker draws attention to themselves by establishing their importance. Second, by being obstinate, by refusing to change their mind, a dissenter can cause the majority to capitulate and seek a compromise. Dissent causes unease, urging the majority to seek resolution. Nobody wants perpetual disagreement.
Third, dedication and passion show people that one has a good point—it shows one has allegiance to their ideas. “Because of social pressures, we may not openly admit to the influence [of a minority], but the change is unmistakable (Wood et al., 1994).”[3] A good demonstration of this is Atticus Finch from To Kill a Mockingbird. After presenting his case thoroughly and ardently, after arguing on behalf of a minority, Atticus makes an indelible impression on the jury, which deliberates longer than usual before returning its verdict. In the end, while it appears Atticus fails, considering his client was found guilty, he actually wins the case, in a sense, insofar as he inspires private acceptance, but not public compliance. The jurors, shaped by the time period and the prejudices that came with it, convict Tom Robinson, a black man, even though they know, deep down, that he is innocent, and that their views are wrong; they privately accept that, in the face of the evidence, he is innocent, yet they publicly comply with the dominant beliefs of the time, overtly judging him guilty to appease the folks of Maycomb. So while the jurors’ change in perception is imperceptible, it is very much there. Objective decisions, matters of fact, are influenced by the majority, while subjective decisions, matters of opinion, are influenced by the minority. To better explain this, we can take the controversial subject of vaping among teens. The objective question of whether vaping is healthy or not can easily be decided by the majority by looking at and evaluating the evidence for or against. However, a subjective question, such as Why should teens vape?, is better decided by a minority. There is more room for sway from a minority in this realm (Maass et al., 1996). Dissent within a group, interestingly, makes problem-solving stronger and more efficient, and it facilitates deeper, more creative thinking among its constituents (Nemeth, 1986, 1990, 1996, 2001).
This makes sense because introducing differing viewpoints allows us to see problems from another angle. Recall, too, how conformity literally changes our perception.

Another fascinating way of combating conformity is through the use of what are called “idiosyncrasy credits,” a kind of social currency that can be exchanged between people. The thinking is that if you conform to a group enough times, then you earn credit with everyone in the group, showing them that you are a part of it, and this credit can then be spent to do something divergent from the group, something not normally accepted. Idiosyncrasy credits can be regenerated afterward. Let us say a teen wants to get in with a group, so he tries to earn their approval by doing what they like doing. Once he is accepted, he is then permitted to act against them, since he has already established he is with them (Hollander, 1958). Another practice is “attitude inoculation,” in which peer pressure is resisted through the use of “small doses of logic,” just as a vaccine uses a small dose of a virus to strengthen the immune system. Kids are given increasingly more complex arguments against behaviors such as drunk driving so that, when the time comes, they can resist pressure from people who want them to drive drunk. This protects them against emotional appeals that play on their freedom and their fear of rejection. Reactance theory, on the other hand, suggests that too strong a prohibition makes a kid want to do what they are told not to do even more than before. Just as Adam and Eve were told by God not to eat from the Tree of Knowledge but were piqued by His warning, so kids, when told not to do something, will be tempted to do it, tempted to eat the Forbidden Fruit.



[1] Aronson, Social Psychology, 6th ed., p. 242
[2] Kassin, Social Psychology, 8th ed., p. 262
[3] Id., p. 266


For further reading: Social Psychology 6th ed. by Elliot Aronson (2007)
Social Psychology 8th ed. by Saul Kassin (2011)

What is Dreaming, What Do Dreams Mean, and Why Do We Dream?

Martin Luther King, Jr. once had a dream—and last night, so did I. At the end of a long day, we all get in bed, close our eyes, and go to sleep. Then the magic happens. It is as if when our eyes close, the curtains of the mind are opened, and a lively and magical performance begins on the stage of our unconscious, with dances, songs, and monologues. Bright, intense, and nonsensical, these images in our heads visit us every night, although we are quick to forget them, as they soon fade away, almost as though they never happened. Dreams feel real, yet they are unreal, illusory. Sometimes they capture things that have happened to us, but sometimes they show us things that have not yet happened, and sometimes they show us things that are happening right now. Dreams are among the greatest mysteries of the night, which is why they have attracted so much attention, both from individual thinkers and collective civilizations, who have attributed to dreams some sort of importance. What are dreams, really? Why do we dream? Do other animals dream? These are all questions psychology has been asking and will continue to ask. As of right now, none of these questions has a confident answer; we are constrained to theory. We humans will not rest (no pun intended), though, until we get the answer; we will refuse to just “sleep on it”—literally, because we cannot. So in today’s post, we will be exploring the science behind dreaming, the history of dreaming, and the different interpretations of dreaming that have been proposed. Although no definitive answers will be yielded, we will still gain some valuable insights into the nature of dreaming.

What is dreaming?

It is not as though we start dreaming as soon as we get into bed. Instead, sleep has to pass through several stages before dreaming begins. Researchers study brain waves with electroencephalograms (EEGs)—recordings of the brain’s electrical activity at a given moment. With these brain waves, psychologists have found that there are at least four stages in the sleep process: First, in our everyday waking lives, the brain produces beta waves, which appear when we think, at 13 or more cycles per second; second, when we close our eyes and start to relax, alpha waves kick in at 8-12 cycles per second; third, theta waves are produced at 4-7 cycles per second when we enter light sleep and begin to feel drowsy; fourth, we experience delta waves, at 4 or fewer cycles per second, during deep sleep. Beyond these stages lies REM sleep, and it is there that we experience most of our dreams. But what is a dream?

A dream is “a sequence of images, emotions, and thoughts passing through a sleeping person’s mind.” In addition, “Dreams are notable for their hallucinatory imagery, discontinuities, and incongruities, and for the dreamer’s delusional acceptance of the content and later difficulties remembering it.”[1] One important thing to be gleaned from this definition is that dreaming is not confined to visual displays and imagery in pictures; rather, dreaming can involve many other senses. A question many people are curious about is whether blind people can see things in their dreams, or deaf people hear. What has been found is that people with blindness, because they have never seen anything, dream using senses other than sight, and the same applies to deaf people. In other words, people with congenital blindness, who were born blind, dream in sounds and smells, the senses through which they have been stimulated, but not in sights. Another interesting thing about dreams is that, besides not being confined to pictures, dreams can also communicate intentional states, i.e., motivations, fears, desires, etc. Dreams are set apart from waking life by their being illogical. Whereas real life has logical cause-and-effect and a sequence of narration that follows a common story, dreams are full of random and disorganized events. As such, they are characterized as fantastical, belonging more to fantasy and fiction than reality, adopting unrealistic exaggerations and possibilities, more incredible than realistic. When we say there is a uniform narrativity to life, we mean there is a set plot, with a beginning, middle, and end; but with dreams, there is no such narrativity, for there is nothing that links events together in any reasonable way.

Now, regarding what actually happens during dreaming: Once we are fully asleep, we switch between two stages, REM and NREM. REM stands for “rapid eye movement,” and we pass through about 4-5 REM periods a night, in cycles of roughly 90 minutes. While we are in REM, brain waves paradoxically resemble those that occur when we are awake. If we were to look at the brain during REM, it would look just as it does when we are awake—despite the fact that in REM the entire body is paralyzed and in total relaxation. This is a kind of “dream-state,” as psychologists like to refer to it, in which animals who undergo it are very stimulated. All mammals, not just humans, experience this dream-state. The only difference is how long each animal spends in the dream-state; depending on the average lifespan of the animal, they will dream more or less. We humans are in the middle. REM is named as it is because the eyes literally twitch rapidly while shut, a phenomenon that seems to contradict all logic, and one that still puzzles psychologists. Speaking of puzzling phenomena, some people experience “sleep paralysis” when they regain consciousness during sleep, only to find their bodies rigid, unable to move, as if stapled to their beds, their throats pinched. Why we wake up randomly, we do not know. Why we are paralyzed—this we do know: Psychologist Michel Jouvet found that the pons, located in the lower region of the brain stem, inhibits all motor activity, meaning the muscles are completely stopped. He performed a study in which cats had this inhibitory region of the pons lesioned, and then he watched them at night to see what happened.
Because he got rid of the part of the brain that stopped muscles from being used, the cats, he observed, actually moved around quickly and ferociously, in a predatory manner, because they were, Jouvet supposed, dreaming about catching mice. What this revealed is that, if the pons were not activated during sleep, there would be many more injuries at night. It has been reported by a number of people that they experience a sort of “falling reflex”; upon falling in their dream, they wake up, as if reacting to the fall and catching themselves. Imagine, then, what would happen to many of us in some of our more threatening dreams, if it were not for the pons in the brain stem.

What about NREM? NREM stands for “non-rapid eye movement”—I know, creative. NREM is not the dream-rich stage; it is a lighter form of sleep that is not as engaging. To better illustrate the difference, take people who can sleep through their alarms, and those who cannot; the former are in deep sleep, the latter in light. For a time, it was thought that dreaming occurred only during REM; however, later studies disproved this, showing that dreams do occur during NREM, just that they are less memorable and exciting. Other findings about dreaming concern the environment and dream content. The external environment of a sleeper has been discovered to affect their dreams. For example, in one study, testers waited for subjects to enter REM sleep, then sprayed them with water. Upon waking, the subjects said they dreamt of some form or another of water, be it seeing a waterfall or swimming in a pool. What surrounds a dreamer, or what they touch, can create associations related to the outside stimulus. Such dreams are “self-state dreams,” since their content is centered around the state in which the self finds itself. Sometimes, self-state dreams can also lend insight into future actions. One thought-provoking fact is that 80% of reported dreams are negative (Domhoff, 1999). Accordingly, for every five dreams we have, only one does not involve bad things happening to us.

Another subject of inquiry—one which is unbelievably trippy—is lucid dreaming. When dreams are very high in lucidity, or clearness, we are aware of ourselves as dreaming. Let us put it this way: Lucid dreaming is knowing that we are dreaming. But are we just dreaming that we are dreaming? If you want a quick look at the philosophical problem of dreaming, then you can read here! Aside from the armchair philosophy of dreaming, there is a little more substance to lucid dreaming. For instance, lucid dreamers feel like they have no sense of touch, allowing them to pass through otherwise impassable obstacles, and they also apparently lack every sense besides sight. Lucid dreams are also said to be brighter than regular dreams. When aware of dreaming, dreamers can ignore natural laws, doing things that defy logic and physics. All of this raises the question of why we even dream in the first place. If sleep is necessary for us to rest our bodies, then why not just sleep? Why have hallucinatory visions at night? Unfortunately, we have no solid answers. There is only speculation. I will discuss these speculations in further detail at the end, but for now, here is a brief overview.

  1. Wish-fulfillment. According to this theory, dreams are symbolic representations of repressed, unconscious urges that are mostly erotic. The problem with this theory is that, surprisingly, dreams with sexual content are actually quite rare and uncommon (recall that 80% of dreams are negative).
  2. Memory storage. Those who support this theory argue that because memory is improved during REM, it stands to reason that the purpose of dreams is to filter out the day’s experiences. If you have ever heard that it is unwise to study right before going to bed, then it comes from this. Just like your body, your brain needs time to recover, so if you jam it with knowledge right before bed, then you will overload it, and your learning will not be as effective; the brain works more efficiently if it takes in smaller chunks over a longer amount of time.
  3. Neural pathways. Random sensory information from outside stimulates the brain as it sleeps, strengthening its neural connections. Thus, this theory says dreaming’s purpose is to solidify our neural pathways.
  4. Biological model. Activation-synthesis is the theory that the brain stem creates random imagery that is interpreted by the limbic system, which colors it. Hence, seemingly meaningless visuals are turned into emotional, colorful images that resemble conscious life.
  5. Cognitive development. For some, dreams reflect our cognitive development. As evidence, they use the fact that children have relatively simple, crude dreams, whereas adults have more complex, egocentric dreams. The complexity of dreams depends on how much knowledge one has.

A History of Dream Interpretation

Since the earliest civilizations of man, dreaming has held an important place in our culture. Looking back over 4,000 years, we find the earliest records of dreaming to date. A document known as the “Chester Beatty papyrus” was excavated and dated to around 2,000 B.C. On it are written 200 dreams that were reported and interpreted by the Egyptians. Based on Mesopotamian mythology, and adapted from Assyrian sources, this Egyptian dream codex reveals the universal nature of dreaming. The fact that these three great civilizations—Egypt, Mesopotamia, and Assyria—all gave such immense attention to dreams, and that their studies were related, shows how intimate dreams are to the collective consciousness of a people. In all three societies, dreams were ways of contacting invisible realms through the guidance of either good or bad spirits. Then came Abrahamic monotheism. Christianity, Judaism, and Islam all interpreted dreams as coming directly from God to the sleeper. Understandably, these dreams were heavily laden with religious metaphors and symbolism.

A little later, the Greeks became fascinated with dreams. The Greeks had their own religious groups—some might say cults—called “Mysteries,” and many a Mystery was focused on dreaming. In order to have better, more immersive dreams, Greeks would induce sleep with oils and drugs. An important aspect of Greek life was the oracle: Each day, hundreds of travelers would go to oracles to have their fortunes told. Dream interpretation was done in the same manner. Specialized interpreters had a place in the temple, where they were surrounded by natural smoke that they would read and decode, then pass on to the dreamer. During the Archaic period, though, a shift occurred. The Pre-Socratic philosophers began to steer away from religion and toward scientific, rational thought. Mystery and dream divination, or oneiromancy, would be replaced with more empirical observation. Each of the following philosophers, working on his own, anticipated modern-day conclusions.

  • Heraclitus (c. 535-475 B.C.) claimed dreams were nothing more than day residue, i.e., recollection of things that happened throughout the day.
  • Democritus (c. 460-370 B.C.) thought dreams were the result of the external environment acting on an individual’s consciousness.
  • Plato (428-348 B.C.) proposed that dreams were a manifestation of wish-fulfillment based on repressed desires in the soul. He also thought dreams were divinely inspired and could grant people creative impulses.
  • Aristotle (384-322 B.C.) argued against prophetic interpretations, instead declaring dreams to be the result of sensory stimulation that could potentially affect our future actions based on their content.

Thus, the study of dreams officially became scientific in nature. Artemidorus, coming 400 years after Aristotle and born in the same country as Heraclitus, wrote the largest encyclopedia of dreams of his time, the Oneirocritica. In it, he distinguished between two types of dreams: Insomnium and somnium. Insomnium is a dream-state whose contents are about the present. These are dreams that deal with current problems and daily occurrences. Somnium is a dream-state whose contents are about the future—self-state dreams, in other words. These dreams are “deeper,” “more profound,” than insomnium dreams because they give us insight. But Artemidorus came up with an even more fascinating idea, one that has hitherto been neglected and still does not receive a lot of merit today: Dream interpretation reveals more about the interpreter than it does the dreamer. According to Artemidorus, when we learn a person’s background and interpret their visions in light of it, we reveal as much about ourselves as about the dreamer, because we mix into the interpretation our own beliefs and symbolism, which the dreamer would otherwise miss. Contemporaries of the Pre-Socratics in the East—the Chinese, Buddhists, and Hindus—were the heirs of the Egyptians forasmuch as dreams were, to them, glimpses of a higher realm, a truer reality. In their dreams, they would experience the transcendence of their souls from the corporeal world.

The scientific study of dreams would come crashing down in the Middle Ages, which saw a reversion to religious symbolism. Only this time, the underpinnings were moral and theistic. The problem of interpretation came down to a dream’s source: it was communicated either by angels, and therefore holy, or by demons, and therefore wicked. Thus, medieval dreamers had to discern between truth and untruth. A few hundred years more, and we get the great rebirth, the Renaissance. It is from the Renaissance that we get our contemporary connotations of dream interpretation, for it was during this time that divination once again became dominant. The Renaissance saw a surge of interest in practices like occultism, mysticism, numerology, astrology, prophecy, and hermeticism—in a word, magic. Nowadays, these associations still carry over, so when we hear people talking about interpreting dreams or discussing horoscopes, we tend to brush them off as useless, arcane magic.

Fast forward 400 years to the Modern Age in the 19th century. Still traumatized by the Renaissance, people in the 1800’s were hesitant to study dreams or consider their importance, seeing as dreams were deemed “unscientific” and therefore unworthy of serious thought. The magical side of dreams was not wholly abandoned or dismissed, contrary to what some might think; literary geniuses celebrated dreams for their creativity. Famous Romantic poet Samuel Taylor Coleridge wrote his poem “Kubla Khan” after an inspiring dream, but never finished it because he was interrupted by a mysterious “person from Porlock”; novelist Robert Louis Stevenson based Strange Case of Dr. Jekyll and Mr. Hyde on a dream he had, too, in which he saw his hidden, unconscious urges battling his outward, conscious behavior; and Edgar Allan Poe also said his dreams contributed to his works. Around this time, in the mid-1800’s, anthropology was becoming a much-studied field, and anthropologists traveling around the world to study primitive tribes found patterns anticipating Jung’s theory of archetypes; they also found that these tribes usually made their decisions based on dreams they had—the resurgence of prophecy. Next comes the 20th century and the rise of psychoanalysis, dominated by two figures, Sigmund Freud and Carl G. Jung, to whom we shall presently turn.

Modern Day Dream Interpretation Models

Before discussing the psychoanalytic tradition, we will first return to the earlier models of dream interpretation (the cool name for which is oneirology) we discussed. The first model is the cognitive model, according to which dreams are a means of thinking about the day during the night. When we dream, our mind is thinking just as it normally would, but with multiple layers of metaphor emphasized unconsciously. In this way, everyday imagery is “translated,” so to speak, into metaphorical forms that stand in for ordinary things. These forms, furthermore, are colored by our emotions, so that they reflect our inner world of moods and feel significant to us. This model also subsumes the cognitive-development theory, so dream quality will differ based on one’s brain development. Some scientists contend that dreams are important for problem-solving. There is a scientific basis for the phrase “sleep on it,” after all. When we sleep, our unconscious and subconscious are most active, so thoughts we did not know we even had float around, and some by chance end up back in our conscious, while those in our conscious sometimes drift off into the subconscious. Either way, ideas move around. A friend of mine told me the story of how he lost his headphones, only to dream, two months later, about how he lost them, whereupon he found them in the exact location of which he dreamt. How did something so insignificant, something that happened two months in the past, chance to occupy his dreams? The best explanation, I told him, was that after a while his brain, by its own whims, conjured up the memory of where he left them. Why it took so long, I do not know. Whether timing is important, and how long an average memory takes to resurface, are also questions worth asking. Over time, the brain relaxes, and things that were troublesome and problematic are relieved, I can only theorize.
This leads to the next idea, namely that dreams reflect our current state and condition, environment, gender, age, and emotions, according to the cognitive model.

Another model we discussed briefly was the biological model. In light of biopsychology, dreams are nothing more than creations of neuronal firings, processed by the thalamus into visual displays that make no sense. As such, interpreting dreams is useless, considering they have no inherent meaning. Personally, I am not a proponent of the biological model, for two reasons: First (I know this is a terrible reason), it is too bland and boring, too reductive for my tastes; and second, if these neuronal firings are so random, then how can they create coherent (in the sense of “being familiar”) images that do make sense and that resemble complete narratives and sequences? This is not to say that the cognitive model is more correct than the biological model—not at all. As I have said, these are just theories, and neither has been verified indubitably.

Most famous, hands down, is the psychoanalytic theory, first propounded by Freud and then expanded upon by his student, Jung. Starting with Freud: he described dreams like this: “Dreams are the means of removing, by hallucinatory satisfaction, mental stimuli that disturb sleep.”[2] In Freud’s eyes, dreams arise from the irrational, hidden side of ourselves—the unconscious. As a result, dreams need to be interpreted by a therapist. Dreams work through association, creating nonsense connections between ideas that are seemingly unrelated. Since dreams are irrational and incoherent, interpreters use a technique that Freud loved, called “free association.” The analyst says a word, and the patient says whatever comes to mind. The logic goes that if the dream is formed by associations, then the intuitive associations voiced by the patient will point to their roots. Having done this, the analyst can then find associations of which the patient was initially unaware. One thing Freud did that remains a subject of interest is his splitting of dream content into manifest and latent content. Manifest content is the storyline of the dream, the surface-level meaning. On the other hand, latent content is the deeper, symbolic, underlying meaning of the dream. Whereas the dreamer has access to the manifest content, only the analyst has access to the latent content, because latent content is unconscious and therefore hidden from view; it has to be uncovered through free association. What is this elusive latent content, and why does the mind go through the trouble of disguising it? Freud said that dreams protect us from waking up due to “mental stimuli”—but to what kind of mental stimuli was he referring? He believed that the latent meaning of dreams was repressed, unacceptable ideas.

The basic formula for a Freudian dream is “some trivial occurrence + a traumatic childhood memory.” Dreams, in other words, take some kind of ugly truth and dress it up in ordinary occurrences. This is why Freud said that dreams protect us from disturbances. If these unacceptable ideas were shown to us in full light, we would never be able to sleep; we would be too disgusted or traumatized. Dreams prevent us from waking up by playing out fantastical scenarios that reflect our wishes, goals, and fears. By hidden means, the dream releases our repressed memories. Freud posited a theoretical “censor” inside the mind, a kind of watchguard that makes sure nothing from the unconscious creeps into the conscious. Obviously, then, a feeling of aggression cannot be made manifest directly; instead, the clever unconscious disguises the feeling of aggression so that it can sneak past the sentry and make it into the conscious in the form of a dream that makes no sense, but which nonetheless has a deeper meaning. This explains why dreams are confusing and unclear, yet meaningful. How the unconscious goes about disguising the repressed ideas is called the “dream-work.” Its four methods are condensation, displacement, symbolization, and secondary elaboration.

  1. Condensation is what happens when two or more ideas are merged together into a single thought.
  2. Displacement is what happens when an emotion is misdirected toward something other than its target.
  3. Symbolization is what happens when an object is made to stand in for another.
  4. Secondary elaboration is what happens when the subject tries to recall their dream, only to distort the facts.

By using all four tricks, unconscious impulses manage to invade the conscious mind. Freud went further and identified two types of dreams. Dreams of convenience are dreams related to one’s day. Closely linked to day residue, they visually replay some fear or wish that occurred during the day. The other type of dream is one of wish-fulfillment, for which Freud is most well known. Basically, he said that dreams are a way of satisfying our desires with our imagination. Because we cannot satisfy these desires in reality, we are forced to do so in sleep, in ideality. These desires are either erotic or aggressive. To use an example: one night I was really thirsty, and I went to bed on my trampoline (for fun, of course!). I dreamt I got out of the trampoline, went all the way inside the house, got a drink of water, walked back to the trampoline, and fell asleep. When I woke up, I had no memory of getting up, and I realized that I could not possibly have gotten water, as it was far too cold, and it was a long walk. Thus, I came to the conclusion that I dreamed about getting water in order to satiate my thirst. To summarize, here are Freud’s ideas about dreams:

  1. Repressed childhood memories are revealed through associations.
  2. Said memories are either painful or unrefined, which is why they are repressed.
  3. Dreams are illogical, resembling an infantile imagination.
  4. Dreams have sexual and/or aggressive themes.
  5. Dreams are disguised wish-fulfillment.

The reason we no longer believe in the psychodynamic model of dreams is, simply put, that there is no evidence at all to support it. Carl Jung was Freud’s student, although he would later distance himself from his teacher’s ideas in order to develop his own in more detail. To begin, he classified dreams into three categories. The lowest level of dreams are day residuals, which just focus on things that happened throughout the day. Above these are self-related dreams, dreams that are about us, our mental states—stuff like that. The highest dreams, however, are archetypal dreams, the deepest ones possible, for they connect us with each other through the collective unconscious. I feel the quickest way to present Jung’s views is to enumerate them and then contrast them with Freud’s:

  1. Dreams are essentially creative.
  2. Dreams are a part of the collective unconscious. Each of us, no matter who we are, shares the same symbols and universal characters, or archetypes.
  3. Dreams reveal the personal unconscious, too. We learn about the hidden parts of who we are through dreams.
  4. Dreams give insights into the future.
  5. Dreams are positive and constructive, providing insights to the self.

And as contrasted to Freud:

  1. Dreams are meaningful in and of themselves, not by interpretation.
  2. Dreams represent present, not past, problems.
  3. Dreams are best interpreted based on patterns and recurrences rather than individual interpretations. Rather than look at each dream by itself, it is better to look at them together.
  4. A holistic analysis of dreams is more efficient than free association.
  5. Symbolism is not repressed, but archetypal.

If we want a quick summary of the psychoanalytic model, then we can say that Freud’s focus was sexual, and Jung’s archetypal. But while they differed in many respects, they shared these ideas, which still echo in the modern world:

  1. Dreams give clues to life.
  2. Dreams bring the unconscious to the surface.
  3. Dreams are based on day residue.
  4. Sensory stimulation affects our dreams.
  5. Universal archetypes are a part of our collective unconscious.
  6. Dreams are (a) repressed or (b) creative.

In conclusion, while there is a rich history of studying dreams, there are also countless unanswered questions regarding dreaming. Will we ever know the answers? Who knows. Until then, we can only dream of what they might be. From the Egyptians, who believed in otherworldly journeys, to the modern psychoanalysts, who believed in hidden symbols, there have been many views of what dreams are, and many revisions, too. What we can see from the history of oneirology is that how dreams are interpreted depends upon the culture in which one finds oneself. Where one lives, how one lives, what language one speaks—these can all affect how we interpret dreams. Does this mean that there is no objective meaning of dreams, that the purpose of dreams differs between peoples? The question remains whether dreams are even meaningful in the first place, or whether they are, in fact, just biological accidents created by the brain. These questions create a living nightmare for psychologists. One thing that is certain is that dreams are very personal, intimate things that happen to all of us, that are unique, and that are private to us alone. I have my dreams, and you yours. (Get ready for the cliché ending…) But then again, what if this is all a dream?



[1] Myers, Psychology, 8th ed., p. 285
[2] Freud, The Interpretation of Dreams, p. 499c*

*From Adler’s Great Books of the Western World, Vol. 54


For further reading:
The Encyclopedia of Human Behavior, Vol. 1 by Robert M. Goldenson (1970)
Psychology: Mind, Brain, & Culture, 2nd ed. by Drew Westen (1999)
In Defense of Human Consciousness by Joseph F. Rychlak (1997)
Introduction to Psychodynamics by Mardi J. Horowitz (1988)
Schools of Psychoanalytic Thought by Ruth L. Munroe (1956)
The Secret Language of the Mind by David Cohen (1996)
Psychology, 8th ed. by David G. Myers (2007)

4 Strategies To Stay Motivated

Every Thursday, we dreaded coming to class. Slowly, nervously, we walked into the gym, not knowing into what we were walking or what to expect. He calmly sauntered ahead of us, set down his clipboard and music box, opened the door for the girls, and stood there, arms crossed, as if plotting his latest machination—of which we, the students, were the victims. We got in our lines, got through our warm-ups, then stood there dumbly, looking amongst ourselves with frightened eyes, shrugging, asking with our eyes, with desperation, “What is it today?”, knowing full well that none of us would walk out of there alive. Suddenly, after clearing his throat, our P.E. teacher announced, “Alright, get behind the sideline and listen up.” We got behind the sideline. He turned to face us. He gestured. “Today, for your fitness test, you will sprint from here to the sideline and back, followed by a burpee. You will repeat this, each time adding one burpee, until you get to 10 burpees. When you are done, shout ‘Time!’ and go get some water.” So that’s what our fitness test would be that day. It sounded terrible. In total, we would be doing 20 sprints and… oh god… 55 burpees. I looked at my friend next to me. How are we gonna survive? Are we going to die today? These questions would be answered shortly. Until then, it was just I and the present moment—just I and the workout. And the key to it all: keeping the right mindset to stay motivated and get through it.

Motivation, we all know, is a complicated and fickle thing, one that usually comes and goes without our willing it, as though a fairy sprinkles her magic dust on us, and we become motivated, only for it to vanish into thin air when we are done, leaving us unmotivated and lazy, incapable of doing anything more. There are no real shortcuts to becoming motivated. Most of the time, it just has to happen. When I say, “I am motivated,” with “motivated” in adjective form, I say it as such because it is done to me. Really, I am implying that there is something actively motiv-ating me. As such, I am passive. I am the recipient of motivation, whereupon I am motivated to do something. Whether it is doing a fitness test like I have to do every Thursday in P.E. or going to a job that one hates, the only way to get through it, the only way to survive, is to be motivated. In tough moments, when we are pushed to our limits, when our arms feel like they are gonna fall off, when the stacks of paper that have to be read are piled to the roof, when all seems unbearable, when all hope seems lost—it is at these moments that we need motivation the most. To get through them, we must stick with them and try to stay motivated.

As it turns out, I did not, in fact, die that Thursday after completing my 20 sprints and 55 burpees, although it almost felt as if I died. I got through it, though, by keeping the right mindset. Today, I will be sharing my 4-step method of staying motivated, from which you can hopefully benefit, too! This can be used during exercises, work, or anything else, if you make it work. I have yet to give it a catchy name, but for now, it is the MMAA method:

  1. Macro. The first tactic I used was thinking at the macro, or large, scale. In the back of my mind, I always had an idea of how far I was in the workout. For example, I would remind myself, “I have ‘x’ sprints left and ‘y’ burpees left.” This way, by thinking about it in terms of the absolute, the ultimate, the whole, I was able to keep track of my progress. Taking inventory of where one is and where one has to go allows for clearer thinking and planning. The macro aspect is the long-term. It takes into account the beginning and the end, the start and the finish, but not the middle in between, because then one gets caught up in the details; on the contrary, one must keep one’s eyes set on the whole, the bigger picture, in relation to which the smaller parts stand. Thinking macro is absolute and always directed toward the bigger sets, the bigger picture overall.

  2. Micro. Second is thinking on the micro, or small, scale. During the workout, once I had established where I was in terms of the macro, I could then break it down into smaller units, into sets, and from there, into individual repetitions. This way, a larger workload became a series of smaller, more manageable ones. The macro makes way for the micro. To use an example: If I had to do nine burpees, then having to do nine burpees would be the macro approach, but the micro approach would be doing three sets of three. The bigger picture—nine burpees—was broken into the smaller pictures—manageable sets, three sets of three—which could easily be completed. The two work together. Illustrating further: if I were still sticking with the 3×3 burpees, and I was completing the first three, then the next 2×3 would be the macro, and the current three the micro. This is because the micro is oriented, or grounded, rather, in the present, in the relative and relational. Micro thinking is always a part of the whole, as opposed to macro thinking, which is the whole itself. The macro makes a mental map, and the micro draws the pathways connecting the landmarks. If one only thought macro, then one would be overwhelmed; if one only thought micro, then one would be lost. As such, the two mutually coexist and depend upon each other. Another idea I touched on is that of the present. Whereas the macro takes into account the future, the micro takes into account only the present—not what I will do, in the future, what is still left, but what I am doing, right now, at this moment. While the macro image of three sets of three burpees exists in my mind, projected into the future, the micro conception of “I am doing one burpee at the moment, out of three” is being enacted at this very moment. What this means is that the micro, unlike the macro, is twofold: It simultaneously breaks down the macro and enacts it.
In summary, the macro is a long-term projection of the bigger picture and what needs to be done, and the micro is the short-term breaking down of the macro into smaller parts that can be completed realistically.

  3. Action. Next is action. The name does not say anything important, nor does it seem groundbreaking. To be motivated requires that some action be done, does it not? Is not action redundant, then? Only to an extent, insofar as it is never considered in itself. Going back to the fitness test, I frequently found myself in the second half of the workout asking how I would get through it. On the macro level, I had 10 burpees to do, and on the micro level, I had two sets of five. However, as I jumped, squatted, then pushed myself to the ground, I struggled, both physically and mentally. Already I had done 45 burpees, so my arms and legs were tired, and I was out of breath. Oh, if the workout could just end already! I thought. But this got me on a train of thought: Time is that through which things unfold, and unfolding is an action, meaning the only way to pass time is to act; and what this meant was that the sooner and quicker I acted, the sooner the workout would be over. Let me put it another way: Just sitting there on the gym floor hoping for the workout to end, acknowledging the pain and fatigue I was feeling, thinking both macro and micro—none of these would make the workout end quicker unless I actually acted on them. So while I knew I was tired, and while I knew I had to push out these last reps, the longer I dwelled on these things, the longer it would take me to finish; the longer I dwelled, the longer I would hurt. Ultimately, thinking too much causes delay. Another way of thinking about action: Overall, the macro plan is to do my final 10 burpees and two sprints, yet having this plan is only what sets me on my way to doing them. Having this big picture in my mind does not change anything, per se. All it does is linger as a thought. It has no potent effect. I could sit on the sideline the entire day repeating to myself, “You have 10 burpees and two sprints,” but those numbers will not go down until I start on them.
Until then, the numbers remain the same. Until then, nothing will change. So, in those moments when I found it nearly impossible to finish my reps, and when I asked, “How will I do the last four burpees?” the answer was, “By doing the last four burpees.”

  4. Absurdism. No matter what task we are doing, at one point or another we ask ourselves, “Why are we even doing this? Why should I be doing it? What are the consequences if I do not?” That Thursday, in the midst of the fitness test, these questions came up many times in many forms. For comfort, I like to think back to the existentialist Albert Camus’ response to the problem of suicide. In his essay, Camus references Sisyphus of Greek mythology, who has been punished by the gods to push a boulder up a hill indefinitely, and who, having pushed the boulder to the top, watches it roll down to the bottom, forced to start all over again, ad infinitum. What does this have to do with anything? Well, Camus said that, although this is not the best of circumstances, we must bear it the best we can. Applying this reasoning, we can all find solace and wisdom in our goals: While a hard, laborious, and tedious task may be imposed upon us, and we may not want to do it, we might as well do it happily and to the best of our ability. If you think about it, there really is no reason to do it, no overarching purpose. But if we are doing it already, and if it is expected of us, why not jump in and make something of it? Sweating in the school gym, feeling like spaghetti, I knew that I could at any minute stop doing whatever I was doing, give up, forfeit, throw in the towel, call it quits—I could surrender to meaninglessness, to the absurd—or I could overcome the absurd, triumph over it. I could take the meaningless and make it meaningful. I could fight against the pain and turn it from suffering into vanquishing. It is a process of strengthening. There was no universal law that I had to do a fitness test, and by all means, I did not have to do it; but I decided that, despite its purposelessness in the long run, I might as well push through it and prove myself in spite of the void it presented to me and my classmates.

In conclusion, motivation is not a singular, simple thing—yet then again, we already knew that. I had conceived of this blog during a nap, and I had planned out a perfect image of it in my head; but as soon as I started writing out the strategies, I found that it did not correspond with the image in my head, and I felt like it had been a waste; I wanted to rewrite the whole thing—but I lacked, of all things, the motivation to do so! Somehow, out of sheer willpower, I managed to jump back and rewrite it; hence, what you are now reading. The MMAA method, albeit widely applicable, is certainly not the approach for everyone, and it may not work for every single task. Howbeit, the four steps need not be taken together as a package; rather, you are free to do whatsoever you like with any of the methods, be it adapting them to your own strategy, or taking one or two and starting from there. Ultimately, it is subjective, considering that is the very nature of motivation—it differs for everyone. The main takeaways, summarizing the four strategies, are:

  1. Have a clear idea of the bigger picture, including reference points, and a clearly defined beginning and end.

  2. Think about the bigger picture in small terms, in terms that are doable, that can be done mindfully.
  3. Plan, but do not plan such that it gets in the way of enacting that plan. Reflecting too much on the plan prevents it from coming into play.
    And finally:
  4. There may not be an immediate meaning behind your work, and you have to be fine with that: Make your own meaning, and embrace it. Maybe it is not the best thing to be doing, and yes, maybe you have better things to do, but for now, you might as well have fun doing it!

And yes, many of the ideas expressed herein are not new, and perhaps you have read something similar before; but hopefully, you have gleaned at least something of value that you can apply to your life!

Stay motivated, readers! Keep reading!

Plato and Plotinus on Love and Beauty

What makes something beautiful? What is love (Baby don’t hurt me)? These are questions that we ask in our lives because we experience them both every day. They make up a large part of our experience, and without them, we know not what life would be like, nor whether it would be worth living. For this reason, these questions have been asked by philosophers, who, thinking about æsthetics, the philosophy of beauty and art, have also questioned these fundamental aspects of reality and the human condition. One of the most enduring contributions is from Plato. In today’s misguided world, many people, without having even read Plato’s principal work on the subject, The Symposium, talk about “Platonic love,” throwing it about in conversations with friends and family, thinking, mistakenly, that it refers exclusively to a non-sexual relationship between two people. People like to claim that they and their coworker have a “Platonic relationship” without knowing what they are really saying, or without bothering to see what the great Greek philosopher himself had to say regarding love; for while the non-sexual aspect is important, this common understanding does not capture the whole picture. Little do they know that Plato originally referred to pederasty—relationships between older men and young boys, a common practice in Ancient Greece! A spiritual interpreter of Plato, the Roman philosopher Plotinus continued Plato’s work in his Enneads. Together, Plato and Plotinus represent the ancient view of both beauty and love in their transcendental nature, and their ideas have shaped our understanding for ages.

The Symposium is one of Plato’s more fun dialogues. In it, Socrates, Aristophanes—a famous comic playwright—and several other Athenians attend a symposium, or drinking party, in which they go around the table sharing speeches, engaging in intellectual discussion on the subject of love over their wine. Pausanias’ turn comes up, and he begins his speech by identifying two types of love. According to him, the other speakers had been mistaken in not defining what kind of love they were praising. So Pausanias corrects them by asserting that there are actually two kinds, aligned with the two goddesses representative of them: the Common Aphrodite and the Heavenly Aphrodite. Beginning with the Common Aphrodite, Pausanias says that this kind of love, which is purely erotic—that is to say, inspired by Eros (Έρως)—is a shallow kind of love, insofar as it is a love of the body. Of the two kinds, this is the “wrong” love. Common love is temporary; because it is of the body, and because the body is temporal, subject to change with time, impermanent, the love, too, will be temporary. This Common love is very common these days; we see it all the time when we hear people saying, “This person is so hot” or “They are so beautiful.” This is not to say that it is wrong to call someone beautiful; rather, the problem lies in the intent. Are you attracted to this person purely for their looks, or is that an added benefit? There is nothing wrong with saying someone is beautiful—in fact, if you think that, then you should tell them. However, the problem with loving someone for their looks, Pausanias argues, is that their body will inevitably age and deteriorate.
Interestingly, in the Buddhist tradition, if you are infatuated with someone, then you are instructed to meditate upon their decaying body as a reminder that their body is not permanent but will wither with time, turning your mind off of their physical beauty and onto their spiritual beauty, which is permanent. This is the same line of reasoning Pausanias uses. So what happens when someone who loves another for their looks, years later, does not see this person the same way, but decides they love them no more since they have changed? Well, because their love was attached to something temporary, their love is temporary, and so, Pausanias continues, the lover will flee. They were just in it for the beauty, yet when the beauty is gone, so are they. Similarly, he warns against loving someone for their possessions, namely their status or wealth. As with beauty, one’s reputation and financial situation are not always going to remain the same. If you love someone for their money, and they lose it all one day by chance (money is unreliable, and everything can change in a moment), then you will love them no longer; the attachment was to a temporary thing. One’s money is not a part of them; it is external to them. Likewise, the regard of the many is fickle. Who knows if someone will retain their reputation? Love must be directed toward the right object. Such material objects are just that, and they lack significant value. A Common lover is immature. He is not emotionally prepared for a committed relationship. He is full of energy, but empty of compassion. He wants passionate, sexual love. But once he wants it no more, he will leave. He is interested in one-night stands, not a devoted romantic relationship. Common love is short-lived.

Next, he explicates Heavenly love. This kind of love, as opposed to the Common, is of the soul and, therefore, righteous. Unlike Common love, Heavenly love is not shallow but deep, in that it is spiritual and mutual: It is spiritual because it is literally of the spirit, the breath, the soul, and it is mutual because it is reciprocated—both lovers are in it for the sake of the other. It is also mutual in the sense Aristotle thought it mutual, namely that the lovers, in entering a romantic pact, agree thenceforth to help perfect each other; that is, they serve both themselves and the other, each aiding the other. Say one lover is trying to form a habit, the other to break one. In this situation, the lovers will love each other while at the same time mutually helping and perfecting themselves. It is two-way. Heavenly love is between two lovers, two subjects, not a lover and a beloved, a subject and an object. Heavenly love is profound, reaching the greatest depths. Temporary and lowly is Common love; permanent and transcendent is Heavenly love. The latter is permanent because it is not of the body, but of character. One’s looks can change very easily, and while one’s character is not exempt from change, it changes far more slowly and deliberately than the body. Psychologists (and even Socrates will eventually say the same thing) argue that character is not a permanent thing, changing with age much as looks do. For the most part, however, character is a pretty stable, consistent thing, and it takes a lot to change it dramatically. Is it really worth loving someone who is physically attractive if they have a combative, unfriendly personality? In 40 years, will they still look the same as when you first loved them? No. In 40 years, will they still be combative and unfriendly? Yes. As such, a person’s body is not righteous, whereas character, one’s soul, is. Heavenly love is also transcendent.
It is transcendent because it steps over the appearance of a person, the outer boundaries, the external face, the artificial construction, and it pierces through them, gives insight, sees not outer beauty, but inner beauty. Transcendental love loves a person for who they are inside, not outside. It is a love of their essence. And in contrast to the immature Common lover, the Heavenly lover is mature, prepared, and ready. This is a devoted, long-term relationship.

To evaluate Pausanias’ position, let us look at whether his views make sense. Just as he distinguishes between two kinds of love, one short and exciting, one long and content, so the psychologist Elaine Hatfield distinguishes between two types of romantic love: passionate and companionate. The first, passionate love, is sexual and full of intense energy, although it only lasts for a short time. This is the kind of love teens have, when they are full of idealism and optimism, expecting great things from a partner; they are excited and will jump too quickly into things in the heat of the moment. This is embodied by the Common Aphrodite. The second, companionate love, is calm and full of compassion. Think not of teens in love, but of a couple who has been married for 20 years. Here, you will see two people deeply in love with each other, neither of whom would leave the other at the drop of a hat, but who are, at their core, devoted to each other, devoted to perfecting each other. They have arguments, but they resolve them. They love, and will continue to love, each other. This is embodied by the Heavenly Aphrodite. It seems Pausanias was spot on! Most often, this is the paradigm that gets called “Platonic love.” Plato gets a lot of backlash for his views these days. To “love someone for their personality” has become a universal joke. It is often said facetiously, with a smile on one’s face, meant to be ironic or sarcastic. And those who actually mean it are met with derision. Consequently, almost nobody really means it when they say it. Yet then again, this is only a fraction of what “Platonic love” truly is.

The next speaker, Aristophanes, is the favorite of many, for his speech is the most remembered, the most entertaining, and, perhaps, the most influential even today. His is the speech on soulmates. Back in the day, relates Aristophanes, man and woman walked alongside a third sex, which was a combination of the two: a half-man, half-woman. It was a single organism, with two of every body part, seeing as it was two people put together, in a perfect, rolling circle, a symbol of perfection and completion, as Nussbaum points out [1]. These humans, composed of two people, were thus twice as powerful, and twice as ambitious. They decided, like the Giants, to attack the gods, which was a bad idea; Zeus promptly split up these dual humanoids. As a result, the two halves went about desperately looking for their other half, hoping to be reunited. Filled with longing and Eros, they wandered sadly, bereaved, dejected, almost to the point of depression. The halves could not function on their own; they needed each other. Since they spent all their time moping, busying themselves with finding their other halves, they were unable to make sacrifices to the gods. Zeus took pity on them and moved their sexual organs to the front to make mating easier. When two soulmates find each other, they immediately embrace, pressing their bodies together in an attempt to become one again, to press themselves into each other. They hug and kiss, holding themselves close, wrapping their arms around the other, then pulling tightly. Yet no matter how hard they try, no matter how hard they embrace each other, they cannot put themselves together again.

It is such reunions as these that impel [lovers] to spend their lives together, although they may be hard put to it to say what they really want with one another, and indeed, the purely sexual pleasures of their friendship could hardly account for the huge delight they take in one another’s company. The fact is that both their souls are longing for a something else—a something to which they can neither of them put a name, and which they can only give an inkling of in cryptic sayings and prophetic riddles (The Symposium, 192c-d).

So what is love? As Aristophanes reports, when lovers are asked this very question, they cannot answer. If you were to ask a teacher what teaching is, then you would expect them to know—it is their business. By that logic, should not lovers, who are held tightly in the grip of love, know in what state they are? Surely, they should. On the contrary, love is such a powerful, binding force, such an irresistible pull, such an enigmatic drive—who could possibly define it while in its throes? Well, to answer the question of what love aims at, Aristophanes proposes the following: Say Hephæstus were to ask the two halves if they wanted to be welded together so as to be inseparable for the rest of their lives, not even “until death do they part” (as they would remain together in the Underworld), a single entity forever. No one would refuse such an offer, for they want, deep down, to be “merged … into an utter oneness with the beloved” (The Symposium, 192e). The idea of soulmates is still popular to this day. Many of us believe we are just walking through life without an aim, a sinking feeling of incompleteness pervading our being, as though there is something more to life, something, someone, out there waiting for us, our other half, who is perfect, who is everything we want them to be, who will make us happy, who will be the missing piece to this jigsaw puzzle we call life, the summum bonum, the most absolutely beautiful person—and it is just a matter of finding them; but until then, we remain incomplete and, therefore, unhappy. This mythological story is at once humorous and enchanting. I really like the idea of hugging as an attempt to bring the other person to oneself, to make oneself complete; it is a creative, thoughtful moral that is poetic in its presentation, and I think it is very powerful. Whether or not this story is true, many of us still believe it, and it is yet another part of “Platonic love.”

Then comes Socrates’ turn. It is his speech which is left out of the everyday conception of “Platonic love,” despite Socrates’ being Plato’s mentor. In the dialogue, Socrates speaks on behalf of Diotima, a woman he met who taught him about the nature of love. What is love, exactly? Love is a desire, and a desire is for something, and if one already has what one desires, then it is not a desire any longer; therefore, love is a longing for something one does not have. What is this something? Is it Aristophanes’ other half? No. Love, says Socrates, is a desire for the Good, with a capital “G,” meaning the highest good, the ultimate good, that from which good things derive their goodness. Hence, what is beautiful is what is good and noble. Everyone wants goodness to an extent. This requires qualification. First, all objects of our desire, be they living things or goals, are good. For example, if I want to write a blog, if my desire is to write a blog, then I am aiming at something which, if I investigate further, is essentially good since it is of benefit to me. Second, everyone, regardless of their disposition, wants the good, whether they know it or not. A doctor and a murderer both seek the good, although we say the latter is errant in his ways, or is ignorant thereof. In other words, even if we do not have an idea of what the Good is, we still want it anyway. It is natural. It is human. Nobody intentionally desires what is bad for them. But what separates desiring from loving is immortality, states Diotima. If my goal is to exercise more often, then I am seeking the Good; but if I love someone, then I am seeking the Good in them, and, beyond that, what I gain therefrom: longevity. It is a strange idea to read. However, what Socrates is saying is that we want the Good forever. We always want to have the Good in our possession—not just today or tomorrow, but for all time.
When we love someone, we tend to analyze them, parse them into traits, which we then classify as positive or negative. We look at people’s pros and cons. As is our nature, we like good traits and dislike bad traits in people. I like a person for her altruism but dislike her for her stubbornness. So when I say I like “her,” I really mean: I like the Good in her. This is similar to something Pascal wrote 2,000 years after Plato, that we love people not for themselves, but for their qualities. The reason we like good qualities in people is that they are reminiscent of the Good, and what is Good is good for us; a person’s good personality helps us to flourish. Using the previous instance, the altruism of a girl will help me, but her stubbornness will not. Furthermore, because we are mortal and fated to die, and because we are terrified of death, we try to find ways to achieve immortality, at least artificially. We do this by creating something by which we will be remembered. We want a lasting name for ourselves. People do this by two means: Having children, so as to carry on the line, to bear one’s name, and creating art (art, here, is to be interpreted broadly as any kind of creation), so as to have a creation which manifests one’s ideas. Before continuing we can summarize Love in three points: First, love is of the Good and Beautiful (the two are synonymous); second, love’s object is the same for every desire and goal; third, love is for creation, be it through children or art, with the goal of longevity.

If the Beautiful is behind all things, and if we desire it so much, then how do we encounter it? What is the true purpose of love? Diotima introduces Socrates to a ladder, or ascent, of love, which leads up to Beauty. The ladder starts at the bottom and ends at the top, rising from particulars to universals, concrete to abstract. Starting with a single, individual body we consider beautiful, we meditate upon it, find everything there is that is beautiful in it. In modern terms, we look at someone we love and find desirable traits, traits valued by our culture, traits that make someone beautiful. Having done this, we can then realize that the body of one person is just as beautiful as the body of another. There is a good message here: Everyone is beautiful in their own way. Each has their own unique beauty. While one person is beautiful for x reasons, another is beautiful for y reasons, although they are both beautiful in the end. Once we grow accustomed to this, we can grasp that the mind and soul are more noble than the body. We move away from Common love and toward Heavenly love. Beauty is seen as permanent and virtuous. Next, we ascend to ideas, laws, customs, institutions. We learn to see knowledge as beautiful. Finally, once we have seen the Beautiful in all earthly and intellectual things, we can perceive Beauty as such, Beauty itself. The journey upward can be summarized thus:

And the true order of going, or being led by another, to the things of love, is to begin from the beauties of earth and mount upwards for the sake of that other beauty, using these as steps only, and from one going on to two, and from two to all fair forms, and from fair forms to fair practices, and from fair practices to fair notions, until from fair notions he arrives at the notion of absolute beauty, and at last knows what the essence of beauty is (The Symposium, 211c-d).

In the ascent, in other words, we abandon the individual for the absolute. Love is no longer person-centered but idea-centered. The intellect takes over for the eye. The senses are devalued in favor of thought. Instead of the material and lower, we see the Beautiful in the higher and spiritual. Once we have loved the Good, Beauty as such, we can find Beauty in all things. In short, there is no more favoritism. What this means is: No longer do I see beautiful and ugly people, but I only see the Beauty in them. There is no one more beautiful than another, since we all share in the same Beauty. A true lover of Beauty does not discriminate, but rather sees Beauty everywhere, from people to animals to nature. Beauty is no longer temporary but permanent. The lover need not depend on a specific person or artwork to see Beauty, for it is everywhere. Suppose I derive great pleasure from van Gogh’s “Starry Night,” but from no other piece. This is an undeveloped love. However, after I have attained a vision of the Good, I soon find that every artwork is beautiful, not just “Starry Night”; for this reason, I am not dependent on a single beautiful thing to know Beauty. Universal love can be found anywhere once envisioned. And unlike the body, subject to change, Universal Beauty is changeless. Love is the guide up the ladder; it draws us toward the Beautiful through Eros, the daimon of Love. Plato compared “the soul of a philosopher, guileless and true” to “the soul of a lover, who is not devoid of philosophy” (The Phædrus, 249a). The philosopher, or lover of wisdom, is the same in purity as the lover of Beauty; for in wisdom, there is Beauty. What is the Beautiful like? In this quote, Plato describes what the famous Realm of Forms is like: “There abides the very being with which true knowledge is concerned; the colourless, formless, intangible essence, visible only to mind, the pilot of the soul” (The Phædrus, 247c).
From this we can gather that the Form of the Good or Beautiful is permanent and unchanging. It remains the same eternally. The Beautiful is absolute, not relative. Things are not “more beautiful” but are either beautiful or not-beautiful. Beauty, lastly, is the same to all things. A statue has as much beauty as does a shoe. It achieves this through instantiation: The partaking of instances. Explained another way, Beauty instantiates itself, meaning that a particular instance of beauty, for example Michelangelo’s “David,” is beautiful precisely because Beauty is inside of it. Love is a form of madness, Plato famously wrote. In a very poetic (and long) passage, Plato illustrates what it is like to be in love:

But he whose initiation is recent, and who has been the spectator of many glories in the other world, is amazed when he sees any one having a godlike face or form, which is the expression of divine beauty; and at first a shudder runs through him, and again the old awe steals over him; then looking upon the face of his beloved as of a god he reverences him, and if he were not afraid of being thought a downright madman, he would sacrifice to his beloved as to the image of a god; then while he gazes on him there is a sort of reaction, and the shudder passes into an unusual heat and perspiration; for, as he receives the effluence of beauty through the eyes, the wing moistens and he warms. And as he warms, the parts out of which the wing grew, and which had been hitherto closed and rigid, and had prevented the wing from shooting forth, are melted, and as nourishment streams upon him, the lower end of the wing begins to swell and grow from the root upwards; and the growth extends under the whole soul—for once the whole was winged. During this process the whole soul is all in a state of ebullition and effervescence,—which may be compared to the irritation and uneasiness in the gums at the time of cutting teeth,—bubbles up, and has a feeling of uneasiness and tickling; but when in like manner the soul is beginning to grow wings, the beauty of the beloved meets her eye and she receives the sensible warm motion of particles which flow towards her, therefore called emotion, and is refreshed and warmed by them, and then she ceases from her pain with joy. 
But when she is parted from her beloved and her moisture fails, then the orifices of the passage out of which the wing shoots dry up and close, and intercept the germ of the wing; which, being shut up with the emotion, throbbing as with the pulsations of an artery, pricks the aperture which is nearest, until at length the entire soul is pierced and maddened and pained, and at the recollection of beauty is again delighted. And from both of them together the soul is oppressed at the strangeness of her condition, and is in a great strait and excitement, and in her madness can neither sleep by night nor abide in her place by day. And wherever she thinks that she will behold the beautiful one, thither in her desire she runs. And when she has seen him, and bathed herself in the waters of beauty, her constraint is loosened, and she is refreshed, and has no more pangs and pains; and this is the sweetest of all pleasures at the time, and is the reason why the soul of the lover will never forsake his beautiful one, whom he esteems above all (The Phædrus, 251-2).

Anyone who has ever been in love—in other words, all of us—can appreciate the beauty with which Plato speaks here. “If … man’s life is ever worth living,” Diotima confides to Socrates, “it is when he has attained this vision of the very soul of beauty” (The Symposium, 211d).

What are we to make, then, of Platonic love? Despite all its transcendent glory, the ideal of Platonic love has its flaws. A professor of Classics, Martha Nussbaum criticizes Plato’s account of love on three grounds: Compassion, reciprocity, and individuality.

  1. Compassion: According to Nussbaum, Platonic love lacks compassion. The practices for which Plato calls require that one look down upon “worldly” things as beneath oneself. Bodies, for example, are to be dismissed as gross presentations, renounced instead for mental pleasure. This kind of attitude instills an egotistical superiority. One thinks oneself superior to others, who are reduced to objects of desire; and these people are then devalued. The lover takes precedence. Also, suffering, which is a temporary condition, is frowned upon, demanding that the lover take on a Stoical indifference to pain, which is unnecessary. Homeless people, for example, are seen as suffering for no good reason, when they could instead be contemplating the Forms.
  2. Reciprocity: Platonic love is one-sided. To engage in this kind of love is to be egocentric. Only the self exists, and the opinions and emotions of others are not gauged, but ignored. It does not matter how the other person feels, as long as the lover gets what they want: The Good. It is not like you love someone, and they love you back; rather, it is just you loving someone. In this sense, the beloved is not an end-in-themselves, but a means-to-an-end. You love someone not for their sake, but in order to reach the Good. The agency and autonomy of the beloved are ignored. They cannot act for themselves.
  3. Individuality: Lastly, in pursuing Platonic love, the individual, the beloved, is dropped. When we say we love someone, do we ever consciously think, “I love x because in them is instantiated the Good”? No. We say we love them for who they are. The person with whom we are in love is considered unimportant in the long run, used as a stepping stone to the Good, a stepladder that will be discarded, cast away once it has been climbed. By treating the beloved as a sacrifice to reach the Good, we are, in effect, denying their faults, the things that make them different; i.e., we are denying their uniqueness, their individuality. As Nussbaum jokingly puts it, “‘I’ll love you only to the extent that you exemplify properties that I otherwise cherish.’”[2]

In short, Nussbaum argues that Platonic love is just far too objective, idealistic, and detached to be applicable. This is just one side, though. Others, like Paul Friedländer, argue that Platonic love actually does incorporate the individual beloved, and awards them a higher place. From personal experience, I agree that Platonic love tends to dismiss the beloved; but I do think the idea of Beauty manifest in individuals is quite real. Tell me your experiences in the comments, and whether or not you agree with Plato!

From here we move to Plotinus, the Egyptian-Roman founder of Neoplatonism, whose spiritual ideas were based on Plato’s theories, and who influenced a nascent Christianity. Although we have covered the argument that Plato’s conception of love is idealistic, looking at Plotinus’ views makes Plato sound like a common-sense realist. Plotinus is even more spiritual than Plato, and even more contemptuous of the physical world, which he viewed as a hindrance. It is recorded that Plotinus constantly remarked that his body was ugly and that he looked forward to being released from it. In one anecdote, his student Porphyry wrote that an artist came to Plotinus’ school because he wanted to make a portrait of Plotinus; but Plotinus turned him away, ashamed to be seen in his body—how ghastly it would be to have a representation of such a hideous thing! Love for Plotinus is a unio mystica, a mystical union, drawing upon similar imagery to that of Aristophanes, but with God, whom he calls “the One.” Beauty lies in symmetry, in wholeness. When it comes to a certain instance of beauty, the whole is both greater than and equal to the sum of its parts—which sounds paradoxical at first. The whole is greater because it partakes in the Beautiful. It is equal because it must be constituted by only what is Beautiful. His reasoning is that all parts must be beautiful in order for the whole to be Beautiful. Beauty + beauty = Beauty, but beauty + ugly ≠ Beautiful. Therefore, a Beautiful thing must be greater than its parts, but must also be composed of all-Beautiful parts. Put together, they all form a harmony in union. Evidently, Plotinus borrows Plato’s theory of instantiation: “[T]he material thing becomes beautiful—by communicating in the thought (Reason, Logos) that flows from the Divine” (The Enneads, I.VI.2). Put another way, a beautiful thing is beautiful because Beauty is in it. If there is no Beauty in it, then it is not beautiful. 
The things which make up an artwork are not beautiful in themselves; their beauty depends on their symmetry and arrangement. The Idea of Beauty is thus imposed on Matter itself. Imagine a blank canvas. It is not beautiful. Then, a bucket of different colors of paint is thrown onto the canvas. In this image, the canvas is matter, and the paint is Beauty. Only when the paint is so arranged upon the canvas does the whole become Beautiful. Plotinus also references Plato’s ascent up the ladder, with a little change:

It [the Realm of Ideas] is to be reached by those who, born with the nature of the lover, are also authentically philosophic by inherent temper; in pain of love towards beauty but not held by material loveliness, taking refuge from that in things whose beauty is of the soul- such things as virtue, knowledge, institutions, law and custom- and thence, rising still a step, reach to the source of this loveliness of the Soul, thence to whatever be above that again, until the uttermost is reached. The First, the Principle whose beauty is self-springing: this attained, there is an end to the pain inassuageable before (The Enneads, V.IX.2).

Just like Plato, Plotinus believes the philosopher is most inclined toward love of the Beautiful. Also, the two agree that love ascends from the soul to virtue to knowledge to customs to Beauty itself. The difference lies in the starting point. For Plato, the lover begins with a person with whom they are in love; for Plotinus, the lover begins by shunning the person, by turning away from all things physical and material, jumping straight to the soul. Why does one jump immediately to the soul? Because the soul, Plotinus claims, is itself beautiful. There is a metaphor of “falling” in Plato and Plotinus, mirroring Adam and Eve’s fall in the Bible: the immortal souls of men lived in the Realm of Forms, only to succumb to temptation and fall into the material world of change and impermanence. This means that, just as Adam and Eve received Wisdom right before the Fall and retained some of it, so the souls of men received a vision of the Beautiful right before the Fall and retained some of it. By falling into the physical world, the soul became impure, ugly. As Plotinus puts it, “[A] soul becomes ugly … by a fall, a descent into the body, into Matter” (The Enneads, I.VI.5). The religious metaphors here are obvious. The soul thus becomes “ugly,” associated with grime and dirt. In my blog about Orphism and its influence on Pythagoreanism, we see the same kind of thinking: The body (σωμα) as a tomb (σημα), the pure trapped in the impure, seeking release, yearning for reunion with the World-soul, or, in this case, the One. Despite being a radical purist, Plotinus is a very wise guy with a lot of good things to say, and we should heed him. 
The following is a much-celebrated excerpt of Plotinus, one read and admired by many who find in it a beautiful and inspiring message, written with much the same elegance as Plato’s, and often considered the best of his writing. In it, he tells us all to look inside ourselves and realize that, deep down, beneath our appearances, we all have an inner beauty. Sometimes, we just need some self-love, and Plotinus reminds us to give ourselves this much-needed assurance. Read it for yourself:

Withdraw into yourself and look. And if you do not find yourself beautiful yet, act as does the creator of a statue that is to be made beautiful: he cuts away here, he smooths there, he makes this line lighter, this other purer, until a lovely face has grown upon his work. So do you also: cut away all that is excessive, straighten all that is crooked, bring light to all that is overcast, labour to make all one glow of beauty and never cease chiselling your statue, until there shall shine out on you from it the godlike splendour of virtue, until you shall see the perfect goodness surely established in the stainless shrine (The Enneads, I.VI.9).

What have we learned today? Well, what we have not learned for certain is what love and beauty are. Despite the brilliance of these thinkers, they are no closer to the truth than we are. As to what love and beauty are—my guess is as good as yours, and that is not a bad thing; I think it is rather a good thing, really, and perhaps it should stay that way. We should all ask ourselves what love and beauty are, because they are essential to a well-lived life. To ask what love and beauty are, and to experience them fully and intimately—this is a part of the examined life. The ideas of Plato and Plotinus have survived for ages and shall continue to influence us in the future. Yet their wisdom is not perfect, and their theories are not flawless either. It has been shown that their views, debatably, are impractical. From soulmates to the Ancient Christians with their agape to the modern philosophers like Pascal to contemporary man seeking love in an unloving world, we are all asking the same question as Haddaway: What is love? A most mysterious emotion it is, one we are barely beginning to understand. What is life without love? Without beauty? As soon as we start asking these questions, we are on the way to wisdom. To actively pursue the answers to these questions requires that we all be philosophers. If we want to know beauty and love, we must be lovers of wisdom, philo-sophers.



[1] Nussbaum, Upheavals of Thought, p. 483
[2] Id., p. 499


For further reading: The Greek Thinkers Vol. 2 by Theodor Gomperz (1964)
Upheavals of Thought by Martha Nussbaum (2001)
Plato: An Introduction by Paul Friedländer (1958)
On Plotinus by C. Wayne Mayhall (2004)
The Enneads by Plotinus (1991)
The Symposium by Plato (1973)
The Phædrus by Plato (1973) 

Do Babies Exist?

My friends and I were sitting on the deck one summer afternoon sipping cokes by the pool while discussing different philosophical matters. It was a hot day, and I was introducing Descartes’ philosophy to them—as any normal person in an everyday conversation does—and explaining why it was important and what it meant for us. I set it up like this: He asked if his whole life were an illusion, a dream, and if there were an Evil Demon that was deceiving him, causing his senses to be misleading. It is impossible, I explained, to distinguish between waking reality and a dream, according to Descartes. However, searching for a first principle, a single certain starting point of knowledge, he realized he had been thinking this whole time. The process of questioning whether he was in a dream presupposed that there was a questioner who was doing it. This led him to remark, “Cogito, ergo sum,” or “I think, therefore I am.” By doubting all his senses, he was led to the conviction that he could not doubt that he was doubting in the first place; for otherwise, he would not be able to doubt: He would have to exist first before he could be deluded.

After hearing this, my friends seemed pretty convinced, and pondered it a bit. Out of nowhere, one of them said, “Well, babies aren’t self-conscious.” A pause. “So do babies exist?” Taken aback, unprepared for such a response, I readily dismissed the notion, called it absurd, and tried to think of an answer. We began debating whether or not babies knew they existed, or whether they could even think about thinking. Of course, the question itself—do babies exist since they are not self-conscious?—is actually grounded in a misunderstanding: Descartes was not trying to prove his existence; rather, he was trying to prove he had certainty, something undoubtedly true. But for the sake of argument, we entertained the idea. Common sense shouts till it is red in the face, “Obviously, yes, babies exist! Only a madman would doubt their existence. I mean, we see them right in front of us—they’re right there, they exist!”[1]

This prompts the question: If we are conscious of a baby existing, yet they themselves are not conscious of themselves existing, do they exist? Babies are fascinating creatures. They are copies of us, miniature humans who must learn to cope with and understand the world in which they are living through trial-and-error. Seeing as they are capable of such amazing cognitive feats as grasping cause-and-effect and acquiring language, investigating their conscious abilities is intriguing. A delve into developmental psychology, the study of how humans develop through life, yields interesting insights into this psycho-philosophical problem.

Jean Piaget was a developmental psychologist who studied the development of children throughout the 20th century. Today, his influence is still felt in psychological literature and continues to impact thought regarding childhood development. For years he observed, tested, and took notes on children, from birth to early adulthood, using the data to devise his famous theory of cognitive development, which takes place in four stages: Sensorimotor, preoperational, concrete operational, and formal operational. The first stage, sensorimotor, lasts from birth to the age of two. During this period, the baby’s life is geared toward adjusting to the world. Babies are “thrown” into this world, to use a Heideggerian term. They are born immediately into life amidst chaos, with all kinds of new stimuli to which to react. Confused, unable to make sense of things, exposed to strange sights and sounds, the baby cries and thrashes about, trying to find some sense of security. It is bombarded all at once by sensations and experiences. It is disoriented. This is a brave new world, and it is full of data that needs to be interpreted and sorted out in the baby’s mind. In order to navigate through the world, the newborn uses its motor skills and physical senses to experience things. The baby interacts with its environment, including people, grabbing with its hands, sucking with its mouth, hearing with its ears, and smelling with its nose. Imagine being in a cave for years, devoid of all sensory information, when, one day, you are let out and, having forgotten what it was like to experience the world, you are overcome by the magnitude of the environment, so you try to relearn as much as possible, greedily taking in everything that you can—well, being in the womb is kind of like being in a cave for the baby, meaning it is doing the same thing: It is getting a grasp of reality by engaging its senses in any way that it possibly can. 
The baby is an empiricist who delights in its senses as though life were a buffet. Oh, there is something I can touch! Ah, that smells nice, let me smell it! While it cannot yet register these sensations, the infant uses its senses to obtain a primitive understanding. They are actively mapping out the world according to their perceptions, simple though they are. According to Piaget, babies eventually learn to pair coordination, knowledge of their body and its movement, with determination. Once they are able to effectively use their body parts in a way that is conducive to their survival, they develop their sense of where these limbs are in relation to each other, called proprioception. This allows them to use determination in regard to this newly acquired coordination. Babies can now direct themselves with autonomy and do something. However, this is a simple form of determination; it is not like the baby has free will and can decide or choose to do this or that. Whereas the baby can move toward a particular object, it cannot decide mentally, “I am going to crawl over to that thing”; it just does it out of pure, unthinking volition.

At three months, a baby can sense emotions and, amazingly, recreate them. Seeing their parents sad, an infant can react to this with a fitting response, as in being sad themselves. By being able to tell what someone is feeling, the baby can imitate them, showing that the baby has at least a simple recognition of empathy. Around this time also, the baby actively listens to its social scene, picking up on spoken language. It is incredible (in both senses of the word) because it is now that the infant unobtrusively and quietly internalizes and processes everything it hears like a sponge, learning speech cues, such as when to talk and when to pause; the rhythms of speech, including cadence; vocabulary; and nonverbal communication, which makes up the majority of social interaction. Here is a tiny little human just crawling around the house on all fours who cries and eats and goes to the bathroom, all the while actually learning how to speak—who could possibly fathom what is going on in that small, undeveloped mind! A little earlier, around two months usually, the baby already shows signs of early speech when it babbles. Nonsense sounds are uttered by the baby, who is trying to imitate speech, but who is not complex enough to reproduce it entirely. Four to five months into development, the baby can understand itself as a self-to-Others, or a self-as-viewed-by-Others. I have my own image of myself, but I understand that I am perceived by other people, who form their own images of me. One study shows that, from four to nine months, the infant has changing patterns of involvement in play. In the earliest stage, the baby will, if it is approached by the parent, play peekaboo. Because they have not yet learned that things exist independent of them in time, babies think that the parent disappears when they are covered, and are surprised to find them still there. 
A few months later, at nine months, the baby is able to take on the role of the initiator who wants to play peekaboo, instead of the responder who will play peekaboo if asked. This shows that babies learn to combine determination with intention (Bruner, 1983).

Just three months later, when the infant is officially one year old, it achieves a self-image. Looking in a mirror, it can recognize itself and form an early identity. Like chimps, babies can now respond to themselves as an actual self in the mirror, noticing, for example, a mark on their forehead, and realizing that it is not on the mirror, but on themselves. Between 14 and 18 months, an infant is able to differentiate an Other’s intentions from their own (Repacholi & Gopnik, 1997). Children like to think in terms of their own desires. If a kid wants a cookie, they act on their desire. Thus, when they are 14-18 months old, they can distinguish Others’ desires as different from their own. Within this period, the baby can also know that it is being imitated by someone else. If a parent mimics something the infant is doing, the infant knows their own behavior is being shown to them. Finally, the 18-month marker designates when the baby begins its sentences with the first-person “I.” With a sense of self, the infant is able to roleplay, in which it takes on new identities, or roles, and is able to play “as them.” Second-order emotions, also known as self-conscious emotions, like shame and embarrassment, arise in the child at this time, too. Children possess some semblance of self-consciousness.

After the sensorimotor stage is what Piaget called the preoperational stage, which takes place between the ages of two and seven. It is at this stage that the infant constructs their own world. Through the process of assimilation, the toddler creates mental schemas, mini blueprints conceived in their minds, frameworks by which reality is processed then made sense of, allowing them to structure reality in a way that is useful to them. When a new experience is undergone, it is made to fit the pre-existing schema. Because these schemas are very simple and basic, they are obviously inaccurate, although that is not the point of them; they are not supposed to be innate categories of the mind, as Kant would have thought of them, but early hypotheses made from the little experience gathered by a child. One time, my cousins came over to play video games; we were playing a level in Lego Indiana Jones where we had to drive around on a motorcycle chasing cars. My cousin’s little brother pointed excitedly at the cars zooming down the streets, exclaiming, “Doo-doo!” I hopped on a motorcycle and chased after them, only for him to look at the motorcycle and, again, shout, “Doo-doo!” My cousin and I tried to tell him that a car and a motorcycle were two separate things. In his mind, he saw a moving vehicle with wheels, so he created a mental schema. Anything that fit under that description—a moving vehicle with wheels—would be considered by him to be a “Doo-doo”—in this case, both the car and the motorcycle, despite their being different things. This illustrates that schemas are not always accurate; they are for classifying and categorizing things. Of course, this leads to a new process observed by Piaget: Accommodation. We come to an age where we discover that our schemas are inadequate because they do not fully represent reality. 
As such, we have a kind of “schematic crisis,” as we are met with an anomaly, something which sticks out, something which does not fit with our prevailing theory. Hence, we must remodel our thinking. Consequently, we are forced to find a way to reconcile the already-existing category with this new piece of data, either by broadening the schema, or by creating a new one altogether. Babies thus learn to make more accurate classifications as they learn new things and create new schemas with which to interpret reality. Once these schemas are built up, the infant is able to engage in organization, through which they order their schemas. Some are judged to be more inclusive or exclusive than others, and so are coordinated based thereon. In the case of my cousin’s little brother, he would have to organize his schemas like this: Broadly, there are vehicles, under which we might find cars and motorcycles as types, which can themselves be expanded upon, for each comes in different kinds. This way, reality is structured in levels, or hierarchies, not necessarily of importance, but of generality and specificity. Organization is a synthesis of assimilation and accommodation. All this schematizing segues into the next point, namely that in making sense of the world, we give sense to it.

The preoperational period is characterized by symbolic representation in toddlers. In philosophy, the study of meaning and symbolism is called semiotics, and it is closely related to what babies do, interestingly. Life is separated into two concepts: Signs and symbols. Signs are fixed things—concrete objects. Symbols are relative meanings—abstract values—usually assigned to signs. While every car I see is always a car, its meaning is not always the same and is liable to change. For some, it can represent, can be symbolic of, freedom, if you are a teen just getting your license; transportation, if it is how you get around; dread, if you hate road trips or have to wait hours during your commute. The point is, everyone sees the same sign, but for everyone the symbol has a different meaning. Preoperational toddlers are able, then, to understand objects not just in their literal, concrete sense, but as standing for something, as abstract and meaningful. Babies are not passive, as I have said, but on the contrary, very much, if not entirely, active. By interacting with the world around them, they experiment, learn, and conceptualize. Around three years, the baby is fully capable of speaking, feeling, having motives, and knowing the relation of cause-and-effect.

One of the consequences of Descartes’ Cogito is its resulting solipsism: The thinker, the Cogito, is only able to prove his own existence, whereas Others’ existences are uncertain. Is this a requisite for existence? Is self-certainty a necessity? If so, the case is a difficult one for babies. Controversially, Piaget proposed that babies are egocentric; his theory is widely contested today in psychological circles. The meaning of egocentrism can be guessed by looking carefully at the word’s roots: It means self-centered; however, it is not self-centeredness in the sense of being prideful, selfish, and concerned with oneself—it is more closely related to anthropocentrism, in the sense that the self is the central point from which all other points are judged or perceived. For this reason, Piaget suggested that infants can only see things through their own perspectives, not through Others’. You may be wondering why I sometimes have been capitalizing “Other.” Philosophically, the problem of egocentrism is closely related to solipsism, resulting in what is called “the problem of Other Minds,” which is the attempt to prove the existence of selves outside of our own; because their existence is uncertain, they are called “Others,” a term with a kind of external, foreign connotation. I digress. Babies, so thought Piaget, are unable to take Others’ perspectives, so they must rely on their own. To do this, they reason from self to Other. Infants’ egocentric tendencies, when combined with their inability to acknowledge objects as existing permanently outside of them, lead to a subject-object dualism, a subjective idealism, in which the self is distinguished and utterly separated from the physical world. It becomes “my” viewpoint, or “your” viewpoint—subjective, relative. As long as I look at an object, a toddler thinks, it exists.
And yet, the toddler also has a social self, which it develops through its interactions with other children. Many psychologists have claimed that, by playing, children are able to acknowledge the existence of not just Others, but Others’ emotions. It is evident in roleplaying, where the children pretend they are someone they are not, and act accordingly, placing themselves within a new self, which they adopt as their own, and interact with the other children, whom they see as someone else, whom they acknowledge and actively engage with, responding to how they are treated, and sensing emotions.

A dominant, popular theory that attempts to refute Piaget’s egocentrism is “Theory of Mind” ([ToM] Wellman, 1990). Wellman found that babies develop an awareness of Others at the age of three, when they operate on belief-desire reasoning. Motivation for kids consists of a belief, what they know, and a desire, what they want. A child might be motivated to have a cookie because they know where the cookie jar is, and they are hungry for one. Using this kind of reasoning, the kid attributes their own intentions to another. Looking at his playmate, the toddler assumes, “Well, I want a cookie, and I know where they are, so this kid, like me, because he has the same beliefs and desires as I, must want a cookie, too.” Is it faulty and inaccurate? Wildly. Does it make sense, realistically? Yes. The Theory of Mind is a primitive form of empathy, a kind of empathetic stepping stone. It is simple and selfish, because it assumes that other children have the same beliefs and desires as oneself. One often sees this in children trying to console one another: An infant sees another crying, and, because he takes comfort in eating ice cream, believes the other will take comfort in it, too. Critics like Vasudevi Reddy object to Theory of Mind because it is too detached from actual interaction and ends up attributing one’s own self-certitude to another, resulting in what she calls a “Neo-Cartesianism” of sorts. It promotes solipsistic thinking by denying the existence of an independent thinker with emotions, instead attributing to them one’s own ideas, thereby increasing a toddler’s dualistic thinking.

According to Reddy, a baby’s communication with Others already presupposes intersubjectivity, or being involved with people on a personal level. Babies are self-aware to an extent at birth because, the argument goes, the baby is able to distinguish itself from the world around it. To act is to know both the self and the object. It is similar to Fichte’s philosophy, in which the Ego becomes aware of itself by recognizing everything that is not the Ego, creating the Non-ego; in other words, it is through the Non-ego—the world—that the Ego knows itself. The world, or Non-ego, is created purely with the intent of being a moral playground for the Ego. Following from this is the idea that the baby, coming into contact with the world, immediately knows it as not-itself, and so uses it as its playground, activating all its senses to learn about reality. If we could not tell the environment apart from ourselves, and we thought ourselves a part of it, how could we act independently of it, with our senses? This is an argument against Freud and Piaget, who both said newborns cannot tell themselves apart from the world. As a solution to egocentrism, psychologists found that parents play an important role early on: They should teach their children early to differentiate self from Other. Too much similarity between baby and parent means more egocentrism later in life, which is harder to unlearn. Reddy’s solution is to avoid Cartesianism and Theory of Mind and instead pursue a second-person perspective, one between I-and-Thou, You-and-I. This way, there is direct access to another’s intentions. Babies, through play, function on this second-person level by directly interacting with their peers. For Piaget, babies achieve consciousness when symbolism and schematism come together to create meaningful representations. An understanding of how things fit together and how they function is what Piaget considers consciousness.
On the other hand, metacognition, the ability to think about thinking, does not arise until the age of 11, Piaget’s formal operational stage.

The following are milestones in the evolution of a baby’s cognitive abilities, summarized in eight chronological key events:

  1. Coordination
  2. Self vs. non-self
  3. Know special/loved people
  4. Know + respond to name
  5. Self-image
  6. Pointing to objects (symbol)
  7. Use “I” in sentences
  8. Know Other Minds

So, to answer my friend: The question of whether or not babies exist is actually not so straightforward as one might think. It could be argued that babies exist when they are one, when they establish their self-image for the first time and thus are, in one way or another, conscious of themselves. Or it may be that babies exist once they turn 18 months, when they can use “I,” roleplay, and experience reflexive emotions. Here, babies are aware of themselves as actors, are willing to play with others and take new perspectives, and are able to perceive how they are themselves perceived by others. Yet then again, it is possible that it is only when metacognition is possible, when we are able to doubt that we are doubting, when we are able to posit a hypothetical Evil Demon trying to deceive us all, that we exist—in which case… babies do not exist at all! Do only children and preadolescents and onwards exist? Maybe when we are born, we do not exist; we are in a state of utter nonexistence and non-being, and it is only when we reach 11 that—POOF!—we magically pop into existence.


[1] This is obviously a satirical question. Babies do exist. It is more of a thought-experiment, or armchair philosopher problem. I find the comment to be so outrageous that it is funny, and I thought it made for a perfect reason to research if babies are conscious. 


For further reading:
How Infants Know Minds by Vasudevi Reddy (2008)
Developmental Psychology, 8th ed., by David R. Shaffer (2010)
The Secret Language of the Mind by David Cohen (1996)
The Science of the Mind by Owen J. Flanagan, Jr. (1984)

Happiness as Eudæmonia

Happiness, according to psychologist James R. Averill, a Eudaemonist, is a means-to-an-end, contrary to what his predecessor Aristotle thought. After taking into account both survey reports and behavioral observations, he devised a table of happiness (see below). It is a 2×2 table, one axis being “Activation,” the other “Objectivity.” The four types of happiness he identified were joy, equanimity, eudaemonia, and contentment. He narrowed it down to the objective standard of high immersion known as “eudaemonia,” a term for overall well-being that finds its roots in Aristotle’s Nicomachean Ethics. Aristotle wrote that eudaemonia was achieved through activity, as when we are so engaged in doing something, we forget we are doing it, and lose a sense of time—time flies when you’re having fun. As such, happiness for Aristotle is not a typical emotion in that it occurs for periods of time. You cannot always be in a state of eudaemonia. Rather, it can be actively pursued when you immerse yourself in meaningful work. To be happy is not to be happy about or for anything, because it is essentially an object-less emotion, a pure feeling. Eudaemonia is distinguished from equanimity by the fact that the latter is the absence of conflict, the former the resolution thereof. Equanimity has been valued by philosophers as a state of total inner peace; on the other hand, eudaemonia is the result of achieving a goal, which necessarily entails conflict, viz. desire vs. intention. When you are confident in your abilities and set realistic goals, when you are able to complete your goals, having overcome conflict, you can achieve happiness. Too many short-term goals means not experiencing enough of what life has to offer, while too many long-term goals means not being accomplished or confident in yourself. The measure of happiness, then, is relative, not absolute, and differs from person to person.
What remains absolute, however, is that this sense of achievement can be had privately, by yourself, and publicly, when it is done for your community, family, or close friends. Inherent to eudaemonia, Averill asserts, is purpose: Behind happiness is direction, intention, and devotion. This led him to claim that “Pleasure without purpose is no prescription for happiness,” meaning you should not resort to hedonism to be happy, but must seek pleasure in meaningful activities into which you can immerse yourself.

Averill’s Table of Happiness:

                  Subjective     Objective
High activation:  Joy            Eudaemonia
Low activation:   Contentment    Equanimity


For further reading: Handbook of Emotions 2nd ed. by Michael Lewis (2000)

“Talking To” vs. “Talking With”

We spend too much time talking to one another—I think it is about time we start talking with one another.

We might add to this talking about another, by which we mean talk that focuses on another person, often in a derogatory way. In the case of the latter, we refer to gossip, which is malicious, narrow, and crude. Unfortunately, it occupies most of our speech. Over half of conversations, I would argue, concern others at one point or another, in which they are discussed behind their backs, without their knowledge, the unwitting victims of vitriolic verbal venom. Psychologists say this arises from two motives: First, gossip is engaged in to learn about threats and about who is dominant, as this was important in Neolithic times; second, to compensate for one’s own self-esteem, or lack thereof. Picture nothing worse than two people scheming together in private: You are the subject of their ridicule and criticism, and you have no knowledge of it as they attack and slander your name and reputation, so that it spreads into rumors, which are accepted prima facie, then used against you—infectious, like a virus, a deadly one.

When we talk about the former, we mean it in a sense with which we are more comfortable; in fact, it is used colloquially by almost everyone: “I was talking to my boss the other day,” “My friends and I talked to each other on the phone,” or “I love talking to people.” The word “to” is a preposition: It takes an object, and so directs the verb toward that object. Already, we see a twofold implication. Plainly, the word “toward,” when used in the context of persons, is alarming and carries with it negative connotations. While we can be gracious toward another person, it is rare; we usually hear angry, hateful, prejudiced, and the like toward another person. In other words, the word “toward” means to direct something at someone, like a projectile—which words are. Therefore, we hurl words toward another, which is precisely what “talking to” means. This in itself implies one-way communication. To better illustrate what I am describing, replace to with at: “I was talking at my boss the other day.” While they are different words, the meaning is not changed; rather, the word “to,” seemingly less aggressive and affrontive, is accepted as more polite and respectful, despite masking a darker message. Similarly, we say we “give things to people,” as though they are the recipient. Taken this way, “talking to” means delivering words to people. But a gift given is not reciprocated. A delivery is sent to one destination to be received, meaning the interlocutor is the receptacle for the speaker’s words—they are reduced to something which receives, as though lifeless. Just as a mailbox is designated for receiving mail, so the person who is being talked to is designated as “something” to receive words. This leads to the second implication of the preposition “to.” Because “to” takes an object, the other person becomes an object—that is, they are objectified, made into an object.
The person becomes a mailbox, a mere thing, an object whose only reason for existence is to house mail, to be that which receives words; the person is something into which words are deposited and then left. When we endure something, we “take” it. We take the abuse, take the lecture, take the pain; when we talk to people, we expect them to take our words.

Thus, when we talk to one another, we are not having a conversation. A conversation requires that two people be involved. It involves an exchange of words—not a depositing of them, nor a receiving of them. When we reduce each other to receptacles, things to store our baggage, we leave no room for exchange. Nobody puts mail into a mailbox and expects it to come back to them; so when you talk to someone, you hurl words toward them and expect them to receive them, but not return them. Talking to is hurling-toward-to-deposit. Everyone knows, however, that if you want a response, you do not just throw it and expect it to stay there. Accordingly, we must learn to talk with one another, rather than to one another. To talk with is to engage in conversation, in two-sided talk, in which words are passed from one to another. Not hurled or thrown, but passed, granted, welcomed, exchanged. Whereas one deposits money at the bank to keep it there, one exchanges money at the bank to get its equal value. Whoever exchanges a 10-dollar bill for 10 one-dollar bills gets back the same value as what they gave. Conversation is an exchange. We converse with. From this we conclude that talking with is exchanging-for-equal-value, by which we mean: What we put in, we get back. This is conversation. This is discussion. This is healthy communication, where both parties are heard, none prioritized ahead of the other, and where neither is objectified, reduced to an object, but heard out. Everyone’s opinion is heard in talking with, whereas only one is in talking to. I think it is about time we stop talking to one another and start talking with one another.

Such will be a good start to creating a better future.


Technology and Social Media: A Polemic


Much gratitude is to be given to our devices—those glorious, wonderful tools at our disposal, which grant us capabilities of which man centuries ago could only have dreamed, the culmination of years of technology, all combined in a single gadget, be it the size of your lap or hand. What a blessing they are, able to connect us to those around the world, to give us access to a preponderance of knowledge, and to give longevity to our lives, allowing us to create narratives and storytell; and yet, how much of a curse they are, those mechanical parasites that latch onto their hosts and deprive them of their vitality, much as a tick does. That phones and computers are indispensable, and further, that social media acts as a necessary sphere that combines the private and public, creating the cybersphere—such is incontrovertible, although they are abused to such an extent that these advantages have been corrupted and have lost their supremacy in the human condition.


Technology is ubiquitous, inescapable, and hardwired into the 21st century, so that it is a priori, given, a simple fact of being whose facticity is such that it is foreign to older generations, who generally disdain it, as opposed to today’s youths, who have been, as Heidegger said, thrown into this world, this technologically dominated world, wherein pocket-sized devices—growing bigger by the year—are everywhere, the defining feature of the age, the zeitgeist, that indomitable force that pervades society, not just concretely but abstractly, not just descriptively but normatively. In being-in-the-world, we Millennials and we of Generation X take technology as it is, and accept it as such. To us, technology is present. It is present insofar as it is both at hand and here, whereby I mean it is pervasive, not just in terms of location but in terms of its presence. A fellow student once observed that we youths are like fish born in the water, whereas older generations are humans born on land: Born into our circumstances, as fish, we are accustomed to the water, while the humans, accustomed to the land, look upon us, upon the ocean, and think us strange, pondering, “How can they live like that?”


As per the law of inertia, things tend to persist in their given states. As such, people, like objects, like to resist change. The status quo is a hard thing to change, especially when it was conceived before oneself was. To tell a fellow fish, “We ought to live on the land as our fathers did before us”—what an outlandish remark! Verily, one is likely to be disinclined to change their perspective, but will rather hold it with tenacity, to the extent that it develops into a complacency, a terrible stubbornness that entrenches them further within their own deep-rooted ways. This individual is a tough one to change indeed. What is the case, we say, is what ought to be, and so it becomes the general principle whereupon we rest our case, and anyone who says otherwise is either wrong or ignorant. Accordingly, following what has been said, the youth of today, the future of humanity, accepts technology as its own unquestioningly. As per the law of inertia, things tend to persist in their given states—that is, until an unbalanced force acts upon them.


What results from deeply held convictions is dogmatism. A theme central to all users of devices, I find, is guilt; a discussion among classmates has led me to believe that this emotion, deeply personal, bitingly venomous, self-inflicted, and acerbic, is a product of our technological addictions. Addiction has the awesome power of distorting one’s acumen, a power comparable to that of drugs, inasmuch as it compromises the mind’s judiciary faculty, preventing it from distilling events, from correctly processing experiences, and thereby corrupting our better senses. The teen who is stopped at dinner for being on their phone while eating with their family, or the student who claims to be doing homework, when, in reality, they are playing a game or watching a video—what have they in common? The vanity of a guilty conscience: Each would rather be defensive than apologetic. The man of guilt is by nature disposed to remorse, and thus he is naturally apologetic in order to right his wrong; yet today, children are by nature indisposed thereto, and are conversely defensive, as though they are the ones who have been wronged—yes, we youths take great umbrage at being called out, and instead of feeling remorse, instead of desiring to absolve from our conscience our intrinsic guilt, feel that we have nothing from which to absolve ourselves, imputing the disrespect to those who called us out.


Alas, what backward logic!—think how contrary it would be if the thief were to call out the poor inhabitant who caught him. Technology has led to moral bankruptcy. A transvaluation of morals in this case, to use Nietzsche’s terminology, is to our detriment, I would think. Guilt is a reactionary emotion: It is a reaction formed ex post facto, with the intent of further action. To be guilty is to want to justify oneself, for guilt is by definition self-defeating; guilt seeks to rectify itself; guilt never wants to remain guilty, no; it wants to become something else. But technology has reshaped guilt, turning it into an intransitive feeling, often giving way, if at all, to condemnation, seeking not to vindicate itself but to remonstrate, recriminate, retribute, repugn, and retaliate. Through technology, guilt has gone from being passive and reactive to active and proactive, a negative emotion with the goal of worsening things, not placating them. Digital culture has perpetuated this; now, being guilty and remaining so is seen as normal and valuable. Guilt is not something to be addressed anymore. Guilt is to be kept as long as possible. But guilt, like I said, is naturally self-rectifying, so without an output, it must be displaced—in this case, into resentment, resentment directed toward the person who made us feel this way.


—You disrupt me from my device? Shame on you!—It is no good, say you? I ought get off it? Nay, you ought get off me!—You are foolish to believe I am doing something less important than what we are doing now, together, to think it is I who is in the wrong, and consequently, to expect me to thusly put it away—You are grossly out of line—You know naught of what I am doing, you sanctimonious tyrant!—


When asked whether they managed their time on devices, some students replied, quite unsurprisingly, that they did not; notwithstanding, this serves as a frightful example of the extent to which our devices play a role in our lives. (Sadly, all but one student claimed they actually managed their time.) They were then asked some of the reasons they had social media, to which they replied: To get insights into others’ lives, to de-stress and clear their minds after studying, and to talk with friends. A follow-up question asked whether using social media made them happy or sad, and the answers were mixed: Some said it made them happier, some said it made them sadder. An absurd statement was made by one of the interviewees who, when asked how they managed their time, said they checked their social media at random intervals throughout studying in order to “clear their mind off of things” because their brains, understandably, were tired; another stated they measured their usage by the number of video game matches played, which, once it was met, signaled them to move on to something else—not something physical, but some other virtual activity, such as checking their social media account. I need not point out the hypocrisy herein.


I take issue with both statements combined, for they complement each other and reveal a sad, distasteful pattern in today’s culture which I shall presently discuss. Common to all students interviewed was the repeated, woebegone usage of the dreaded word “should”:
—”I should try to be more present”—
—”I should put my phone down and be with my friends”—
—”I should probably manage my time more”—


Lo! for it is one thing to be obliged, another to want. Hidden beneath each of these admissions is an acknowledgment of one’s wrongdoing—in a word, guilt. Guilt is inherent in “shoulds” because they represent a justified course of action: One should have done this, rather than that. Consequently, the repetition of “should” is vain, a mere placeholder for the repressed guilt, a means of getting rid of some of the weight on one’s conscience; therefore the conditional, too, is as frustrated as the guilt harbored therein.


Another thing with which I take issue is how the two students talked about their means of time management. The first said they liked to play games on their computer, and they would take breaks intermittently by going elsewhere—either to their social media or to YouTube to watch videos. No less alogical, the other said they would take breaks by checking their social media, as they had just been concentrating hard. How silly it would be for the drug addict to heal himself with the very thing which plagues him! No rehabilitator plies their circle with alcohol; common sense dictates that stopping a problem with that which is the problem in the first place is nonsense! Such is the case with the culture of today, whose drugs are their devices. In the first place, how exactly does stopping a game and checking some other website constitute a “break”? There is no breach of connection between user and device, so it is not in any sense a “break,” but a mere switch from one thing to the next, which is hardly commendable, but foolish forasmuch as it encourages further usage, not less; as one defines the one in relation to the next, it follows that it is a cycle, not a regimen, for there is no real resting period, only transition. Real time management would consist of playing a few games, then deciding to get off the computer, get a snack, study, or read; going from one device to another is no management at all. Similarly, regarding the other scenario, studying on one’s computer and taking a break by checking one’s media is no more effective. One is studying for physics, and after reading several long paragraphs, sets upon learning the vocabulary, committing to memory the jargon, then solving a few problems—but one is thus only halfway through: What now?
Tired, drained, yet also proud of what has been accomplished thus far, one decides to check one’s social media—only for 30 minutes, of course: just enough time to forget everything, relax, and get ready to study again—this is not the essence of management; nay, it is the antithesis thereof! No state of mind could possibly think this reasonable. If one is tired of studying, which is justifiable and respectable, then one ought to (not should!) take a real break and really manage one’s time! Social media is indeed a distraction, albeit of a terrible kind, and not the one we ought to be seeking. Checking a friend’s or a stranger’s profile and looking through their photos, yearning for an escape, hoping for better circumstances—this is not calming, nor is it productive. A good break, good time management, is closing one’s computer and doing something productive. Social media serves only to irritate the brain even more after exhaustion and is not healthy; instead, healthy and productive tasks, whose benefits have been proven, ought to be taken up, such as reading, taking a walk, or exercising, among other things: A simple search will show that any of the aforementioned methods is extremely effective after intense studying, yielding better memory, better focus, and better overall well-being, not to mention the subconscious aspect, by which recently learned information is better processed if put in the back of the mind during some other activity, such as the latter two, which are both physical, bringing with them both physiological and psychological advantages. Conclusively, time management consists not in transitioning between devices, but in transitioning between mind- and body-states.


The question arises: Why is spending too much time on our devices a problem in the world? Wherefore, asks the skeptic, is shutting oneself off from the world and retreating into cyberspace, where there are infinite possibilities, a “bad” thing? Do we really need face-to-face relationships or wisdom or ambitions when we can scroll through our media without interference, getting a window into what is otherwise unattainable? Unfortunately, as with many philosophical problems, including the simulation theory, solipsism, and the mind-body problem, no matter what is argued, the skeptic can always refute it. While I or anyone could give an impassioned speech in defense of life and about what it means to be human, it may never be enough to convince the skeptic that there is any worth in real-world experiences. It is true that one could easily eschew worldly intercourse and live a successful life on their device, establishing their own online business, finding that special person online and being in love long distance—what need is there for the real world, for the affairs of everyday men? Philosopher Robert Nozick asks us to consider the Experience Machine: Given the choice, we can either hook ourselves up to a machine that simulates a perfect, ideal, desirable world wherein all our dreams come true and everything we want, we get—becoming whatever we always wanted to become, marrying whomever we have always wanted to marry—yet which is artificial and, again, simulated; or remain in the real world, where there are inevitable strife and struggles, but also triumphs, and where we experience pleasure and pain, happiness and sadness—but all real, all authentic. There is, of course, nothing stopping one from choosing the machine; and the skeptic will still not be swayed, but I think the sanctity of humanity, that which constitutes our humanity, ought never be violated.


What, then, is the greatest inhibition to a healthy, productive digital citizenship? What can we do to improve things? The way I see it, the answer lies in the how, not the what. Schools can continue to hold events where they warn students of the dangers of technology, advise them on time management, and educate them about proper usage of technology and online presence; but while these can continue ad infinitum, the one thing that will never change is our—the students’—want to change. Teachers, psychologists, and parents can keep teaching, publishing, and lecturing ever more convincingly and authoritatively, but unless the want to change is instilled in us, I am afeard no progress will be made. Today’s generation will continue to dig itself deeper into the technological world. They say the first step in overcoming a bad habit or addiction is to admit you have a problem. Like I said earlier, technology just is for us youths, and it always will be; there will never be a time without it, so it is seen as a given, something essential, something humans have always needed and will continue to need. But technology is a tool, not a plaything. Technology is a utility, not a distraction. Social media is corrupting, not clarifying, much less essential. We have been raised in the 21st century to accept technology as a fact, and facts cannot be disproven, so they will remain, planted, their roots reaching deeper into the soil, into the human psyche. Collectively, we have agreed that technology is good, but this is “technology” in its broadest sense, and the agreement clouds our view of it. We believe our phones and computers are indispensable, that were we to live without them, we would rather die. To be without WiFi is to feel a kind of anxiety, an object-less yearning, an emptiness in our souls. How dependent we have become, we “independent” beings! This is the pinnacle of humanity, and it is still rising! 
Ortega y Gasset, in the style of Nietzsche, proclaimed, “I see the flood-tide of nihilism rising!”¹ We must recognize technology as a problem before we can reform it and ourselves. A lyric from a song goes, “Your possessions will possess you.” Our devices, having become a part of our everyday lives to the extent that we bring them wheresoever we go, have come to control our lives more than we control ourselves, which is a saddening prospect. We must check every update, every message, every notification we receive, lest we miss out on anything! We must miss out on those who care about us, who are right in front of us, in order not to miss out on that brand-new, for-a-limited-time sale! But for as long as we keep buying into these notifications, for as long as we refuse to acknowledge our addictions and the problem before us, we will continue to miss out on life and waste moments of productivity—a few minutes here and there which, added up at the end of our lives, will turn out to be days, days we missed out on. As my teacher likes to say, “Discipline equals freedom.” To wrest ourselves from our computers or phones, we must first discipline ourselves to do so; and to discipline ourselves, we must first acknowledge our problem, see it as one, and want to change. As per the law of the vis inertiæ, things tend to persist in their given states until some force wills otherwise; but we are bodies animated with the vis viva: we have the determination and volition to will ourselves, to counter the inertia of being-in-the-world, of being-online, whence we can liberate ourselves and awaken, so to speak. We addicts have no autonomy with our devices—we are slaves to them. Until we break out of our complacency, until we recognize our masters and affirm our self-consciousness thence, and until we take a stand and break from our heteronomy, we will remain prisoners, automata, machines under machines. We must gain our freedom ourselves. 
But we cannot free ourselves if we do not want to be freed, if we want to remain slaves, to remain in shackles, to plug into the machine. A slave who disdains freedom even when freed remains a slave. Consequently, we cannot simply be told to stop spending so much time on our devices, to pay attention to whom or what is in front of us; we must want it ourselves. No matter how many times or by whom they are told, today’s youth will never realize it unless they realize it themselves. They must make the decision for themselves—a decision, I must stress again, of their own volition. Until then, it is merely a velleity: a desire to change, but a desire in itself, nothing more, a wish with no intent to act. It is one thing to say we should spend less time, another that we ought to.


¹Ortega y Gasset, The Revolt of the Masses, p. 54

Harper Lee’s Guide to Empathy

In the 21st century, surrounded by technologies that distance us, by worldviews that divide us, and by identities that define us, we do not see a lot of empathy among people. While we see friends and family every day, we never really see them, nor do we acknowledge that they, too, are real people, people who have opinions like us, feelings like us, and perspectives like us. Harper Lee is the author of To Kill a Mockingbird, a novel that itself contains many perspectives, many of which are in conflict with one another. Set in the South of the 1930s, the book takes place during the Great Depression, when many lost their jobs, and in a time of racism, when laws denied black people their rights. The protagonist is a girl named Scout, who lives in the fictional town of Maycomb with her brother Jem and her father Atticus, an empathetic lawyer. Through interactions with her peers, Scout learns to take others’ perspectives and walk in their shoes. In To Kill a Mockingbird, Harper Lee teaches that, in order to take another’s perspective and practice empathy, one must understand someone else’s thoughts and background, try to relate to them, and then become aware of how the consequences of one’s actions affect them.

Before one can truly take another’s perspective, Lee argues, one must first seek to understand how someone thinks and where they come from. After hearing about Mr. Cunningham’s legal entailment, Scout asks if he will ever pay Atticus back. Atticus replies that he will, just not in money. She asks, “‘Why does he pay you like that [with food]?’ ‘Because that’s the only way he can pay me. He has no money… The Cunninghams are country folk, farmers, and the crash hit them the hardest… As the Cunninghams had no money to pay a lawyer, they simply paid us with what they had’” (Lee 27-8). Scout is confused as to why the Cunninghams pay “like that” because it is not the conventional way of paying debts. Money is ordinarily used in business transactions, yet Atticus allows them to pay through other means. Atticus acknowledges that the Cunninghams are having economic problems. He empathizes with Mr. Cunningham by drawing on his background knowledge, namely that, as a farmer hit hard by the crash, he does not have the means to pay. The Great Depression left many poor and without jobs, so Atticus is easier on Mr. Cunningham; he knows it would be unfair to make him pay when he hardly has any money. Accordingly, Atticus accepts that the Cunninghams are trying their best, and he compromises with them. He willingly accepts anything Mr. Cunningham will give him, since he knows it will come from the heart. For this reason, Atticus can empathize by thinking outside normal conventions to accommodate Mr. Cunningham’s situation. Just as Atticus understands the Cunninghams, so Calpurnia empathizes with them when she lectures Scout not to judge them. Jem invites Walter Cunningham from school over to have dinner with him and Scout. Reluctantly, Walter agrees, but once he starts eating, Scout takes issue with his habits; so Calpurnia scolds her. 
Calpurnia yells, “‘There’s some folks who don’t eat like us… but you ain’t called on to contradict ‘em at the table when they don’t… [A]nd don’t you let me catch you remarkin’ on their ways like you was so high and mighty!’” (Lee 32-3). Because Scout is not used to the way Walter eats, she immediately judges his way as different from her own, thereby patronizing him. Hence, she is not empathizing, because she is not considering his point of view, only her own. Calpurnia states that not everyone eats like Scout does, showing that she, unlike Scout, does not form generalizations; rather, she rationalizes, recognizing that Walter comes from a different home with different manners. Since she empathizes with Walter in this way, Calpurnia tells Scout not to “contradict” him, meaning it is rude and unsympathetic not to consider Walter and his background. Furthermore, she warns Scout not to act as though she is “so high and mighty,” especially around others who are less fortunate and who differ from her, such as Walter. By criticizing Walter’s eating and thence abashing him, Scout is being sanctimonious, declaring that her way is better than anyone else’s. Calpurnia gets mad at Scout for this, as it is egocentric; i.e., Scout is concerned only with herself and cannot consider others’ perspectives. Consequently, Calpurnia shows empathy by understanding that people have different perspectives, while Scout does not. Both Atticus and Calpurnia are empathetic because, as shown, they actively try to understand other people and selflessly consider their perspectives.

Once a person’s way of thinking and past are understood, one is able to see oneself in that other and make connections with them. One night, Scout, Jem, and Dill sneak off to the Radley house and are scared away, Jem losing his pants in the process. Jem decides to retrieve his pants, regardless of the danger involved therewith. The next morning, he is moody and quiet, and Scout does not know why. Upon some reflection, she says, “As Atticus had once advised me to do, I tried to climb into Jem’s skin and walk around in it: if I had gone alone to the Radley Place at two in the morning, my funeral would have been held the next afternoon. So I left Jem alone and tried not to bother him” (Lee 77). Scout follows her father’s advice and “climb[s] into Jem’s skin,” symbolizing that she has taken his perspective and seen life therethrough. She asks herself the vital question of what it would be like to be Jem; in doing so, she visualizes herself as Jem, visualizes herself doing what he did, and thereby understands him. The first step in empathizing—understanding—allows her to relate to Jem and put herself in his position: She imagines what it would have been like to risk her own life, how she would have felt doing so. As a result, she examines her emotional reaction and projects it onto Jem, relating to him, feeling as he would feel. Had she not tried to understand Jem’s position, had she not related to him emotionally, she would never have known why Jem was being moody. Jem’s “funeral would have been held the next afternoon,” says Scout, realizing why Jem is upset. If she felt that way herself, she would not want anyone bothering her, either, seeing as it was a traumatic event. Scout connects to Jem on an emotional level, empathizing with him. Another instance in which Scout shows empathy by relating is when she connects with Mr. Cunningham. 
Jem and Scout sneak out at night to find Atticus, who is at the county jail keeping watch over his client, Tom Robinson. As they near him, a mob closes in on Atticus and threatens to kill Robinson, so Scout tries to find a way of civilizing them and talks to Walter’s father. Thinking of conversation, she considers, “Atticus had said it was the polite thing to talk to people about what they are interested in, not what you were interested in. Mr. Cunningham displayed no interest in his son, so I tackled his entailment once more in a last-ditch effort to make him feel at home” (Lee 205). In this moment, Scout recalls that it is polite to relate to others and consider their views rather than her own. She hereby distances herself from her egocentrism, instead concerning herself with what someone other than herself wants. Empathizing requires that one cross the gorge of disparity, and Scout bridges this gap between self and other to find that she has things in common with Mr. Cunningham, things of which she would never have thought before. Before this connection could occur, Scout had to know his background, which she learned about when talking to Atticus; additionally, she had his son over and learned about him then, giving her common ground for conversation. Since Scout knows Walter, she thinks him a topic to which the two can both relate, seeing as Walter is close to his father, creating a strong connection. However, she notes that Mr. Cunningham “displayed no interest in his son”; thus, she thinks back further, remembers another thing they have in common, and relates to it in an attempt to “make him feel at home.” The phrase “feel at home” denotes acceptance, belonging, and coziness—being warm and welcome—so Scout, in coming up with certain topics that will be of interest to Mr. 
Cunningham, seeks to make him feel like he is a welcome person, to put herself in his shoes and consider what he would like to talk about, what would make him feel accepted as it would her. Through these moments in the text, Lee shows that empathy is relating to and identifying with another by removing one’s own position and taking theirs.

Empathy is accomplished when one takes another’s perspective in order to know how one’s actions will affect them and to consider how those actions would make them feel. Jem and Scout find out in chapter 23 that Atticus has been insulted and threatened by Bob Ewell. They are confused as to why their dad did nothing to retaliate, why he just took it. He tells Jem, “[S]ee if you can stand in Bob Ewell’s shoes a minute. I destroyed his last shred of credibility at the trial, if he had any to begin with… [I]f spitting in my face and threatening me saved Mayella Ewell one extra beating, that’s something I’ll gladly take. He had to take it out on somebody and I’d rather it be me than that houseful of children out there’” (Lee 292-3). Atticus directs Jem to “stand in Bob Ewell’s shoes” so that he can understand Ewell’s perspective, and therefore how Atticus’ actions could have affected him. Knowing Mr. Ewell has many children, and finding a common link therein, Atticus can relate to him, imagining how horrible it would be if his own children were beaten. Bob Ewell, upset over the trial, wants to vent his anger, so he displaces it onto Atticus, which Atticus says is better than his displacing it onto his children. Taking the pacifist route, Atticus avoids exacerbating the situation, aware that fighting back would make things worse, and he steps outside himself to become aware of how his actions will have not just direct effects but indirect effects as well: Angering Bob Ewell would make him want to physically harm Atticus, and would further encourage him to be more hostile to his children. As such, Atticus takes into account the long-term consequences and empathizes, for he is aware of how his actions could avert a disaster. 
He thinks ahead—to Bob Ewell’s children, to his own children—concluding, “‘I’d rather it be me than that houseful of children.’” A second example of considering the consequences of one’s actions on another takes place when Scout, a couple of years later, reflects on how she treated Arthur “Boo” Radley. At the beginning of chapter 26, Scout is thinking about her life and passes the Radley house, of which she and Jem were always scared, and about which they had always heard rumors. She remembers all the times in the past when she, her brother, and their friend played outside, acting out what happened at the house. Pensively, she ponders, “I sometimes felt a twinge of remorse when passing by the old place [Radley house], at ever having taken part in what must have been sheer torment to Arthur Radley—what reasonable recluse wants children peeping through his shutters, delivering greetings on the end of a fishing-pole, wandering in his collards at night?” (Lee 324). Lee uses the word “remorse” here to conjure up feelings of guilt, regret, and shame, all associated with the way Scout feels about her actions. To say she feels a “twinge of remorse” is to say she feels compunction; that is, morally, she feels she has wronged the Radleys and that, looking back, what she did was wrong. She is contrite because she can stand back and objectively evaluate her deeds, deeds she deems unempathetic, considering they were inconsiderate of Arthur. Having become aware of the weight of her choices, Scout experiences regret, an important emotional reaction because it signifies empathy, insofar as it represents her taking into account how she affected another person—in this case, how she negatively impacted Arthur—which itself requires understanding and relating to him. This regret, this guilt, arises from the realization that her past actions were unkind. 
Again, Scout puts herself in Arthur’s shoes, imagining what it would reasonably be like to be a “recluse”: Certainly, she affirms, she would not want “children peeping,… delivering greetings,… [or] wandering in [her] collards.” This thought process is meant to mirror Arthur’s, so Scout is actively relating to and understanding him, ultimately realizing how her conduct impacted him. Her scruples finally tell her that, from the perspective of the solitary Arthur, her behavior had a negative effect. Scout’s awareness of the consequences of her actions makes her empathetic, for she has introjected Arthur’s perspective. In conclusion, Atticus and Scout exhibit empathy because they both consider how their comportment affects others.

According to Lee, empathy is put into practice when one takes the time to learn about another person, makes a personal connection with them, and considers how one’s actions will affect them. We are social animals by nature, which means we desire close relationships; unfortunately, most of us seldom recognize the importance of understanding those with whom we have a relationship, leading to inconsiderateness, ignorance, and stereotypes. For such social animals, we all too often neglect the feelings and thoughts of others, even though they are of no less priority than our own. Therefore, empathy is a vital, indispensable tool in social interaction that helps us connect with others. As communication is revolutionized, worldviews shaken, and identities changed, it is essential that we learn to better understand others and never forget to empathize, lest we lose our humanity.


To Kill a Mockingbird by Harper Lee (1982)