Do Babies Exist?

My friends and I were sitting on the deck one summer afternoon, sipping Cokes by the pool while discussing different philosophical matters. It was a hot day, and I was introducing Descartes’ philosophy to them—as any normal person in an everyday conversation does—and explaining why it was important and what it meant for us. I set it up like this: He asked if his whole life were an illusion, a dream, and if there were an Evil Demon that was deceiving him, causing his senses to be misleading. It is impossible, I explained, to distinguish between waking reality and a dream, according to Descartes. However, searching for a first principle, a single starting point of knowledge, he realized he had been thinking this whole time. The process of questioning whether he was in a dream presupposed that there was a questioner doing the questioning. This led him to remark, “Cogito, ergo sum,” or “I think, therefore I am.” By doubting all his senses, he was led to the conviction that he could not doubt that he was doubting in the first place; for otherwise, he would not be able to doubt: He would have to exist first before he could be deluded.

After hearing this, my friends seemed pretty convinced, and pondered it a bit. Out of nowhere, one of them said, “Well, babies aren’t self-conscious.” A pause. “So do babies exist?” Taken aback, unprepared for such a response, I readily dismissed the notion, called it absurd, and tried to think of an answer. We began debating whether or not babies knew they existed, or whether they could even think about thinking. Of course, the question itself—do babies exist since they are not self-conscious?—is actually grounded in a misunderstanding: Descartes was not trying to prove his existence; rather, he was trying to prove he had certainty, something undoubtedly true. But for the sake of argument, we entertained the idea. Common sense shouts till it is red in the face, “Obviously, yes, babies exist! Only a madman would doubt their existence. I mean, we see them right in front of us—they’re right there, they exist!”[1]

This prompts the question: If we are conscious of a baby existing, yet they themselves are not conscious of existing, do they exist? Babies are fascinating creatures. They are copies of us, miniature humans who must learn to cope with and understand the world in which they live through trial and error. Seeing as they are capable of such amazing cognitive feats as grasping cause-and-effect and acquiring language, investigating their conscious abilities sounded intriguing. A delve into developmental psychology, the study of how humans develop through life, yields interesting insights into this psycho-philosophical problem.

Jean Piaget was a developmental psychologist who studied the development of children throughout the 20th century. Today, his influence is still felt in psychological literature and continues to impact thought regarding childhood development. For years he observed, tested, and took notes on children, from birth to early adulthood, using the data to devise his famous theory of cognitive development, which takes place in four stages: Sensorimotor, preoperational, concrete operational, and formal operational. The first stage, sensorimotor, lasts from birth to the age of two. During this period, the baby’s life is geared toward adjusting to the world. Babies are “thrown” into this world, to use a Heideggerian term. They are born immediately into life amidst chaos, with all kinds of new stimuli to which to react. Confused, unable to make sense of things, exposed to strange sights and sounds, the baby cries and thrashes about, trying to find some sense of security. It is bombarded all at once by sensations and experiences. It is disoriented. This is a brave new world, and it is full of data that needs to be interpreted and sorted out in the baby’s mind. In order to navigate the world, the newborn uses its motor skills and physical senses to experience things. The baby interacts with its environment, including people, grabbing with its hands, sucking with its mouth, hearing with its ears, and smelling with its nose. Imagine being in a cave for years, devoid of all sensory information, when, one day, you are let out and, having forgotten what it was like to experience the world, you are overcome by the magnitude of the environment, so you try to relearn as much as possible, greedily taking in everything that you can—well, being in the womb is kind of like being in a cave for the baby, meaning it is doing the same thing: It is getting a grasp of reality by engaging its senses in any way that it possibly can. The baby is an empiricist who delights in its senses as though life were a buffet. Oh, there is something I can touch! Ah, that smells nice, let me smell it! While it cannot yet fully interpret these sensations, the infant uses its senses to obtain a primitive understanding. Babies are actively mapping out the world according to their perceptions, simple though they are. According to Piaget, babies eventually learn to pair coordination, knowledge of their body and its movement, with determination. Once they are able to use their body parts effectively, in a way that is conducive to their survival, they develop a sense of where these limbs are in relation to each other, called proprioception. This allows them to apply determination to this newly acquired coordination. Babies can now direct themselves with autonomy and do something. However, this is a simple form of determination; it is not as though the baby has free will and can decide or choose to do this or that. While the baby can move toward a particular object, it cannot decide mentally, “I am going to crawl over to that thing”; it just does it out of pure, unthinking volition.

At three months, a baby can sense emotions and, amazingly, recreate them. Seeing their parents sad, an infant can react with a fitting response, such as being sad themselves. By being able to tell what someone is feeling, the baby can imitate them, showing that the baby has at least a simple recognition of empathy. Around this time, too, the baby actively listens to its social scene, picking up on spoken language. It is incredible (in both senses of the word) because it is now that the infant unobtrusively and quietly internalizes and processes everything it hears like a sponge, learning speech cues, such as when to talk and when to pause; the rhythms of speech, including cadence; vocabulary; and nonverbal communication, which makes up the majority of social interaction. Here is a tiny little human just crawling around the house on all fours who cries and eats and goes to the bathroom, all the while actually learning how to speak—who could possibly fathom what is going on in that small, undeveloped mind! A little earlier, usually around two months, the baby already shows signs of early speech when it babbles, uttering nonsense sounds in an attempt to imitate speech it cannot yet reproduce entirely. Four to five months into development, the baby can understand itself as a self-to-Others, or a self-as-viewed-by-Others: I have my own image of myself, but I understand that I am perceived by other people, who form their own images of me. One study shows that, from four to nine months, the infant has changing patterns of involvement in play. In the earliest stage, the baby will, if it is approached by the parent, play peekaboo. Because they have not yet learned that things exist independently of them in time, babies think that the parent disappears when their face is covered, and are surprised to find they are still there. A few months later, at nine months, the baby is able to take on the role of the initiator who wants to play peekaboo, instead of the responder who will play peekaboo if asked. This shows that babies learn to combine determination with intention (Bruner, 1983).

Just three months later, when the infant is officially one year old, it achieves a self-image. Looking in the mirror, it can recognize itself and form an early identity. Like chimps, babies can now respond to themselves as an actual self in the mirror, noticing, for example, a mark on their forehead and realizing that it is not on the mirror, but on themselves. Between 14 and 18 months, an infant is able to differentiate an Other’s intentions from their own (Repacholi & Gopnik, 1997). Children like to think in terms of their own desires. If a kid wants a cookie, they act on their desire. Thus, when they are 14 to 18 months old, they can distinguish Others’ desires as different from their own. Within this period, the baby can also know that it is being imitated by someone else. If a parent mimics something the infant is doing, the infant knows its own behavior is being shown to it. Finally, the 18-month marker designates when the baby begins its sentences with the first-person “I.” With a sense of self, the infant is able to roleplay, in which it takes on new identities, or roles, and is able to play “as them.” Second-order emotions, also known as self-conscious emotions, like shame and embarrassment, arise in the child at this time, too. Children possess some semblance of self-consciousness.

After the sensorimotor stage is what Piaget called the preoperational stage, which takes place between the ages of two and seven. It is at this stage that the infant constructs their own world. Through the process of assimilation, the toddler creates mental schemas, mini blueprints conceived in their minds, frameworks by which reality is processed then made sense of, allowing them to structure reality in a way that is useful to them. When a new experience is undergone, it is made to fit the pre-existing schema. Because these schemas are very simple and basic, they are obviously inaccurate, although that is not the point of them; they are not supposed to be innate categories of the mind, as Kant would have thought of them, but early hypotheses made from the little experience gathered by a child. One time, my cousins came over to play video games; we were playing a level in Lego Indiana Jones where we had to drive around on a motorcycle chasing cars. My cousin’s little brother pointed excitedly at the cars zooming down the streets, exclaiming, “Doo-doo!” I hopped on a motorcycle and chased after them, only for him to look at the motorcycle and, again, shout, “Doo-doo!” My cousin and I tried to tell him that a car and a motorcycle were two separate things. In his mind, he saw a moving vehicle with wheels, so he created a mental schema. Anything that fit under that description—a moving vehicle with wheels—would be considered by him to be a “Doo-doo”—in this case, both the car and the motorcycle, despite their being different things. This illustrates that schemas are not always accurate; they are for classifying and categorizing things. Of course, this leads to a new process observed by Piaget: Accommodation. We come to an age where we discover that our schemas are inadequate because they do not fully represent reality. As such, we have a kind of “schematic crisis,” as we are met with an anomaly, something which sticks out, something which does not fit with our prevailing theory. Hence, we must remodel our thinking. Consequently, we are forced to find a way to reconcile the already-existing category with this new piece of data, either by broadening the schema, or by creating a new one altogether. Babies thus learn to make more accurate classifications as they learn new things and create new schemas with which to interpret reality. Once these schemas are built up, the infant is able to engage in organization, through which they order their schemas. Some are judged to be more inclusive or exclusive than others, and so are coordinated based thereon. In the case of my cousin’s little brother, he would have to organize his schemas like this: Broadly, there are vehicles, under which we might find cars and motorcycles as types, which can themselves be expanded upon, for each comes in different kinds. This way, reality is structured in levels, or hierarchies, not necessarily of importance, but of generality and specificity. Organization is a synthesis of assimilation and accommodation. All this schematizing segues into the next point, namely that in making sense of the world, we give sense to it.

The preoperational period is characterized by symbolic representation in toddlers. In philosophy, the study of meaning and symbolism is called semiotics, and it is, interestingly, closely related to what babies do. Life is separated into two concepts: Signs and symbols. Signs are fixed things—concrete objects. Symbols are relative meanings—abstract values—usually assigned to signs. While every car I see is always a car, its meaning is not always the same and is liable to change. For some, it can represent, can be symbolic of, freedom, if you are a teen just getting your license; transportation, if it is how you get around; or dread, if you hate road trips or have to wait hours in your commute. The point is, everyone sees the same sign, but for each person the symbol carries a different meaning. Preoperational toddlers are able, then, to understand objects not just in their literal, concrete sense, but as standing for something, as abstract and meaningful. Babies are not passive, as I have said, but on the contrary very much, if not entirely, active. By interacting with the world around them, they experiment, learn, and conceptualize. Around three years, the baby is fully capable of speaking, feeling, having motives, and knowing the relation of cause and effect.

One of the consequences of Descartes’ Cogito is its resulting solipsism: The thinker, the Cogito, is only able to prove his own existence, whereas Others’ existences remain uncertain. Is this a requisite for existence? Is self-certainty a necessity? If so, the case is a difficult one for babies. Controversially, Piaget proposed that babies are egocentric; his theory is widely contested today in psychological circles. The meaning of egocentrism can be guessed by looking carefully at the word’s roots: It means self-centered; however, it is not self-centeredness in the sense of being prideful, selfish, and concerned with oneself, no—it is more closely related to anthropocentric, in the sense that the self is the central point from which all other points are judged or perceived. For this reason, Piaget suggested that infants can only see things through their own perspectives, not through Others’. You may be wondering why I sometimes have been capitalizing “Other.” Philosophically, the problem of egocentrism is closely related to solipsism, resulting in what is called “the problem of Other Minds,” which is the attempt to prove the existence of selves outside of our own, of whose existence we are uncertain, so they are called “Others,” giving them a kind of external, foreign connotation. I digress. Babies, so thought Piaget, are unable to take Others’ perspectives, so they must rely on their own. To do this, they reason from self to Other. Infants’ egocentric tendencies, when combined with their inability to acknowledge objects as existing permanently outside of them, lead to a subject-object dualism, a subjective idealism, in which the self is distinguished and utterly separated from the physical world. It becomes “my” viewpoint, or “your” viewpoint, subjective, relative. As long as I look at an object, a toddler thinks, it exists. And yet, the toddler also has a social self, which it develops through its interactions with other children. Many psychologists have claimed that, by playing, children are able to acknowledge the existence of not just Others, but Others’ emotions. It is evident in roleplaying, where children pretend they are someone they are not, and act accordingly, placing themselves within a new self, which they adopt as their own, and interact with the other children, whom they see as someone else, whom they acknowledge and actively engage with, responding to how they are treated, and sensing emotions.

A dominant, popular theory that attempts to refute Piaget’s egocentrism is “Theory of Mind” ([ToM] Wellman, 1990). Wellman found that babies develop an awareness of Others at the age of three, when they operate on belief-desire reasoning. Motivation for kids consists of a belief, what they know, and a desire, what they want. A child might be motivated to have a cookie because they know where the cookie jar is, and they are hungry for one. Using this kind of reasoning, the kid attributes their own intentions to another. Looking at his playmate, the toddler assumes, “Well, I want a cookie, and I know where they are, so this kid, like me, because he has the same beliefs and desires as I, must want a cookie, too.” Is it faulty and inaccurate? Wildly. Does it make sense, realistically? Yes. The Theory of Mind is a primitive form of empathy, a kind of empathetic stepping stone. It is simple and selfish, because it assumes that other children have the same beliefs and desires. One often sees this in children trying to console one another: An infant sees another crying, and, because he takes comfort in eating ice cream, believes the other will take comfort in it, too. Critics like Vasudevi Reddy fault Theory of Mind because it is too detached from actual interaction and ends up attributing one’s own self-certitude to another, resulting in what she calls a “Neo-Cartesianism” of sorts. It promotes solipsistic thinking by denying the existence of an independent thinker with emotions, instead attributing to them one’s own ideas, thereby increasing a toddler’s dualistic thinking.

According to Reddy, a baby’s communication with Others already presupposes intersubjectivity, or being involved with people on a personal level. Babies are self-aware to an extent at birth because, the argument goes, the baby is able to distinguish itself from the world around it. To act is to know both the self and the object. It is similar to Fichte’s philosophy, in which the Ego becomes aware of itself by recognizing everything that is not the Ego, creating the Non-ego; in other words, it is through the Non-ego—the world—that the Ego knows itself. The world, or Non-ego, is created purely with the intent of being a moral playground for the Ego. Following from this is the idea that the baby, coming into contact with the world, immediately knows it as not-itself, and so uses it as its playground, activating all its senses to learn about reality. If we could not tell the environment apart from ourselves, and we thought ourselves a part of it, how could we act independently of it, with our senses? This is an argument against Freud and Piaget, who both said newborns cannot tell themselves apart from the world. As a solution to egocentrism, psychologists found that parents play an important role early on. Parents should teach their children early on to differentiate self from Other. Too much similarity between the baby and parent means more egocentrism later in life, which is harder to unlearn. Reddy’s solution is to avoid Cartesianism and Theory of Mind and instead pursue a second-person perspective, one between I-and-Thou, You-and-I. This way, there is direct access to another’s intentions. Babies, through play, function on this second-person level by directly interacting with their peers. For Piaget, babies achieve consciousness when symbolism and schematism come together as one to create meaningful representations. An understanding of how things fit together and how they function is what Piaget considers consciousness. On the other hand, metacognition, the ability to think about thinking, does not arise until the age of 11, Piaget’s formal operational stage.

The following are milestones in the evolution of a baby’s cognitive abilities, summarized in eight chronological key events:

  1. Coordination
  2. Self vs. non-self
  3. Know special/loved people
  4. Know + respond to name
  5. Self-image
  6. Pointing to objects (symbol)
  7. Use “I” in sentences
  8. Know Other Minds

So, to answer my friend: The question of whether or not babies exist is actually not so straightforward as one might think. It could be argued that babies exist when they are one, when they establish their self-image for the first time, and thus are, in one way or another, conscious of themselves. Or it may be that babies exist once they turn 18 months old, when they can use “I,” roleplay, and experience reflexive emotions. Here, babies are aware of themselves as actors, are willing to play with others and take new perspectives, and are able to perceive how they are themselves perceived by others. Yet then again, it is possible that it is only when metacognition is possible, when we are able to doubt that we are doubting, when we are able to posit a hypothetical Evil Demon trying to deceive us all, that we exist—in which case… babies do not exist at all! Do only children, preadolescents, and those older exist? Maybe when we are born, we do not exist, we are in a state of utter nonexistence and non-being, and it is only when we reach 11 that—POOF!—we magically pop into existence.


[1] This is obviously a satirical question. Babies do exist. It is more of a thought experiment, or armchair-philosopher problem. I find the comment to be so outrageous that it is funny, and I thought it made for a perfect occasion to research whether babies are conscious.


For further reading: How Infants Know Minds by Vasudevi Reddy (2008)
Developmental Psychology, 8th ed., by David R. Shaffer (2010)
The Secret Language of the Mind by David Cohen (1996)
The Science of the Mind by Owen J. Flanagan, Jr. (1984)


Happiness as Eudæmonia

Happiness, according to psychologist James R. Averill, a Eudaemonist, is a means-to-an-end, contrary to what his predecessor Aristotle thought. After taking into account both survey reports and behavioral observations, he devised a table of happiness (see below). It is a 2×2 table, one axis being “Activation,” the other “Objectivity.” The four types of happiness he identified were joy, equanimity, eudaemonia, and contentment. He narrowed it down to the objective, high-activation form known as “eudaemonia,” a term for overall well-being that finds its roots in Aristotle’s Nicomachean Ethics. Aristotle wrote that eudaemonia was achieved through activity, as when we are so engaged in doing something that we forget we are doing it and lose a sense of time—time flies when you’re having fun. As such, happiness for Aristotle is not a typical emotion, in that it occurs over periods of time. You cannot always be in a state of eudaemonia. Rather, it can be actively pursued when you immerse yourself in meaningful work. To be happy is not to be happy about or for anything, because it is essentially an object-less emotion, a pure feeling. Eudaemonia is distinguished from equanimity by the fact that the latter is the absence of conflict, the former the resolution thereof. Equanimity has been valued by philosophers as a state of total inner peace; on the other hand, eudaemonia is the result of achieving a goal, which necessarily entails conflict, viz. desire vs. intention. When you are confident in your abilities and set realistic goals, when you are able to complete those goals, having overcome conflict, you can achieve happiness. Too many short-term goals means not experiencing enough of what life has to offer, while too many long-term goals means not being accomplished or confident in yourself. The measure of happiness, then, is relative, not absolute, and differs from person to person. What remains absolute, however, is that this sense of achievement can be had privately, by yourself, and publicly, when it is done for your community, family, or close friends. Inherent to eudaemonia, Averill asserts, is purpose: Behind happiness is direction, intention, and devotion. This led him to claim that “Pleasure without purpose is no prescription for happiness,” meaning you should not resort to hedonism to be happy, but must seek pleasure in meaningful activities into which you can immerse yourself.

Averill’s Table of Happiness:

                  Subjective     Objective
High activation:  Joy            Eudaemonia
Low activation:   Contentment    Equanimity


For further reading: Handbook of Emotions, 2nd ed., by Michael Lewis (2000)

“Talking To” vs. “Talking With”

We spend too much time talking to one another—I think it is about time we start talking with one another.

We might add to this talking about another, by which we mean talk that focuses on another person, often in a derogatory way. In the case of talking about, we refer to gossip, which is malicious, narrow, and crude. Unfortunately, it occupies most of our speech. Over half of conversations, I would argue, concern others at one point or another, in which they are discussed behind their backs, without their knowledge, the unwitting victims of vitriolic verbal venom. Psychologists say this arises from two motives: First, gossip is engaged in to learn about threats, about who is dominant, as this was important in Neolithic times; second, to compensate for one’s own self-esteem, or lack thereof. Picture nothing worse than two people scheming together in private: You are the subject of their ridicule and criticism, and you have no knowledge of it as they attack and slander your name and reputation, so that it spreads into rumors, which are accepted prima facie, then used against you—infectious, like a virus, a deadly one.

When we talk about the former, we mean it in a sense with which we are more comfortable; in fact, it is used colloquially by almost everyone: “I was talking to my boss the other day,” “My friends and I talked to each other on the phone,” or “I love talking to people.” The word “to” is a preposition: it attaches to a verb and directs it toward an object. Already, we see a twofold implication. Plainly, the word “toward,” when used in the context of persons, is alarming and carries with it negative connotations. While we can be gracious toward another person, it is rare; we usually hear angry, hateful, prejudiced, etc. toward another person. In other words, the word “toward” means to direct something at someone, like a projectile—which words are. Therefore, we hurl words toward another, which is precisely what “talking to” means. This in itself implies one-way communication. To better illustrate what I am describing, replace to with at. “I was talking at my boss the other day.” While they are different words, the meaning is not changed; rather, the word “to,” seemingly less aggressive and affrontive, is seen as more acceptable and respectful, despite masking a darker message. Similarly, we say we “give things to people,” as though they are the recipient. Taken this way, “talking to” means delivering words to people. But a gift given is not reciprocated. A delivery is sent to one destination to be received, meaning the interlocutor is the receptacle for the speaker’s words—they are reduced to something which receives, as though they were lifeless. Just as a mailbox is designated for receiving mail, so the person who is being talked to is designated as “something” to receive words. This leads to the second implication of the preposition “to.” Because “to” takes an object, the other person becomes an object—that is, they are objectified, made into an object. The person becomes a mailbox, a mere thing, an object whose only reason for existence is to house mail, to be that which receives words; the person is something into which words are deposited and then left. When we endure something, we “take” it. We take the abuse, take the lecture, take the pain; when we talk to people, we expect them to take our words.

Thus, when we talk to one another, we are not having a conversation. A conversation requires that two people be involved. It involves an exchange of words—not a depositing of them, nor a receiving of them. When we reduce each other to receptacles, things to store our baggage, we leave no room for exchange. Nobody puts mail into a mailbox and expects it to come back to them; so when you talk to someone, you hurl words toward them and expect them to receive them, but not return them. Talking to is hurling-toward-to-deposit. Everyone knows, however, that if you want a response, you do not just throw your words and expect them to stay there. Accordingly, we must learn to talk with one another, rather than to one another. To talk with is to engage in conversation, in two-sided talk, in which words are passed from one to another. Not hurled or thrown but passed, granted, welcomed, exchanged. Whereas one deposits money in the bank to keep it there, one exchanges money at the bank to get its equal value. Whoever exchanges a 10-dollar bill for ten one-dollar bills gets the same value back as what they gave. Conversation is an exchange. We converse with. From this we conclude that talking with is exchanging-for-equal-value, by which we mean that what we put in, we get back. This is conversation. This is discussion. This is healthy communication, where both parties are heard, neither prioritized ahead of the other, and where neither is objectified, reduced to an object, but heard out. Everyone’s opinion is heard in talking with, whereas only one is in talking to. I think it is about time we stop talking to one another and start talking with one another.

Such will be a good start to creating a better future.



Technology and Social Media: A Polemic


Much gratitude is to be given to our devices—those glorious, wonderful tools at our disposal, which grant us capabilities whereof man centuries ago could only have dreamed, the culmination of years of technology, all combined in a single gadget, be it the size of your lap or hand. What a blessing they are, to be able to connect us to those around the world, to give us access to a preponderance of knowledge, and to give longevity to our lives, allowing us to create narratives and tell stories; and yet, how much of a curse they are, those mechanical parasites that latch onto their hosts and deprive them of their vitality, much as a tick does. That phones and computers are indispensable, and further, that social media acts as a necessary sphere that combines the private and the public, creating the cybersphere—such is incontrovertible, although they are abused to such an extent that these advantages have been corrupted and have lost their supremacy in the human condition.


Technology is ubiquitous, inescapable, and hardwired into the 21st century, so that it is a priori, given, a simple fact of being whose facticity is such that it is foreign to older generations, who generally disdain it, as opposed to today’s youths, who have been, as Heidegger said, thrown into this world, this technologically dominated world, wherein pocket-sized devices—growing bigger by the year—are everywhere, the defining feature of the age, the zeitgeist, that indomitable force that pervades society, not just concretely, but abstractly, not just descriptive but normative. In being-in-the-world, we Millennials and we of Generation Z take technology as it is, and accept it as such. To us, technology is present. It is present insofar as it is both at hand and here, by which I mean it is pervasive, not just in terms of location but in terms of its presence. A fellow student once observed that we youths are like fish born in the water, whereas older generations are humans born on land: Born into our circumstances, as fish, we are accustomed to the water, while the humans, accustomed to the land, look upon us, upon the ocean, and think us strange, pondering, “How can they live like that?”


As per the law of inertia, things tend to persist in their given states. As such, people, like objects, like to resist change. The status quo is a hard thing to change, especially when it was conceived before oneself was. To tell a fellow fish, “We ought to live on the land as our fathers did before us”—what an outlandish remark! Verily, one is likely to be disinclined to change one’s perspective, but will rather cling to it with tenacity, to the extent that it develops into a complacency, a terrible stubbornness that entrenches them further within their own deep-rooted ways. This individual is a tough one to change indeed. What is the case, we say, is what ought to be, and so it is the general principle whereupon we rest our case, and anyone who says otherwise is either wrong or ignorant. Accordingly, following what has been said, the youth of today, the future of humanity, accepts technology as its own unquestioningly. As per the law of inertia, things tend to persist in their given states—that is, until an unbalanced force acts upon them.


What results from deeply held convictions is dogmatism. A theme central to all users of devices, I find, is guilt; a discussion among classmates has led me to believe that this emotion, deeply personal, bitingly venomous, self-inflicted, and acerbic, is a product of our technological addictions. Addiction has the awesome power of distorting one’s acumen, a power comparable to that of drugs, inasmuch as it compromises the mind’s faculty of judgment, preventing it from distilling events, from correctly processing experiences, and thereby corrupting our better senses. The teen who is stopped at dinner for being on their phone while eating with their family, or the student who claims to be doing homework, when, in reality, they are playing a game or watching a video—what have they in common? The vanity of a guilty conscience, which would rather be defensive than apologetic. The man of guilt is by nature disposed to remorse, and thus he is naturally apologetic in order to right his wrong; yet today, children are by nature indisposed thereto, and are conversely defensive, as though they are the ones who have been wronged—yes, we youths take great umbrage at being called out, and instead of feeling remorse, instead of desiring to absolve from our conscience our intrinsic guilt, feel that we have nothing from which to absolve ourselves, imputing the disrespect to those who called us out.


Alas, what backward logic!—think how backward it would be if the thief were to call out the poor inhabitant who caught him. Technology has led to moral bankruptcy. A transvaluation of morals in this case, to use Nietzsche’s terminology, is to our detriment, I would think. Guilt is a reactionary emotion: It is a reaction formed ex post facto, with the intent of further action. To be guilty is to want to justify oneself, for guilt is by definition self-defeating; guilt seeks to rectify itself; guilt never wants to remain guilty, no; it wants to become something else. But technology has reshaped guilt, turning it into an intransitive feeling, often giving way, if at all, to condemnation, seeking not to vindicate itself but to remonstrate, recriminate, retribute, repugn, and retaliate. Through technology, guilt has gone from being passive and reactive to active and proactive, a negative emotion with the goal of worsening things, not placating them. Digital culture has perpetuated this; now, being guilty and remaining so is seen as normal and valuable. Guilt is not something to be addressed anymore. Guilt is to be kept as long as possible. But guilt, as I said, is naturally self-rectifying, so without an outlet, it must be displaced—in this case, into resentment, resentment directed toward the person who made us feel this way.


—You disrupt me from my device? Shame on you!—It is no good, say you? I ought get off it? Nay, you ought get off me!—You are foolish to believe I am doing something less important than what we are doing now, together, to think it is I who is in the wrong, and consequently, to expect me to thusly put it away—You are grossly out of line—You know naught of what I am doing, you sanctimonious tyrant!—


When asked whether they managed their time on devices, some students replied, quite unsurprisingly, that they did not; notwithstanding, this serves as a frightful example of the extent to which our devices play a role in our lives. (Sadly, all but one student claimed they actually managed their time.) They were then asked some of the reasons they had social media, to which they replied: To get insights into others’ lives, to de-stress and clear their minds after studying, and to talk with friends. A follow-up question asked if using social media made them happy or sad, the answer to which was mixed: Some said it made them happier, some said it made them sadder. An absurd statement was made by one of the interviewees who, when asked how they managed their time, said they checked their social media at random intervals throughout studying in order to “clear their mind off of things” because their brains, understandably, were tired; another stated they measured their usage by the number of video game matches played, which, once it was met, signaled them to move on to something else—not something physical, but some other virtual activity, such as checking their social media account. I need not point out the hypocrisy herein.


I take issue with both statements combined, for they complement each other and reveal a sad, distasteful pattern in today’s culture which I shall presently discuss. Common to all students interviewed was the repeated, woebegone usage of the dreaded word “should”:
—“I should try to be more present”—
—“I should put my phone down and be with my friends”—
—“I should probably manage my time more”—


Lo! for it is one thing to be obliged, another to want. Hidden beneath each of these admissions is an acknowledgment of one’s wrongdoing—in a word, guilt. Guilt is inherent in “shoulds” because they represent the justified course of action, the one not taken: One should have done this, rather than that. Consequently, the repetition of “should” is vain, a mere placeholder for the repressed guilt, a means of getting rid of some of the weight on one’s conscience; therefore, it, too, the conditional, is as frustrated as the guilt harbored therein.


Another thing with which I take issue is when the two students talked about their means of time management. The first said they liked to play games on their computer, and they would take breaks intermittently by going elsewhere, either to their social media or to YouTube to watch videos. No less illogical, the other said they would take breaks by checking their social media, as they had just been concentrating hard. How silly it would be for the drug addict to heal himself with the very thing which plagues him! No rehabilitator assuages their circle with alcohol; common sense dictates that stopping a problem with that which is the problem in the first place is nonsense! Such is the case with the culture of today, whose drugs are their devices. In the first place, how exactly does stopping a game and checking some other website constitute a “break”? There is no breach of connection between user and device, so it is not in any sense a “break,” but a mere switch from one thing to the next, which is hardly commendable, but foolish forasmuch as it encourages further usage, not less; as one defines the one in relation to the next, it follows that it is a cycle, not a regimen, for there is no real resting period, only transition. Real time management would consist of playing a few games, then deciding to get off the computer, get a snack, study, or read; going from one device to another is not management at all. Similarly, regarding the other scenario, studying on one’s computer and taking a break by checking one’s media is no more effective. One is studying for physics, and after reading several long paragraphs, sets upon learning the vocabulary, committing to memory the jargon, then solving a few problems, but one is thus only halfway through: What now? Tired, drained, yet also proud of what has been accomplished thus far, one decides to check one’s social media—only for 30 minutes, of course: just enough time to forget everything, relax, and get ready to study again—this is not the essence of management; nay, it is the antithesis thereof! No state of mind could possibly think this reasonable. If one is tired of studying, which is justifiable and respectable, then one ought to (not should!) take a real break and really manage one’s time! Social media is indeed a distraction, albeit of a terrible kind, and not the one we ought to be seeking. Checking a friend’s or a stranger’s profile and looking through their photos, yearning for an escape, hoping for better circumstances—this is not calming, nor is it productive. A good break, good time management, is closing one’s computer and doing something productive. Social media serves only to irritate the brain further after exhaustion and is not healthy; instead, healthy and productive tasks, whose benefits have been proven, ought to be taken up, such as reading, taking a walk, or exercising, among other things: A simple search will show that any of the aforementioned methods is extremely effective after intense studying, yielding better memory, better focus, and better overall well-being, not to mention the subconscious aspect, by which recently learned information is better processed if put in the back of the mind during some other activity, such as the latter two, which are both physical, bringing with them both physiological and psychological advantages. Conclusively, time management consists not in transitioning between devices, but in transitioning between mind- and body-states.


The question arises: Why is spending too much time on devices a problem in the world? Wherefore, asks the skeptic, is shutting oneself off from the world and retreating into cyberspace, where there are infinite possibilities, a “bad” thing? Do we really need face-to-face relationships or wisdom or ambitions when we can scroll through our media without interference, getting a window into what is otherwise unattainable? Unfortunately, as with many philosophical problems, including the simulation theory, solipsism, and the mind-body problem, no matter what is argued, the skeptic can always refute it. While I or anyone could give an impassioned speech in defense of life and about what it means to be human, it may never be enough to convince the skeptic that there is any worth in real-world experiences. It is true that one could easily eschew worldly intercourse and live a successful life on their device, establishing their own online business, finding that special person online and being in love long distance—what need is there for the real world, for the affairs of everyday men? Philosopher Robert Nozick asks us to consider the Experience Machine: Given the choice, we can either hook ourselves up to a machine that simulates a perfect, ideal, desirable world wherein all our dreams come true, and everything we want, we get, like becoming whatever we always wanted to become, marrying whomever we have always wanted to marry, yet which is artificial, and, again, simulated; or remain in the real world, where there are inevitable strifes and struggles, but also triumphs, and where we experience pleasure and pain, happiness and sadness—but all real, all authentic. There is, of course, nothing stopping one from choosing the machine; and the skeptic will still not be swayed, but I think the sanctity of humanity, that which constitutes our humanity, ought never be violated.


What, then, is the greatest inhibition to a healthy, productive digital citizenship? What can we do to improve things? The way I see it, the answer is in the how, not the what. Schools can continue to hold events where they warn students of the dangers of technology, advise them on time management, and educate them about proper usage of technology and online presence; but while these can continue ad infinitum, the one thing that will never change is our—the students’—want to change. Teachers, psychologists, and parents can keep teaching, publishing, and lecturing more and more convincingly and authoritatively, but unless the want to change is instilled in us, I am afeard no progress will be made. Today’s generation will continue to dig itself deeper into the technological world. They say the first step in overcoming a bad habit or addiction is to admit you have a problem. As I said earlier, technology just is for us youths, and it always will be henceforth, and there will not be a time when there is not technology, meaning it is seen as a given, something that is essential, something humans have always needed and will continue to need. Technology is a tool, not a plaything. Technology is a utility, not a distraction. Social media is corrupting, not clarifying, nor essential. We have been raised in the 21st century such that we accept technology as a fact, and facts cannot be disproven, so they will remain, planted, their roots reaching deeper into the soil, into the human psyche. Collectively, we have agreed technology is good, but this is “technology” in its broadest sense, thereby clouding our view of it. We believe our phones and computers are indispensable, that were we to live without them, we would rather die. To be without WiFi—it is comparable to anxiety, an object-less yearning, an emptiness in our souls. How dependent we have become, we “independent” beings! This is the pinnacle of humanity, and it is still rising! Ortega y Gasset, in the style of Nietzsche, proclaimed, “I see the flood-tide of nihilism rising!”¹ We must recognize technology as a problem before we can reform it and ourselves. A lyric from a song goes, “Your possessions will possess you.” Our devices, having become a part of our everyday lives to the extent that we bring them wheresoever we go, have become more controlling of our lives than we are of ourselves, which is a saddening prospect. We must check every update, every message, every notification we receive, lest we miss out on anything! We must miss out on those who care about us, who are right in front of us, in order to not miss out on that brand new, for-a-limited-time sale! But as long as we keep buying into these notifications, for so long as we refuse to acknowledge our addictions and the problem before us, we will continue to miss out on life and waste moments of productivity, even if only for a few minutes at a time, which, when added up at the end of our lives, will turn out to be days, days we missed out on. As my teacher likes to say, “Discipline equals freedom.” To wrest ourselves from our computers or phones, we must first discipline ourselves to do so; and to discipline ourselves, we must first acknowledge our problem, see it as one, and want to change. As per the law of the vis viva (and not the vis inertiæ), things tend to persist in their given states until their internal force wills otherwise.
We bodies animated with the vis viva, we have the determination and volition to will ourselves, to counter the inertia of being-in-the-world, of being-online, whence we can liberate ourselves, and awaken, so to speak. We, addicts, have no autonomy with our devices—we are slaves to them. Until we break out of our complacency, until we recognize our masters and affirm our self-consciousness thence, and until we take a stand and break from our heteronomy, we will remain prisoners, automata, machines under machines. We must gain our freedom ourselves. But we cannot free ourselves if we do not want to be freed, if we want to remain slaves, if we want to remain in shackles, if we want to plug into the machine. A slave who disdains freedom even when freed remains a slave. Consequently, we cannot be told to stop spending so much time on our devices, to pay attention to whom or what is in front of us; we must want to ourselves. Yet no matter how many times or by whom they are told, today’s youth will never realize it unless they do so themselves. They must make the decision for themselves, which, again, I must stress, must be of their own volition. Until then, it is merely a velleity, a desire to change, but a desire in-itself—nothing more, a wish with no intent to act. It is one thing to say we should spend less time, another that we ought to.


¹Ortega y Gasset, The Revolt of the Masses, p. 54


Harper Lee’s Guide to Empathy

In the 21st century, surrounded by technologies that distance us, by worldviews that divide us, and by identities that define us, we do not see a lot of empathy among people. While we see friends and family every day, we never really see them, nor do we acknowledge that they, too, are real people, people who have opinions like us, feelings like us, and perspectives like us. Harper Lee is the author of To Kill a Mockingbird, a novel that itself has many perspectives, many of which are in conflict with each other. Set in the 1930s South, the book takes place during the Great Depression, when many lost their jobs, and a time of racism, when laws were passed that denied black people their rights. The protagonist is a girl named Scout who lives in the fictional town of Maycomb with her brother Jem and father Atticus, who is an empathetic lawyer. Through interactions with her peers, Scout learns to take others’ perspectives and walk in their shoes. In To Kill a Mockingbird, Harper Lee teaches that, in order to take another’s perspective and practice empathy, one must understand someone else’s thoughts or background, try to relate to them, then become aware of how the consequences of one’s actions affect them.

Before one can truly take another’s perspective, Lee argues, one must first seek to understand how someone thinks and where they come from. After hearing about Mr. Cunningham’s legal entailment, Scout asks if he will ever pay Atticus back. Atticus replies that he will, just not in money. She asks, “‘Why does he pay you like that [with food]?’ ‘Because that’s the only way he can pay me. He has no money… The Cunninghams are country folk, farmers, and the crash hit them the hardest…’ As the Cunninghams had no money to pay a lawyer, they simply paid us with what they had” (Lee 27-8). Scout is confused why the Cunninghams pay “like that” because it is not the conventional way of paying debts. Money is always used in business transactions, yet Atticus allows them to pay through other means. Atticus acknowledges that the Cunninghams are having economic problems. He empathizes with Mr. Cunningham by drawing on his background knowledge, namely that, because he is a farmer who gets his money from agriculture, he does not have the means to pay. The Great Depression left many poor and without jobs, so Atticus is easier on Mr. Cunningham; he knows it would be unfair to make him pay when he hardly has any money. Accordingly, Atticus accepts that the Cunninghams are trying their best, and he compromises with them. He willingly accepts anything Mr. Cunningham will give him, since he knows it will come from the heart. For this reason, Atticus can empathize by thinking outside normal conventions to accommodate Mr. Cunningham’s situation. Just as Atticus understands the Cunninghams, so Calpurnia empathizes with them when she lectures Scout not to judge them. Jem invites Walter Cunningham from school over to have dinner with him and Scout. Reluctantly, Walter agrees, but once he starts eating, Scout takes issue with his habits, so Calpurnia scolds her: “‘There’s some folks who don’t eat like us… but you ain’t called on to contradict ‘em at the table when they don’t… [A]nd don’t you let me catch you remarkin’ on their ways like you was so high and mighty!’” (Lee 32-3). Because Scout is not used to the way Walter eats, she immediately judges his way as different from her own, thereby patronizing him. Hence, she is not empathizing, because she is not considering his point of view, but is only evaluating her own. Calpurnia states that not everyone eats like Scout does, showing that she, unlike Scout, does not form generalizations; rather, she rationalizes, recognizing that he comes from a different home, a different home with different manners. Since she empathizes with Walter in this way, Calpurnia tells Scout not to “contradict” him, meaning it is rude and unsympathetic not to consider Walter and his background. Furthermore, she warns Scout not to act as though she is “so high and mighty,” especially around others who are less fortunate and who differ from her, such as Walter. By criticizing Walter’s eating and thence abashing him, Scout is being sanctimonious, declaring that her way is better than anyone else’s. Calpurnia gets mad at Scout for this, as it is egocentric; i.e., she is concerned with herself and cannot consider others’ perspectives. Consequently, Calpurnia shows empathy by understanding that people have different perspectives, while Scout does not. Both Atticus and Calpurnia are empathetic because, as shown, they actively try to understand other people and selflessly consider their perspectives.

Once a person’s way of thinking and past is understood, one is able to see oneself in that other and make connections with them. One night, Scout, Jem, and Dill sneak off to the Radley house and are scared away, Jem losing his pants in the process. Jem decides to retrieve his pants, regardless of the danger involved therewith. The next morning, he is moody and quiet, and Scout does not know why. Upon some reflection, she says, “As Atticus had once advised me to do, I tried to climb into Jem’s skin and walk around in it: if I had gone alone to the Radley Place at two in the morning, my funeral would have been held the next afternoon. So I left Jem alone and tried not to bother him” (Lee 77). Scout follows her father’s advice and “climb[s] into Jem’s skin,” symbolizing that she has taken his perspective and seen life therethrough. She asks herself the vital question of what it would be like to be Jem; in doing this, she has visualized herself as Jem, has visualized herself doing what he did, thereby understanding him. The first step in empathizing—understanding—allows her to relate to Jem and put herself in his position: She imagines what it would have been like to risk her own life, how she would have felt doing so. As a result, she examines her emotional reaction and projects it onto Jem, relating to him, feeling as he would feel. Had she not tried to understand Jem’s position, had she not related to him emotionally, she would have never known why Jem was being moody. Jem’s “funeral would have been held the next afternoon,” says Scout, realizing why Jem is upset. If she felt that way herself, then she would not want anyone bothering her, either, seeing as it is a traumatic event. Scout connects to Jem on an emotional level, empathizing with him. Another instance in which Scout shows empathy by relating is when she connects with Mr. Cunningham. Jem and Scout sneak out at night to find Atticus, who is at the county jail keeping watch over his client, Tom Robinson. As they near him, a mob closes in on Atticus and threatens to kill Robinson, so Scout tries to find a way of civilizing them and talks to Walter’s father. Thinking of conversation, she considers, “Atticus had said it was the polite thing to talk to people about what they are interested in, not what you were interested in. Mr. Cunningham displayed no interest in his son, so I tackled his entailment once more in a last-ditch effort to make him feel at home” (Lee 205). In this moment, Scout recalls that it is polite to relate to others and consider their views rather than her own. She hereby distances herself from her egocentrism, instead concerning herself with what someone other than herself wants. Empathizing requires that one cross the gorge of disparity, and Scout bridges this gap between self and other to find that she has things in common with Mr. Cunningham, common things of which she would never have thought prior. Before this connection could occur, Scout had to know his background, of which she learned when talking to Atticus; additionally, she had his son over and learned about him then, giving her something in common with Mr. Cunningham about which to talk. Since Scout knows Walter, she thinks of him as a topic to which the two can both relate, seeing as Walter is close to his father, creating a strong connection.
However, she notes that he “displayed no interest in his son”; thus, she thinks back further, remembers another thing they have in common, then relates to it in an attempt to “make him feel at home.” The phrase “feel at home” denotes acceptance, belonging, and coziness—being warm and welcome—so Scout, in coming up with certain topics that will be of interest to Mr. Cunningham, seeks to make him feel like he is a welcome person, to put herself in his shoes and consider what he would like to talk about, what would make him feel accepted as it would her. Through these moments in the text, Lee shows that empathy is relating to and identifying with another by removing one’s own position and taking theirs.

Empathy is accomplished when one takes another’s perspective in order to know how one’s actions will affect them and to consider how those actions would make them feel. Jem and Scout find out Atticus has been insulted and threatened by Bob Ewell in chapter 23. They are confused as to why their dad did nothing to retaliate, why he just took it. He tells Jem, “[S]ee if you can stand in Bob Ewell’s shoes a minute. I destroyed his last shred of credibility at the trial, if he had any to begin with… [I]f spitting in my face and threatening me saved Mayella Ewell one extra beating, that’s something I’ll gladly take. He had to take it out on somebody and I’d rather it be me than that houseful of children out there’” (Lee 292-3). Atticus directs Jem to “stand in Bob Ewell’s shoes” so that he can understand his perspective, and therefore how Atticus’ actions could have affected him. Knowing Mr. Ewell has many children, finding a common link therein, Atticus can relate to him, imagining how horrible it would be if his own children were beaten. Bob Ewell, upset over the trial, wants to take out his anger, so he displaces it onto Atticus, which Atticus says is better than his displacing it onto his children. Taking the pacifist route, Atticus avoids exacerbating the situation, aware that fighting back would only make things worse, and he steps outside himself to become aware of how his actions will have not just direct effects but indirect effects as well: Angering Bob Ewell would make him want to physically harm Atticus, and it would further encourage him to be more hostile to his children. As such, Atticus takes into account the long-term consequences and empathizes because he is aware of how his actions could avert a disaster. He thinks ahead—to Bob Ewell’s children, to his own children, concluding, “‘I’d rather it be me than that houseful of children.’” A second example of considering the consequences of one’s actions on another takes place when Scout, a couple years later, reflects on how she treated Arthur “Boo” Radley. At the beginning of chapter 26, Scout is thinking about her life and passes the Radley house, of which she and Jem were always scared, and about which they had always heard rumors. She remembers all the times in the past she and her brother and their friend played outside, acting out what happened at the house. Pensively, she ponders, “I sometimes felt a twinge of remorse when passing by the old place [Radley house], at ever having taken part in what must have been sheer torment to Arthur Radley—what reasonable recluse wants children peeping through his shutters, delivering greetings on the end of a fishing-pole, wandering in his collards at night?” (Lee 324). Lee uses the word “remorse” here to conjure up feelings of guilt, regret, and shame, all associated with the way Scout feels about her actions. To say she feels a “twinge of remorse” is to say she feels compunction; that is, she feels she has wronged the Radleys and, looking back, that what she did was wrong. She is contrite because she can stand back and objectively evaluate her deeds, deeds she deems unempathetic, considering they were inconsiderate of Arthur.
Having become aware of the weight of her choices, Scout experiences regret, an important emotional reaction because it signifies empathy: it shows her taking into account how she affected another person, in this case how she negatively impacted Arthur, which itself requires understanding and relating to him. This guilt arises from the realization that her past actions were cruel. Again, Scout puts herself in Arthur’s shoes, imagining what it would reasonably be like to be a “recluse”: Certainly, she affirms, she does not want “children peeping,… delivering greetings,… [or] wandering in [her] collards.” Her thought process is meant to mirror Arthur’s, so Scout is actively relating to and understanding him, ultimately realizing how her conduct affected him. Her scruples finally notify her that, from the perspective of the solitary Arthur, her behavior had a negative effect. Scout’s awareness of the consequences of her actions makes her empathetic, for she has introjected Arthur’s perspective. In conclusion, Atticus and Scout exhibit empathy because they both consider how their comportment affects others.

According to Lee, empathy is put into practice when one takes time to learn about another person, makes a personal connection with them, and considers how one’s actions will affect them. We are social animals by nature, which means we desire close relationships; unfortunately, most of us seldom recognize the importance of understanding those with whom we have a relationship, leading to inconsiderateness, ignorance, and stereotypes. For such social animals, we all too often neglect the feelings and thoughts of others, even though they are of no less priority than ours. Therefore, empathy is a vital, indispensable tool in social interaction that helps us connect with others. As communication is revolutionized, worldviews shaken, and identities changed, it is integral that we learn to better understand others and never forget to empathize, lest we lose our humanity.


To Kill a Mockingbird by Harper Lee (1982)


Attention and Mindfulness (2 of 2)

Summary of part one: Attention is “the process of focusing conscious awareness, providing heightened sensitivity to a limited range of experience requiring more extensive information processing” and requires an external stimulus. Research by Colin Cherry (1953), Donald Broadbent (1958), and Anne Treisman (1964) found that we can attend to only one channel at a time, selected largely by the quality of its sound, while other incoming stimuli are suppressed.

“It is easy to eat without tasting,” says Jon Kabat-Zinn in Coming to Our Senses (p. 118). At first glance, this sentence seems random, out-of-nowhere, and completely absurd. Of course we taste our food when we eat it! However, Kabat-Zinn argues that while we claim to experience and sense things, we do not truly experience them. His message throughout the book is that we have become out of touch with ourselves, with our senses, our bodies, and with the world around us; we fail to experience things for themselves, insofar as we rush through our lives, treating food as “just another meal,” hastily consuming it, not really taking the time to taste each individual flavor. When we eat a hamburger, all we taste is hamburger, not meat, lettuce, tomato, etc., but just hamburger. Our meals are prepared then eaten, but we do not taste them as they should be tasted. Kabat-Zinn states that when attention and intention team up, we are rewarded with connection; from connection, regulation; from regulation, order; and from order, we arrive at ease, contentment. There is an effect called sensory adaptation that we seldom recognize yet is always at work. Constant exposure to an external stimulus builds up our tolerance to it, resulting in the numbing of that sense, to the point that we do not notice it. The fact that others can smell our body odor while we ourselves cannot is an example of this: our odor is constantly present, and the brain, to avoid distraction, adapts to it until we no longer smell our own bodies. The purpose of sensory adaptation is to prevent us from becoming entirely distracted. The world is full of smells, sounds, sights, touches, and tastes, but imagine if we were exposed to all of them at once—this is why we need to adapt to constant stimuli. Of course, were we rapt in studying so that all else was ignored, the sound of a car would still interrupt us, considering its intensity would overstimulate our senses. While sensory adaptation has helped us biologically, Kabat-Zinn notes that it also works to our disadvantage, particularly in the dampening of the senses without which we cannot live. Breathing is of especial importance in meditation. It is necessary to all living things, we must remember; yet we take it for granted, repeatedly neglecting it, forgetting to check how we are doing it. If we took a few minutes every day to attend to our breathing, we could all reduce stress, find composure, and even lower our heart rate through practice. This applies to all the senses. As Aristotle keenly reminds us, “[O]ur power of smell is less discriminating and in general inferior to that of many species of animals.”[1] Compared with most animals, our sense of smell is weak, and so we rely less upon it. Smell and taste are underrated when it comes to the senses, although they are of equal merit. Like breathing, both are taken for granted, appreciated only when we are sick, when we can no longer use them—only then do we wish we could taste and smell again. Just as Kabat-Zinn said, we truthfully eat without tasting. Eating our food, we feel pleasure in the moment; but if we were sick in the same circumstances, we would appreciate our senses that much more; as such, we must live each day as though we were sick.
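The mechanics of sensory adaptation lend themselves to a quick sketch. What follows is a minimal, purely illustrative Python model that assumes a simple exponential decay of perceived intensity; the function name, the decay constant, and the idea of reducing perception to a single number are my own assumptions, not anything from Kabat-Zinn or the psychology cited here.

```python
# Illustrative only: a toy exponential model of sensory adaptation.
# The decay constant and the single "perceived intensity" number are
# simplifying assumptions, not a published psychological model.
import math

def perceived_intensity(stimulus: float, seconds_of_exposure: float,
                        adaptation_rate: float = 0.1) -> float:
    """Perceived strength of a constant stimulus after continuous exposure."""
    return stimulus * math.exp(-adaptation_rate * seconds_of_exposure)

if __name__ == "__main__":
    for t in (0, 10, 30, 60, 120):
        print(f"after {t:>3}s: {perceived_intensity(10.0, t):.2f}")
    # The printed values shrink toward zero: the longer the exposure,
    # the less of the (unchanged) stimulus we consciously register --
    # the body-odor example from the paragraph above.
```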

There are different kinds of meditation, different ways of being mindful. During meditation, you can do a body or sense scan, where you spend a few moments going through your body, focusing on the sensations in a particular part, examining it, then moving on; or you can, for a few minutes at a time, focus on each of your main senses, perhaps using only your ears for a minute, your nose the next. Proprioception is an obscure sense: it is the sensation of each body part in relation to the others. In a body scan, this is most prevalent, when you feel your body in totality, as a whole, yet are able to focus on one body part. William James, writing about boredom, could just as easily have been writing about this state of meditation:

The eyes are fixed on vacancy, the sounds of the world melt into confused unity, the attention is dispersed so that the whole body is felt, as it were, at once, and the foreground of consciousness is filled, if by anything, by a solemn sense of surrender to the empty passing of time.[2]

Typically, when one meditates, one can either close the eyes or keep them open, fixing them on a certain point, listening to the sounds of the world around, acknowledging every part of the body, paying attention to the breath, overcome by a static sense of stillness, as one is neither in the past nor the future, but in the present, simply being, moment to moment. There are two types of attention in meditation: abstract, or inward, attention and sensory, or outward, attention. The former involves impartial introspection, the clearing of the mind, the decluttering of ideas. “This curious state of inhibition can for a few moments be produced by fixing the eyes on vacancy. Some persons can voluntarily empty their minds and ‘think of nothing,’” wrote James, describing hypnotism, though inadvertently describing meditation as well.[3] Sensory attention, on the other hand, is simply being attentive to the senses and all incoming stimuli. If you are interested in meditation, there are several exercises that can be done to sharpen your attentiveness, like dhāraṇā, jhāna, and samādhi, or you can practice some brahmavihāras. In dhāraṇā, the meditator is aware of themselves, as a whole and as meditating, and of an object; after dhāraṇā, they move to jhāna, which is awareness of being and of an object; and finally, in samādhi, they find themselves in unity with the object. Samādhi is translated as “one-pointedness” and refers to pure concentration, pure attention. When in this state, the meditator is in what William James calls voluntary attention. This attention occurs when there is a powerful stimulus, yet you focus on something of less intensity. If you are studying and there is noisy construction outside, focusing on the studying, even though the construction is louder and demands your attention, would be an act of voluntary attention. This state, however, cannot be held indefinitely. As James writes, “[S]ustained voluntary attention is a repetition of successive efforts which bring back [a] topic to the mind.”[4] Hence there is no such thing as continuously maintaining voluntary attention; rather, one comes back to it over and over. Brahmavihāras are reflections upon Buddhist virtues. There are four traditional brahmavihāras: loving-kindness, compassion, joy, and equanimity. Feel free, too, to make your own meditation, in which you reflect on something outside the given topics—questions in philosophy, like good and evil, justice, and the like, are good starters.

I briefly mentioned the idea of clearing the mind, of emptying it of ideas, and to that I shall turn again. Thoughts, in Buddhist writings, are treated like clouds, wispy and flowing; they are temporary; sometimes they are clear, sometimes they clump together; sometimes they are sunny, sometimes they are like a storm. Either way, thoughts are not permanent, nor can they harm you in any way. Generally, we ought to be on the lookout for negative thoughts. When they arise, we must simply dismiss them. Thinking about our thoughts is like pouring gasoline on a fire: it merely propagates more of them and makes them worse. It is better to let thoughts pass than to intervene through force. Meditation calls for letting go of all thoughts, good or bad. It is misleading to think that we are trying to get rid of them, that we are trying to single some thoughts out from others. This is not the case; rather, we must acknowledge that we are thinking and let the thoughts pass. If a positive thought comes, do not perpetuate it, let it pass; if a negative thought comes, do not perpetuate it, let it pass. Another thing to remember is that simply acknowledging that you are thinking is itself being mindful, so you should not get frustrated with yourself when thoughts arise. An important facet of Buddhist psychology is the distinction between perception and conception. Perception is pure sensation, and conception is labeling, to put it simply. Sitting in peace and silence, you hear a sound, process it, identify it as the rustling of the trees and the singing of birds, and continue meditating—such is an act of conception, for hearing a sound is perception, but classifying it, labeling it, is conception. Labeling is necessary for living. Without it, there would be no way to comprehend the world. We would be exposed to a chaotic mess, an overwhelming tidal wave of sensations we could not understand. Almost everything we see and process is conceptualized: this is a tree, that is a plant, this is grass, that is dirt on which I am walking. One is tempted to think of Kant’s categories of the mind and the differentiation between phenomena and noumena. Our mind actively shapes our world, grouping things together, creating causal links, imposing spatiotemporal relations, constantly conceiving things. Perception is to noumena as conception is to phenomena. Rarely do we perceive things as they are, as things-in-themselves; instead we conceive them imperfectly. We need to carry this into meditation, in thought and in sensation. We must try not to classify things by texture, color, or shape, nor judge thoughts by appearance, nor label anything as “good” or “bad.” Another danger of thinking is daydreaming, to which all meditators are vulnerable, especially if their eyes are closed. When we doze off, finding comfort and relaxation, following our breath, we might accidentally slip into our fantasies, moving from the external to the internal, where we begin to plan for the future or reminisce about the past. Neither is good. William James warns us, “When absorbed in [passive] intellectual attention we become so inattentive to outer things as to be ‘absent-minded,’ ‘abstracted,’ or ‘distrait.’ All revery or concentrated meditation is apt to throw us into this state.”[5] By meditation, James is not referring to it in our sense, but to the act of pondering.
We should not fall into the trap of planning for the future or ruminating about the past, because, as Marcus Aurelius said, “[M]an lives only in the present, in this fleeting instant: all the rest of his life is either past and gone, or not yet revealed.”[6] The past is in the past, and there is nothing we can do to change it; wishing we could redo something will not help. And the future has not happened yet, so setting unrealistic expectations will not help either.

“But we do far more than emphasize things, and unite some, and keep others apart. We actually ignore most of the things before us,” notes William James.[7] For such a formidable tool, one to which we all have access, the art of attention and how to apply it properly has all but been forgotten by today’s society, to its disadvantage. We live in an age in which A.D.D. is rampant, and more and more kids are diagnosed with it. Further, our technology strips us of our connection to nature, to the world, to each other. We are no longer in touch with ourselves or our senses. With mindfulness and meditation, however, by living in the present and embracing our senses and our lives, we can make our lives meaningful.


[1] Aristotle, De Anima II.8, 421a9-10
[2] James, The Principles of Psychology, XI.2, p. 261
[3] Ibid.
[4] Id., XI.6, p. 272
[5] Id., p. 271
[6] Aurelius, Meditations, III.10
[7] James, op. cit., IX.5, p. 184


For further reading: Buddhist Psychology Vol. 3 by Geshe Tashi Tsering (2006)
The Principles of Psychology by William James (1990)
Coming to Our Senses by Jon Kabat-Zinn (2005)
Mindfulness by Joseph Goldstein (2016)
Meditations by Marcus Aurelius (2014)
Zen Training by Katsuki Sekida (1985)
De Anima by Aristotle (1990)


Attention and Mindfulness (1 of 2)

Attention is vital to our everyday lives. Some of us are better than others at paying attention, but regardless of skill, we all need it, whether we are learning in class or playing out in the field. In a world that values the fast, the immediate, the instantaneous, attention is slowly fading away, leaving us disoriented and scattered in a culture where it is easy to be left behind if you are not quick enough. Not enough of us pay attention in our everyday lives, even in the simplest of tasks, failing to appreciate the beauty of life, missing the important things, letting life slip out of our grasp. Through a better understanding of what attention is and how it can be used in mindfulness, I believe we can all live more fulfilling lives.


In psychology, attention refers to “the process of focusing conscious awareness, providing heightened sensitivity to a limited range of experience requiring more extensive information processing.”[1] Simply put, attention is the ability to focus your awareness and senses on a particular task, leading to better experience and understanding. In order for this focusing to occur, you need an external stimulus, such as a sound, and an active goal, which is your response to or classification of that stimulus. For example, if you hear a dog bark, the barking is the external stimulus, and your realizing it is a dog that is barking is the active goal. The act of paying attention is not a single process but a combination of three processes (Posner, 1995): orienting senses, controlling consciousness and voluntary behavior, and maintaining alertness. The first stage, orienting senses, is what happens when your sensory organs are directed to the source of a stimulation. When you hear a sound coming from the left, it is your left ear that will process it first, as it is oriented to the direction from which the sound came. Similarly, when you touch something, your hand comes into direct contact with the object. Depending on what sense the stimulus activates, your cortex suppresses the other sensory organs while focusing on the active one: rarely do you need your eyes to smell something—it is the nose’s job to do that. When you orient your senses, you tend to use your superior colliculus, responsible for eye movement; the thalamus, responsible for activating specific sensory systems; and the parietal lobe, which is usually responsible for giving us a sense of direction. The next stage is controlling consciousness and voluntary behavior, in which your brain decides just how much you want to focus on a particular sense. Your pupils, when you are paying attention to something, can dilate or constrict depending on the light, for example. This second stage’s job, therefore, is to control your response to stimuli; it uses the frontal lobe and basal ganglia, known for their role in controlling thoughts and actions. Third is maintaining alertness, which is indispensable for attention, for its job is to remain focused on a sense and ignore possible distractions. When you maintain alertness, you use different neural patterns in your reticular formation and frontal lobe. A type of attention known as selection is defined as “the essence of attention” (Rees et al., 1997).[2] Selective attention is the ability to focus on something important and ignore everything else, whereas selective inattention is the ability to ignore something important and focus on other things; the latter is used most often, either for good, as in diverting stress, or for bad, as in procrastinating.
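To keep Posner’s three processes straight, here is a small, purely illustrative Python sketch; the class, the function names, and the numbers are invented for clarity and are not part of Posner’s model or the Westen text.

```python
# Toy illustration of Posner's three attentional processes (1995) as a
# pipeline. The data structures and thresholds are invented for clarity;
# they are not a model from the psychological literature.
from dataclasses import dataclass

@dataclass
class Stimulus:
    modality: str    # e.g. "sound", "touch"
    location: str    # e.g. "left", "right"
    intensity: float

def orient_senses(stimulus: Stimulus) -> str:
    """Stage 1: point the relevant sensory organ at the source."""
    return f"orienting {stimulus.modality} receptors toward the {stimulus.location}"

def control_response(stimulus: Stimulus) -> float:
    """Stage 2: decide how much processing the stimulus gets (0..1)."""
    return min(stimulus.intensity / 10.0, 1.0)

def maintain_alertness(focus: float, distractions: list[float]) -> bool:
    """Stage 3: stay on target only if focus outweighs every distraction."""
    return all(focus > d for d in distractions)

bark = Stimulus("sound", "left", intensity=7.0)
print(orient_senses(bark))
focus = control_response(bark)
print("still attending:", maintain_alertness(focus, distractions=[0.2, 0.4]))
```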

Imagine you are at a party. You are sitting at a table with your friends, deep in conversation; the speakers are blasting music; there are people dancing; and there is another conversation across the room. Engrossed in the talk, you block out all other sound besides your own conversation, when all of a sudden, you hear your name being mentioned in the conversation across the room. The Cocktail Party Phenomenon, as it came to be called, was studied by Colin Cherry (1953), who found, startlingly, that not only is most information unconsciously processed, but some of this information, conscious or not, is prioritized above other information. A contemporary of his, Donald Broadbent, developed the Broadbent Filter Model (1958) to attempt to explain why this is so. Fascinated by air traffic controllers, whose job it is to receive multiple incoming messages at once and in mere seconds make quick judgments about which is most important, Broadbent began to study divided attention, “the capacity to split attention or cognitive resources between two or more tasks”[3] (Craik et al., 1996), by using a method of testing called dichotic listening, in which a subject puts on a pair of headphones and is played a different message in each ear, simultaneously. Broadbent found that only one channel can be understood at a time, while the other is blocked out. He reasoned that there must be a theoretical, Y-shaped junction in our minds that, when two inputs try to pass, lets one through and blocks access to the other. He said, further, that we have a short-term memory store that keeps track of these channels. The question remained, though: How does the brain decide which channel to let through? In another surprising conclusion, he found that in spoken language, meaning is extracted only after a message has passed the filter; as such, content is not the decisive factor but the quality of the sound, such as its loudness, its harshness, and the sex of the speaker. A loud, domineering voice, therefore, will be prioritized over a softer, nicer voice, even if the latter carries the more important message. Broadbent later went back and revised his model, stating that priority is based on a combination of the quality of the voice, the content of the words, and prior experience; however, a later psychologist, Anne Treisman, argued that at the Y-junction the second channel is not blocked, per se, but attenuated, or suppressed—this would explain the Cocktail Party Effect, because although you do not consciously hear your name, you still process it.
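The filter idea can also be sketched in a few lines of Python. This is a toy illustration only, assuming a made-up salience score built from sound quality and a Treisman-style attenuation of the losing channel; neither Broadbent nor Treisman expressed their models this way.

```python
# Toy sketch of an early-selection filter with Treisman-style attenuation.
# Channel attributes and the salience formula are invented for illustration;
# Broadbent and Treisman did not publish anything like this code.
from dataclasses import dataclass

@dataclass
class Channel:
    content: str
    loudness: float   # physical quality, 0..1
    harshness: float  # physical quality, 0..1

def salience(ch: Channel) -> float:
    """Selection is driven by sound quality, not by meaning."""
    return 0.7 * ch.loudness + 0.3 * ch.harshness

def filter_channels(left: Channel, right: Channel, attenuation: float = 0.2):
    """Pass the more salient channel; attenuate (not erase) the other."""
    winner, loser = (left, right) if salience(left) >= salience(right) else (right, left)
    return winner.content, (loser.content, attenuation)

attended, (unattended, gain) = filter_channels(
    Channel("boring construction update", loudness=0.9, harshness=0.8),
    Channel("someone says your name", loudness=0.3, harshness=0.1),
)
print("attended:", attended)
print("still processed at reduced gain:", unattended, gain)
# Because the unattended channel is attenuated rather than deleted,
# a highly relevant item in it (your name) can still break through --
# the cocktail party effect described above.
```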


[1] Westen, Psychology: Brain, Mind, & Culture, 2nd ed., p. 395
[2] Ibid., pp. 395-6
[3] Id., pp. 397-8


For further reading: Psychology: Brain, Mind, & Culture 2nd ed. by Drew Westen (1999)
Essentials of Psychology by Kendra Cherry (2010)
The Psychology Book by Wade E. Pickren (2014)
The Psychology Book by DK (2012)

Some Experiments on the Recognition of Speech, with One and with Two Ears by Colin Cherry (1953)


If Thou Art Pained By Any External Thing, It Is Not This Thing That Disturbs Thee, But Thy Own Judgment About It.

In book 8, section 47 of the Meditations, Marcus Aurelius writes,

If thou art pained by any external thing, it is not this thing that disturbs thee, but thy own judgment about it. And it is in thy power to wipe out this judgment now. But if anything in thy own disposition gives thee pain, who hinders thee from correcting thy opinion? And even if thou art pained because thou art not doing some particular thing which seems to thee to be right, why dost thou not rather act than complain?—but some insuperable obstacle is in the way?—Do not be grieved then, for the cause of its not being done depends not on thee.

Things in themselves are not nuisances; rather, we make them so ourselves. Nothing is either good or bad in itself, although we commonly, and incorrectly, think it must be. The thing is, though, that our thoughts, unlike external events, are within our power, and so we are able to change our thoughts, judgments, and perceptions to make things bearable. Aurelius tells us that if we are annoyed by ourselves, we ought not blame it on others or on anything else; instead, we should take to correcting our opinions, for they belong to us, so we can fix them. No one stops us from changing our way of thinking except ourselves. Further, he points out that when we think things we would rather not think, we often complain to others and to ourselves, ignorant of the true nature of the annoyance; as such, he advises us simply to change our thinking when it appears to be straying. When we notice negative thinking, acknowledging it and knowing it is the cause of our problems is one thing, but actually acting on it and changing it is another, and that is what we ought to do. Oftentimes, though, we impute our misfortune to some external thing, such as the day, leading to remarks like, “Today is not a good day,” or, “Today does not like me”; however, if we attribute our personal torment to something impersonal, something external to us, we have it backwards, for if the “day” is what is causing our problems, then it is a force greater than us, outside of us, and therefore not concerned with us. Because the courses of our days are outside of our power, we can do nothing to change them, so instead of resisting them, we should allow them to play out as they please, as per nature, and leave our attitude to ourselves. Hamlet expresses this in the same way: “[T]here is nothing either good or bad, but thinking makes it so” (II.ii.265-66). Therefore, our perception is what shapes our attitude, and our lives are what we make of them.


For further reading: Meditations by Marcus Aurelius (2014)


What is Humorism?

Psychology and medicine, finding their beginnings in Greek culture, have come a long way; and since their speculative foundations, their influence has become larger, more pertinent, and more accurate than ever before, with the invention of prosthetics in the field of medicine and cognitive studies in psychology, for example. It seems as though anything is possible, as though nothing is beyond achieving. One may wonder, then, from where psychology came, from whom modern medicine developed. Small questions never fail to come up regarding the origins of either discipline: why, when someone is in a bad mood, do we say they are in bad humor; or why, when someone is angry, do we say they are hot-blooded or short-tempered? A glance through history, to the invention of psychology, can show us the foundations of both psychology and medicine—the ancient system of humorism.

The theory of the four humors is derived from the Pre-Socratic philosopher Empedocles (c. 490-430 BC), who posited the existence of four basic elements that constituted all of reality: air, fire, earth, and water. Everything in the world, he explained, was a synthesis of all four, each contributing its unique characteristics and properties to create everyday objects. For this reason, early medical theory was based on philosophical theory, so the two subjects were closely intermingled, the cause of many a medical error in ancient times. The man whom we ought to credit for the beginnings of modern medicine is the Greek physician Hippocrates (c. 460-370 BC), who is most renowned for the Hippocratic Oath, which is still used today. Despite the countless contributions he made to medicine, there is difficulty when it comes to pinpointing which works he actually wrote and which were written by his student Polybus or perhaps even by rival doctors. Some of his works, furthermore, seem to diverge in content, contradicting earlier theories. Central to Hippocrates’ method was a holistic approach to the body. “Hippocrates the Asclepiad says that the nature even of the body can only be understood as a whole,” remarked Plato.[1] Each part of the body was to be examined against every other part, so as to treat everything as one. He wrote of a popular principle at the time: “Certain sophists and physicians say that it is not possible for any one [sic] to know medicine who does not know what man is.”[2] Such importance placed upon the human body and its composition made the humoral theory possible, as well as the secularization of medicine itself. Apollo and Asclepius, the gods of medicine, were thought to be the causes of disease up until Hippocrates, who, diagnosing epilepsy—once thought the work of the gods—said it “appears… to be nowise more divine nor more sacred than any other disease, but has a natural cause from which it originates like other affectations.”[3]

The natural cause of which Hippocrates spoke was the humors. From the Latin word umor, meaning fluid, the humors were four fluids within the body, each aligning with one of the four elements of Empedocles. Hippocrates identified blood with air, phlegm with water, black bile with earth, and yellow bile with fire. During the Scientific Revolution, the physician William Harvey (1578-1657) performed studies on the circulatory system that would eventually disprove Hippocrates and Galen. Acknowledging the two physicians and Aristotle (who supported the humoral theory), he wrote in his book on animal generation, “And thus they [the Ancients] arrived at their four humors, of which the pituita [phlegm] is held to be cold and moist; the black bile cold and dry; the yellow bile hot and dry; and the blood hot and moist.”[4] According to Hippocrates, one could tell whether the upcoming season would be one of sickness or health by analyzing the weather; if there were extreme conditions, like extreme coldness during winter or heavy rains during spring, then more diseases were to be expected, whereas normal conditions foretold health and prosperity. Cold seasons exacerbated the cold humors, phlegm and black bile, while warm seasons exacerbated the warm humors, yellow bile and blood. The alchemist Philippus Aureolus Theophrastus Bombastus von Hohenheim, or Paracelsus (1493-1541), was a notorious physician in his time, often burning the works of Galen in public to disrespect him and his theories. Instead of the four humors, Paracelsus preferred a more alchemical approach, diagnosing based on saltiness, sweetness, bitterness, and sourness, adding a fifth property, life. In addition, he gave these elements their own properties, such as combustibility, solidness, fluidity, and vaporousness. The human body has a balance to it, what Hippocrates called a body’s krasis (κρασις), or mixture. A healthy body has a good mixture, eucrasia (ευκρασια), meaning it has an even amount of all four humors. Pausanias, a doctor in The Symposium, explains that,

The course of the seasons is also full of both these principles; and when, as I was saying, the elements of hot and cold, moist and dry, attain the harmonious love of one another and blend in temperance and harmony, they bring to men,… health and plenty, and do them no harm.[5]

While one should strive for the ideal balance, eucrasia, one should stay as far away as possible from dyscrasia (δυσκρασια), or bad mixture, for if it is extreme, it can result in death. Too much phlegm (mucus), warns Hippocrates, can clog the throat, choking off airflow and resulting in asphyxiation, for instance. Another Renaissance physician shortly after Paracelsus, Santorio Santorio (1561-1636), calculated that between the perfect balance of eucrasia and the imperfect balance of dyscrasia lie 80,000 unique diseases stemming from different combinations of humors. Determined to prove Hippocrates and Galen right, Santorio carried out extensive experiments, measuring the body’s temperature before and after diagnosis with a thermoscope, then measuring it daily thereafter, comparing each new temperature to the healthy one.

“Those diseases which medicines do not cure, iron cures; those which iron cannot cure, fire cures; and those which fire cannot cure, are to be reckoned wholly incurable,” stated Hippocrates confidently.[6] Should some poor soul suffer from dyscrasia, there were several cures with which he could proceed, and there was a cure for each type of imbalance. Hippocrates invested his faith in incisions, stating that iron, by which he means the knife, is the next step up from remedies; if surgery does not work, he says one should proceed to cauterize; but if fire does not work, then one is out of luck. Other proposed cures were sweating and vomiting, which would excrete or purge any excess humors. Then, of course, there was bloodletting, the deadly, inaccurate method of making a cut in the skin and cleansing the blood. So popular was bloodletting that by the 1500s, “[t]reatment was still based on the Hippocratic theory of humors, and bloodletting was a panacea.”[7] Virtually any disease could be cured by bloodletting—that is, until William Harvey. Besides these cleansing methods, there was an easier, more efficient way of handling humoral diseases, one which did not require knives or fire: using opposites to counteract. If there was too much blood, a doctor could counteract it with black bile, opposing the hotness and moistness of the former with the coldness and dryness of the latter; similarly, too much yellow bile could be countered with phlegm, and vice versa. Hippocrates was also famous for prescribing his patients varying diets that would in the same way counter the excess humor, usually advising the replacement of wheat with bread, of water with wine.
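The logic of treating by opposites follows directly from the hot/cold and moist/dry pairings quoted from Harvey above, and it can be sketched in a few lines of Python; the structure and names here are my own, purely for illustration, not anything from the Hippocratic corpus.

```python
# Illustrative sketch of humoral "treatment by opposites".
# Qualities follow the pairings quoted from Harvey above; everything else
# (names, structure) is invented for clarity.
QUALITIES = {
    "blood":       {"hot", "moist"},
    "phlegm":      {"cold", "moist"},
    "yellow bile": {"hot", "dry"},
    "black bile":  {"cold", "dry"},
}

def counteracting_humor(excess: str) -> str:
    """Return the humor whose qualities exactly oppose the excess one."""
    opposite = {"hot": "cold", "cold": "hot", "moist": "dry", "dry": "moist"}
    target = {opposite[q] for q in QUALITIES[excess]}
    return next(h for h, qs in QUALITIES.items() if qs == target)

print(counteracting_humor("blood"))        # black bile (cold + dry vs. hot + moist)
print(counteracting_humor("yellow bile"))  # phlegm (cold + moist vs. hot + dry)
```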

This raises the question, though: Why does humorism matter, why is it relevant at all, considering it is outdated and completely incorrect, and why should we be interested? As I said at the beginning, humorism was the foundation for psychology; specifically, the foundation for the psychology of personality, a much-studied and much-debated area of research today. The Roman physician Galen (c. 130-200) was arguably the first person to attempt a formal study of personality. A studied reader of the Hippocratic writings and a learned student of Stoic logic, Galen was an empiricist at heart, emphasizing experience over speculation, what he called demonstrative knowledge (επιστημη αποδεικτικη). Neither Hippocrates nor Galen studied the interior of the human body, as the dissection of humans was taboo; thus, their findings were purely theoretical, which is rather ironic for Galen, who did cut open animals, just not humans. Galen identified two types of passions: irascible passions, those which are negative, and concupiscible passions, those which are positive.[8] He observed four temperaments arising from the four humors. (Temperament, interestingly, translates to mixture!) In fact, “Before the invention of the clinical thermometer and even for some time afterwards, bodily ‘temperature’ was only a synonym for ‘temperament.’”[9] His theory of the four temperaments is so influential that its adjectives have carried over to today: too much blood creates a sanguine character who is cheerful; too much phlegm a phlegmatic who is calm; too much yellow bile a choleric who is angry; and too much black bile a melancholic who is gloomy; for the latter two, one can also say bilious. Hippocrates noticed these characteristics in his own time, commenting, “Those who are mad from phlegm are quiet, and do not cry nor make a sound; but those from bile are vociferous, malignant, and will not be quiet, but are always doing something improper.”[10]
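For quick reference, the humor-to-temperament correspondence in this paragraph can be written out as a small lookup table; the sketch below simply restates the text in Python and is not a diagnostic tool of any kind.

```python
# Galen's four temperaments as described above, keyed by the humor in excess.
TEMPERAMENTS = {
    "blood":       ("sanguine",    "cheerful"),
    "phlegm":      ("phlegmatic",  "calm"),
    "yellow bile": ("choleric",    "angry"),
    "black bile":  ("melancholic", "gloomy"),
}

for humor, (temperament, trait) in TEMPERAMENTS.items():
    print(f"excess {humor:<11} -> {temperament:<11} ({trait})")
```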

One may dissent again: Why is this relevant, if it is outdated? Although Galen’s theory of the four temperaments is largely out of use,[11] it has inspired later psychologists of personality. The infamous Myers-Briggs Type Indicator, or MBTI (1943), can be seen as a derivative. It utilizes different traits to arrive at a certain personality. Those who wish to know their personality type have to decide whether they are introverted or extroverted, whether they intuit or sense, think or feel, and perceive or judge. Another option, the Big Five, or Big Three (1949), identifies people based on their levels of openness, conscientiousness, extraversion, agreeableness, and neuroticism; the Big Three limits the scales to neuroticism, extraversion, and openness. Lastly, the most direct descendant is the psychologist Hans J. Eysenck (1916-1997), whose method of deducing personality was influenced entirely by Galen. Eysenck proposed the dimensions of extraversion-introversion and neuroticism, later adding psychoticism, recognizing in their combinations several character types reminiscent of Galen’s.

[1] Plato, Phaedrus, 270c
[2] Hippocrates, On Ancient Medicine, p. 13b*
[3] Hippocrates, On the Sacred Disease, p. 326a
[4] Harvey, Anatomical Exercises on the Generation of Animals, p. 435b*
[5] Plato, The Symposium, 188a
[6] Hippocrates, Aphorisms, §7, 87
[7] Durant, The Story of Civilization, Vol. 5, p. 532
[8] This is a very superficial description; for a more detailed one, read Aquinas’ Summa Theologica, 1.81.2, ad 1
[9] Boorstin, The Discoverers, p. 341

[10] Hippocrates, On the Sacred Disease, 337a
[11] Read Florence Littauer’s Personality Plus for a modern perspective

For further reading: 
Greek Thought: A Guide to Classical Knowledge by Jacques Brunschwig (2000)
The Oxford Companion to Classical Civilization by Simon Hornblower (1998)
Anatomical Exercises on the Generation of Animals by William Harvey

An Intellectual History of Psychology by Daniel N. Robinson (1995)
The Encyclopedia of Philosophy Vol. 3 by Paul Edwards (1967)
The Encyclopedia of Philosophy Vol. 4 by Paul Edwards (1967)
The Encyclopedia of Philosophy Vol. 6 by Paul Edwards (1967)
The Story of Civilization Vol. 2 by Will Durant (1966)
The Story of Civilization Vol. 3 by Will Durant (1972)
The Story of Civilization Vol. 5 by Will Durant (1953)
The Psychology Book by Wade E. Pickren (2014)
The Story of Psychology by Morton Hunt (1993)
The Discoverers by Daniel J. Boorstin (1983)
On the Sacred Disease by Hippocrates
On Ancient Medicine by Hippocrates
On the Natural Faculties by Galen

Extra reading for fun: Personality Plus by Florence Littauer (1992)

*Pages refer to Great Books of the Western World, Vols. 9 and 26, ed. Mortimer J. Adler (1990), respectively


Jack and His Discontents (2 of 2)

We now move on to the later stages of Jack’s neuroticism. Jack, as we have learned, has been repressing his primitive instincts, meaning he has kept them out of the conscious, leaving the ideational presentations stuck in the unconscious, forgotten, neglected, left to multiply like fungus. As Freud said, the longer we keep our instincts repressed, the more time they have to regroup, come together, and create more resistance in our minds, building tension that results in the censuring of the ego by the superego and, ultimately, a sense of guilt, the product of a fight-or-flight response. Freud spoke of an economy in the mind, a national reserve of sorts; when this reserve is depleted, the defense mechanisms of our mind break down. Repression requires energy, and the longer an idea is repressed, the more energy is consumed. By killing the pig, Jack has given his aggression a catalyst, so the impulses grow stronger, eating more energy, his repression slowly breaking down, his aggression shining through the cracks in little bits. We see that, after killing the pig, Jack becomes increasingly aggressive. Slowly but surely, the walls of his mind are crumbling down, and his aggression is able to come through. Ralph lectures Jack for not looking after the fire. Jack notices that he is in hostile territory, and his superego begins to hammer on his ego. The guilt that arises cannot be tolerated by Jack, who is guilty of not completing his duties and who, feeling threatened, turns the anger onto Piggy, punching him and knocking him down (Golding 66). Here, there is a struggle between the id, which wants to take out its aggression, and the superego, which instills a sense of guilt in Jack. The result is displacement: unable to cope with the greed of the id and the morality of the superego, the ego decides to appease them both by taking out Jack’s feelings on something weak, vulnerable, and defenseless—Piggy. In so doing, Jack has temporarily satisfied his id. Like a hungry child, the id, once fed, will return to normal, until it begins to grow hungry once more. What has just occurred is Jack acting out. Roger and Jack are both sadists. Golding describes a scene in which Roger throws rocks at the littlun Henry:

Here, invisible yet strong, was the taboo of old life. Round the squatting child was the protection of parents and school and policemen and the law. Roger’s arm was conditioned by a civilization that knew nothing of him and was in ruins (57).

Roger and Jack have both been raised in a society that values temperance, control, and politeness. They were scolded by their parents for hurting their siblings; taught in school not to do mean things to other students; warned by the police not to break the law; conditioned by society to be well behaved, to be like everyone else, to resist all urges. Think, then, what this has done to their inner aggression, to have been repressed to such an extent! But here, on the island, things are different; no longer is there a higher authority to keep the boys in check. Roger, free to do as he pleases, unable to be punished, can be aggressive and not get in trouble. However, it is strange that he refuses to hit Henry directly, throwing the rocks instead into a small circle around him. Law and morality still remain with him. Despite his freedom, the idea of restraint has been ingrained into his mind. That there is no evil in him is false; his throwing rocks at Henry is proof of the opposite—Roger’s dark side is stronger than his good side, for all this time it has been growing uncontrollably powerful. All it took to release it was the absence of punishment, be it from an external force, like a parent, or an internal force, namely the superego. Without the restraints of civilization, Roger, like Jack, regresses to his primal self, his aggressive, savage self. Fromm wrote,

[I]f the situation changes, repressed desires become conscious and are acted out…. Another case in point is the change that occurs in the character when the total social situation changes. The sadistic character who may have posed as a meek or even friendly individual may become a fiend in a terroristic society…. Another may suppress sadistic behavior in all visible actions, while showing it in a subtle expression of the face or in seemingly harmless and marginal remarks.[1]

Put another way, Fromm is saying that the sadist will feign a pleasant character in a certain environment, say a school, but will reveal himself in a different context, such as an island. This echoes Freud, who also noted that society forces us to create reaction-formations: because we cannot satisfy our aggressive tendencies, we must be exceedingly gentle. Fromm also notes that the sadist, even in a safe environment, will not completely hide his nature; there will be minor signs, like the subtle facial expressions of which he spoke.

Following this event, the next major stage in Jack’s neuroticism happens shortly before he kills the pig. Jack is by the riverside, collecting clay, then smearing it on his face, covering it up. He looks at himself in the river and is satisfied. “[T]he mask was a thing on its own, behind which Jack hid, liberated from shame and self-consciousness,” writes Golding (59). Hereafter, Jack relinquishes all remnants of his past life, devoured by his aggression, which takes control for the rest of the story. A small detail, the mask allows for disinhibition, letting Jack take on a whole new persona. This mask hides who Jack was, endows him with new strength, and lets him get away with anything. It is no longer Jack who is acting but the mask. If Jack kills Ralph, it is not Jack who does it, but the mask. One can think of the story of Gyges’ Ring as told in the Republic, in which a shepherd finds a ring that can make him invisible. Granted this awesome power, Gyges abuses it, making himself invisible, killing the king, and marrying his wife. Anonymity bestows upon its subject great powers, including the license to act immorally. The mask on Jack’s face lets him be sadistic, for he can no longer be ashamed. A sense of invincibility is coupled with invisibility, seeing as Jack, hiding himself behind the mask, feels untouchable, as though he can do whatever he wants, since it is not he who is doing it. No responsibilities are expected of Jack from then on. When Jack steals fire from Ralph, the two come face-to-face. Committing such an unforgivable act, Jack would normally not be able to look the other boy in the face, an overwhelming feeling of guilt preventing him; but with his mask, Jack can easily steal from Ralph without thinking twice. Ralph, Piggy, and Samneric try to go after Jack and his hunters at the end, except that “[t]hey understood only too well the liberation into savagery that the concealing paint brought” (Golding 170). Golding adds further that, “Freed by the paint,… they were more comfortable than he [Ralph] was” (173). Anyone who puts on the mask of paint is relieved of all expectations, of all moral obligations, of all sensibleness. Freud observed that the barbarian was happier than the civilized man, inasmuch as the former could satisfy his impulses, whereas the latter could not; similarly, the hunters are more comfortable than Ralph because they can do what he cannot: gratify their aggression.

Thanatos, the major force through which Jack now operates, is committed to but one task: self-destruction, the return to the womb, to nothingness. Jack is never seen backing away from a daunting task, always one for a challenge, even if it may end up killing him. Eager to kill, Jack constantly volunteers to go on pig hunts, going as far as to hunt the dreaded beast that threatens the boys’ existence. Upon climbing the mountain, Ralph considers going back, but Jack calls him a coward, insisting that they go up. Ralph calls their mission a foolish one, and Jack agrees, continuing up the mountain, determined to kill the beast. If this is so, if Jack wants to destroy himself, why is it, then, that he kills the pig earlier in the book? Freud would answer, “It really seems as though it is necessary for us to destroy some other thing or person in order not to destroy ourselves.”[2] The real goal of Thanatos is destruction of the self, but Jack obviously does not want to die, consciously that is, so he must satisfy his death-instinct some other way, viz., by killing something else. It is a simple trade-off: kill something else to avoid killing myself. Like Prometheus, Jack tries to defy his god (his superego, rather) by stealing fire from their sacred home. It is a forbidden task, one that will surely result in suffering. Only, unlike Prometheus, Jack gets away with it, despite almost being caught. This small act of defiance further tips the scale of his death-instinct.

Another trait of the sadist is that he is stimulated only by the helpless, never by those who are strong…. For the sadistic character there is only one admirable quality, and that is power. He admires,… those who have power, and he despises and wants to control those who are powerless and cannot fight back.[3]

Jack emulates Fromm’s description of the sadistic character when he orders his hunters to take the innocent Wilfred into custody to be tortured for no reason. Ralph asks Samneric why Jack ordered Wilfred to be tortured, but the twins have no answer. It seems Jack did so purely for pleasure, for fun, to fulfill his aggressive death-instinct. There is no rational reason for what he did, obviously, except that it was in his own self-interest and that he was able to exert control over a powerless being. The relationship between Ralph and Jack is odd, the latter’s respect for the former strained by his desire to remove him from power. In some ways this respect is real, for Jack does not truly want to kill Ralph; he harbors a sort of regard for him, for his demotic popularity. What Jack really wants is to have all the power for himself. Jack captured and had Wilfred beaten; and when Roger horrendously killed Piggy, Jack reacted apathetically, coldly, disturbingly, responding by threatening Ralph that the same could happen to him. If Jack wanted Ralph dead, he could have done it long ago, and easily—but he did not.

“Few people ever have the chance to attain so much power that they can seduce themselves into the delusion that it might be absolute,”[4] commented Erich Fromm. Fortunately, this is true; unfortunately, it is still possible. Completely neurotic now, Jack has become like Mr. Kurtz, gaunt and savage, his loyal hunters willing to do anything for him, as he sits on his throne as though he were an idol, or a god. Power has indeed gotten to him now, to the point that he is worshiped, thought invincible, the true leader of the boys on the island.

In many cases the sadism is camouflaged in kindness and what looks like benevolence toward certain people in certain circumstances. But it would be erroneous to think that the kindness is simply intended to deceive, or even that it is only a gesture, not based on any genuine feeling. To understand this phenomenon better, it is necessary to consider that most sane people wish to preserve a self-image that makes them out to be human in at least some respects. [5] 

Jack may not be totally sane, but he does seek to maintain his human appearance. When he is not off hunting pigs, stealing fire, or torturing kids, Jack is seen giving plentiful rations to his own people and his enemies’, not as an illusion, not to bait them, but to appear in some way humane, to preserve what remains of his character. In fact, Jack invites Ralph and his friends to join his tribe rather pleasantly, offering them food and protection, all in a friendly tone, no force necessary. It is only later, when he has been confronted, that he compels Samneric to join the tribe by force. While this may be the last of his humanity, it does not change the fact that he is still savage. Having regressed completely to the beginning, Jack is now like his hunting ancestors, hosting ritualistic dances centered on sacrifices, complete with disturbing chants and entrancing rhythms. Jack has become so ill, so neurotic, so sadistic, that he has nearly fallen out of touch with reality, becoming more of a black hole than a human, sucking up all light, drawing in all that is good. Even pure-hearted Ralph and Piggy succumb to his darkness, joining one of the rituals and eventually killing their friend Simon in cold blood. Ultimately, Jack has become a deranged, sadistic neurotic.

In conclusion, to use the wise words of Piggy, “[P]eople [are] never quite what you thought they were” (Golding 49).


(Retrieved from Stephen Glazier’s Word Menu)

Acting out- Unconscious expression of previously repressed feelings through specific behavior
Aggression- Hostile, destructive behavior towards others
Death-instinct/Thanatos- Destructive, aggressive compulsion to achieve nonexistence
Defense mechanism- Any of various mental processes, including… displacement,… projection,… reaction-formation, regression, repression,…, used by the ego for protection against instinctual demands and to reduce anxiety
Disinhibition- Removal of inhibition (process of stopping an impulse)
Ego- Reality-oriented, structured component of personality that enables individual to function autonomously in the world
Ego-ideal/Superego- Aspect of personality involving conscience, guilt, imposition of moral standards, and introjected authoritative and ethical images
Guilt- Recurrent feeling of self-reproach or self-blame for something wrong, often something beyond one’s control
Id- Unconscious, unsocialized component of personality, containing unexpressed desires and motivations and driven by pleasure principle
Neuroticism- Emotional disorder involving basic repression of primary instinctual urge and reliance on defense mechanisms that results in symptoms or personality disturbance
Reaction-formation- Defense mechanism involving denial of unacceptable unconscious urges by behavior contrary to one’s own feelings
Regression- Defense mechanism involving return to behavior expressive of earlier developmental stage, usu. due to trauma, fixation, anxiety, or frustration
Repression- Defense mechanism in which threatening or unacceptable ideas or urges are forgotten
Sadism- Condition in which pleasure, esp. sexual, is derived from inflicting pain on others


[1] Fromm, The Anatomy of Human Destructiveness, pp. 107-8
[2] Qtd. in Fromm, id., p. 492
[3] Id., p. 325
[4] Id., p. 323
[5] Id., pp. 329-30


For further reading: 
A General Introduction to Psychoanalysis by Sigmund Freud (1975)
The Anatomy of Human Destructiveness by Erich Fromm (1992)

Civilization and Its Discontents by Sigmund Freud (1929)
Instincts and Their Vicissitudes by Sigmund Freud (1915)
The Ego and the Id by Sigmund Freud (1923)
Lord of the Flies by William Golding (2011)