Do Babies Exist?

My friends and I were sitting on the deck one summer afternoon, sipping Cokes by the pool while discussing different philosophical matters. It was a hot day, and I was introducing Descartes’ philosophy to them—as any normal person in an everyday conversation does—and explaining why it was important and what it meant for us. I set it up like this: He asked if his whole life were an illusion, a dream, and if there were an Evil Demon that was deceiving him, causing his senses to be misleading. It is impossible, I explained, to distinguish between waking reality and a dream, according to Descartes. However, searching for a first principle, a single starting point of knowledge, he realized he had been thinking this whole time. The process of questioning whether he was in a dream presupposed that there was a questioner doing the questioning. This led him to remark, “Cogito, ergo sum,” or “I think, therefore I am.” By doubting all his senses, he was led to the conviction that he could not doubt that he was doubting in the first place; for otherwise, he would not be able to doubt: He would have to exist first before he could be deluded.

After hearing this, my friends seemed pretty convinced and pondered it a bit. Out of nowhere, one of them said, “Well, babies aren’t self-conscious.” A pause. “So do babies exist?” Taken aback, unprepared for such a response, I dismissed the notion out of hand, called it absurd, and tried to think of an answer. We began debating whether babies knew they existed, or whether they could even think about thinking. Of course, the question itself—do babies exist if they are not self-conscious?—is actually grounded in a misunderstanding: Descartes was not trying to prove his existence; rather, he was trying to prove he had certainty, something undoubtedly true. But for the sake of argument, we entertained the idea. Common sense shouts till it is red in the face, “Obviously, yes, babies exist! Only a madman would doubt their existence. I mean, we see them right in front of us—they’re right there, they exist!”[1]

This prompts the question: If we are conscious of a baby existing, yet they themselves are not conscious of themselves existing, do they exist? Babies are fascinating creatures. They are copies of us, miniature humans who must learn, through trial and error, to cope with and understand the world in which they live. Seeing as they are capable of such amazing cognitive feats as grasping cause and effect and acquiring language, investigating their conscious abilities sounded intriguing. A delve into developmental psychology, the study of how humans develop through life, yields interesting insights into this psycho-philosophical problem.

Jean Piaget was a developmental psychologist who studied children throughout the 20th century. Today, his influence is still felt in psychological literature and continues to shape thought regarding childhood development. For years he observed, tested, and took notes on children, from birth to early adulthood, using the data to devise his famous theory of cognitive development, which takes place in four stages: Sensorimotor, preoperational, concrete operational, and formal operational. The first stage, sensorimotor, lasts from birth to the age of two. During this period, the baby’s life is geared toward adjusting to the world. Babies are “thrown” into this world, to use a Heideggerian term. They are born immediately into life amidst chaos, with all kinds of new stimuli to which to react. Confused, unable to make sense of things, exposed to strange sights and sounds, the baby cries and thrashes about, trying to find some sense of security. It is bombarded all at once by sensations and experiences. It is disoriented. This is a brave new world, and it is full of data that needs to be interpreted and sorted out in the baby’s mind. In order to navigate the world, the newborn uses its motor skills and physical senses to experience things. The baby interacts with its environment, including people, grabbing with its hands, sucking with its mouth, hearing with its ears, and smelling with its nose. Imagine being in a cave for years, devoid of all sensory information, when, one day, you are let out; having forgotten what it was like to experience the world, you are overcome by the magnitude of the environment, so you try to relearn as much as possible, greedily taking in everything that you can. Being in the womb is, for the baby, kind of like being in that cave, meaning it is doing the same thing: It is getting a grasp of reality by engaging its senses in any way that it possibly can.
The baby is an empiricist who delights in its senses as though life were a buffet. Oh, there is something I can touch! Ah, that smells nice, let me smell it! While it cannot yet register these sensations, the infant uses its senses to obtain a primitive understanding. They are actively mapping out the world according to their perceptions, simple though they are. According to Piaget, babies eventually learn to pair coordination, knowledge of their body and its movement, with determination. Once they are able to effectively use their body parts in a way that is conducive to their survival, they develop their sense of where these limbs are in relation to each other, called proprioception. This allows them to use determination in regard to this newly acquired coordination. Babies can now direct themselves with autonomy and do something. However, this is a simple form of determination; it is not like the baby has free will and can decide or choose to do this or that. Whereas the baby can move toward a particular object, it cannot decide mentally, “I am going to crawl over to that thing”; it just does it out of pure, unthinking volition.

At three months, a baby can sense emotions and, amazingly, recreate them. Seeing their parents sad, an infant can react with a fitting response, such as being sad themselves. By being able to tell what someone is feeling, the baby can imitate them, showing that the baby has at least a simple precursor of empathy. Around this time, too, the baby actively listens to its social scene, picking up on spoken language. It is incredible (in both senses of the word) because it is now that the infant unobtrusively and quietly internalizes and processes everything it hears like a sponge, learning speech cues, such as when to talk and when to pause; the rhythms of speech, including cadence; vocabulary; and nonverbal communication, which makes up the majority of social interaction. Here is a tiny little human just crawling around the house on all fours, crying, eating, and going to the bathroom, all the while actually learning how to speak—who could possibly fathom what is going on in that small, undeveloped mind! A little earlier, around two months usually, the baby already shows signs of early speech when it babbles, uttering nonsense sounds in an attempt to imitate speech it is not yet complex enough to reproduce entirely. Four to five months into development, the baby can understand itself as a self-to-Others, or a self-as-viewed-by-Others: I have my own image of myself, but I understand that I am perceived by other people, who form their own images of me. One study shows that, from four to nine months, the infant has changing patterns of involvement in play. In the earliest stage, the baby will, if approached by the parent, play peekaboo. Because they have not yet learned that things exist independently of them in time, babies think that the parent disappears when covered, and are surprised to find they are still there.
A few months later, at nine months, the baby is able to take on the role of the initiator who wants to play peekaboo, instead of the responder who will play peekaboo if asked. This shows that babies learn to combine determination with intention (Bruner, 1983).

Just three months later, when the infant is officially one year old, it achieves a self-image. Looking in a mirror, it can recognize itself and form an early identity. Like chimps, babies can now respond to themselves as an actual self in the mirror, noticing, for example, a mark on their forehead and realizing that it is not on the mirror, but on themselves. Between 14 and 18 months, an infant is able to differentiate an Other’s intentions from their own (Repacholi & Gopnik, 1997). Children like to think in terms of their own desires. If a kid wants a cookie, they act on their desire. Thus, at 14-18 months, they can distinguish Others’ desires as different from their own. Within this period, the baby can also tell that it is being imitated by someone else: If a parent mimics something the infant is doing, the infant knows its own behavior is being shown to it. Finally, the 18-month marker designates when the baby begins its sentences with the first-person “I.” With a sense of self, the infant is able to roleplay, in which it takes on new identities, or roles, and is able to play “as them.” Second-order emotions, also known as self-conscious emotions, like shame and embarrassment, arise in the child at this time, too. Children possess some semblance of self-consciousness.

After the sensorimotor stage is what Piaget called the preoperational stage, which takes place between the ages of two and seven. It is at this stage that the child constructs their own world. Through the process of assimilation, the toddler creates mental schemas, mini blueprints conceived in their minds, frameworks by which reality is processed and then made sense of, allowing them to structure reality in a way that is useful to them. When a new experience is undergone, it is made to fit the pre-existing schema. Because these schemas are very simple and basic, they are obviously inaccurate, although that is not the point of them; they are not supposed to be innate categories of the mind, as Kant would have thought of them, but early hypotheses made from the little experience gathered by a child. One time, my cousins came over to play video games; we were playing a level in Lego Indiana Jones where we had to drive around on a motorcycle chasing cars. My cousin’s little brother pointed excitedly at the cars zooming down the streets, exclaiming, “Doo-doo!” I hopped on a motorcycle and chased after them, only for him to look at the motorcycle and, again, shout, “Doo-doo!” My cousin and I tried to tell him that a car and a motorcycle were two separate things. In his mind, he saw a moving vehicle with wheels, so he created a mental schema. Anything that fit under that description—a moving vehicle with wheels—would be considered by him to be a “Doo-doo”—in this case, both the car and the motorcycle, despite their being different things. This illustrates that schemas are not always accurate; they are for classifying and categorizing things. Of course, this leads to a new process observed by Piaget: Accommodation. We come to an age where we discover that our schemas are inadequate because they do not fully represent reality.
As such, we have a kind of “schematic crisis,” as we are met with an anomaly, something which sticks out, something which does not fit with our prevailing theory. Hence, we must remodel our thinking. Consequently, we are forced to find a way to reconcile the already-existing category with this new piece of data, either by broadening the schema or by creating a new one altogether. Babies thus learn to make more accurate classifications as they learn new things and create new schemas with which to interpret reality. Once these schemas are built up, the child is able to engage in organization, through which they order their schemas. Some are judged to be more inclusive or exclusive than others, and so are coordinated on that basis. In the case of my cousin’s little brother, he would have to organize his schemas like this: Broadly, there are vehicles, under which we might find cars and motorcycles as types, which can themselves be expanded upon, for each comes in different kinds. This way, reality is structured in levels, or hierarchies, not necessarily of importance, but of generality and specificity. Organization is a synthesis of assimilation and accommodation. All this schematizing segues into the next point, namely that in making sense of the world, we give sense to it.
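For readers who think in code, the assimilation-accommodation cycle can be loosely sketched as a classifier that either fits a new experience into an existing schema or, when met with an anomaly, creates a new schema. This is only an illustrative analogy, not anything Piaget wrote; the schema names and feature sets here are invented for the example:

```python
# A loose, illustrative model of assimilation and accommodation.
# Schemas are simple feature sets; all names below are invented.

schemas = {"doo-doo": {"moves", "has_wheels"}}  # the toddler's crude vehicle schema

def classify(thing_features):
    """Assimilation: fit a new experience into an existing schema if that
    schema's features are all present; otherwise accommodate by creating
    a new schema for the anomaly."""
    for name, features in schemas.items():
        if features <= thing_features:   # every schema feature is present
            return name                  # assimilated into the old schema
    # Accommodation: nothing fits, so a new schema is built
    new_name = "schema-%d" % len(schemas)
    schemas[new_name] = set(thing_features)
    return new_name

# Both a car and a motorcycle fit the same crude schema...
print(classify({"moves", "has_wheels", "four_wheels"}))  # doo-doo
print(classify({"moves", "has_wheels", "two_wheels"}))   # doo-doo
# ...but a bird does not, forcing accommodation
print(classify({"moves", "has_wings"}))                  # schema-1
```

Note that the sketch only ever broadens the toddler's repertoire by adding schemas; real accommodation, as described above, can also split or refine an existing schema, which is where organization comes in.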

The preoperational period is characterized by symbolic representation in toddlers. In philosophy, the study of meaning and symbolism is called semiotics, and, interestingly, it is closely related to what babies do. Life is separated into two concepts: Signs and symbols. Signs are fixed things—concrete objects. Symbols are relative meanings—abstract values—usually assigned to signs. While every car I see is always a car, its meaning is not always the same and is liable to change. For some, it can represent, can be symbolic of, freedom, if you are a teen just getting your license; transportation, if it is how you get around; dread, if you hate road trips or have to wait hours during your commute. The point is, everyone sees the same sign, but for everyone the symbol has a different meaning. Preoperational toddlers are able, then, to understand objects not just in their literal, concrete sense, but as standing for something, as abstract and meaningful. Babies are not passive, as I have said, but on the contrary very much, if not entirely, active. By interacting with the world around them, they experiment, learn, and conceptualize. Around three years, the baby is fully capable of speaking, feeling, having motives, and knowing the relation of cause and effect.

One of the consequences of Descartes’ Cogito is its resulting solipsism: The thinker, the Cogito, is only able to prove his own existence, whereas Others’ existences are uncertain. Is this a requisite for existence? Is self-certainty a necessity? If so, the case is a difficult one for babies. Controversially, Piaget proposed that babies are egocentric; his theory is widely contested today in psychological circles. The meaning of egocentrism can be guessed by looking carefully at the word’s roots: It means self-centered; however, it is not self-centeredness in the sense of being prideful, selfish, and concerned with oneself, no—it is more closely related to anthropocentrism, in the sense that the self is the central point from which all other points are judged or perceived. For this reason, Piaget suggested that infants can only see things through their own perspectives, not through Others’. You may be wondering why I sometimes have been capitalizing “Other.” Philosophically, the problem of egocentrism is closely related to solipsism, resulting in what is called “the problem of Other Minds,” which is the attempt to prove the existence of selves outside our own; because we are uncertain of their existence, they are called “Others,” giving them a kind of external, foreign connotation. I digress. Babies, so thought Piaget, are unable to take Others’ perspectives, so they must rely on their own. To do this, they reason from self to Other. Infants’ egocentric tendencies, when combined with their inability to acknowledge objects as existing permanently outside of them, lead to a subject-object dualism, a subjective idealism, in which the self is distinguished and utterly separated from the physical world. It becomes “my” viewpoint, or “your” viewpoint, subjective, relative. As long as I look at an object, a toddler thinks, it exists.
And yet, the toddler also has a social self, which it develops through its interactions with other children. Many psychologists have claimed that, by playing, children are able to acknowledge the existence of not just Others, but Others’ emotions. It is evident in roleplaying, where children pretend to be someone they are not and act accordingly, placing themselves within a new self, which they adopt as their own. They interact with the other children, whom they see as someone else, whom they acknowledge and actively engage with, responding to how they are treated and sensing emotions.

A dominant, popular theory that attempts to refute Piaget’s egocentrism is “Theory of Mind” ([ToM] Wellman, 1990). Wellman found that babies develop an awareness of Others at the age of three, when they operate on belief-desire reasoning. Motivation for kids consists of a belief, what they know, and a desire, what they want. A child might be motivated to have a cookie because they know where the cookie jar is, and they are hungry for one. Using this kind of reasoning, the kid attributes their own intentions to another. Looking at his playmate, the toddler assumes, “Well, I want a cookie, and I know where they are, so this kid, like me, because he has the same beliefs and desires as I, must want a cookie, too.” Is it faulty and inaccurate? Wildly. Does it make sense, realistically? Yes. The Theory of Mind is a primitive form of empathy, a kind of empathetic stepping stone. It is simple and selfish, because it assumes that other children have the same beliefs and desires. One often sees this in children trying to console one another: An infant sees another crying, and, because he takes comfort in eating ice cream, believes the other will take comfort in it, too. Critics like Vasudevi Reddy object that Theory of Mind is too detached from actual interaction and ends up attributing one’s own self-certitude to another, resulting in what she calls a “Neo-Cartesianism” of sorts. It promotes solipsistic thinking by denying the existence of an independent thinker with emotions, instead attributing to them one’s own ideas, thereby increasing a toddler’s dualistic thinking.

According to Reddy, a baby’s communication with Others already presupposes intersubjectivity, or being involved with people on a personal level. Babies are self-aware to an extent at birth because, the argument goes, the baby is able to distinguish itself from the world around it. To act is to know both the self and the object. It is similar to Fichte’s philosophy, in which the Ego becomes aware of itself by recognizing everything that is not the Ego, creating the Non-ego; in other words, it is through the Non-ego—the world—that the Ego knows itself. The world, or Non-ego, is created purely with the intent of being a moral playground for the Ego. Following from this is the idea that the baby, coming into contact with the world, immediately knows it as not-itself, and so uses it as its playground, activating all its senses to learn about reality. If we could not tell the environment apart from ourselves, and we thought ourselves a part of it, how could we act independently of it, with our senses? This is an argument against Freud and Piaget, who both said newborns cannot tell themselves apart from the world. As a solution to egocentrism, psychologists found that parents play an important role early on: Parents should teach their children early to differentiate self from Other. Too much similarity between baby and parent means more egocentrism later in life, which is harder to unlearn. Reddy’s solution is to avoid Cartesianism and Theory of Mind and instead pursue a second-person perspective, one between I-and-Thou, You-and-I. This way, there is direct access to another’s intentions. Babies, through play, function on this second-person level by directly interacting with their peers. For Piaget, babies achieve consciousness when symbolism and schematism come together as one to create meaningful representations. An understanding of how things fit together and how they function is what Piaget considers consciousness.
On the other hand, metacognition, the ability to think about thinking, does not arise until the age of 11, Piaget’s formal operational stage.

The following are milestones in the evolution of a baby’s cognitive abilities, summarized in eight chronological key events:

  1. Coordination
  2. Self vs. non-self
  3. Know special/loved people
  4. Know + respond to name
  5. Self-image
  6. Pointing to objects (symbol)
  7. Use “I” in sentences
  8. Know Other Minds

So, to answer my friend: The question of whether or not babies exist is not as straightforward as one might think. It could be argued that babies exist when they are one, when they establish their self-image for the first time, and thus are, in one way or another, conscious of themselves. Or it may be that babies exist once they turn 18 months, when they can use “I,” roleplay, and experience reflexive emotions. Here, babies are aware of themselves as actors, are willing to play with others and take new perspectives, and are able to perceive how they are themselves perceived by others. Yet then again, it is possible that it is only when metacognition is possible, when we are able to doubt that we are doubting, when we are able to posit a hypothetical Evil Demon trying to deceive us all, that we exist—in which case… babies do not exist at all! Do only children and preadolescents and onwards exist? Maybe when we are born, we do not exist, we are in a state of utter nonexistence and non-being, and it is only when we reach 11 that—POOF!—we magically pop into existence.


[1] This is obviously a satirical question. Babies do exist. It is more of a thought experiment, an armchair-philosophy problem. I found the comment so outrageous that it was funny, and I thought it made for a perfect reason to research whether babies are conscious.


For further reading: How Infants Know Minds by Vasudevi Reddy (2008)
Developmental Psychology, 8th ed., by David R. Shaffer (2010)
The Secret Language of the Mind by David Cohen (1996)
The Science of the Mind by Owen J. Flanagan, Jr. (1984)


Philosopher Clerihews

Invented by Edmund Clerihew Bentley, the clerihew is a poem composed of two rhyming couplets with the scheme AABB, wherein a famous person is mentioned in the first line, and the last three lines relate an accomplishment, failure, biographical detail, anecdote, rumor, or joke about them. Contrived, silly, and fun to read, these humorous poems can actually be quite educational while still being entertaining. I was inspired to write my own after reading some of Jacques Barzun’s clerihews on philosophers. Following are 16 clerihews on different philosophers. I have tried my best to make them concise summaries of their philosophies!






Henry David Thoreau
Was a very thorough
Observer of nature
Who used botanical nomenclature


Martin Heidegger
Conceived upon his ledger,
That what was once concealed
Would in a new beginning be revealed


Michel Henry
Did French phenomenology
And he into life inquired
Whence he from interiority acquired


Friedrich Wilhelm Nietzsche
Tried to preach the
Death of God, and of the slave morality
Favoring instead: Übermensch mentality


Arthur Schopenhauer
Believed in the instinctive power
Of the blind Will-to-Life,
So his pessimism was rife


Epictetus
Had to accede this:
Some things are outside our control
So with the punches we must roll


Edmund Husserl
Made unfurl
In his phenomenological prolegomena
The bracketing of experienced phenomena


Plato, or Aristocles,
Had found the keys
To the fundamental reality,
Which was actually ideality


Socrates
Did not like Apologies
So he rushed out of the cave
And made dialectic all the rave


John Stuart Mill
Had had his fill
Of individual liberty:
He used it as a Utility


Thomas Kuhn—
Why’d you have to ruin
All of scientific history
By reducing it to anomalistic mystery?


Søren Kierkegaard
Was the first of Existential regard
Whose melancholy made him weep
And whose faith made him take a Leap


Thomas Hobbes
Was moved to sobs
When he found life was short
And served the Leviathan’s royal court


Blaise Pascal
Was a real ras-cal
Who liked to gamble
In his theological preamble


John Locke
Pictured a rock
And said it was qualities, primarily
Conceived on a blank slate, summarily


George Berkeley
Said, “Esse est percipi,”
Meaning he couldn’t find
Anything outside his mind

Should I write more philosophical clerihews? Maybe in other subjects as well, like history, literature, and psychology? Make sure to leave your own in the comments, and I’ll be sure to read them!


A Very Short History of the Dream Argument

Dreaming is an integral part of our lives, occurring every night when we are asleep. While the body relaxes, the brain stays active, creating a stream of thought, a stream that comes from the unconscious. Recent research into “lucid dreaming” shows that people can learn to control their dreams, to place themselves within their illusory world, letting them make their dreams a kind of reality; however, lucid dreaming, as cool as it is, presents a troubling problem, one that has intrigued humans for millennia: How do we know for certain we are not dreaming right now? How do we distinguish our consciousness, our awareness, from the unconscious, the unaware? Are we actually asleep at this moment, life but a mere string of thoughts and sensations?

Defining dreaming and consciousness will help, as both concepts, simple though they may seem, are highly complex, each with its own requirements, psychologically and philosophically. Consciousness refers to “the quality or state of being aware especially of something within oneself”; in other words, consciousness refers to the realization or acknowledgement of the mind and its inner workings.[1] If you acknowledge that you are reading right now, you are conscious of yourself as reading, so consciousness is always consciousness of something, be it an activity or a mental state. The American psychologist William James thought consciousness was not an existent thing, likening it instead to a stream, a series of experiences, one after the other, each distinct from the last. Neurological studies later treated consciousness, the brain’s awareness, as a process within the brain itself, with some research pointing to the thalamus. Dreams, on the other hand, are defined as “a succession of images, thoughts, or emotions passing through the mind during sleep.”[2] Dreams are private and specific to each person, which makes a “remembered” dream difficult to verify, considering it cannot be proven true or false. It is therefore difficult to differentiate the waking state from the dream state, insofar as both are collections of experiences.

Many philosophers, dating from the 5th century B.C. to the modern day, have attempted to tackle the “Dream Argument,” trying to prove that we are in fact awake and living consciously. For example, Plato mentions it in a dialogue: “How can you determine whether at this moment we are sleeping, and all our thoughts are a dream; or whether we are awake, and talking to one another in waking state?”[3] Socrates was interested in finding out whether our senses are reliable, whether what we see, hear, taste, feel, and smell is real or a figment of our active minds. Perhaps when we fall asleep, when our brains switch to R.E.M., when we dream, there is a dreamer dreaming this dream. Another philosopher, the 17th-century René Descartes, famously proposed in response to the Dream Argument, “I think, therefore I am.” Descartes imagined that his whole life might be an illusion, a trick played on him by a deceiving being, and that he was misled into believing in reality. He started to doubt everything, including his senses; but one thing he could not possibly doubt was his existence, his self, because in order for him to doubt, there had to be a him to doubt in the first place!

Even though some of the greatest thinkers could not refute the Dream Argument conclusively, at least we know from science that we exist, that dreams are just processes happening in the brain, and that reality is as real as it gets, dreams being a product of our imagination… unless we actually are dreaming, just waiting to be woken.



[1] “Consciousness.” (January 19th, 2017)
[2] “Dreaming.” (January 19th, 2017)
[3] Plato, Theætetus, 158d


If you have a lot of free time:


The Breath and Mindfulness

How many times have you gone for a run, and, a mile in, you reach your prime, and you feel unstoppable, your legs like automatic machines pumping, arms swinging by your sides, only to feel a pain in your chest, a heavy feeling in your lungs, sharp, managing just short breaths? Or what about getting ready to present in front of an audience, all their eyes on you, expectations hanging above you like the sword of Damocles, your reputation on the line, and you find yourself pacing nervously, breathing in and out shallowly? Or when you try to hold your breath for as long as you can underwater, cheeks puffed out, pressure building up, rising, inside your mouth and lungs, till it is enough to make you burst, so that you pop up to the surface fighting for air, gasping, thankful each time you get to swallow? In each of these common, everyday instances, there runs a common theme: The importance of the breath. Just as these occasions are ordinary, so breathing is something we do daily, though we rarely give it attention. Constant, unchanging, it remains with us throughout the day, even if we do not heed it, dependable, vital. Despite being something we do around 20,000 times a day, breathing is, for the most part, subconscious, an effort produced by the brain because it has to be done, rather than because we will it. It is only after a workout, for example, when we push ourselves, that we find we have power over it, and really feel a need for it. However, the breath is much more important than we believe. For thousands of years, the breath has remained an essential part of our cultures, West and East, ranging from the Vedic writings of India to Ancient Greek philosophy to modern-day Buddhism and mindfulness practices, which have tried to bring back an ancient appreciation of the breath.
In this blog, I will discuss the physiology of breathing, its philosophical and meditative significance, and how it can help in daily life.

Beginning with the physiology is essential because one often appreciates something more when one knows how it works; and also because, once one understands how something operates, one is more aware of how to improve it. The process of breathing, although covered in school, is not always covered in detail. Respiration, or ventilation, is the act of inhaling fresh air and exhaling stale air. It is an exchange. The purpose of respiration is to exchange carbon dioxide (CO2) for oxygen (O2), the former a waste product that is harmful in excess, the latter essential, hence the need to get rid of CO2 and bring more O2 into the body. While you can go weeks without food and days without water or sleep, you cannot go more than a few minutes without air—that is how vital it is. Beneath the surface, the process of inhalation goes like this: Together, the diaphragm, located between the abdomen and thorax, or chest, and the intercostals, which are muscles between the ribs on either side of the lungs, contract, allowing the lungs to expand. A dome-shaped muscle, the diaphragm flattens out, and the intercostals move up and outward, expanding the total area in the chest. Near the neck and shoulders, the sternocleidomastoid (a real mouthful!) moves the clavicle—the collarbone—and sternum, in harmony with the scalenes, all of which contract upward, opening up the chest farther. Put together, these actions make room for the lungs to expand. The chest cavity increases in volume, as do the lungs, so the pressure inside them drops below the external air pressure, causing a suction effect: Air is drawn in. Exhalation is the opposite: The diaphragm relaxes, and the interior intercostals move down and in with the abdominals and obliques, shrinking the chest and thereby decreasing the volume of the lungs; now the pressure inside the lungs exceeds the pressure outside, and air is pushed out.
Like a rubber band, the lungs remain passive throughout respiration. Instead of thinking of the lungs as actively sucking in air, it is better to think of them as passive bands that are either stretched or released. The lungs are big pink sponges, colored so because they are full of blood vessels, inflatable because they are full of air-filled branches ending in alveoli, where air is stored. Extending from the collarbone to the diaphragm, both lungs are divided into lobes. The right has three lobes, the left only two, since it leaves room for the heart. Pleural membranes surround the exterior of the lungs, coating them with a fluid that lets them expand and contract smoothly, reducing friction during inhalation and exhalation. How does the air get from your mouth and nose to your lungs? Air passes from the nasal cavity and mouth to the pharynx, which is essentially the throat, whereupon it goes down the larynx, better known as the voicebox—where your voice is produced—before moving down the trachea. Here, it comes to a fork, two bronchi, left and right, each extending into secondary bronchi, then tertiary bronchi, and finally into bronchioles, at the ends of which are small sacs called alveoli. This branching takes place in the lungs, and because it physically extends downward, resembling an upside-down tree, it is referred to as the “bronchial tree.” A flap of cartilage lies between the pharynx and larynx. It is the epiglottis, and when relaxed, it lies up against the throat, opening the passage of air; however, when it contracts, such as when swallowing, it acts like a drawbridge, moving down over the larynx, blocking anything unwanted. The job of the epiglottis is to let only air pass. All of these muscles are involved in subconscious breathing. More muscles are activated during exercise, as extra help is needed to speed up the process. 
At the bottom of the brain, the respiratory center stimulates the diaphragm and intercostals based on CO2, O2, and muscle stretch receptors. Chemoreceptors monitor the blood, and if there is too much CO2 or too little O2, they alert the medulla oblongata, which tells the body to breathe faster and deeper. As we know, much of breathing is subconsciously controlled, its rate and depth preset by the brain and altered when necessary, but we also have voluntary control over it. At rest, we breathe about 12-15 times per minute, and twice that amount or more during exercise. Each resting breath displaces about 17 fl. oz. (0.5L) of air; a forced breath can take in roughly 70 fl. oz. (2L) more, and the total we can exchange in one full breath comes to about 150 fl. oz. (4.5L). The air we breathe is 78.6% nitrogen, 20.9% oxygen, 0.4% water, 0.04% carbon dioxide, and 0.06% other gases. Accordingly, a lot of nitrogen is taken in, more than is needed, yet it is harmless to us, only posing a threat when we dive under pressure, when it dissolves into the blood and can form bubbles as we ascend. Luckily, our system is made to take in the right amount of oxygen we need. Of our total lung capacity, only about 10% is used in subconscious breathing. We always have at least 35 fl. oz. (about 1L) of air left over despite having a total capacity of 204 fl. oz. (5.8L), meaning we never exhale all the air in our lungs, even if we try our hardest. The average breath moves about 17 fl. oz. (0.5L), but we have a reserve capacity of extra air in case we need it.
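As a quick sanity check, the figures above can be combined: resting rate times volume per breath gives the air moved per minute. A small Python sketch; the variable and function names are my own, and the round numbers are simply those quoted in the paragraph above:

```python
# A back-of-the-envelope check of the figures quoted above.
# All volumes are in liters; the names are illustrative, not standard.
TIDAL_VOLUME = 0.5      # air moved per quiet resting breath
TOTAL_CAPACITY = 5.8    # total lung capacity

def minute_ventilation(breaths_per_minute, volume_per_breath=TIDAL_VOLUME):
    """Liters of air moved per minute at a given breathing rate."""
    return breaths_per_minute * volume_per_breath

# At rest (12-15 breaths/min) we move roughly 6-7.5 L of air a minute,
# and each breath uses only about a tenth of total capacity.
resting_low = minute_ventilation(12)            # 6.0 L/min
resting_high = minute_ventilation(15)           # 7.5 L/min
fraction_used = TIDAL_VOLUME / TOTAL_CAPACITY   # roughly 0.09
```

Nothing deep here, just arithmetic: the 10% figure in the paragraph falls straight out of dividing the resting breath by total capacity.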

Meditation and running are a great combination because the two complement each other. Both value the breath and call for relaxation, which in turn strengthens oneself. To practice the two together, it is advised that you run at “conversational pace,” a pace at which you can comfortably sustain a conversation with someone else and not feel out of breath. While breathing at this pace, you should breathe from the bottom up, not the top down as we instinctively do, for gas exchange is concentrated in the lower lungs. Shallow breaths from the chest deprive you of oxygen since not enough gas exchange is involved. Slow breaths from the diaphragm, at the bottom of the chest, near the stomach, will help you stay energized, prevent cramps, and focus you. Another important tip is to make your exhale longer than your inhale. A short exhale leaves stale air behind in the lungs, which every now and then can interfere with your breathing rhythm and contribute to a cramp. By exhaling longer than you inhale, you not only reduce the chance of getting a cramp, but you also get a deeper, more rhythmic breathing cycle. In the traditional philosophy of Yoga—not modern-day Yoga, with the stretches—the regulation of breath is called prāna vritti. Central to its teachings is prānāyāma, or expansion of the vital force, prāna being Sanskrit for breath or vital force, āyama for vertical or horizontal expansion. Yoga training in prānāyāma requires that you first master āsana, posture, before moving on to breathing, such that proper breathing is only enacted after achieving proper posture. 
Āsana involves straightening the spine so you are erect, a straight line able to be drawn from head to hips; opening up the chest, allowing the lungs to expand naturally; pulling the shoulders back between the scapulae, or shoulder blades, thus enlarging the chest cavity; and relaxing the whole body, releasing all tension from the muscles. The spine represents Earth, the empty space in the torso Ether, respiration Air, and Water and Fire, being diametrically opposed, together represent the life force (prāna). Therefore, all of nature is manifest in the body as a sacred unity, a gathering of the Elements. Once āsana is practiced sufficiently, one can move on to prānāyāma, where one is instructed to apply attention to the breath. Sahita prānāyāma is one specific technique that involves inhaling (pūraka), retaining (kumbhaka), then exhaling (recaka), each of which is equally prolonged. Each stage should last as long as the others, usually held for a few seconds and lengthened gradually with practice. You should sit either on a chair or on the ground in a comfortable position, get into āsana, properly aligned and erect, then breathe in for a few seconds, retain the breath for the same length, exhale for the same time, and repeat. It is similar to “box breathing,” a technique used by Navy SEALs, who inhale for four seconds, hold for four, exhale for four, and wait four before inhaling again—perhaps it was based on the ancient practice of sahita prānāyāma. By thus controlling the breath, you give it a regular rhythm. According to Yogic texts, there are five breaths, or vital airs: 1.) Prāna, centered in the chest and heart; 2.) Apāna, seated below the navel; 3.) Samāna, at the navel, governing digestion; 4.) Udāna, in the throat and head; and 5.) Vyāna, which pervades the whole body, distributing the life force through the circulation. 
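The equal-phase cycle described above can be pictured as a simple pacer. This is only an illustration under my own naming, not anything from a Yoga text; retention (kumbhaka) is treated as a phase like any other, and box breathing would simply add a fourth, empty hold:

```python
import time

def sahita_pacer(phase_seconds, cycles, pause=time.sleep):
    """Cue equal-length inhale (puraka), retention (kumbhaka), and
    exhale (recaka) phases, returning the sequence of cues given.
    The function name and structure are illustrative, not traditional."""
    cues = []
    for _ in range(cycles):
        for phase in ("inhale", "retain", "exhale"):
            cues.append(phase)
            pause(phase_seconds)  # stay with this phase for its full length
    return cues

# sahita_pacer(4, 10) would pace ten cycles of 4-second phases,
# a two-minute sitting in the style of box breathing minus the empty hold.
```

Passing a custom `pause` function (instead of `time.sleep`) lets the same cycle drive a chime, a vibration, or a test harness.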
The aim of all this is to slow the breath until it resembles the breath of sleep, when the mind drifts and, according to the tradition, one can see into the absolute state of consciousness, “continued consciousness.” Just as we instinctively take shallow breaths as a habit, so we must learn to turn controlled, rhythmic breathing into a subconscious, instinctive habit. Through our days, we should come to notice that we are breathing deeply and steadily by habit, without having to think about it—the opposite of the shallow breathing we normally default to.

Other traditions, too, outside of Indian philosophy, practice extension of the breath. The Chinese philosophy of Taoism, in T’ai Chi, has a practice called “embryonic respiration,” whereby the breath is sustained for the goal of a longer life, ch’ang shen. It was thought that the breath gave the power of immortality; if one could hold one’s breath for 1,000 seconds, one would become immortal. Obviously, the breath was taken very seriously, and it was trained rigorously. Other benefits attributed to the breath were the ability to walk on fire, to not drown, and to cure sickness by expelling bad humors and airs. Islam and Hesychasm in the East also have breathing practices. Sufis perform Dhikr, a kind of devotional prayer that is immensely private and isolated, always involving the breath. Ancient Greek philosophy held air to be vital as well. One of the first philosophers, the pre-Socratic Anaximenes, held that the arche (ἀρχή) of the world, the single element from which the Cosmos and everything in it was made, was Air. A monist, he, like Thales and Anaximander, believed a single element was the basis of reality. Air, he taught, was concentrated in the breath, which functioned as man’s psyche (ψυχή), or soul/spirit, whence came “psychology.” Although its origin is widely debated, the saying “Bless you” has been proposed to have come from an Anaximenes-influenced Ancient Greece: A sneeze was thought to expel the breath, which was synonymous with the soul, so people would say “Bless you” to keep the soul inside the body. A couple of centuries later, the Stoics posited the existence of two principles in Nature, one passive, the other active. Pneuma (πνεῦμα), translated as breath, was conceived to be the active principle, a sort of fiery air mixed into the breath that pervaded reality. From it, we get words like “pneumatic” and “pneumonia,” all relating to the breath.

Today, the breath is becoming the center of attention again in modern mindfulness practices. It is well known that good oxygenation has many health benefits, such as lowering stress, improving clarity and mood, easing negative thoughts, and grounding oneself in the present.[1] Buddhist writers often identify the breath as an “anchor,” something to return to when distracted, to shift to in order to be present, to consult when invaded by thoughts. Some of the thinking is: If you can notice, appreciate, and love something as small, precious, and minute as the breath, then you can surely extend that attention and love to everything else in life, big or small. In other words, if you can appreciate the simplicity of the breath, then you can also appreciate, for example, the simplicity of a tree, or the smell of the coffee you make every morning, adding a depth to everyday life, an added layer of meaning. The central teaching of both Buddhism and Zen regarding the breath is to notice. You just have to acknowledge at any moment, “I am breathing”—nothing else. To stop in the middle of the day, halting whatever you are doing, and notice the breath, to just know and be conscious of the breath, is to appreciate it, considering we move through our days like automatons without ever giving notice to our unsung breaths, without which we could not live. During mindfulness meditation, the goal is to feel the breath, passively, observantly, unobtrusively. The feeling of the breath as you inhale and exhale, as it comes in through your nose, down your throat, down the bronchial tree, and out the mouth—this is what we must pay attention to. A particular Zen practice calls for beginning practitioners to count the breath—the in-breaths and out-breaths together, only the in-breaths, or only the out-breaths. 
Whichever you choose, it is advised that you count up to a number like 10 before restarting; and eventually, once the count is ingrained enough, having been trained many times, you will not have to say it aloud or mentally voice it—your breath will naturally fall into rhythm. In conclusion, what can be said is this: While both Yoga and Buddhism attribute great importance to the breath, they differ in their approaches to it, Yoga’s being to control the breath, to apply rhythm, to attune the breath voluntarily; Buddhism’s being to notice the breath, to watch it, to fully and intentionally be present with it; one is active, the other passive in its method. Nature is the perfect place to be mindful of the breath. Simply stand, the sun shining down on you, leaves blowing around, and be mindful of the fact that as you exchange CO2 and O2, you are actively engaging with the trees around you in a mutual, symbiotic exchange, each giving life to the other. You, the trees, and the animals and wildlife are all interconnected, sharing the eternal breath.
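The counting practice above amounts to a simple repeating cycle. A toy sketch, with names of my own invention rather than anything from a Zen text:

```python
def breath_counts(n_breaths, restart_at=10):
    """Label each breath 1..restart_at, then restart at 1,
    as in the Zen counting practice described above.
    (Illustrative helper; not a traditional formulation.)"""
    return [(i % restart_at) + 1 for i in range(n_breaths)]

# Twelve breaths counted to 10 wrap back around: ..., 9, 10, 1, 2.
```

The point of the practice, of course, is not the arithmetic but that the count eventually drops away on its own.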

Personally, when I do mindfulness meditation, despite having read about the importance of the breath, I never feel anything special. I never get what they mean by “appreciating the breath,” no matter how hard I try: I try to “feel” the breath as I inhale, then lose it as it moves past the nasal cavity, wondering where it went, then exhale through my mouth—monotonous, uninteresting, without any specific feeling. Hence, I usually focus on using my senses rather than on the breath. Recently, however, I discovered that an appreciation of the breath through mindfulness can be achieved in another way, one more suited to my subjective tastes, where I can truly be alone with it and feel its benefits:

It was 78ºF on a Saturday morning, unbearably hot for a weekend in January, and I was with my fellow runners at track practice. We were all exhausted. We had only just warmed up, yet we were already sweating, all of us taking off our jackets and sweats and putting them on the turf. Our coach gathered us, backs to the sun, and announced fatalistically, “You will be doing 5×300’s, Varsity at a 48-second pace. This is going to be the hardest workout all season, and they will only get easier after this.” As soon as he said 5×300’s, my heart sank, my eyes widened, and my jaw nearly dropped, and I could feel my teammates collectively doing the same. Anyone who is a short-distance sprinter specializing in the 100m will know how dreadful 300’s are—how they strike fear into your soul: unforgiving, excruciating, unfeeling, merciless. Only 100 meters less than the 400m and 100 meters more than the 200m, they are a terrible, formidable middle state, a Purgatory between two Hells. Seniors and freshmen alike were mortally terrified. Having no choice in the matter, though, we approached the track with heads down and a shuffling gait, unwilling—or was it unable?—to face the track, to look it head on. We were divided into groups of about six to ten runners, and I was placed in the first heat, with the seniors and juniors, who had to run at a 48-second pace. That cheered me up a bit, seeing as 48 seconds is the kind of time one might run for a full 400m, but it also meant I had to run 48 seconds, too. Staggered on the track, we got into our lanes, bent our legs, got low, surveyed the track, taking in the great distance we had to traverse, contemplated the suffering we would endure, and hoped for the best, forcing out a final breath of repose. Coach said “Go,” stopwatch in hand, and we were off. 
I followed closely behind the juniors, like a dog does its owner, careful not to lose them, not to fall back with the others behind, as I wanted to push myself. The sun was beating down on us, and my body was pushing to keep up with them as we turned the bend, straightening out, until it was me and three other runners leading the pack, a few others behind us. When we finished our first rep, I was relieved. It was not too bad; we were running at a pace I likened to a fast jog, the kind of pace at which you go for a casual mile, but with more haste. Those bringing up the rear were breathing hard. That morning, before coming to practice, I had completed a 20-minute meditation in which I tried to focus on my breath and my breath alone. As I confessed, it did not work so well, and I could not for the life of me stay with my breath. There and then, though, standing arms akimbo on the grass, sweat across my forehead, legs heavy, I found solace in my breath. In contrast to the rapid, shallow breathing of my teammates, I walked around calmly, breathing slowly and intentionally, in and out, not from the top of my lungs but from the bottom, from the diaphragm, and it made all the difference: I was much more collected. With this in mind, I headed over to the starting line again, ready for rep two, eager to try a new strategy: When I ran, I would focus only on the breath, as I was supposed to during meditation. This next rep, I told myself, was not a run at all, but another meditation session, a practice of mindfulness—mindful sprinting. This thinking instilled within me a kind of vitalization, a readiness for pain, whereas the other runners came up sluggishly, not looking forward to the next rep. Instead of viewing the track as a stumbling block, I viewed it as a hurdle (no pun intended), something to overcome, to jump over, and thus to grow from. The sprint was an opportunity, not a punishment. 
We lined up again after the last heat finished. Once more staggered, we heard “Go,” and we went. Familiar with the pacing, I set myself behind the juniors and kept close to them, careful not to speed up at the bend, but to relax. I breathed as though I were not running but sitting still, meditating, still breathing from the diaphragm and exhaling through my mouth. The first 100m was not hard, nor was the second. It was always the third which was hardest. My friend, who had up until then been running at my hip, had fallen behind on the second leg, his legs too tired, his breath too short, to keep up. This was the final straightaway. Lactic acid had built up in my legs, making them heavy, so that just raising a leg took most of my effort. I thought of what my coach had told me, namely that I needed to keep my knees high, especially at the end; and I turned my attention to my breath. Unlike pain, unlike tiredness, the breath is not transitory but permanent, constant, unchanging, eternal, a dependable cycle of air, of vitality, which coursed through my body, and it entered the foreground while the rest of my attention faded into the background—the track, my periphery, the pain in my legs, the pressure in my chest, the sweat dripping as I ran—it all went away, impermanent, mere sensations, perceptions, which could easily have been illusory, as opposed to the breath, whereof I was most certain at that time—Respiro, ergo sum—the only certainty, the only object of which I was conscious, to which I was willing to devote my attention. It felt as if my mind and breath were alone, two objects painted onto an empty canvas, both transcendent and immortal, real, unlike the pain, which felt unreal; the track was the dependent variable, my breath the independent variable. The infinite Now passed away into seconds as my legs carried me forward, knees high, arms pumping cheek-to-cheek, my breath still constant, till I was nearing the end, feeling great, triumphant. Suddenly all the sensations dawned on me again, but they did not matter—not the pain, not the feeling in my lungs—so that, watching my running shadow on the track, I did not feel alone with my breath. I saw the finish line and, pushing one last time, crossed it. As I peeled off to the side to make room for the others, I interlaced my fingers and put my arms over my head, opening my chest to make my breathing easier, more controlled, while the others were out of breath.

[1] A simple search will bear hundreds of results if you want to read more. Here are two: 18 Benefits and 21 Benefits


For further reading: 
Running with the Mind of Meditation by Sakyong Mipham (2012)
Light on the Yoga Sūtras of Patañjali by B.K.S. Iyengar (1996)
Mindfulness & the Natural World by Claire Thompson (2013)
Encyclopedia of the Human Body by Richard Walker (2002)
Wherever You Go, There You Are by Jon Kabat-Zinn (2005)
Yoga: Immortality and Freedom by Mircea Eliade (1958)
The Complete Human Body by Dr. Alice Roberts (2010)
The Greek Thinkers Vol. 1 by Theodor Gomperz (1964)
Philosophies of India by Heinrich Zimmer (1951)
Coming to Our Senses by Jon Kabat-Zinn (2005)
The Human Body Book by Steve Parker (2007)
by Joseph Goldstein (2016)
Zen Training by Katsuki Sekida (1985)
Chi Running by Danny Dreyer (2004)




The Media, Democracy, and the Public Sphere [2 of 2]

Click here to read part 1 if you have not already (and make sure to leave a like)!

Today’s technology-driven world is also system-dominated. A system, for Habermas, is any division of labor paired with productive forces and knowledge. Systems operate through instrumental reason, an ends-means rationality: the ends justify the means. Organizations and the state, accordingly, can manipulate the public with publicity, diverting their attention. The government tends to focus on technical problems, replacing democracy with bureaucracy, resulting in a democratic deficit, where principles of equality and consent of the governed lose their importance to Habermas’ “technocratic consciousness,” a state of mind brought forth by increasing specialization handled by authorities, experts, and professionals, each of whom spreads propaganda under technical jargon, claiming to be “fixing” some new problem. It is to these technical problems that social, pragmatic, pressing, and vital problems are subordinated. Technological ideology is not delusional per se, as other ideologies are, whose believers are under an illusion, misguided and misled; but it is ubiquitous, as other ideologies are, infectious, spreading like wildfire. As such, the technical dominates the practical, thereby removing personal ethics. When a decision is made, its ethical dimensions are not considered; it is an ends-means instrumentality. Simply put, technology is self-determinative in terms of its values, which makes it a threat to democracy (in excess, of course, as technology is not intrinsically bad).

The commercialization of the press has led to the death of intellectual journalism. Drama takes precedence over detail, personality over policy. During the election, the press notably focused less on the actual issues and more on the candidates themselves. Rational discussion was thereby taken from the people, and they were distracted from it. Back in the 18th century, the bourgeois educated middle class read the newspaper daily, then went to the salon to discuss it with their peers. Now, the newspaper is still read daily, although not to the same extent. Consumers watch TV for hours every day without ever exchanging discourse. Listening to the radio, watching TV, we cannot “disagree” with the media, in a sense, because it “takes away distance,” to use one of Habermas’ phrases, by which he means that we are so close to the media that we cannot engage with it; we cannot talk face-to-face with the television or the interviewer or host who is speaking, but are forced to sit there, inactive, passive, taking it in, unable to respond critically. “The public sphere,” notes Habermas, “becomes the sphere for the publicizing of private biographies.”[1] News, publicity, focuses on celebrities, scandals, and politicians. It dramatizes everything they do, reporting it as news, using names to attract and tempt us, making a story out of anything it can get, in order to profit. Rather than examine the policies and character of a person, the news analyzes their personal life. Habermas reflects ironically on the fact that, in the 19th century, ads in the press were considered dishonest, so they took up only 1/20 (5%) of the page. —How things have changed!— Take a look at any newspaper, even a respectable one, and behold how the whole page is practically taken over by ads! Editorials are advertised and lose their meaning.  
Advertisement gives a sales pitch, clear as day, but PR is more dangerous than advertisement because it exploits the public with attention-grabbing publicity while taking cover beneath the protection of the press.

Moreover, newspapers are dumbed down. Publishers play around with type and font, adding flashy images and illustrations that distract from the text, Habermas points out. The supervisors, just figureheads for their representative companies, get to control which topics are covered, scrapping any of which they disapprove. They “serv[e] up the material as ready-made convenience, patterned and predigested. Editorial opinions recede behind information from press agencies and reports from correspondents; critical debate disappears behind the veil of internal decisions concerning selection and presentation of the material.”[2] Debate, once a byproduct of the press, is itself commodified, restricted by formalities, aired to be watched without intervention or follow-up discussion. For this reason, debates are reduced to mere “personal incompatibilities,” trifles, minor disagreements, surrendering themselves to the rampant relativism of the 21st century. In newspapers, “delayed-reward news,” valuable and informative, is vanishing, and in its place comes “immediate-reward news,” which is tainted with clichés, touched up with drama, and made to sparkle with hyperbole, such that “the rigorous distinction between fact and fiction is ever more frequently abandoned.”[3]

By commercializing the press, the rich manage to hold onto power. They use propaganda to limit democracy. Playing the victim card, they complain that the wealthy minority are under attack from the powerless, uneducated majority. To combat the democratic instinct, they push for the “indoctrination of the youth,” a phrase actually used in official documents and emphasized by American philosopher Noam Chomsky (1928-) in his Requiem for the American Dream (2016) to critique the abuses of the media. Institutions like schools were told to be stricter in their requirements, to create criteria for education that would brainwash children. The term “brainwashing” probably conjures up connotations of conspiracy; the fact is, brainwashing is very real, and very common, a technique mastered to influence people. Institutions try to limit free thought, in hopes of making everyone conform to a single cutout. To cite an example, Chomsky refers to the Trilateral Commission, an organization which, responding to the 1960s, attempted to develop a “proper” society. There was purportedly “too much democracy,” so the masses needed to be kept in check, made to conform, passive, unquestioning. In the post-Cambodia U.S. of the ’70s, local common spaces like the library and debate hall were closed off in universities to discourage critical discussion. In other words, the government attempted to shut down the public sphere, to prevent any criticism of the state. Anyone who critiques the government or impugns the media, usually the educated minority of intellectuals, is denounced as “anti-American,” a term which Chomsky traces to totalitarian regimes. To reduce criticism of “concentrated power” (the state + corporations), the government discourages critical talk, alienating the critics, calling them traitors to the state, much as the Soviet Union did. Journalism was stifled; the public sphere could not engage critically or rationally.

Famously, Chomsky said, “Propaganda is to democracy what violence is to totalitarianism.”[4] PR, then, is a method of cracking down on dissent, be it violent or nonviolent—a means of silencing and of enforcing strict rules. Propaganda is more dangerous than censorship, he argues, because it, like PR, parades around as the public sphere but is actually deceptive and misleading. Propaganda is brainwashing. This development of PR and of propaganda stems from Edward Bernays, who coined the phrase “engineering consent,” a concept studied in depth by both Chomsky and Habermas. Bernays created what one official called “consent without consent,” because with his work, PR was able to make decisions for people. As Chomsky relates from David Hume, power lies in the hands of the people; but if the people are made to think they have none, they will be powerless, and the government powerful. So the government exploits this. Fabricated consumption, a Veblen-esque term used by Chomsky, refers to the consumer culture of today, a culture in which we are told we need things rather than want them. The media everywhere shouts, “Look at me!” “Buy this product!” Consumption is uninformed and irrational, when it is supposed to be informed and rational! Evidently, all this played a role in the 2016 Election. Rather presciently, Habermas writes that, with the decline of the critical public, those who do not ordinarily vote are swayed “by the staged or manipulatively manufactured public sphere of the election campaign”—notice the use of the word “manufactured.”[5] The presidential candidates were portrayed in a certain manner on purpose, because the corporations that owned them leaned in a certain direction. 
Because the media was biased and commercially influenced, it created a terrible environment for discussion: a desert where nothing could grow for lack of water. Discussion was neither informed nor rational. Even when there were rational discussions, they were not factual, for the media reported no facts upon which to base them. This kind of political climate is poisonous and offers no room for critical debate. “[A]n acclimation-prone mood comes to predominate, an opinion climate instead of public opinion,” declares Habermas; i.e., there was no talk about policy or the positions of the candidates; all there were were empty declarations like, “I’m voting for blah blah,” and “I’m pro so and so,” utterly devoid of thoughtfulness or deliberation.[6]

The decline of the public sphere and the commercialization of the media are no new concepts, even here in the U.S. In 1934, the first Communications Act was passed, formally establishing the Federal Communications Commission (FCC). This organization was created to handle media concerns, in service of the public interest. Then, in 1949, the controversial Fairness Doctrine was passed, a policy that required all media to cover pertinent, controversial topics and give equal airtime to opposing viewpoints, so as to allow for fair, balanced reporting based on facts, promoting discussion between parties rather than parochial, sectarian biases that supported one side while disparaging the other. In instituting this, the FCC wanted to foster rational discussions in which both sides could be heard, so that citizens could make up their own minds instead of listening to one side and forming their decision without a second thought. With the Fairness Doctrine, the pros and cons could be heard and weighed, challenged and defended. There would be less party polarization as a result—a problem we face very much today. The problem of the policy’s constitutionality arose, however, and it was challenged for impinging on First Amendment rights, so it was repealed in 1987 and formally eliminated in 2011. In 1975, the Cross-ownership Rules were passed by the FCC to “[set] limits on the number of broadcast stations — radio and TV — an entity can own, as well as limits on the common ownership of broadcast stations and newspapers.”[7] These rules stipulated that a company could not own multiple mediums. Regulation of ownership was thus first defined. Giving equal voice to all media, the FCC made these rules to reduce and prevent media consolidation—the process by which big companies, or conglomerates, buy out other media companies and thus hold legal and economic ownership of them. Like Chomsky, the FCC wanted to stop concentration of power. 
This set of rules appears to be a victory for the public sphere; unfortunately, it did not last long, and tragedy struck when the Telecommunications Act of 1996 took effect. The FCC repealed ownership regulations, deregulating the media and allowing more companies to merge and consolidate. From 2003 to 2007, slowly but surely, the media was increasingly deregulated. Eventually, the Cross-ownership Rules of 1975 were null, and the way was opened for private concentration. One of the terms stated that “whether a channel actually contains news is no longer considered in counting the percentage of a medium owned by one owner.” Companies could now hold 45% of the media market, as opposed to the previous cap of 25% set in 1985.[8] This was the rise of oligopoly. By 1985, 50 companies controlled the media. Since the Telecommunications Act of 1996, over the course of several years, the number dropped infamously to five (or six, depending on the source) companies: Comcast, The Walt Disney Company, 21st Century Fox, Time Warner, and CBS/Viacom. Most recently, many an American has prophesied the “death of the Internet” as a result of a decision that took place on December 14, 2017: The FCC, after a long fight, voted to repeal Net Neutrality. Why is it regarded as the death of a free Internet? Because big corporations, such as Comcast, can now control data as they please. Carriers used to be required to distribute connection equally; now, with Net Neutrality repealed, just as with the Cross-ownership Rules, oligopoly can thrive, meaning big companies control the market, stamping out smaller competitors, all in the name of money.

And what of fake news? What is it, and what implications has it for democracy and the public sphere? Fake news is defined as “false, often sensational, information disseminated under the guise of news reporting.”[9] Put another way, fake news is erroneous, nonfactual information aimed at getting attention, often using shock to attract people. It conceals its falsehood under the “guise,” or cover, of “news reporting”; it uses the authority of the media to pull off its stunts. This is an existential threat to democracy for several reasons. First, it deceives the public. The public relies on the media for information, but the press supplies them with none; or rather, it does, but it misinforms them about everything, seeing as it is fake. Second, it besmirches the reputation of the media. Each time we read fake news and catch it, we lose more and more trust in the media, because we know we cannot believe a word it says. Considering there are good, factual, respectable presses out there, this is disadvantageous, because the preponderance of fake news overshadows the good news, meaning media in general loses its credible character. Third, fake news does not make for critical discussion. If it is fake, then it is not factual, and if it has no facts, no logic, then it cannot be rational in any capacity. Fourth, it signals the collapse of the public sphere and the recrudescence of feudalism, devoid of any criticism.

In a study done by Media Matters, Facebook was found to be one of the leading sources behind fake news circulation. Due to its algorithms, Facebook works like this: The more likes or views an article gets, the more it circulates, the more it spreads. The circulation of news is an active engagement; the more we interact with it, the more it interacts with us. Like a hot agent, the more it spreads, the more hosts it enters, which, in turn, spread it further, multiplying exponentially. Just clicking on the article, just coming into contact with it, tells the system to send it to more people. The code says, “Oh! this must be popular, seeing as many people are clicking it; I’m sure everyone else will like it…,” and so sends it to more and more people, who then send it further. Worst of all is the fact that fake news is not ideological but commercial. Fake news is not necessarily for promoting a party or supporting one candidate intrinsically; rather, it is all for money, not surprisingly. One might find this hard to believe, as there were countless pro-Trump and pro-Hillary (and, vice versa, anti-) articles. But the fact is, these fake articles that spread rumors or intentionally provocative comments are advertised not to gain support for either candidate, but to pander to their supporters, and so to make money. Yes, the advertisements were sent to the respective supporters, but not to help either campaign grow; they were designed, by the very essence of the article, to make readers click, thus making their publishers money. It is not unknown that Facebook sells private information about its users. Millions of private accounts have their information sold to companies for large sums of money. Once the companies have our private information, they can manipulate us; they can manufacture our consent.
If I were to state on my account that I supported a particular candidate, and if my information, which is kept private, concealed from public view, were sold to a company, then they could look at my profile, see whom I support, and send me advertisements and articles supporting that candidate, or denouncing the other, and I would not be able to resist: After all, we love having our subconscious biases engaged, and any contrary information only strengthens our resistance. Large companies, then, do us a disservice, pandering to us, selling us what we already like and know, entrenching us in our beliefs, leading to confirmation bias, and ultimately making them lots and lots of money. Facebook has ads absolutely everywhere; hence, they make money off of us. Going back to the threat of fake news, the biggest problem is its evolution. Originally, fake news was intentionally false, provocative, and contentious, designed to draw readers in, curious about the latest scandals, however unbelievable and obviously fake, with the purpose of entertaining. An example would be some kind of conspiracy, like “Hitler still alive in secret bunker in Africa.” This is “sensational” news. Fake news is now a disguised predator, a wolf in sheep’s clothing, preying on us gullible readers, presenting itself as real, authentic news. Whereas sensational news was meant to be explicitly entertaining and false, fake news is more believable than it used to be, meant to mimic real news, to pull us in with “facts”; it looks real, but is deceptive, too good to be true. Taking up the mask of real, credible news sources, these fake sites adopt respectable-sounding media names, like “San Francisco Chronicle Gazette” or “Denver Guardian.” The president of Media Matters, Angelo Carusone, remarks, “These sites actually capitalize on people’s inherent trust in the news media.”[10]
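The engagement feedback loop described above can be sketched as a toy model. The function below is purely illustrative: the multiplier and round count are assumptions made for the sake of the sketch, not Facebook's actual ranking parameters, but it shows why engagement-driven distribution compounds exponentially rather than growing linearly.

```python
def simulate_spread(engagement_multiplier=2, rounds=6):
    """Toy model of an engagement-ranked feed: each round, the
    system surfaces the article to `engagement_multiplier` times
    as many users as engaged in the previous round, so reach
    compounds every round instead of growing by a fixed amount."""
    reach = 1  # one user sees the article at first
    history = [reach]
    for _ in range(rounds):
        reach *= engagement_multiplier  # clicks beget more impressions
        history.append(reach)
    return history

print(simulate_spread())  # [1, 2, 4, 8, 16, 32, 64]
```

Even at a modest multiplier of 2, six rounds already yields a 64-fold reach; this is the "hot agent" dynamic the paragraph describes, where every click is itself a signal to widen distribution.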

We pride ourselves on our democratic freedoms of speech and press, yet nothing could be further from the truth. Today is the age wherein left becomes right, up down, and right wrong, when everything we have come to know is flipped upside down, every fact we have accepted needing to be checked, then re-checked, just to make sure it is not “fake.” Such is the time we occupy. We cannot trust our media. There is a fundamental lack of discussion. Silent yet powerful, we have the ability to make a change, if we want to. I am sure none of us would like to live in a country where the media purposefully obscures the news, covering up the government’s actions, adding glitter, to keep them from appearing as they are. And yet, we live in one. It is not so distant from a totalitarian state as we might think. Chomsky thought Orwell would be impressed, impressed beyond horror, at the extent to which we as a civilization have abandoned truth and honesty in our coverage of the government. The public sphere as we have come to know it has faltered, trampled beneath our feet like a clerk on Black Friday, as we insatiable consumers burst through the doors, indiscriminate, hungry, willing to feast on whatever is presented before us on a fancy platter. Bibs fastened around our necks, knives and forks tight in our fists, we voluntarily feast on the shiny, tasty-looking desserts placed in front of us, instead of eating our vegetables, salutary and good for us, though not as inviting. We have failed the public sphere. Rational discourse has been abandoned. But if we take the time to talk with one another, engage in discussion, and do our research, reading up on the latest news, attentive, then we can bring back honest, intellectual journalism. We must make our communication authentic.


[1] Habermas, The Structural Transformation of the Public Sphere, p. 171
[2] Id., p. 169
[3] Id., p. 170
[4] Chomsky, The Chomsky Reader, “The Manufacture of Consent (1984),” p. 136
[5] Habermas, op. cit., p. 214
[6] Id., p. 217
[9] (9m10s)
[10] Id., (9m18s)

For further information:
The Structural Transformation of the Public Sphere by Jürgen Habermas (1991)
Introduction to Critical Theory: Horkheimer to Habermas by David Held (1980)
The Penguin Dictionary of Critical Theory by David Macey (2000)
Chomsky on Democracy & Education by Noam Chomsky (2003)
A Requiem for the American Dream by Noam Chomsky (2017)
Dictionary of Sociology by Nicholas Abercrombie (2006)
The Chomsky Reader by Noam Chomsky (1987)
Social Imaginaries by Charles Taylor (2005)
Media Cross-ownership
Consolidation of Media
Facebook and Fake News


The Media, Democracy, and the Public Sphere [1 of 2]

It is hard these days to distinguish “fake” news from “real” news. Browsing the Internet, checking our social media accounts, we come across dozens of advertisements and articles, all vying for our attention: one claiming to have found the secret to instant weight loss, another “exposing” a celebrity scandal, yet another reporting “objectively” on the Trump administration, or commenting on foreign affairs. This is a gullible age, in which we believe everything at first sight. If it piques our interest, then we click on it. We depend on the news for information regarding important affairs in our country and around the globe; without it, we are no different from the early Europeans, who were ignorant of the New World. It is a problem, understandably, when this very source of knowledge from which we get information about the world is no longer to be trusted, when we are forced to be wary, vigilant, and cautious of whether or not it is true. How telling it is, to have a media that requires its facts to be checked! As it is the press’s job to report on events and give us citizens the lowdown on what is happening, it is not too much to ask that it be objective and tell us what we need to hear, not what we want to hear; because sometimes, what is the case is not comfortable. But as the 2016 Election evidenced, the media has failed miserably in its service. It has failed to give the facts fairly and impartially, too concerned with presenting opinions, caught up in commercial interests, its duty not to the people but to the rich and influential to whom it panders; and it has impoverished many an American, depriving him of the truth, so that it has become a medium through which biases and polarizations are disseminated. The decline of the media’s duty has resulted, and is resulting, in a terrible thing: A crisis of democracy.
Veritably, the media’s loss of power in presenting the truth paves the way for democracy’s dim demise. Furthermore, this loss of the media is a sign of another deterioration, one vital to democracy: The public sphere. With the failing of the media comes the failing of the public sphere, and with it the failing of democracy. What is the public sphere, and what relation has it to democracy? What happened to the media? Why is fake news an existential crisis for democracy? Such are important questions, which have to be answered.

The theory of the public sphere was developed in detail by the philosopher Jürgen Habermas (1929-), a German member of the Frankfurt School of Critical Theory, whose focus was to critique society through a Marxist lens. Historically, the public sphere originated in the 18th century during the Enlightenment, arising in local salons and coffeehouses, where the public would gather. Usually, such locations were frequented by the bourgeois, for they were the educated middle class. Steeped in reading, having been brought up with a fine education, these intellectuals—among whose ranks were philosophers like Voltaire and Rousseau, writers, publicists, playwrights, and scientists—would come together in a central location and talk, sharing ideas from their works. These tight-knit groups of intellectuals, philosophes, and men of letters congregated to discuss politics. The governments of the 18th century were largely monarchical, and so they were representative; that is, they did not communicate directly, but indirectly. The government did not present itself candidly, but represented itself, with false semblances, giving appearances but never real insights; government happenings remained mysterious, concealed from the public, covered up so that the ruler’s intentions were never disclosed, leaving the people in a cloud of confusion. Inside coffeehouses and salons, the bourgeois exercised their freedoms of speech and press, both of which they used to their advantage. Outside of political purview, they could speak freely and frankly with each other, without fear of punishment. In order to publicize their ideas, to make known what the government tried to keep unknown, and to enlighten the people, the bourgeois created public centers and presses to help circulate and spread news and ideas.
Newspapers sprang up rapidly throughout Europe, especially in England and France, the intellectual hotspots of the time. The public sphere operated under Enlightenment ideals, such as the general will and the public interest. When the public discussed an idea, they made sure to come to an agreement. This means: They listened to what everyone had to say, debated, then came to a decision, not an arbitrary one, nor one arrived at by mere majority, but one made according to the general will, the common interest, a piece whereof was shared by everyone involved. Since “consent of the governed” seemed to fall on deaf ears in government, it fell upon the people to take care of themselves, and they took the task up themselves; the public sphere served the people; the public sphere was indebted to the people, who were truly sovereign.

As such, the public sphere was the successful bridging of the public and private. That is to say, the public sphere brought those from the private domain—separate, private individual citizens—together centrally in a local area—the public. Private life is the life lived by oneself, in the comfort of one’s home, in one’s everyday routine. Accordingly, the private sphere was merged with public interaction, birthing the public sphere, where private individuals created public intentions. Where they assembled became “common spaces,” from which comes the basis for the word “sphere” in “public sphere.” Take the Internet: Its users are separated by screens, some in the same town as one another, others miles away, in different states, countries, or continents; yet collectively, they identify as “Internet users,” which is to say that the Internet, though it spans space, is not a single location, but an abstraction. The sphere is spread across multiple mediums—what Charles Taylor calls its quality of being “metatopical,” or beyond location. For example, hundreds of people can assemble in a stadium, which we can then call a “common space,” physical and exact; but hundreds of people can also assemble on the Internet, which we can then call a “sphere,” in this case the cybersphere, digital and inexact, though extending variously. The discourse within the public sphere goes beyond physical space, not confined to a single spot where many convene, as we have seen, but spreading out. Here, in the sphere of the public, the principle is public opinion. Public opinion, Taylor points out, must be distinguished as either merely convergent or truly common. Common public opinion—public opinion proper—is a singular, focused goal, whereas convergent public opinion is a melange, a coinciding of intentions, interrelated, yet bearing no unity. In other words, common public opinion is a public commitment, convergent a private commitment.
The difference can be illustrated thus: Intellectuals discussing politics in a Starbucks form common public opinion, while sports fans coming together in a stadium form convergent opinion, because they have no collective goal; rather, they are all converging, or coming to a point, through different, private paths. Taylor cites public polls, interestingly, as an example of convergent public opinion; this is because the people responding are doing so privately, committing to something individually, which, when added up, is public, and their answers are diverse, not at all unified. Common opinion is something agreed upon. Essentially, the public sphere is extrapolitical: it is outside the domain of politics. Whether you are a supporter of a sports team or a member of a charitable organization, you are recognized as part of something official: an “official supporter” or an “official club member.” The public sphere, however, is not an official organization; it is the exact opposite. It is recognized by the people, not the government. Hence, it has no real power; rather, it is a check on the government, a means of balancing out its power. The “public” is not a determinate thing. It is an abstraction. It is not an individual; it is a collective. It is only a sphere when the public makes it one. As has been said, it is not a political association—far from it—but the coming together of the private into the public. The common opinion is focused on critiquing the government, the stress being on the sovereignty of the people, the consent of the governed. Ideally, the public sphere is a statement: We are the people, and you, the government, should listen to us. The public sphere demands the government’s attention. It demands the principle of supervision, which says a government’s actions ought to be made public to the citizens.
Legislation ought to be made manifest so it is rational (using logic and reason), pressured (to make moral decisions in front of the people), and democratic (in the name of the people, who are involved).

So what is the purpose of the public sphere, and what is its objective? The public sphere, we have said, is to serve the public interest. Now that we have the why, we need to know the how. Conceived in the Enlightenment, the public sphere is designed for critical thinking and discourse among the people, separate from the government. Communicative action is the theory which argues that, through language, things can get done. When we tell someone to do something, and they do it, we have created action through communication. In the public sphere, the goal is to debate politics and achieve communicative action by means of discourse, in which everyone takes part, ultimately so that a consensus is reached, upon which a resolution is made, and action follows. Again, this is local, not institutional, so communicative action can happen anywhere, from a supermarket to a modern coffee shop, as long as it is a common space. What comes from debate should be “mutual comprehension” or “compatibility,” according to Habermas. This means people understand one another, see eye to eye, and their ideas can be related to one another, combined, or subsumed, as in a dialectical synthesis. After all, this is the desired outcome of any debate: Two or more people argue with logic to back up a side, listening to their opponent, then devising a response, with the intent of coming to a conclusion that is agreed to, thus settling the matter. It is important that it be mutual, or two-sided, because Truth can only be achieved through a consensus, a common understanding. The public sphere has a climate of debate where people can argue rationally, defend logically, and challenge politely. Social dialogue is constituted by the public sphere. Discussion is meant to be free, open, impartial, and critical. Merging Freedom of Speech with Freedom of Assembly, the public sphere cultivates what Habermas calls an ideal-speech situation.
An ideal-speech situation, once achieved, is a circumstance in which unimpeded, unfettered free speech is allowed to flow. People can speak their minds freely, ready to engage with others. Discourse ethics is the field of ethics that covers the morals of discussion, so the ideal-speech situation is an ethical doctrine, as it sets up a paragon of critical political debate. In an essay concerning universal pragmatics, written to delineate the proper usage of speech and guidelines on how to communicate effectively, Habermas came up with four aspects of effective communication.

  1. Comprehensibility
  2. Truth
  3. Correctness
  4. Sincerity

In conclusion, the ideal-speech situation generated by the public sphere is a situation wherein private individuals can gather to speak their minds, hear out others, challenge, defend, create arguments, and come to an understanding, a consensus that reflects everyone’s opinion.

Does the public sphere still apply today, and to what extent? What happened to the public sphere, and why is it collapsing? It would be arrogant to assert that the public sphere does not exist today, for political talk is as rife as ever, and there are dozens of shows, radio programs, and broadcasting stations which cover politics, all offering commentary on current events, collaborative and contentious alike. The process of globalization has allowed for far greater coverage, so more people are getting involved every day, and more are tuning in, creating a very popular and argumentative political environment, where debates take place everywhere from YouTube comments to high-school corridors. However, it would be equally presumptuous and ignorant to deny that the public sphere has diminished greatly since its creation in the 18th century and is no longer potent in its goal. There is undeniably more involvement in politics in this age than in the Enlightenment, yet there is worryingly less critical involvement, much less commonly public involvement. The literary circles of the philosophes, who would eagerly read the newspaper, excited to share their thoughts, ideas raging, have degenerated, and we no longer see this kind of intellectual commitment to political debate. In the centuries dividing these two ages, our media has grown immensely; now billions of people are interconnected globally, able to interact with one another. But newspapers, reporters, and radio shows still fail to incite critical discussion.

Who or what are we to blame for the failure of the media in the public sphere? Habermas says the answer lies in the structural transformation of the public sphere. This structural transformation, he explains, is the commercialization of the public sphere. What was once a noble critical culture (kulturräsonierend) became an ignoble consumer culture. As soon as the media came under the sway of money, it became commercial, no longer a service for the public but a business for consumers, something which no longer informed but sold, which dealt not in news but in commodities. News turned into ideology. Reporting used to be objective: it wrote down the facts, checked them, reviewed them, then published them. Nowadays, reporting is subjective; reporters can write not about the event itself, but about their personal reaction to it, a reaction colored by their beliefs. It focuses on how a person is portrayed, instead of what they actually did. As such, the editorial records what one feels or thinks, rather than what happened. Habermas writes that the press became the “gate through which privileged private interests invaded the public sphere.”[1] As soon as private corporations began taking over the press, the media became a medium through which to spread their self-interest. A result of this is the endangerment of the public sphere: critical public institutions like the press were protected precisely because they were private, in the sense of being extrapolitical and composed of like-minded individuals; but now the threat itself is private, posed by the government and by rich and powerful corporations, who collaborate to keep themselves safe from censure, colluding to keep their power from the people. An example is newspaper credibility: The authority of a newspaper these days rests in the publisher, not the publicist.
If we come across Time, we immediately impute it with trustworthiness, despite the fact that we do not check who actually wrote the articles themselves. A writer can easily write whatsoever they please, and if it is approved by the publisher, it will be considered trustworthy, since we invest trust in the publisher, disregarding the publicist, to whom we ought to give equal attention. All this, of course, is spearheaded by the big corporations, who gain from this authority, using it to their advantage. While it can be argued contrariwise, Habermas says editorials were once respectable and intellectual, but are no longer. Not only are they subjective, he says, but they, along with the press itself, are commercialized, advertised for money, so that writing is no longer an honest pursuit, but a financially motivated one, for which people vie. Just as the Prætorian guard that swore to protect the Emperor ended up selling the throne to bidders, so the press that swore to serve the public honestly and with integrity ended up selling editorials.

Another thing that has weakened the public sphere is opinion management, or public relations (PR). The mission of PR is to bridge the public and private, much as the public sphere does, and to communicate between the institution and the people, albeit in a more devious manner. PR appeals to the private under the guise of the public. Howbeit, unlike advertising, which explicitly and openly shows itself as such, PR uses news authority as a façade, a cover, under which to represent private interests as “public opinion” and “news.” In other words, PR disguises commerce as something to which the entire public assents. Opinion management, therefore, leads to the ruination of discussion by means of “sophisticated opinion-molding services under the ægis of a sham public interest.”[2] Grabbing attention with drama, shock-factor, and clichés, and using well-known celebrities as sponsors to inspire conformity and trust, PR advertises its affairs with misleading hyperbole. These “suppliers” of news, as Habermas labels them, recreate the representative government of the 18th century that shrouded its intents from the public. Habermas names this the “refeudalization” of the public sphere, because it takes us back to a feudal hierarchy of society, where we, the public, are reduced to lowly vassals, servants, who are indebted to and dependent upon the powerful nobles, who hide their power. No longer is the public sphere reserved for debate; it is used to represent prestige; it no longer critiques the government extrapolitically, but has publicity complicit therein.


[1] Habermas, The Structural Transformation of the Public Sphere, p. 185
[2] Id., p. 195




What is Called Thinking?

“What is lacking, then, is action, not thought. And yet—it could be that prevailing man has for centuries now acted too much and thought too little.”[1] So says Heidegger in his lectures titled in German Was heißt Denken?, or What is Called Thinking? The 21st century is the age of information, where, upon a single click, after typing in a string of words, one is transported in milliseconds to vast stores of knowledge. Humanity, civilization—we humans have progressed and will continue to do so, improving our technology year by year, solving problems once thought impossible, moving forward at an exponential pace, faster than the rockets which we send to space, faster than we can possibly conceive; at so fast a pace, indeed, that we can no longer keep up, so that we are, ironically, left behind, so to speak, even as we constantly go forth into new lands of machines and thoughts. While we have advanced incredibly in exploring the world and nature, we have also, it seems, retreated considerably from ourselves; while we are busy discovering new lands, we have no time to discover ourselves. Everything is instantaneous. Everything is becoming effortless. We do, and we do, and we do. Our world is driven by action, inaction deemed a vice. Yet as Heidegger suggested, perhaps we have done too much, and not thought enough. We act before thinking, not the other way around. Although our goal is to keep moving forward, we do not know whither we are moving. We do not think. We are thoughtless, and so we are unthinking. Easily a favorite of mine, Heidegger’s What is Called Thinking? asks us to reconsider ourselves, to engage in discourse about what it means to think, and to slow down, take a breath, and just think.

The name of the book is at once blunt and questionable. When asked the question “What is thinking?” we are quick to say that we think we know what thinking is, by thinking! Heidegger realized this was the immediate answer, but he maintained that thinking is something much more than that; that thinking, despite being natural to man, the rational animal, the thinking animal, is not something within our grasp; that thinking is not what we think it is, contrary to what we believe every day when we say we are thinking. In short, traditional thinking is not original thinking. We go around “thinking thoughts,” appearing “thoughtful,” and “giving thought” to matters, absorbed in our heads, always forming ideas. Yet we are ignorant of the true nature of thought, Heidegger wrote, for we lack an idea of what thinking is. As in the opening sentence, Heidegger said we act too much and think too little, at the expense of our own nature. Since the Greeks, people have been saying we ought to stop thinking about life and start living it, that action is better than thought; so the convention has been that we ought to think less and act more, and today, accordingly, there is a preponderance of acting and a dearth of thinking. So what is called thinking? Have we truly forgotten what it means to think?

We think about things that are thought-provoking, things that provoke, or bring out and excite, thought. That which is thought-provoking, said Heidegger, is what “gives us to think” and “in itself is to be thought about.”[2] In other words, a matter is thought-provoking precisely because it is essentially something worthy of thought. Topics we declare thought-provoking “give us to think” in that they compel us, in that we are disposed, by nature of our rationality, to consider them. For instance, politics is considered by many to be thought-provoking (and emotion-provoking!) because, by our understanding of politics, it is something to be thought about, as it concerns us, and we can always discuss it, form ideas about it, and reflect on it, coming up with new solutions. Part of the glory of politics is that it is unanswerable; that is, it can never be perfected, nor can there be a single solution, for there will always be disagreement and contention, and so it remains to be thought about. No matter how much we discuss it, there is more to add; politics as such is thought-provoking because it intrinsically brings out thought in people. However, politics is only one thought-provoking thing: It is not the thought-provoking thing. To the thing which is most thought-provoking we are naturally disposed. For as thinking beings, animals endowed with the capacity to think, we must, then, by logic, be able to think that which is most thought-provoking, that most primal, grounding thing which thought is and from where thinking gets its nature. Seeing as it is the most thought-provoking thing, it is by itself something worthy of thought; therefore, by being thought-provoking, it must provoke thought; it must, because it wants to be thought, call us to think it. Just as a child cries because it wants help, so the most thought-provoking thing wants us to think about it because it wants to be thought about. This is only reasonable.
To put it another way, if we imagine the most beautiful thing ever, we would have to call it the most beautiful, and we would have to look at it, by virtue of its being the most beautiful. The most beautiful and the most thought-provoking draw us to them, attract us like a magnet or lodestone does. After all, who could resist thinking the most thought-provoking thing? It is, ultimately, the most thought-provoking. This led Heidegger to formulate the following statement: “Most thought-provoking in our thought-provoking time is that we are still not thinking.”[3]

What is Heidegger saying here? Today is what Heidegger called “our thought-provoking time,” by which he means that, in the 21st century, we are dealing with so many problems, investigating so many phenomena, and spending so much time on our technology, that there are limitless possibilities for thought. With the “future” burning brightly in our minds from the glare of the possibility of a perfect world, where A.I. and technology become the crux of life, we are living, perhaps, in the most thinking time, since there is so much to think about! From the exploration of space to manual-laboring A.I. to global politics, the world of thought is at our fingertips. And the internet, providing information at lightning speed, is available to almost every single person. Despite this, Heidegger pointed out that this most thought-provoking of times, so seemingly immersed in ideas, is characterized by the absence of thought, real thought. This is the age of unthinking. We do not stop in the midst of living to contemplate, but keep going, like a nonstop machine which must always keep moving lest it die, the function of thinking forgotten, if not removed, an empty hole left in its place, and we know not what once belonged there. So why is it that, in spite of all the progress we have made, in the name of both science and philosophy, in the pursuit of knowledge, in the globalization of technology, in the improvement of academics and education—why is it that we are still not thinking? It is certainly not because we are unable, for we are, by nature, rational, thinking beings, beings capable of cognition, of abstract logic and computation, of forming ideas. No: We are unthinking not because we cannot think, but because that which is to be thought has, in the first place, withdrawn. That is, the most thought-provoking matter has hidden itself from us, so to speak, has retreated from view.
This is not to say that “the most thought-provoking” has vanished, as in is-no-longer-existent; rather, it has withdrawn, such that its absence is noticeable. Neglected, exploited, and forgotten, it refuses to arrive. That which is most thought-provoking refuses to be made present, or to be brought into mind, as we are not worthy of it. It is precisely because it is missing, because it is not here before us, because it is an absence rather than a presence, that we must turn our thoughts toward “that which gives food for thought”; for, Heidegger said, what is not present is sometimes more important than what is, to the extent that it is mysterious: the variable in the math problem for which we are trying to solve, an empty space where there should be something, the missing piece of the jigsaw puzzle. By seeing that it is not there, we realize that it was there, at one point, and that we must find it again. The mysterious, the withdrawn—it draws us along with it. What is missing draws us on a string, tugging at us, pulling us in the right direction, which means we are on the right track once we realize it is gone.

Thinking is “man’s simplest, and for that reason hardest, handiwork,” said Heidegger.[4] Indeed, the simplest task usually is the hardest, for exactly that reason. Drawing upon Eastern thinking, Heidegger said we must, if we wish to learn thinking, unlearn what we have previously been taught about it. Unlike math and science, which require years of study and education, thinking requires neither, but is a natural ability, albeit one which has been forgotten. It is man’s most natural, easiest task. And yet we know naught of it. Interestingly, controversially, Heidegger wrote, “Science does not think.”[5] (What this means, I shall answer in the following post.) If this is an unthinking age, far from apprehending that which is most thought-provoking, and if we are never actually thinking, then what are we doing now? We are, in the words of Nietzsche, last men blinking. Heidegger defined blinking as “the mutual setup, agreed upon and in the end no longer in need of explicit agreement, of the objective and static surfaces and foreground facets of all things as alone valid and valuable—a setup with whose help man carries on and degrades everything.”[6] We are approaching “the last man,” who shall do no more, who shall hole himself up in his own self-constructed world, where he is safe from the outside, and from where he can take cover while he wallows in his own complacency. He conforms to what the other last men say, confident in his happiness, which is at the same time his ignorance, his narrow approach to things; he reduces things to nothingness so that they have no meaning, a meaning which is only assented to by everyone else, and which applies only to the “static surfaces and foreground facets,” the appearances of things. In this unthinking age, we proclaim happiness for ourselves, triumphant, glorious, having agreed mindlessly upon truths that have not been questioned, but which are taken for granted.
We debase the value of things through our unthought. Unfortunately, we are almost at the point of becoming Nietzsche’s Last Man—“woe him who doth them [deserts] hide!”[7] As we become more entrenched in technology and less involved in thinking, we risk adding to the already-growing deserts. So how, then, are we to learn thinking—if we even can?

Heidegger returned constantly to the question at hand, What is called thinking?, methodically, decisively, in order to better understand the project he was undertaking, and also to make sure we are following along, each step clear. It becomes necessary, he said, to re-examine the question. Considering it is the guiding force of the lecture, the question needs to be clarified so we can know what exactly we are asking, what we are looking for, and how we are to continue in our inquiry. Hitherto, we have approached the question as: What is called thinking?—We take this to mean: What is the action designated as “thinking,” or what does it mean to think? Heidegger proposed that the question is more nuanced than this, and he divided the single question into four separate questions, none of which alone is the real question, all of which together constitute it. Accordingly, Heidegger’s analysis is as follows:

  1. What is the definition of the word “to think,” or “thinking”?
  2. What is the definition of “thinking” in the philosophical tradition?
  3. What is needed to think?
  4. What is it that makes us think?

Again, each holds as much value as the others, although Heidegger believed the fourth question is the most important, the one which will ultimately answer What is called thinking? The other three, though, play a role in that they are necessary for arriving at the fourth. Put together, all four make up the question at hand, each deriving from a different reading and interpretation of it. In the book, Heidegger did not treat the questions in numerical order; so I shall follow the same order as he did, providing a summary of each, then piecing them all together, since they build off of each other.

Beginning with the first question, Heidegger asked, What is it we call “thinking”; that is, the task we call, or denote, “thinking”? Tracing the word’s etymology, he concluded that the word “think” comes from the Old English words thencan (þencean) and thancian, whence we also get “thank.” (The German denken, “to think,” and danken, “to thank,” clearly resemble the English “think” and “thank.”) This coincidence appears to be unrelated, and one might wonder how one can possibly relate thinking to thanking; yet Heidegger argued that thinking is thanking, in the sense that “to thank” is to express grateful thoughts. At Thanksgiving, for example, we give thanks to everyone and everything we value and for which we are grateful. This is the same thing as saying we give thought. When we give thanks to our loved ones, we think about them—we give thought to them. Coupled with thought, we humans have memory, a word Heidegger excavated, whose original meaning, as in the Latin memor, was “mindful.” To have memory is to keep in mind. Therefore, our memory is devotion to grateful thoughts. Once we gather these grateful thoughts, and once we concentrate on them, then we are said to re-call them, to bring them back into mind. This is recollection. This is what we ordinarily mean by “memory.” In short, what we call “thinking” is being attentive to our expressions of gratefulness.

Skipping to the fourth question, Heidegger asked, What calls us to think? He admitted this was a strange way of asking the question: Usually, we are the active voice, the person who asks, yet here Heidegger made us the passive voice, that which receives the call. Does this mean there is something outside of us which calls on us to think, implying we are not the ones who incline ourselves, who make ourselves decide, to think? Heidegger answered this earlier in the book when he stated that that which is most thought-provoking calls us to thinking. “To call” originally meant to invite, beckon, “reach out.” When we call, we call for something. A distress call is made to get help, in hopes that someone will receive the call and answer it. But as we have seen, that which is thought-provoking calls to us weakly, quietly, a shy whisper that goes unheard, a distress call which is ever in need of assistance. Alas, no one answers its call, so it is left there, alone, its voice weak, desperate. A call does not always have to be answered. We also speak of a “call to action,” which is the meaning Heidegger was looking for. That which is most thought-provoking calls us in the sense of calling-to-action; it calls us, invites us, to think it. When a father calls his son to come forth, to action, we say the father is “commanding.” When that which is most thought-provoking calls us to think it, to action, we say it is “commending.” To command is to order. Commands are strict and expected to be adhered to. To commend is to consign, to entrust to, to give into protection, just as a mother takes her child into her arms, welcomes the child into her home. So that which must be thought commends us to think it—we are entrusted with thinking it, as though it is a duty, something we should do. If a friend commends us with watching his house while he is on vacation, then we are inclined to do it, lest we fail him and let him down; so we watch it to the best of our abilities.
Since we are asked to do it, we do it. In like fashion, when that which wants to be thought commends us to think it, we must. It is like a friend to us; we do not want to betray its trust; we shall endeavor to think it. Because we are, according to the philosophical tradition, rational animals, animals endowed with thinking, and because that which gives us to think calls upon us to think, we ought to give thanks, to express gratitude, to it. Without it, we would not be thinkers. We are given the ability to think precisely to think that which needs to be thought! In short, That which disposes us to think, or That which calls upon us to think, is that which is most thought-provoking.

Jumping to the second question, Heidegger asked, What is thinking as defined by philosophy up until now? Since Plato onward, the essence of thinking in philosophy, Heidegger thought, has been logos (λόγος) and its dual component legein (λέγειν). A logos, he said, is a word, or expression. “The cat sat on the mat” is composed of six logoi. The sentence itself is a legein, a proposition. Taken this way, the proposition “The cat sat on the mat” states that there is a cat on the mat. Philosophy up until Heidegger set out to deal with logos and legein in a “logical” manner, by which we mean the forming of correct statements and propositions, by applying judgments to the world. To say Kant was a thinker who thought, for instance, is to say he formulated a philosophical worldview through judgments and propositions. In short, what has been called thinking in philosophy heretofore is the forming of judgments through propositions in a logical way.

Third, finally, Heidegger asked, What is needed for thinking? He quoted the Presocratic Parmenides, who wrote, “One should both say and think that Being is.” The meaning of this quote, and of thinking itself, will finally be answered presently. In the meantime, what is needed is openness. To think, one must be open. In short, what is needed for thinking is openness, and both saying and thinking that Being is.

What is called thinking? The answer lies, Heidegger thought, in the works of Parmenides, a Greek thinker of the 5th century B.C. who first examined Being. In one of Parmenides’ fragments, he wrote, “Χρὴ τὸ λέγειν τε νοεῖν τ΄ ἐὸν ἔμμεναι,” which Heidegger translated initially as “One should both say and think that Being is.” What this means is not clear, and it remains obscure, so Heidegger broke it up into “Needful: the saying also thinking too: being: to be.” Analyzing this weird, puzzling, and practically nonsensical sentence, Heidegger picked out two words that stood out: λέγειν, to say, and νοεῖν, to think. If the essence of thought consists of saying and thinking that Being is, then what does this even mean, to say that “Being is,” or even to think it? The Greek word λέγειν, he said, means “to say,” although we use another phrase throughout our day to express the same thing. When we give someone a summary, we lay it out for them. We give them a summary by laying out everything they need to know, so that it lies at their feet, before them. Likewise, admiring a beautiful landscape, we say that nature lies before us. Accordingly, λέγειν means to let-lie-before, to merely have something in front of us and to acknowledge it. Of νοεῖν, Heidegger noted that it also meant to perceive. But perception is, in a sense, a reception, a receiving of sensory information; and when we pay attention to it, care for it, we take it to heart, where we keep it safe, protect it, furnish it, warm it. Hence, νοεῖν means taking-to-heart. Then there is the question of what “Being is” means. Hamlet pondered, “To be, or not to be, that is the question”—yet what does “to be” even mean? To say that Being is, is, at first, a repetition. Being, in order to be, must be; that is, must exist, because to be Being, it has to be!
Heidegger explained that the word “being” itself is a participle, both a noun and a verb, each side describing the other, meaning there is a duality to being, a two-sided nature. He avoided a long, pedantic discussion of what it means “to be” in the book, instead choosing to translate “Being is” as “the presence of what is present,” for the reason that the latter is derived from the former, in Latin, and so better allows us to understand what it is we are saying and thinking. Thus, we get: “Useful is the letting-lie-before-us, so taking-to-heart, too, the presence of what is present.” In the next post, I will examine this phrase in depth, but for now, this simple explanation will have to do: To think is to be mindful of the world around us, to care for it, appreciate it, and give notice to it, expressing thanks every living moment for Being, because the world comes into view for us, unconcealed, the Being of beings. In other words, That which is most thought-provoking, That which calls us to think—is Being.

We live in an unthinking age. Technology is ubiquitous, and science dominates the intellectual world. While the universe keeps getting bigger, the world keeps getting smaller, shrinking with advances in technology, a form of Being whose nature, Heidegger believed, is to conceal itself, to make itself something exploitable. Crouched in our seats, fingers tapping rapidly at the keyboard, kept alive by coffee and other drugs, we live our lives unthinkingly, without examining life, without taking time to just sit, no distractions present, nothing over which to worry, but just to sit alone and think, to give thanks and appreciate what is present. Heidegger was amazingly prescient in predicting the maladies which afflict us every single day. Ignorance is one of the greatest dangers we face—but a greater danger yet is not thinking. The path to thinking is not an easy one, he said, but it is one we must undertake. It is our simplest, and therefore hardest, task. There is no bridge between unthinking and thinking; no, there is a leap, a leap from which there is no returning, and for which there is no net. Only a handful of people have truly thought. As such, we must, as a new generation, in order to create a better future, a sustainable one, an educated one, slow down, stop acting, and think. Just think. There shall be a new beginning. And we are it. We shall usher in a new thinking age—if only we will all take this leap together…



[1] Heidegger, What is Called Thinking?, p. 4
[2] Ibid. 
[3] Ibid., p. 6
[4] Ibid., pp. 16-17
[5] Ibid., p. 8
[6] Ibid., p. 75
[7] Nietzsche, Thus Spake Zarathustra, IV, 76.2, p. 343


For further reading: What Is Called Thinking? by Martin Heidegger, trans. J. Glenn Gray (1968)


Is Water Wet?—A Philosophical Inquiry

Recently, a question has been circulating both the internet and, as I have experienced firsthand, my school, a question which is truly vital and which concerns mankind at its core. The question: Is water wet? I know what you are thinking. I will admit, the question is absurd, nonsensical, and, one might point out, fairly simple to answer. Yet many are torn up and in knots because of this simple question regarding an element with which we come in contact every day, one of the essential components of life. Some argue water is wet, others that it is not; and both have their reasons. What many of my peers neglect, however, is that this question is much more complex than it appears. Indeed, the question of whether or not water is wet is not a trivial, everyday question; rather, it is something for the armchair philosopher to ponder—yes, the question of whether or not water is wet is, at its core, philosophical. And it is this critical perspective which is missing, which could illuminate the problem. All philosophical problems, declared Wittgenstein, are ultimately reducible to language problems. That is, a philosophical problem is really just a miscommunication, a squabbling over terms, terms that are not properly defined. As such, my approach to the question of whether or not water can be said to be wet requires that it be examined philosophically, and this involves an understanding of what exactly we mean by “water” and “wetness,” and how exactly the one can be related to the other. By the end of this, I hope to provide an aqueous solution to this conundrum with the help of Aristotle (384-322 B.C.) and John Locke (1632-1704).

What exactly is water? Scientifically, I can say that water is the molecule H2O, made up of two hydrogen atoms and one oxygen atom. While I can explain it through chemistry, describing how it is the way it is, its interactions, or its properties, I cannot properly understand water thereby. All we need to know is that water is a liquid, that its shape is therefore fluid, and that it cannot be compressed into a solid. But this does not answer the question of what water is. Essentially, water is a substance. Substances, wrote Aristotle, “are the entities which underlie everything else, and… everything else is either predicated of them or present in them.”¹ In other words, substances are bearers of qualities; they are things that can be described. Like the subject of a sentence, a substance is that around which the sentence revolves, and it receives a predicate, or that which is said of the subject, which involves the verbs and adjectives. The substance can be described, as it takes descriptors but cannot itself be one. Just as a noun cannot be an adjective, so a substance cannot be a quality. As that which is being described, the substance can also contain qualities within it. Water, then, is a substance, because it can be described, it has qualities, it is that which bears qualities. One type of quality, the affective quality, modifies a substance by being present in it. Affective qualities produce, or effect, a sensation in the perceiver based on what the quality itself is. For example, the quality of wetness is an affective quality because it produces an effect in its perceiver, and when wetness is present in a substance—say, water—the substance is said to be wet since it has that quality in it. Accordingly, it is wetness which makes water wet. Water as a substance is amorphous. It has no definite shape, but can conform. Is being fluid a quality? No, argued Aristotle.
To be without resistance, to be fluid, is not to have a quality, but to have a shape. Despite seeming like a lack of shape, fluidity is the way a substance’s parts are interrelated, such that they appear formless but really are not. Unlike affective qualities, the fluid, amorphous shape of water is necessary and essential to it. Water is distinctively fluid. Whether water can also be said to be “distinctively wet” I will discuss later. Another important term to know is accident, which is a quality that applies to things contingently, i.e., unnecessarily. Wetness is not essential to water: In its essence, water can be imagined dry, not to mention the fact that it is able to take two other forms of matter. From Aristotle, we have learned that water is a substance, which means it has qualities, whereas wetness is an accidental affective quality, which means it is unnecessary and descriptive, but by no means defining.

Locke was an empiricist of the 17th century. He believed that all human knowledge was derived from the senses and not at all innate. All information is made up of ideas, which are everything from thoughts to perceptions to sensations. Every idea we humans have is made up of simple ideas, individual sensory data that come from the senses. They can come from one sense or from two, though they become a single idea in the end. A specific smell, like honey, for instance, is a simple idea: it is derived from a sensory organ, the nose, and is singular. These simple ideas, once gathered, stay in the mind as if it were a warehouse, and can from there be made into complex ideas, which are aggregates, or combinations, of ideas. It is impossible to experience a complex idea directly, for complex ideas are made up of simpler ones—one cannot get 3 without first adding 1 and 2. Based on this, water is a complex idea to the extent that it is made of many simpler ideas, namely its color (or lack thereof), smell (or lack thereof), texture, etc. When combined, all these sensory experiences add up to the single idea of water. Concerning qualities, Locke said there were three kinds: primary, secondary, and tertiary. Primary qualities are necessary. They are actually found in external objects themselves. When we look at an object, we form a representation of it in our minds, and its primary qualities carry over, making them unchanging, absolute, non-relative. These primary qualities correspond to the physical makeup of the object—its corpuscles, which were the Early Modern conception of atoms. Each corpuscle has extension, solidity, mobility, and figure, all of which therefore apply to the object itself. No matter where or when they are, objects retain their primary qualities. In contrast, secondary qualities are unnecessary in that they are not found in the objects themselves; instead, they are relative and illusory, mere creations of the mind.
When Locke said they are not real, he meant they were not in the atoms themselves, but were an effect produced by them. Secondary qualities are vested in power, the power to produce ideas. In effect, secondary qualities are not qualities in and of themselves, but are capable of producing them. Wetness is wet, water is wet, but water is not wetness. Water does feel wet, but the wetness is not to be found in the water; it is produced by it. Locke said of secondary qualities that they are “nothing in the objects themselves but power to produce various sensations in us by their primary qualities.”² Thus, they are not sensed by themselves; they are caused by the arrangement of primary qualities, to which they are actually reducible. When we look at water, our visual sense organs, our eyes, come in contact with it through waves, and the unique physical configuration of water is such that it produces the sensation of being clear, or lacking color. Color is not real. Water has no color because color is not inherent in the water, but is produced by it. So with wetness. Furthermore, secondary qualities are perceivable in two ways: through immediate and mediate perception. In the first, we the agents come in contact with the object, giving us a subjective experience of it. In the second, we experience the object coming in contact with another object. With wetness, we can experience it for ourselves, as when we touch water, or we can see water wash upon a rock and get it wet. Either way, the quality of wetness is secondary. However, Locke also identified a tertiary quality, one which makes “a change in the bulk, figure, texture, and motion of another body.”³ To summarize, tertiary qualities have the power to change primary qualities. Wetness being the absorption of a liquid, it can affect an object’s mobility, texture, and figure by hindering, smoothing, and eroding them.

In conclusion, water is wet, yet water is not wetness. Water is a fluid, amorphous substance, or bearer of qualities, of which one, wetness, is accidental—i.e., inessential—and not inherent to water itself, but rather a sensation perceived by the mind alone, completely illusory and artificial, a result of the physical configuration of water. By this we mean that the perception of wetness is neither within water nor exclusive to it, but one of many ideas which constitute the complex idea of water—which, it must be stressed, is not essentially wet, yet which nonetheless produces the sensation of wetness, albeit contingently: an adjective, an add-on, something predicated or said of the water. Moreover, wetness, though qualitative of water, is not intrinsic insofar as it is a particular, not a universal. Wetness, because it is not exclusive to water, because it is widely applicable to other liquids, is not irreducible; in fact, it has already been said that wetness is reducible to primary qualities. When we look for the essence of something, its quiddity, what makes it it, we are looking for something eternal, unchanging, and irreducible—something so simple and essential that it is independent; but as we have learned, wetness cannot be alone, for it requires an object, a noun, a substance, something onto which it can latch, something to be wet, meaning wetness itself is not essential, but accidental in nature. Bluntly, lava can be wet, yet what applies to water applies, too, to lava; lava is wet, but it is not wetness, because wetness is not essential to it. Precisely because wetness is not essential to water—or any other liquid, for that matter—water is not wet in essence. Water has wetness, making it wet, but is not itself wetness. Through an eidetic reduction, whereby the essence of a thing is separated from its accidents, water is discovered not to be wet, as water is essentially not wet, and it can exist in three states by itself.
And as wetness is not a substance per se, but a quality, it cannot stand by itself.

Water is wet in virtue of its wetness, which is not necessarily so, from which we deduce that water is not wet in virtue of its wetness.

Are you convinced? What do you think—is water wet? Leave your arguments below!

¹ Aristotle, Categories, 2b15
² Locke, An Essay Concerning Human Understanding, 2.8.10
³ Id., 2.8.23

For further reading: An Essay Concerning Human Understanding by John Locke (1990)
A Critical History of Western Philosophy by D.J. O’Connor (1964)
The Encyclopedia of Philosophy Vol. 6 by Paul Edwards (1967)
Philosophy: The Classics, 3rd ed. by Nigel Warburton (2008)
Socrates to Sartre by Samuel Enoch Stumpf (1982)


Dickens and Dasein: A Heideggerian Analysis of “A Christmas Carol”

’Tis the season to be jolly! At last, we come to the end of 2017 to celebrate the holidays with our families, home from school and work, carefree, warm, and surrounded by those we love. And what a great time it is, might I add, to wrap oneself in a cozy blanket and sit in front of the fireplace with a nice, good-ole book with which to keep company and entertain oneself; for nothing is better than snuggling up with a traditional story for the whole family. A classic of Victorian literature, set in 19th-century London, Charles Dickens’ novella A Christmas Carol (1843) depicts Christmas through the eyes of the infamous miser Ebenezer Scrooge, who despises the tradition and wants nothing to do with it. It is a loved and cherished story of celebrating and embracing the Christmas spirit, as well as of personal transformation. A classic of 20th-century continental philosophy, Martin Heidegger’s magnum opus Being and Time (1927) is considered one of the greatest works of his time, and it analyzes the fundamental structure of human existence through the eyes of Dasein, the only being which can inquire into existence itself. It is a complex and formidable study of what it means, as human beings, to be. Together, Charles Dickens, a Victorian novelist, and Martin Heidegger, an existential phenomenologist—an unlikely pair—define the human condition and how we can best live our lives by being true to and understanding ourselves and others. Enjoy your Christmas and have a great New Year!

The first thing I would like to point out is the symbolism Dickens employs in A Christmas Carol and how it relates to Being and Time. Known for his brilliant characterizations and descriptions of people and things, Dickens emphasizes “light” throughout the novella, especially in the Ghosts of Christmas Past and Present, the first of which has a fiery head that can be extinguished, the latter of which spreads light as he goes forth. For Heidegger, light also plays an important role, not symbolically, but existentially. He says Dasein (the human being) “is itself the clearing [Lichtung]…. Dasein is its disclosedness.”[1] The German word Lichtung translates roughly to “clearing,” as in a “clearing in the woods.” In saying that humans are the clearing, he means that, in existing, we shed light on things, and they are revealed to us from obscurity. Symbolically, light represents wisdom, divine and cosmic purity, and revelation, the last of which is most important here. Heidegger conceives truth to be essentially revelatory: Truth reveals that which has hitherto been concealed. He bases this on the Greek word for truth, aletheia (αλήθεια), which translates to un-coveredness. That which is made known is truth. As such, Heidegger goes on to say that human beings are their “disclosedness” [Erschlossenheit]. Human beings illuminate their world; they make sense of it; they uncover and thus disclose the world to themselves. Therefore, when Dickens paints the Ghosts as full of light and uses it elsewhere, it is because they bring truth to Scrooge. By leading him through time, they reveal to him truths he needs to come to terms with; his life is disclosed, and he uncovers things of which he was unaware, things which were once hidden from him.

We begin in the present, with Scrooge working in his office, cranky-as-ever. Some gentlemen come inside to ask for a donation to a local charity, which Scrooge rudely turns down, saying the poor people should either go to work or prison or die, so as to “decrease the surplus population.” He refuses to get involved in other people’s businesses, declaring “‘Mine occupies me constantly’” (22). The fundamental essence of man, Heidegger writes, is Care [Sorge]. What he means is that we are always involved, engaged, and concerned about things. We can care about things, and we can feel concern for others. We have a certain engagement with everything in our world. If I say, “I do not care about vegetables,” I care about vegetables in a certain sense, in that I have a feeling towards them, albeit a bad one. Scrooge may be called uncaring, but in truth he cares very much—just not in the right way. He is so absorbed in his work, so involved with his entire being, that he has no concern for others, but only himself and his business. A workaholic, he cares too much about his business and not enough about other things, such that his life is centered around his work and nothing else. In our everyday language, we say we get “involved in others’ businesses,” by which we mean that we take an interest in them, or we have concern for their affairs, in which sense we care about them. Thus, when Scrooge says his business always makes him busy, he is really saying two things by “business”: First, that his business is important in the sense of money-making; and second, that it is important in the sense of not getting involved with others. Scrooge is what Heidegger calls inauthentic [Uneigentlichkeit] because he lives solely in the present. While this may seem like a good thing—especially with mindfulness being all the rage nowadays—it is not, because by situating himself in the present, using it as a time of activity, he is neglecting the past and especially his future. 
Normally, in subjective time, we see the present moment as a time of action; it is in the present that we act and make decisions; therefore, we are busiest in the present. Scrooge exists only in the present and is absorbed therein by his work, meaning he can get nothing else done. He is trapped by his work.

Heidegger likens birth to being “thrown” [Geworfen] into the world, insofar as we are, without warning or consent, violently catapulted into life, much like a strong pitch. It is disorienting, unexpected, and outside of our control. Once we are in the world, we find ourselves disposed to a certain mood, or state-of-mind [Befindlichkeit], at every instant. A sad mood makes life appear sad, a happy mood happy. When Scrooge’s nephew Fred visits Scrooge and asks him to celebrate Christmas with his friends, Scrooge replies, “‘What reason have you to be merry? You’re poor enough,’” to which Fred counters, ‘“What right have you to be dismal? What reason have you to be morose? You’re rich enough’” (16). Here one sees the effect of moods. Regardless of circumstances, our attitudes are influenced by moods. In this particular scene, one man is rich, the other poor, yet because of their dispositions, they regard the same situation—Christmas—differently. Jovial, amiable, and affable, Fred likes the season despite his lack of wealth. Stingy, biting, and mean, Scrooge despises the season despite his abundance of wealth. Later in the book, Scrooge watches Fred discuss Scrooge’s mode of being-in-the-world (existing). Fred laments that Scrooge is corrupted by his moods, that his unhappiness will be his ruin. His greed, he says, makes him lonely. But were he happy, Fred suggests, he could love and be with others. Later, The Ghost of Christmas Past pays Scrooge a visit and whisks him away to the town where he grew up, which Scrooge remembers happily. Everyone has facticity. Facticity is one’s past, the collection of “facts” one has about oneself. Our past is made up of things that cannot be changed, but which are permanent and given. Part of our facticity is the fact that we exist—we acknowledge it, but we cannot change it. Our past is our facticity because we are, as Heidegger says, already-in-the-world. 
We cannot come into existence now or in five minutes, because we already find ourselves existing. So, the two then fast forward to a moment in which Scrooge’s marriage is called off by his fiancée Belle. Upset that she has been replaced by his love of money, she cries, “‘May you be happy with the life you have chosen!’” (69). Scrooge is shaped by his facticity, namely his decision to forever dispel happiness and instead pursue wealth. As soon as Belle left him, as soon as he committed himself to this course, he could not change it. Because of this moment in the past, his later life is predetermined and foreshadowed by loneliness. This one choice made in the past, a fact of his existence, affects his whole life. Scrooge is distressed by this scene and demands to go home, but The Ghost of Christmas Past tells him that it is not its fault that the past is the way it is, and that Scrooge should not blame it. The Ghost implies that no one is responsible for how Scrooge’s life turned out except himself. Since facts are given and cannot be changed, Scrooge decides to resign himself to his past, submitting to it, letting it determine him. The loss of his soon-to-be wife and the neglect of his father are facts of Scrooge’s life that he lets determine him. Scrooge’s past is inauthentic.

The Ghost of Christmas Present takes Scrooge to Bob Cratchit’s home so he can see how his clerk lives. Scrooge feels bad for Bob’s disabled son, Tiny Tim, into whose fate he inquires. If Scrooge continues on in his ways, the Ghost responds, then Tiny Tim will not make it to another Christmas. A disheartened Scrooge is mocked by the Ghost, who uses Scrooge’s own words against him. This moment reveals Scrooge in another mode of existence: falling [Verfallenheit]. In a state of fallenness, Scrooge is lost in the world and experiences forfeiture. Being “lost,” Scrooge loses himself in the present, in everydayness [Alltäglichkeit], so he forfeits himself, so to speak. In everyday life, we go about our business, do our job, eat, sleep, and repeat. There is nothing special about it; it is just average. In this way, we are “lost” in the world, and we lose sight of our real selves. We end up reverting to chatter, or idle talk [Gerede], to pass the time. We reuse phrases we hear from others and repeat them in trivial, frivolous, and uneventful conversations that distract us from reality. The Ghost of Christmas Present, however, points out that Scrooge has never experienced “the surplus” himself, has never walked among them in person, yet he remarks about them constantly, saying they should die. Hence, Scrooge has fallen to the “they” [Das Man]. The “they” is a vague entity, a collective, at once everyone yet at once no one, the indiscriminate individual, the voice of society. When asked why we do things, we answer, “Because they do it.” Accordingly, Scrooge’s chatter, his repeating what he hears from others, that the population should get rid of unnecessary people, comes from the “they.” He has become lost in them. He has lost himself in them. He is one of them. Fallen, forfeited, determined by social conventions, Scrooge’s present is inauthentic. By partaking in chatter, communicating through assertions, he reveals himself as fallen. 
Next, he is taken to Fred’s house, where he plays games with the guests, although invisible to them. One can interpret this metaphorically, as though he is both literally and figuratively invisible. He watches as they play the game “Yes or No,” a trivial game. Entertainment. Gossip. For once, Scrooge sees the “they” from the third-person, witnessing their chatter, of which he is the victim, a subject to be talked about and ridiculed. This objective exposure makes Scrooge aware of how dispersed the “they” is, how they pervade every part of life. He hears chatter about himself, listening to how others portray him as inauthentic. Finally, the Ghost of Christmas Present gives his ultimate warning, revealing two depraved children beneath his robe: “‘This boy is Ignorance. This girl is Want. Beware… most of all… this boy, for on his brow I see that written which is Doom, unless the writing be erased. Deny it!’” (115). Afterward, the Ghost quotes Scrooge’s own chatter back to him, condemning his fallenness into the “they.” The purpose of this is to show how Scrooge has fallen victim to the vices of Want and Ignorance. He cares for the wrong things, yet cares nonetheless. The former vice is his greed, the latter his lostness in the “they,” of which he is mostly unconscious, being-amidst-others and the world. In the present, humans are essentially fallen, by which they enter forfeiture, becoming inauthentic, losing themselves, ignorance the inevitable Doom which follows. The Ghost advises Scrooge to pull himself away from the “they” and back to himself.

Existentiality is the third mode of being. It is based on projection. Humans are able to plan ahead, to understand things. We think in terms of possibilities. When the Ghost of Marley comes to Scrooge on Christmas Eve, Scrooge is in disbelief. “Though he looked the phantom through and through, and saw it standing before him; though he felt the chilling influence of its death-cold eyes,… he was still incredulous, and fought against his senses” (31). Here, Marley’s phantom is a metaphor in itself—the arrival of Death. Scrooge, despite death being in front of him, flees from it, denies it. The possibility of death is passionately rejected by Scrooge, who is undeniably frightened, fearful for his life, unwilling to acknowledge its presence. Heidegger thinks death is underrated. He examines the human attitude toward death and concludes that, in everyday life, we see the possibility of death as a “not-yet,” something which will come but has not yet come, something in the distant future, something far away from us, something eventual, improbable, and incapable of touching us; in other words, we are, to use Ernest Becker’s phrase, in denial of death. Yes, we will die, just not today. Or tomorrow. Or in the next year. But, eventually, we will! We push back death, unwilling to face it, giving it a deadline, as if it were on our terms, which it is not. Scrooge is not ready to die, so he does not believe in Marley, but says his senses are deluding him. Death itself is a delusion, he tells himself. During the fourth stave, Scrooge sees a dead body and gets to hear people talking about whoever it was who died. Personally, I do not think it is hard for the reader to predict who it is, but Scrooge completely ignores the possibility of his death, ruling it out immediately, thinking he must still be alive—he has to be alive! 
In spite of all the evidence, from the business partners to the stolen furniture to the family in debt, he fails to deduce that it is he who is dead. The Ghost of Christmas Present, when at the Cratchit house, cautions Scrooge, “‘If these shadows remain unaltered by the Future, the child will die’” (98). What has this to do with existentiality? Scrooge, like all of us, thinks in terms of possibilities, in the process reducing Tiny Tim to a presence-at-hand; simply put, by thinking about Tiny Tim’s future, he sees him as a thing subject to time, as something that has possibilities, much as a pencil has the possibility of writing. Tiny Tim is considered to be something present, something that is “there.” Scrooge, for this reason, does not think of the future or project possibilities properly. Scrooge’s future is inauthentic. At the graveyard, Scrooge pleads with the Ghost of Christmas Yet to Come,

‘Are these the shadows of things that Will be, or are they the shadows of things that May be only?… Men’s courses will foreshadow certain ends, to which, if persevered in, they must lead…. But if the courses be departed from, the ends will change. Say it is thus with what you show me!’ (141)

Thinking of the future, Scrooge is determining whether it is contingent or necessary: Is his death necessary or unnecessary, a possibility or a certainty, a preordained event or an avoidable one? Has he free will? Is his future determined by his past completely, such that he signed his death warrant as soon as he chose his selfish, greedy path? If he is given a second chance, if he returns to his life, will the foreseen things happen, or can he change himself? Scrooge finally wants to become authentic [Eigentlichkeit].

Each of the Ghosts of Christmas represents something in the novella: Past, present, and future. However, up until now, I have not talked a whole lot about the Ghost of Marley. If he is a Ghost, and he visited Scrooge, of what is he, the first of all Ghosts, representative? What role does he play, for both Dickens and Heidegger? Jacob Marley, the dead co-owner of “Scrooge and Marley” and friend of Scrooge, is Scrooge’s call of conscience. In his famous monologue, Marley declares,

‘I wear the chain I forged in life… I made it link by link, and yard by yard; I girded it on of my own free will, and of my own free will I wore it. Is its pattern strange to you?…. Or would you know… the weight and length of the strong coil you bear yourself?’ (34-5)

The chains are a famous metaphor for the decisions Marley made throughout his life. Every single link, he says, is a choice he has made by himself, for himself. He repeats the phrase “free will,” which is important, because it means he alone made the choices; no one forced him to do them; he made his own life. Then, he asks Scrooge if the pattern is familiar. Like Scrooge, Marley stinted, grudged, and cared only about himself, leading to his lifestyle, which he regrets, a fate he abhors yet bears because he has to. Marley expresses remorse that he never went outside the building to see the people during Christmas time, but stayed locked up in his little cubicle working. This is what Heidegger calls guilt [Schuld]. Guilt is both a debt and a responsibility. Scrooge experiences guilt as a debt, because he has to pay off what he has done. His past actions, mind you, are part of his facticity, so he owes this debt with his very existence. Similarly, this debt is manifest in a responsibility for one’s actions. To be guilty is to look back at one’s past, to acknowledge that, while the past defines who we are, it does not define who we will be. Scrooge is determined by his past insofar as he has trouble forming intimate connections with others and he loves money, but this does not mean he has to be this way forever. He is indebted to his past, and must as a result carry this responsibility. Heidegger explains that when one is guilty, one is “full of not’s”—that is, we see what we are in contrast to what we are not. Since we are constantly making choices, we are simultaneously negating possibilities. By writing this essay, I am negating the possibility of having never written it, which would make me a different person, a person to whom I would be indebted, and for whom I would hold responsibility; thus, looking back, I would be guilty. 
Marley continues, complaining how sad it is “‘not to know that any Christian spirit… will find its mortal life too short for its vast means of usefulness! Now to know that no space of regrets can make amends for one’s life opportunities misused!’” (38). We only have one shot at life; in a word, YOLO. The point of Marley’s jeremiad is to remind Scrooge of his mortality, which has hitherto been neglected. Scrooge lives too absorbedly in the present, disregarding the future, paying no thought to it, as he is wrapped up in his business. How much change, how much good Scrooge could do, implores Marley, if he only realized his “vast means of usefulness”! Marley fears that if Scrooge sticks to his hermit-like existence, then it will be too late, and he will never get a chance to redo his life, just as Marley never did. Notably, he says, “‘Mankind was my business. The common welfare was my business…. The dealings of my trade were but a drop of water in the comprehensive ocean of my business!’” (38). Business has two meanings, of which work, the second, is subsumed under the first—the service of humanity. The business of Marley is the sum of his involvement, his care, in the world. Getting money is but a small portion of his engagement with the world; the other half was neglected, namely people. Similarly, Scrooge fails to conduct business with his fellow man. Only through the future can the past be changed. Scrooge, too narrow in his approach, cared too much and was concerned too little, inspiring regret. After lamenting that he did not help the poor on Christmas Eve in life, Marley reveals he has come to warn Scrooge of how to avoid his very fate. First, Scrooge must realize that his facticity is inauthentic; to fix it, he must avoid the determinism of the past. Second, he must take up his duty toward man. In this way, the Ghost of Marley is the call of conscience, as Heidegger saw it. 
Conscience is itself a calling, a voiceless voice, which calls humans back to themselves. It is the call of the self to come back to itself, away from the “they,” from inauthenticity, from fallenness, from forfeiture. It retrieves us from our absorption in the everyday. Through the call of conscience, we are made aware of our situation: We are alone, and wholly responsible for our choices. Marley beseeches Scrooge to personalize his past; he must make the past his before the past makes him its. Rather than fall victim to the past and let it define him, he must understand his past and how it shapes him. Though he denied it in an essay later in his career, Heidegger here anticipates Sartre’s “existence precedes essence.”

We are always in a mood. There is a peculiar mood, however, which leads to authenticity by making us confront our mortality: Anxiety/dread [Angst]. Unlike other moods, anxiety discloses our finitude to us. This necessary though unsettling state-of-mind allows us to realize our essential mode of being: Being-towards-death [Sein-zum-Tode]. This is a scary idea, but Heidegger insists that it is at our core. Essentially, we are always moving towards death slowly. Time passes as it inches closer, year by year, moment by moment. Death is defined as “the possibility which is one’s ownmost, which is non-relational, and which is not to be outstripped.”[2] Put simply, death is the only certainty in life. Everyone has to face death. No one is exempt from dying. It is insightful for Heidegger to propose that death is one’s “ownmost,” through which he communicates that death is my own: no one can die my death for me; I must die it myself. He notes that I can die for others in the sense of a sacrifice, but I am eventually going to die myself, independent of anyone else. We must all die on our own, for death is essentially private, unique to everyone. Death, then, is both unique and unavoidable, a necessity. Heidegger is quick to critique our views of death: According to him, the “they” in everyday life dismisses death, objectifying it as an observable event that will happen. Think about it: When we talk about death, we say it “will happen, just not right now.” The “they” postpones death and convinces us that we are immune to it. Truthfully, death comes to us all, and it is the ending of life: There are no more possibilities after death, for it is “not to be outstripped.” Scrooge, when he sees his grave with the Ghost of Christmas Yet to Come, is filled with anxiety; he is immediately made aware of his mortality and the shortness of life on Earth; all at once, his Being is filled with intense emotions. Scrooge achieves resoluteness [Entschlossenheit]. 
To be resolute is to realize that one’s possibilities are one’s own. Resoluteness, in everyday language, means autonomy. A resolute Scrooge takes responsibility for each of his actions, considering they are his, and no one else’s. His life is his, so he must evaluate his possibilities for the future by himself, in the face of death. Together, being-towards-death and resoluteness become “anticipatory resoluteness,” which is just a fancy way of saying that one anticipates, or awaits, one’s death (hence anticipatory), thereby becoming resolute. An illustration: Scrooge sees his tombstone, realizing his mortality (anticipation), and decides thenceforth to become a new person (resoluteness). Achieving anticipatory resoluteness leads to a “moment of vision” [Augenblick], in which one reinterprets the past in relation to the future in the Present. The word “moment” is misleading, as it really refers to the fact that it happens in the Present (with a capital ‘P’), which is distinguished from the present, or the “now.” In the present, one is active: One acts in the present. In the Present, one is passive: Things happen to us in the Present. While you are contemplating your New Year’s resolutions, keep death in mind. Being resolute is like making a resolution—just make sure to anticipate death while you are at it! Heidegger describes authenticity in the following passage:

[A]nticipation reveals to Dasein its lostness in the they-self, and brings it face to face with the possibility of being itself, primarily unsupported by concernful solicitude, but of being itself, rather, in an impassioned freedom towards death—a freedom which has been released from the Illusions of the “they”, and which is factical, certain of itself, and anxious.[3]

To conclude, we get out of inauthenticity by confronting our own deaths, our ultimate possibility. We disclose ourselves through anxiety as beings-toward-death, a death which is certain, unique, and total.

Scrooge swears to the Ghost of Christmas Yet to Come he will change his ways, promising,

‘I will honour Christmas in my heart, and try to keep it all year. I will live in the Past, Present, and Future. The Spirits of all Three shall strive within me. I will not shut out the lessons that they teach. Oh tell me I may sponge away the writing on this stone!’ (142)

When I first read this quote, I almost jumped out of my blanket in joy; for while it is the climax of the story, the point where Scrooge truly resolves to turn his life around, it also could not line up more perfectly with Heidegger’s philosophy! Heideggerian temporality [Zeitlichkeit] is extraordinary: On the one hand, it is extra-ordinary in that it goes beyond and even shatters our everyday conception of time; and it is extraordinary inasmuch as it is a creative, insightful, and existential way of thinking about time. “Reaching out to the future, it [time] turns back to assimilate the past which has made the present.”[4] What does this mean? Authentic temporality is subjective and finite: It is something experienced by us, and it has a beginning and an end. But unlike our view of time, which divides temporality into three separate dimensions—past, present, and future—Heidegger says time is a unity. Time is not broken up into infinite “nows” in the present, arising from the past and becoming the future. Inauthentic temporality is past, present, and future; authentic temporality is past-present-future, all in one. How can one be in the past, the present, and the future simultaneously, all at once? How is this possible, if it even is? According to Heidegger, when one exists authentically in time, one looks ahead to the future, to what they could be, at death, then reinterprets the past in light of this and becomes aware of how the past has shaped them, notices that what they are is influenced by what they were, and acts in accordance with this in the present—all in an instant. The future is predominant, though, since with it one anticipates death. Now, compare the following passage, from Heidegger, to the one quoted above, from Dickens:

In every ecstasis, temporality temporalizes itself as a whole; and this means that in the ecstatic unity with which temporality has fully temporalized itself currently, is grounded the totality of the structural whole of existence, facticity, and falling—that is, the unity of the care structure.[5]

The above passage basically restates what Scrooge promises to the Ghost of Christmas Yet to Come: Truly, Scrooge “will live in the Past, Present, and Future”! It is worth considering that Dickens capitalized each of the “ecstasies” of time purposefully because he wanted to emphasize the importance of each structure of time. Conveniently—perfectly, I might chance to say—it fits with Heidegger, forming a union. Also, pay attention to the last part of Heidegger’s passage. He refers to the “care structure,” which is united by—look at that!—the three modes of existence: facticity, falling, and existentiality, each of which lines up with the three modes of time: past, present, and future. The care structure ties in with what was talked about earlier—our involvement in the world. As such, being is essentially linked with time, hence the title of Heidegger’s book, Being and Time [Sein und Zeit]. (Is your mind blown yet?) Another notion is then introduced by Heidegger: Fate [Schicksal]. But did we not discuss earlier that existence precedes essence, that there is free will, not determinism? Fate is different for Heidegger than it is for us, unsurprisingly. One’s fate is existing in the authentic present. In a process he calls “historizing” [geschehen], we “stretch” ourselves along time. That is, we stretch ourselves between the past and the future, the beginning and end, birth and death. As with anything stretched between two ends, there is a middle ground. In this case, the Present. Our fate is to live authentically in the Present for ourselves, resolutely. It is during this time that we engage in the moment of vision, which, as we said, is not sustained for just a “moment,” but indefinitely, as long as one is authentic.

While planning this, I ran into a perplexing problem with terrible implications: If Christmas is a tradition everyone follows, an event “they” do, and if Scrooge celebrates it, then does that make Christmas inauthentic, something in the realm of the “they”? If this is so, then did Scrooge come all this way and listen to the Ghosts in order to authenticate himself to—what, to become inauthentic again? Does this unravel the entire plot instantly? Lo! Luckily, Heidegger has a solution:

Repeating is handing down explicitly—that is to say, going back into the possibilities of the Dasein that has-been-there. The authentic repetition of a possibility of existence that has been… is grounded existentially in anticipatory resoluteness; for it is in resoluteness that one first chooses the choice which makes one free for the struggle of loyally following in the footsteps of that which can be repeated.[6]

The phenomenon known as repetition [Wiederholung] is reaching back into the past and “inheriting” something for oneself. He calls it “handing down.” Much as siblings give each other hand-me-downs or families hand down heirlooms, so we can interact with the past in a special way. Repetition does not necessarily have to happen out of conformity. As Heidegger writes, it can be authentic when acted on through anticipatory resoluteness. If we consciously make the choice to celebrate an age-old tradition which others celebrate, too, then we are authentic. However, those who celebrate Christmas just because their families and friends do, without knowing why they celebrate, what the importance of it is—they are celebrating Christmas inauthentically. They are not giving it the respect it deserves. To celebrate Christmas, to partake in the Christmas spirit, requires that one truly choose it, and this is precisely what Scrooge does. Heidegger adds that authentic repetition “deprives ‘today’ of its character as present, and weans one from the conventions of the ‘they.’”[7] Not only is an appropriated past event not past at all, but it is completely free from the besmirchment of the “they.” Chosen authentically and intentionally in the face of death, projected in the long run, a followed tradition is neither past nor present, but Present, because it is something which simply happens, neither caused by the past nor done to progress anything.

Care as solicitude, or protection and concern, is thus enacted by an authentic Scrooge, who, embodying the Christmas spirit, united temporally, having encountered death, in a blissful mood, gives a young boy passing by money to buy a big turkey, which he delivers to the financially struggling Bob Cratchit; donates a large sum of money to charity, recanting his mistaken chatter; and befriends the Cratchits, joining the family, becoming a father figure for Tiny Tim, whose life he saves by saving his own. On his way out of the house, Scrooge stops to look at his door knocker, which once resembled Jacob Marley’s face, and exclaims, “‘I shall love it as long as I live! … I scarcely ever looked at it before. What an honest expression it has in its face! It’s a wonderful knocker!’” (149). This seemingly unimportant moment is probably glossed over by readers, but it holds significance. We encounter things and objects in the world as either present-at-hand [Vorhandenheit] or ready-to-hand [Zuhandenheit]. The former are things that just are; they are factical and given, and their mere presence gives them their name. The latter are things that can be used—equipment, if you will. From this, one can conclude that such objects are looked down upon as merely things, objects of use. Living things are more important than lifeless objects lying around. This is why this moment is worthy of our attention. Heidegger explains, “The moment of vision permits us to encounter for the first time what can be ‘in a time’ as ready-to-hand or present-at-hand.”[8] Taken for granted, seen daily but not considered in itself, used mindlessly through subconscious habit, Scrooge’s door knocker only gains value when he sees Marley’s face in it. 
Now, as a being-towards-death, Scrooge sees the door knocker in a new light (symbolism!), disclosing it, revealing what was once hidden to him, finding pleasure in the simple things. One thinks of the common adage, “Live each day as though it were your last.” The night before was almost his very last, so he cherishes being alive, even taking joy in mere objects. The moment of vision discloses the world and objects, uncovering them as they are; and it is not just for a single instant, but for a lifetime. Scrooge is authentic Dasein.

Scrooge was better than his word. He did it all, and infinitely more…. He became as good a friend, as good a master, and as good a man as the good old City knew, or any other good old city, town, or borough in the good old world…. His own heart laughed, and that was quite enough for him (155).

And so, as Tiny Tim observed, God bless Us, Every One!



*I want to dedicate this blog to my dad, who has himself encountered death in his time; who has, I want to think, remained authentic as a father for as long as I can remember; whose avid and ardent affection, appraisal, and adoration for Charles Dickens inspired me to write this post; and without whose support I would not be writing. May we have many more Christmases together!


[1] Heidegger, Being and Time, H. 133
[2] Id., H. 250-1
[3]  H. 266
[4] Edwards, The Encyclopedia of Philosophy, Vol. 3, p. 461
[5] Heidegger, op. cit., H. 350
[6] Id., H. 385
[7] Id., H. 391
[8] Id., H. 338

For further reading: 
Masterpieces of World Philosophy in Summary Form by Frank N. Magill (1961)
The Columbia History of Western Philosophy by Richard H. Popkin (1999)
Existentialist Philosophy: An Introduction by L. Nathan Oaklander (1992)
The Encyclopedia of Philosophy, Vol. 3 by Paul Edwards (1967)
Time, Narrative, and History by David Carr (1991)
A Christmas Carol by Charles Dickens (1994)
Being and Time by Martin Heidegger (1962) 


Freedom: Blessing or Curse?

Wars have been fought over it, countries overthrown in the name of it, debates had over it, philosophy and religion argue about it, and science is not even sure of it—freedom. Ah, freedom, that wonderful, cherished value! What would we be, where would we be, without it? While some are not even sure whether it exists, others have shed blood in order to attain it, toppling tyrants, stopping servitude, and limiting laws, all so that we humans can be free. But what does being free mean? There are two types of freedom: negative and positive. The former is freedom from, which is freedom in the fundamental sense; the latter is freedom to, which is, arguably, a newer sense of the word, and the sense in which we commonly portray it now. We are always seeking freedom from the government and therefore freedom to make our own lives. Freedom is easily one of the most valued ideas in human history. However, for the sake of a thought experiment, should we all accept freedom as a given—absolute freedom, that is—would it be a blessing, as we all think it is, or would it be a curse? Ought we to be careful what we wish for?

Nowadays, we must be independent and make our own decisions. You go to the shoe store and see dozens upon dozens of different shoes, all similar in essence, but unique in design, color, and size; you go to the grocery store for cereal only to discover that there are twenty kinds, in both brand and flavor; you want to pick out some new bedsheets, but—alas!—there are colors across the spectrum, and some have your favorite characters or TV shows on them, while others have nice designs—what do you do? According to some writers and marketers, we are experiencing “overchoice,” a phenomenon in which we are bombarded with so many choices that it becomes overwhelming, making it harder to decide. Today’s generation, it is feared, is more anxious than previous generations, yet we are expected to make major life decisions that will impact us forever, such as where we want to be educated, what job we want to get, and whether we want a family! To some philosophers, called libertarians, we have radical freedom—illimitable, boundless, unconstrained freedom—to the extent that to be is to be free. In looking at Jean-Paul Sartre and a bit at Søren Kierkegaard, two Existentialists, we shall examine the human condition as defined by freedom.

Existence, said Jean-Paul Sartre (1905-80), precedes essence. By this, he meant that humans have no blueprint when they are made, no inner nature. He denied there was any universal image of “man” out of which we were fashioned. An atheist, he denied God, and thereby any a priori good; because there is no omnipotent or omniscient being, nothing is deemed good or bad in itself; rather, everything is left undetermined. There is no “path” for us to follow, nor any moral code to which to adhere. “Man is nothing else but what he makes of himself,” he said.[1] Building on the previous statement, Sartre is saying man is born as nothing, a tabula rasa, a blank slate, indeterminate, like everyone else—then he must create himself through his choices. Like an empty cup that becomes whatever substance is poured into it, man can “project” himself into the future; he, through his willed actions, motivated by choice, literally throws himself forth into a new time, leaving the past, entering the future, a new person. By thinking about his future, man can change himself, by seeing what he is not, and thereby what he could be. Describing choice, Sartre wrote, “The end, illuminating the world, is a state of the world to be attained and not yet existing.”[2] Every decision we make affects how life will turn out for us. If we want something to happen, and if we act upon it, then it will become a reality. Projecting ourselves into the future, we actualize possibilities. But once a possibility is actualized, it cannot be revoked, it cannot be taken back; it is irrevocable, it is permanent, and it defines our character. One silly mistake one night when we are acting out of character, and our whole future is affected. We are always in a situation when we make choices, situations that influence how we interact with the world. 
Sartre defined situation as “the contingency of freedom… already illuminated by the end which freedom chooses.”[3] Our intentions reveal the world in relation to that intention so that how we act is in accordance with our freedom; or, simply put, depending on our choice, we will see the world differently as it relates to our choice. For instance, Sartre uses the example of a mountain climber and a lawyer. A mountain climber surveys a mountain with the intent of climbing it, so he evaluates the mountain as either “climbable” or “non-climbable,” an evaluation that will either encourage or discourage him from climbing it. A lawyer, however, looking at the same mountain as he passes by, sees it through his choices as either “attractive” or “unattractive.” When you go to work, you have the absolute freedom, the ultimate choice, to change how you perceive your work in light of your choices. Things will appear differently to us based on our goals, for better or worse.

Weighed down by these seemingly innumerable choices, especially the life-changing ones, we experience what Sartre calls anguish. Anguish is the realization of our essence—freedom. Deciding whether or not to move to the city you have always dreamed of and get a new job there, unsure whether it will actually work out, you feel anguish. You examine all the choices before you and, projecting yourself into the future, look at all the ways these choices could play out. You are frozen in anguish, because this choice will permanently affect you: What if you do move and do get your dream job, but it ends up failing, and you have no Plan B—then what? Anguish, Sartre contended, does not actually hinder choice, but is an essential part of it. Yes, you are paralyzed by anguish, yet it is through anguish that you get a better look at your life and all your choices, motivating you to think each possibility through carefully. Anguish is tied to the future, and its presence is usually marked by the question, What will I do? We also experience forlornness, which is marked by the acknowledgment that there are no guidelines for us to follow. Without God, there is no morality. “Everything is permissible if God does not exist,”[4] Sartre wrote. This controversial proclamation says that if there is no higher morality, if there is nothing on which to base our morals, nothing to act as an enforcer of our values, then we can do anything. There are no Commandments that prohibit us from taking the life of another. What is to stop us from, right now, going outside and doing the first thing that pops into our mind? Nothing, said Sartre. Our morals come about through our choices. “The only way to determine the value of [an] affection is, precisely, to perform an act which confirms and defines it.”[5] My biology teacher, describing the difference between values and morals in bioethics, stated that values are things that matter to us, while morals are what we actually act on. 
What Sartre was communicating, then, was that we must show what we care about, instead of merely saying what we care about. If you say you value your family yet do not spend time with them, then your morals show the contrary. Values are made valuable if and only if we act on them. Our choices define our values. There is no moral standard for everyone; each of our values is unique to us. Despair is defined as relying on the self because nothing else is reliable. Not nature, not the future, not our friends, nothing. We act on possibility alone. There is always the possibility that things will not turn out the way we want them to, so we despair. And dread, similar to anguish, is designated by Kierkegaard (1813-55) as “the dizziness of freedom.”[6] With so many choices, we feel dizzy just contemplating all our possibilities. Think of today’s consumer culture, as described in the second paragraph! “In dread,” Kierkegaard argued, “there is the egoistic infinity of possibilities, which does not tempt like a definite choice, but alarms and fascinates with sweet anxiety.”[7] Honestly, try to think of every single possibility there is. It is impossible. Occupation, home, family, furniture, education, companions, pets, books, phones, colors, foods—a plurality of categories, within each of which is a plurality of particulars! Focus in on the major life moments, and you will feel dizzy, as if looking into an abyss. Everything that I can dream of doing. Distant, but close. Countless. Finite. Our choices tempt us and push us away simultaneously through anxiety.

So what about quietism? What if we find our infinity of possibilities too overwhelming, so we retreat into solitude and let others make our decisions for us? As freedom and doing are our essence, said Sartre, we must do rather than merely think. Retreat is itself an act, but it is not enough. He advised us not to resign ourselves and leave it to others, but to do what we want to do. Create what you have always wanted to see, hear, or read. Make your ideality a reality. Quoting a popular phrase, my friend likes to say, “Don’t wait for the perfect moment; make the moment perfect.” This is the attitude Sartre wants us to adopt. Carpe diem. Unfortunately, the excitement of making our own choices brings with it a little caveat: In being responsible for yourself, you are responsible for mankind. Like Kant’s Categorical Imperative, your choices influence how others choose. Each choice we make reveals our ideal image of how man should choose. If you decide to buy Honey Nut Cheerios, then you are saying that all shoppers should buy Honey Nut Cheerios. Every time you make a choice, ask yourself, What if everyone acted as I do? To further brighten the mood, Sartre said, “I am abandoned in the world… in the sense that I find myself suddenly alone and without help, involved in a world for which I bear the whole responsibility without being able, whatever I do, to tear myself away from this responsibility for an instant.”[8] Comforting. Very cheerful. Sartre pointed out that some want to eschew their awesome responsibility, complaining that they did not choose to be born, for as soon as we are born, we are immediately responsible for ourselves and without guidance. On the contrary, Sartre countered that by taking an attitude toward our birth, be it positive or negative, we make it “ours,” and we thereby possess it. 
Birth, for Sartre, is not a part of our facticity, our simple fact of existence, but is rather reconstructed by our consciousnesses, whereby we transcend our birth, detaching our being from our beginning, thus meaning that we, in a sense, choose our birth by complaining about it.

Sartre’s most famous quote is, “Man is condemned to be free.” How paradoxical! “Condemned” carries negative connotations, whereas “free” carries positive ones—how can the two be in the same sentence? He said we are condemned to be free because as soon as we are born, we have no way of throwing off the burden of responsibility, but must trudge our way through life, defining ourselves through our choices, without reliance on any external power or on any people outside of ourselves, always independent, alone, isolated, the future of our lives, of humanity itself, in our hands. One choice, considered out of an infinitude of possibilities, can make all the difference, all the while we know there is no guarantee that the future will work out; with great freedom comes great responsibility. Personally, when I first came across Existentialism, and when I read about Sartre, I flushed with awe and excitement: How cool it is that I get to create myself and my morals! Now, when I read Existentialism, I get scared, I get nervous, I am full of anguish. How terrifying it is, to be alone, to know that every choice I make, conscious or unconscious, defines me. This goes to show that Existentialism is for the courageous, the brave. Not everyone can be an Existentialist, but those who are up for the challenge can accept it. I admire Existentialists. Am I one? Yes and no. In the end, though, whether freedom is a blessing or a curse depends on one’s attitude towards the human condition. When you hear the Existentialist motto, “You are your choices,” you have two choices, choices you have to make on your own: you can either say, “Yes, I am my choices!” or “Oh god, I am my choices?”

So what do you think: Is freedom a blessing or a curse?

[1] Kessler, Voices of Wisdom, “Existentialism is a Humanism,” p. 408
[2] Sartre, Being and Nothingness, p. 584
[3] Id., p. 596
[4] Kessler, op. cit., p. 410
[5] Id., p. 412
[6] Kierkegaard, The Concept of Dread, p. 54
[7] Id., p. 56
[8] Sartre, op. cit., p. 680

For further reading: Being and Nothingness by Jean-Paul Sartre (1966)
Voices of Wisdom by Gary E. Kessler (2007)