What is Dreaming, What Do Dreams Mean, and Why Do We Dream?

Martin Luther King, Jr. once had a dream—and last night, so did I. At the end of a long day, we all get in bed, close our eyes, and go to sleep. Then the magic happens. It is as if, when our eyes close, the curtains of the mind are opened, and a lively, magical performance begins on the stage of our unconscious, with dances, songs, and monologues. Bright, intense, and nonsensical, these images visit us every night, although we are quick to forget them; they soon fade away, almost as though they never happened. Dreams feel real, yet they are unreal, illusory. Sometimes they replay things that have happened to us, sometimes they show us things that have not yet happened, and sometimes they show us things that are happening at that very moment. Dreams are among the greatest mysteries of the night, which is why they have attracted so much attention from individual thinkers and entire civilizations alike, all of whom have attributed some importance to them. What are dreams, really? Why do we dream? Do other animals dream? These are questions psychology has been asking and will continue to ask. As of right now, none of them has a confident answer; we are left with theories. We humans will not rest (no pun intended) until we get the answers; we refuse to just “sleep on it”—literally, because we cannot. So in today’s post, we will explore the science behind dreaming, the history of dreaming, and the different interpretations of dreaming that have been proposed. Although we will not arrive at definitive answers, we will still gain some valuable insights into the nature of dreaming.


What is dreaming?

It is not as though we start dreaming as soon as we get into bed. Instead, sleep has to pass through several stages before dreaming begins. Researchers study brain waves with electroencephalograms (EEGs)—a fancy word for recordings of the brain’s electrical activity at a given moment. Using these brain waves, psychologists have identified at least four patterns across the sleep process: First, in our everyday waking lives, the brain produces beta waves, generated when we think, usually at 13 or more cycles per second (cps); second, when we close our eyes and start to relax, alpha waves kick in at 8-12 cps; third, theta waves appear at 4-7 cps as we enter light sleep and begin to feel drowsy; fourth, delta waves, at 4 or fewer cps, are produced during deep sleep. Most of our vivid dreams, however, occur not in this deep, delta-wave sleep but in the REM periods that punctuate the night. But what is a dream?
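To make those frequency bands concrete, here is a minimal, purely illustrative Python sketch that maps a dominant EEG frequency onto the wave types just described. The cut-offs are simply the ones quoted in this post, and the function name and labels are my own; real sleep scoring is far more involved than a single number.

```python
# Toy classifier for the brain-wave bands quoted above (cycles per second).
# Illustrative only -- real EEG analysis looks at whole spectra, not one frequency.

def classify_wave(freq_cps: float) -> str:
    """Return the wave band a dominant EEG frequency falls into."""
    if freq_cps >= 13:
        return "beta (awake, actively thinking)"
    if freq_cps >= 8:
        return "alpha (relaxed, eyes closed)"
    if freq_cps >= 4:
        return "theta (light sleep, drowsy)"
    return "delta (deep sleep)"

for f in (20, 10, 5, 2):
    print(f"{f} cps -> {classify_wave(f)}")
```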


A dream is “a sequence of images, emotions, and thoughts passing through a sleeping person’s mind.” In addition, “Dreams are notable for their hallucinatory imagery, discontinuities, and incongruities, and for the dreamer’s delusional acceptance of the content and later difficulties remembering it.”[1] One important thing to be gleaned from this definition is that dreaming is not confined to visual displays and pictorial imagery; rather, dreaming can involve many other senses. A question many people are curious about is whether blind people can see things in their dreams, or deaf people hear. What has been found is that people with congenital blindness, because they have never seen anything, dream using senses other than sight, and the same applies to people who have been deaf from birth. In other words, people born blind can hear or smell things in their dreams, since they have been exposed to that kind of stimulation, but they do not see with their eyes. Another interesting thing about dreams is that, besides not being limited to pictures, they can also communicate intentional states, i.e., motivations, fears, desires, etc. Dreams are set apart from waking life by their being illogical. Whereas real life follows a logical cause-and-effect and a sequence of narration like a common story, dreams are full of random and disorganized events. As such, they are characterized as fantastical, belonging more to fantasy and fiction than to reality, adopting unrealistic exaggerations and possibilities, more incredible than realistic. When we say there is a uniform narrativity to life, we mean there is a set plot, with a beginning, middle, and end; but with dreams, there is no such narrativity, for there is nothing that links events together in any reasonable way.


Now, regarding what actually happens during dreaming: Once we are fully asleep, we cycle between two kinds of sleep, REM and NREM. REM stands for “rapid eye movement,” and we pass through about 4-5 REM periods a night, roughly every 90 minutes. While we are in REM, brain waves paradoxically resemble those produced when we are awake. If we were to look at the brain during REM, it would look much as it does in waking life—despite the fact that during REM the entire body is paralyzed and in total relaxation, which is why REM is sometimes called paradoxical sleep. Psychologists like to refer to this as a “dream-state,” and animals undergoing it are highly stimulated. All mammals, not just humans, experience this dream-state; the difference lies in how long each animal spends in it, and we humans sit somewhere in the middle. REM is so named because the eyes literally twitch rapidly while shut, which seems to contradict all logic and is part of why psychologists remain puzzled by the phenomenon. Speaking of puzzling phenomena, some people experience “sleep paralysis” when they regain consciousness during sleep, only to find their bodies rigid, unable to move, as if stapled to their beds, their throats pinched. Why we wake up at random, we do not know. Why we are paralyzed—this we do know: Sleep researcher Michel Jouvet found that the pons, located in the lower region of the brain stem, inhibits motor activity during sleep, meaning the muscles are effectively switched off. He performed a study in which the relevant region of the pons was lesioned in cats, and then he watched them at night to see what happened. Because the part of the brain that stopped the muscles from being used was gone, the cats, he observed, moved around quickly and ferociously, in a predatory manner—because they were, Jouvet supposed, dreaming about catching mice. What this revealed is that, if the pons were not active during sleep, there would be many more injuries at night. A number of people report a sort of “falling reflex”: upon falling in a dream, they wake up, as if reacting to the fall and catching themselves. Imagine, then, what would happen to many of us in some of our more threatening dreams, if it were not for the pons in the brain stem.


What about NREM? NREM stands for “non-rapid eye movement”—I know, creative. NREM actually spans everything from light, drowsy sleep down to the deep, delta-wave sleep described earlier; it is in that deepest sleep, for instance, that heavy sleepers can snooze straight through their alarms, while light sleepers cannot. For a time, it was thought that dreaming occurred only during REM; however, later studies disproved this, showing that dreams do occur during NREM, just that they are less vivid and less memorable. Other findings about dreaming concern the environment and dream content. The external environment of a sleeper has been discovered to affect their dreams. In one study, for example, experimenters waited for subjects to enter REM sleep and then sprayed them with water; upon waking, the subjects said they had dreamt of water in some form or another, be it seeing a waterfall or swimming in a pool. What surrounds a dreamer, or what touches them, can create associations with the outside stimulus. Such dreams are called “self-state dreams,” since their content is centered on the state in which the self finds itself. Sometimes, self-state dreams can also lend insight into future actions. One thought-provoking fact is that 80% of reported dreams are negative (Domhoff, 1999). Accordingly, for every five dreams we have, only one does not involve bad things happening to us.


Another subject of inquiry—one which is unbelievably trippy—is lucid dreaming. When dreams are very high in lucidity, or clearness, we are aware of ourselves as dreaming. Let us put it this way: Lucid dreaming is knowing that we are dreaming. But are we just dreaming that we are dreaming? If you want a quick look at the philosophical problem of dreaming, then you can read here! Aside from the armchair philosophy of dreaming, there is a little more substance to lucid dreaming. For instance, lucid dreamers report having no sense of touch, allowing them to pass through otherwise impassable obstacles, and they also apparently lack the other senses besides sight. Lucid dreams are also said to be brighter than regular dreams. When aware of dreaming, dreamers can ignore natural laws, doing things that defy logic and physics. All of this raises the question of why we even dream in the first place. If sleep is necessary for us to rest our bodies, then why not just sleep, why have hallucinatory visions at night? Unfortunately, we have no solid answers. There is only speculation. I will discuss these speculations in further detail at the end, but for now, here is a brief overview.

  1. Wish-fulfillment. According to this theory, dreams are symbolic representations of repressed, unconscious urges, most of them erotic. The problem with this theory is that, surprisingly, dreams with sexual content are actually quite rare (recall that 80% of dreams are negative).
  2. Memory storage. Those who support this theory argue that because memory consolidation improves during REM, it stands to reason that the purpose of dreams is to sift and file away the day’s experiences. If you have ever heard that it is unwise to cram right before going to bed, this is where that advice comes from. Just like your body, your brain needs time to recover, so if you jam it with knowledge right before bed, you will overload it, and your learning will not be as effective; the brain works more efficiently if it takes in smaller chunks over a longer stretch of time.
  3. Neural pathways. Random sensory stimulation reaches the brain as it sleeps, strengthening its neural connections. Thus, this theory says dreaming’s purpose is to solidify our neural pathways.
  4. Biological model. Activation-synthesis is the theory that the brain stem creates random imagery that is interpreted by the limbic system, which colors it. Hence, seemingly meaningless visuals are turned into emotional, colorful images that resemble conscious life.
  5. Cognitive development. For some, dreams reflect our cognitive development. As evidence, they use the fact that children have relatively simple, crude dreams, whereas adults have more complex, egocentric dreams. The complexity of dreams depends on how much knowledge one has.

A History of Dream Interpretation

Since the earliest civilizations, dreaming has held an important place in human culture. If we look back more than 4,000 years, we find the earliest records of dreaming to date. A document known as the “Chester Beatty papyrus” was excavated and dates to around 2,000 B.C. On it are written some 200 dreams reported and interpreted by the Egyptians. Based on Mesopotamian mythology, and adapted from Assyrian sources, this Egyptian dream codex reveals the universal nature of dreaming. The fact that these three great civilizations—Egypt, Mesopotamia, and Assyria—gave such immense attention to dreams, and were related in their study of them, shows how intimate dreams are to the collective consciousness of a people. In all three societies, dreams were ways of contacting invisible realms through the guidance of either good or bad spirits. Then came Abrahamic monotheism. Christianity, Judaism, and Islam all interpreted dreams as messages sent directly from God during sleep. Understandably, these dreams were heavily laden with religious metaphor and symbolism.


A little later, the Greeks became fascinated with dreams. The Greeks had their own religious groups—some might say cults—called “Mysteries,” and many a Mystery was focused on dreaming. To have better dreams, Greeks induced sleep with oils and drugs, so that the experience would be more immersive. An important aspect of Greek life was the oracle: Each day, hundreds of travelers would go to oracles to have their fortunes told, and dream interpretation was done in the same manner. Specialized interpreters had a place in the temple, where they were surrounded by natural smoke that they would read and decode, then pass on to the dreamer. During the Archaic period, though, a shift occurred. The Pre-Socratic philosophers began to steer away from religion and toward scientific, rational thought. Mystery rites and dream divination (oneiromancy) would be replaced with more empirical observation. Each of the following philosophers anticipated modern-day conclusions on his own.

  • Heraclitus (c. 535-475 B.C.) claimed dreams were nothing more than day residue, i.e., recollection of things that happened throughout the day.
  • Democritus (c. 460-370 B.C.) thought dreams were the result of the external environment acting on an individual’s consciousness.
  • Plato (428-348 B.C.) proposed that dreams were a manifestation of wish-fulfillment based on repressed desires in the soul. He also thought dreams were divinely inspired and could grant people creative impulses.
  • Aristotle (384-322 B.C.) argued against prophetic interpretations, instead declaring dreams to be the result of sensory stimulation that could potentially affect our future actions based on their content.

Thus, the study of dreams officially became scientific in nature. Artemidorus, who came centuries after Aristotle and was born in the same region as Heraclitus, wrote the largest encyclopedia of dreams of his time, the Oneirocritica. In it, he distinguished between two types of dreams: insomnium and somnium. Insomnium is a dream whose contents concern the present—self-state dreams, in other words. These are dreams that deal with current problems and daily occurrences. Somnium is a dream whose contents concern the future. These dreams are “deeper,” “more profound,” than insomnium dreams because they give us insight. But Artemidorus came up with an even more fascinating idea, one that was long neglected and still does not receive much credit today: Dream interpretation reveals more about the interpreter than it does about the dreamer. According to Artemidorus, by learning a person’s background and interpreting their visions in light of it, we gain insight into ourselves, because we mix in our own beliefs and symbolism that the dreamer would otherwise miss. Contemporaries of the Pre-Socratics in the East—the Chinese, Buddhists, and Hindus—were heirs of the Egyptians inasmuch as, for them, dreams were glimpses of a higher realm, a truer reality. In their dreams, they would experience the transcendence of their souls beyond the corporeal world.


The scientific study of dreams came crashing down in the Middle Ages, which saw a reversion to religious symbolism. Only this time, the underpinnings were moral and theistic. The problem of interpretation came down to whether a dream was communicated by God, by way of angels, and was therefore holy, or by demons, and was therefore wicked. Thus, medieval dreamers had to discern between truth and untruth. A few hundred years more, and we get the great rebirth, the Renaissance. It is from the Renaissance that we get our contemporary connotations of dream interpretation, for it was during this time that divination once again became dominant. The Renaissance saw a surge of interest in practices like occultism, mysticism, numerology, astrology, prophecy, and hermeticism—in a word, magic. Nowadays, these associations still carry over, so when we hear people talking about interpreting dreams or discussing horoscopes, we tend to brush them off as useless, arcane magic.


Fast forward a few hundred years to the Modern Age of the 19th century. Still wary after the Renaissance, people in the 1800s were hesitant to study dreams or consider their importance, since dreams were regarded as “unscientific” and therefore unworthy of serious thought. Yet the magical side of dreams was not wholly abandoned or dismissed, contrary to what some might think; literary geniuses celebrated dreams for their creativity. The famous Romantic poet Samuel Taylor Coleridge wrote his poem “Kubla Khan” after an inspiring dream, though he left it unfinished after being interrupted by a mysterious “person from Porlock”; the novelist Robert Louis Stevenson based Strange Case of Dr. Jekyll and Mr. Hyde on a dream of his own, in which he saw his hidden, unconscious urges battling his outward, conscious behavior; and Edgar Allan Poe also said his dreams contributed to his works. Around this time, in the mid-1800s, anthropology was becoming a much-studied field, and anthropologists were traveling around the world studying tribal peoples. What they found prefigured Jung’s theory of archetypes, and they also found that these tribes usually made their decisions based on dreams they had—the resurgence of prophecy. Next comes the 20th century and the rise of psychoanalysis, dominated by two figures, Sigmund Freud and Carl G. Jung, to whom we shall presently turn.


Modern Day Dream Interpretation Models

Before discussing the psychoanalytic tradition, we will first return to the earlier models of dream interpretation we touched on (the study of dreams has a cool name, by the way: oneirology). The first model is the cognitive model, according to which dreams are a means of thinking about the day during the night. When we dream, the mind is thinking much as it normally does, but with multiple layers of metaphor emphasized unconsciously. In this way, everyday imagery is “translated,” so to speak, into metaphorical forms that stand in for ordinary things. These forms, furthermore, are colored by our emotions, so that they reflect our inner world of moods and feel significant to us. This model also takes in the cognitive-development theory, so dream quality will differ with one’s cognitive development. Some scientists contend that dreams are important for problem-solving; there is a scientific basis for the phrase “sleep on it,” after all. When we sleep, our unconscious and subconscious are most active, so thoughts we did not even know we had float around, and some by chance end up back in our conscious mind, while those in our conscious mind sometimes drift off into the subconscious. Either way, ideas move around. A friend of mine told me the story of how he lost his headphones, only to dream about how he lost them two months later, whereupon he found them in the exact location of which he dreamt. How did something so insignificant, something that happened two months in the past, chance to occupy his dreams? The best explanation, I told him, was that after a while his brain, by its own whims, conjured up the memory of where he had left them. Why it took so long, I do not know. Whether timing matters, and how long an average memory takes to resurface, are also questions worth asking. Over time, I can only theorize, the brain relaxes, and things that were troublesome and problematic are relieved. This leads to the next idea, namely that dreams reflect our current state and condition, environment, gender, age, and emotions, according to the cognitive model.


Another model we discussed briefly was the biological model. In light of biopsychology, dreams are nothing more than creations of neuronal firings, processed by the thalamus into visual displays that make no sense. As such, interpreting dreams is useless, considering they have no inherent meaning. Personally, I am not a proponent of the biological model, for two reasons: First, (I know this is a terrible reason) it is too bland and boring, and too reductive for my tastes; and second, if these neuronal firings are so random, then how can they create coherent (in the sense of “being familiar”) images that do make sense and that resemble complete narratives and sequences? This is not to say that the cognitive model is more correct than the biological model—not at all. As I have said, these are just theories, and neither has been verified indubitably.


Most famous, hands down, is the psychoanalytic theory, first propounded by Freud and then expanded upon by his student, Jung. Freud described dreams like this: “Dreams are the means of removing, by hallucinatory satisfaction, mental stimuli that disturb sleep.”[2] In Freud’s eyes, dreams arise from the irrational, hidden side of ourselves—the unconscious. As a result, dreams need to be interpreted by a therapist. Dreams work through association, creating nonsense connections between ideas that are seemingly unrelated. Since dreams are irrational and incoherent, interpreters use a technique Freud loved: “free association.” The analyst says a word, and the patient says whatever comes to mind. The logic goes that if the dream is formed by associations, then the spontaneous associations voiced by the patient will point back to its roots. Having done this, the analyst can then find associations of which the patient was initially unaware. One thing Freud did that remains a subject of interest is his splitting of dream content into manifest and latent content. Manifest content is the storyline of the dream, the surface-level meaning. Latent content, on the other hand, is the deeper, symbolic, underlying meaning of the dream. Whereas the dreamer has access to the manifest content, only the analyst has access to the latent content, because latent content is unconscious and therefore hidden from view; it has to be uncovered through free association. What is this elusive latent content, and why does the mind go through the trouble of disguising it? Freud said that dreams protect us from waking up due to “mental stimuli”—but to what kind of mental stimuli was he referring? He believed that the latent meaning of dreams lay in repressed, unacceptable ideas.


The basic formula for a Freudian dream is “some trivial occurrence + a traumatic childhood memory.” In other words, dreams take some kind of ugly truth and dress it up in ordinary occurrences. This is why Freud said that dreams protect us from disturbances. If these unacceptable ideas were shown to us in full light, we would never be able to sleep; we would be too disgusted or traumatized. Dreams prevent us from waking up by playing out fantastical scenarios that reflect our wishes, goals, and fears. By hidden means, the dream releases our repressed memories. Freud posited a theoretical “censor” inside the mind, a kind of guard that makes sure nothing from the unconscious creeps into the conscious. Obviously, then, a feeling of aggression cannot be made manifest directly; instead, the unconscious, being clever, disguises the feeling of aggression so that it can sneak past the sentry and make it into the conscious in the form of a dream that makes no sense, but which nonetheless has a deeper meaning. This explains why dreams are confusing and unclear, yet meaningful. How the unconscious goes about disguising the repressed ideas is called the “dream-work.” Its four methods are condensation, displacement, symbolization, and secondary elaboration.

  1. Condensation is what happens when two or more ideas are merged together into a single thought.
  2. Displacement is what happens when an emotion is misdirected toward something other than its target.
  3. Symbolization is what happens when an object is made to stand in for another.
  4. Secondary elaboration is what happens when the subject tries to recall their dream, only to distort the facts.

By using all four tricks, unconscious impulses manage to invade the conscious mind. Freud went further and identified two types of dreams. Dreams of convenience are dreams related to one’s day. Closely linked to day residue, dreams of convenience replay, in visual form, some fear or wish that came up during the day. The other type of dream is the wish-fulfillment dream, for which Freud is most well known. Basically, he said that dreams are a way of satisfying our desires with our imagination. Because we cannot satisfy these desires in reality, we are forced to do so in sleep, in ideality. These desires are either erotic or aggressive. To use an example, one night I was really thirsty, and I went to bed on my trampoline (for fun, of course!). I dreamt I got off the trampoline, went all the way inside the house, got a drink of water, walked back to the trampoline, and fell asleep. When I woke up, I had no memory of getting up, and I realized that I could not possibly have gotten water, as it was far too cold, and it was a long walk. Thus, I came to the conclusion that I dreamed about getting water in order to satisfy my thirst without having to wake up. To summarize, here are Freud’s ideas about dreams:

  1. Repressed childhood memories are revealed through associations.
  2. Said memories are either painful or unrefined, which is why they are repressed.
  3. Dreams are illogical, resembling an infantile imagination.
  4. Dreams have sexual and/or aggressive themes.
  5. Dreams are disguised wish-fulfillment.

The reason we no longer believe in the psychodynamic model of dreams is that, simply put, there is no evidence that supports it. Carl Jung was Freud’s student, although he would later distance himself from his teacher’s ideas in order to develop his own in more detail. To begin, he classified dreams into three categories. The lowest level of dreams are day residues, which simply replay things that happened throughout the day. Above these are self-related dreams, dreams that are about us, our mental states—stuff like that. The highest dreams, however, are archetypal dreams, which are the deepest ones possible, for they connect us with each other through the collective unconscious. I feel the quickest way to present Jung’s views is to enumerate them and then contrast them with Freud’s:

  1. Dreams are essentially creative.
  2. Dreams are a part of the collective unconscious. Each of us, no matter who we are, shares the same symbols and universal characters, or archetypes.
  3. Dreams reveal the personal unconscious, too. We learn about the hidden parts of who we are through dreams.
  4. Dreams give insights into the future.
  5. Dreams are positive and constructive, providing insights to the self.

And as contrasted to Freud:

  1. Dreams are meaningful in and of themselves, not only through interpretation.
  2. Dreams represent present, not past, problems.
  3. Dreams are best interpreted based on patterns and recurrences rather than one by one. Rather than look at each dream by itself, it is better to look at them together.
  4. A holistic analysis of dreams is more efficient than free association.
  5. Symbolism is not repressed, but archetypal.

If we want a quick summary of the psychoanalytic model, then we can say that Freud’s focus was sexual, and Jung’s archetypal. But while the two differed in many respects, they also shared these beliefs, which carry over into the modern study of dreams:

  1. Dreams give clues to life.
  2. Dreams bring the unconscious to the surface.
  3. Dreams are based on day residue.
  4. Sensory stimulation affects our dreams.
  5. Universal archetypes are a part of our collective unconscious.
  6. Dreams are a.) repressed or b.) creative.

In conclusion, while there is a rich history of studying dreams, there are also countless unanswered questions about dreaming. Will we ever answer them? Who knows. Until then, we can only dream of what the answers might be. From the Egyptians, who believed in otherworldly journeys, to the modern psychoanalysts, who believed in hidden symbols, there have been many views of what dreams are, and many revisions, too. What we can see from the history of oneirology is that how dreams are interpreted depends upon the culture in which one finds oneself. Where one lives, how one lives, what language one speaks—these can all affect how we interpret dreams. Does this mean that there is no objective meaning of dreams, that the purpose of dreams differs between peoples? The question remains whether dreams are even meaningful in the first place, or whether they are, in fact, just biological accidents created by the brain. These questions create a living nightmare for psychologists. One thing that is certain is that dreams are very personal, intimate things that happen to all of us, that are unique, and that are private to us alone. I have my dreams, and you yours. (Get ready for the cliché ending…) But then again, what if this is all a dream?

 

 


[1] Myers, Psychology, 8th ed., p. 285
[2] Freud, The Interpretation of Dreams, p. 499c*

*From Adler’s Great Books of the Western World, Vol. 54

 

For further reading:
The Encyclopedia of Human Behavior, Vol. 1, by Robert M. Goldenson (1970)
Psychology: Mind, Brain, & Culture, 2nd ed., by Drew Westen (1999)
In Defense of Human Consciousness by Joseph F. Rychlak (1997)
Introduction to Psychodynamics by Mardi J. Horowitz (1988)
Schools of Psychoanalytic Thought by Ruth L. Munroe (1956)
The Secret Language of the Mind by David Cohen (1996)
Psychology, 8th ed., by David G. Myers (2007)


4 Strategies To Stay Motivated

Every Thursday, we dreaded coming to class. Slowly, nervously, we would walk into the gym, not knowing what we were walking into or what to expect. He calmly sauntered ahead of us, set down his clipboard and music box, opened the door for the girls, and stood there, arms crossed, as if plotting his latest machination—of which we, the students, were the victims. We got in our lines, got through our warm-ups, then stood there dumbly, looking amongst ourselves with frightened eyes, shrugging, silently asking, “What is it today?”, with desperation, with full knowledge that none of us would walk out of there alive. Suddenly, after clearing his throat, our P.E. teacher announced, “Alright, get behind the sideline and listen up.” We got behind the sideline. He turned to face us. He gestured. “Today, for your fitness test, you will sprint from here to the sideline and back, followed by a burpee. You will repeat this, each time adding one burpee on, until you get to 10 burpees. When you are done, shout ‘Time!’ and go get some water.” So that was what our fitness test would be that day. It sounded terrible. In total, we would be doing 20 sprints and… oh god… 55 burpees. I looked at my friend next to me. How are we gonna survive? Are we going to die today? These questions would be answered shortly. Until then, it was just me and the present moment—just me and the workout. And the key to it all: keeping the right mindset to stay motivated and get through it.
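As a quick sanity check on those totals (and purely for fun), here is a tiny Python sketch of the workout as I understood it: ten rounds, each one an out-and-back counted as two sprint legs, with the round number giving the burpee count. The round structure is my own paraphrase of the teacher’s instructions, nothing official.

```python
# Back-of-the-envelope tally of the fitness test described above:
# rounds 1 through 10, each with one out-and-back (two sprint legs) and n burpees.

rounds = range(1, 11)
sprints = 2 * len(rounds)   # two sprint legs per round
burpees = sum(rounds)       # 1 + 2 + ... + 10

print(f"{sprints} sprints and {burpees} burpees")  # -> 20 sprints and 55 burpees
```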


Motivation, we all know, is a complicated and fickle thing, a thing that usually comes and goes without our willing it, as though a fairy sprinkles her magic dust on us, and we become motivated, only for it to vanish into thin air when we are done, leaving us unmotivated and lazy, incapable of doing anything more. There are no real shortcuts to becoming motivated. Most of the time, it just has to happen. When I say, “I am motivated,” with “motivated” in its adjective form, I say it that way because it is done to me. Really, I am implying that there is something actively motiv-ating me. As such, I am passive. I am the recipient of motivation, whereupon I am motivated to do something. Whether it is doing a fitness test like the one I face every Thursday in P.E. or going to a job that one hates, the only way to get through it, the only way to survive—is to be motivated. In tough moments, when we are pushed to our limits, when our arms feel like they are gonna fall off, when the stacks of paper that have to be read are piled to the roof, when all seems unbearable, when all hope seems lost—it is at these moments that we need motivation the most. To get through them, we must stick with them and try to stay motivated.


As it turns out, I did not, in fact, die that Thursday after completing my 20 sprints and 55 burpees, although it almost felt as if I had. I got through it by keeping the right mindset. Today, I will be sharing my 4-step method of staying motivated, from which you can hopefully benefit, too! It can be used during exercise, work, or anything else, if you make it work for you. I have yet to give it a catchy name, but for now, it is the MMAA method:


  1. Macro. The first tactic I used was thinking at the macro, or large, scale. In the back of my mind, I always had an idea of how far along I was in the workout. For example, I would remind myself, “I have ‘x’ sprints left and ‘y’ burpees left.” This way, by thinking about it in terms of the absolute, the ultimate, the whole, I was able to keep track of my progress. Taking inventory of where one is and where one has to go allows for clearer thinking and planning. The macro aspect is the long-term. It takes into account the beginning and the end, the start and the finish, but not the middle in between, because then one gets caught up in the details; instead, one must keep one’s eyes set on the whole, the bigger picture, in relation to which the smaller parts stand. Thinking macro is absolute and always directed toward the bigger sets, the bigger picture overall.

  2. Micro. Second is thinking on the micro, or small, scale. During the workout, once I had established where I was in terms of the macro, I could then break it down into smaller units, into sets, and from there into individual repetitions. This way, a larger workload became a series of smaller, more manageable ones. The macro makes way for the micro. To use an example: If I had to do nine burpees, then “nine burpees” would be the macro approach, while the micro approach would be doing three sets of three. The bigger picture—nine burpees—was broken into smaller pictures—manageable sets, three sets of three—which could easily be completed. The two work together. To illustrate further: if I were still on the 3×3 burpees and completing the first three, then the next 2×3 would be the macro, and the current three the micro. This is because the micro is oriented, or grounded, rather, in the present, in the relative and relational. Micro thinking is always a part of the whole, as opposed to macro thinking, which is the whole itself. The macro makes a mental map, and the micro draws the pathways connecting the landmarks. If one only thought macro, then one would be overwhelmed; if one only thought micro, then one would be lost. As such, the two mutually coexist and are dependent upon each other. Another idea that I touched on is that of the present. Whereas the macro takes into account the future, the micro takes into account only the present—not what I will do, in the future, what is still left, but what I am doing, right now, at this moment. While the macro image of three sets of three burpees exists in my mind, projected into the future, the micro conception of “I am doing one burpee out of three” is being enacted at this very moment. What this means is that the micro, unlike the macro, is twofold: It simultaneously breaks down the macro and enacts it. In summary, the macro is a long-term projection of the bigger picture and what needs to be done, and the micro is the short-term breaking down of the macro into smaller parts that can be completed realistically.


  3. Action. Next is action. The name does not say anything important, nor does it seem groundbreaking. To be motivated requires that some action be done, does it not? Is not action redundant, then? Only to an extent, insofar as it is never considered in itself. Going back to the fitness test, I found myself in the second half of the workout frequently asking how I would get through it. On the macro level, I had 10 burpees to do, and on the micro level, I had two sets of five to do. However, as I jumped, squatted, then pushed myself to the ground, I struggled, both physically and mentally. Already I had done 45 burpees, so my arms and legs were tired, and I was out of breath. Oh, if the workout could just end already! I thought. But this got me on a train of thought: Time is that through which things unfold, and unfolding is an action, meaning the only way to pass time is to act; and what this meant was that the sooner and quicker I acted, the sooner the workout would be over. Let me put it another way: Just sitting there on the gym floor hoping for the workout to end, acknowledging the pain and fatigue I was feeling, thinking both macro and micro—none of these would make the workout end any quicker unless I actually did the reps. So while I knew I was tired, and while I knew I had to push out these last reps, the longer I dwelled on these things, the longer it would take me to finish, meaning the longer I would dwell, the longer I would hurt. Ultimately, thinking too much causes delay. Another way of thinking about action: Overall, the macro plan is to do my final 10 burpees and two sprints, yet having this plan is only what sets me on my way to doing them. Having this big picture in my mind does not change anything, per se. All it does is linger as a thought. It has no potent effect. I could sit on the sideline the entire day repeating to myself, “You have 10 burpees and two sprints,” but those numbers will not go down until I start on them. Until then, the numbers remain the same. Until then, nothing will change. So, in those moments when I found it nearly impossible to finish my reps, and when I asked, “How will I do the last four burpees?” the answer was, “By doing the last four burpees.”


  4. Absurdism. No matter what task we are doing, at one point or another we ask ourselves, “Why are we even doing this? Why should I be doing it? What are the consequences if I do not?” That Thursday, in the midst of the fitness test, these questions came up many times in many forms. For comfort, I like to think back to Albert Camus’ response to the problem of suicide. In his essay, Camus references Sisyphus of Greek mythology, who has been punished by the gods to push a boulder up a hill indefinitely and who, having pushed the boulder to the top, watches it roll back down to the bottom, forced to start all over again, ad infinitum. What does this have to do with anything? Well, Camus said that, although this is not the best of circumstances, we must bear it the best we can. Applying this reasoning, we can all find solace and wisdom in our goals: While a hard, laborious, and tedious task may be imposed upon us, and we do not want to do it, we might as well do it happily and to the best of our ability. If you think about it, there really is no reason to do it, no overarching purpose. But if we are doing it already, and if it is expected of us, why not jump in and make something of it? Sweating in the school gym, feeling like spaghetti, I knew that I could at any minute stop whatever I was doing, give up, forfeit, throw in the towel, call it quits—I could surrender to meaninglessness, to the absurd—or I could overcome the absurd, triumph over it. I could take the meaningless and make it meaningful. I could fight against the pain and turn it from suffering into vanquishing. It is a process of strengthening. There was no universal law that said I had to do a fitness test, and strictly speaking, I did not have to do it; but I decided that, despite its purposelessness in the long run, I might as well push through it and prove myself in spite of the void it presented to me and my classmates.


In conclusion, motivation is not a singular, simple thing—but then again, we already knew that. I had conceived of this blog post during a nap, and I had planned out a perfect image of it in my head; but as soon as I started writing out the strategies, I found that the result did not correspond to the image in my head, and I felt it had been a waste; I wanted to rewrite the whole thing—but I lacked, of all things, the motivation to do so! Somehow, out of sheer willpower, I managed to jump back in and rewrite it; hence, what you are now reading. The MMAA method, albeit widely applicable, is certainly not the approach for everyone, and it may not work for every single task. The four steps need not be taken together as a package, either; rather, you are free to do whatsoever you like with any of the methods, be it adapting them to your own strategy or taking one or two and starting from there. Ultimately, it is subjective, for that is the very nature of motivation—it differs for everyone. The main takeaways, summarizing the four strategies, are:


  1. Have a clear idea of the bigger picture, including reference points, and a clearly defined beginning and end.

  2. Think about the bigger picture in small terms, in terms that are doable, that can be done mindfully.
  3. Plan, but do not plan such that it gets in the way of enacting that plan. Reflecting too much on the plan prevents it from coming into play.
    And finally:
  4. There may not be an immediate meaning behind your work, and you have to be fine with that: Make your own meaning, and embrace it. Maybe it is not the best thing to be doing, and yes, maybe you have better things to do, but for now, you might as well have fun doing it!

And yes, many of the ideas expressed herein are not new, and perhaps you have read something similar before; but hopefully, you have gleaned at least something of value that you can apply to your life!


Stay motivated, readers! Keep reading!

Plato and Plotinus on Love and Beauty

What makes something beautiful? What is love (Baby don’t hurt me)? These are questions we ask in our lives because we experience beauty and love every day. They make up a large part of our experience, and without them, we know not what life would be like, nor whether it would be worth living. For this reason, these questions have been asked by philosophers who, thinking about æsthetics, the philosophy of beauty and art, have also questioned these fundamental aspects of reality and the human condition. One of the most enduring contributions comes from Plato. In today’s misguided world, many people, without having even read Plato’s principal work on the subject, The Symposium, talk about “Platonic love,” throwing it about in conversations with friends and family, thinking, mistakenly, that it refers exclusively to a non-sexual relationship between two people. People like to claim that they and their coworker have a “Platonic relationship” without knowing what they are really saying, or without bothering to see what the great Greek philosopher himself had to say regarding love; for while the non-sexual aspect is important, this common understanding does not capture the whole picture. Little do they know that Plato originally referred to pederasty—relationships between older men and young boys, a common practice in Ancient Greece! A spiritual interpreter of Plato, the Roman philosopher Plotinus continued Plato’s work in his Enneads. Together, Plato and Plotinus represent the ancient, transcendental view of beauty and love, and their ideas have shaped our understanding for ages to come.


The Symposium is one of Plato’s more fun dialogues. In it, Socrates, Aristophanes—the famous comic playwright—and several other Athenians gather at a symposium, or drinking party, where they go around the room giving speeches, engaging in intellectual discussion on the subject of love over plenty of wine. Pausanias’ turn comes up, and he begins his speech by identifying two types of love. According to him, the other speakers had been mistaken in not defining what kind of love they were praising. So Pausanias corrects them by asserting that there are actually two kinds, aligned with the two goddesses who represent them: the Common Aphrodite and the Heavenly Aphrodite. Beginning with the Common Aphrodite, Pausanias says that this kind of love, which is purely erotic—that is to say, inspired by Eros (Έρως)—is a shallow kind of love, insofar as it is a love of the body. Of the two kinds, this is the “wrong” love. Common love is temporary: because it is of the body, and because the body is temporal, subject to change with time, impermanent, the love, too, will be temporary. This Common love is very common these days; we see it all the time when we hear people saying, “This person is so hot” or “They are so beautiful.” This is not to say that it is wrong to call someone beautiful; rather, the problem lies in the intent. Are you attracted to this person purely for their looks, or is that an added benefit? There is nothing wrong with saying someone is beautiful—in fact, if you think that, then you should tell them. However, the problem with loving someone for their looks, Pausanias argues, is that their body will inevitably age and deteriorate. Interestingly, in the Buddhist tradition, if you are infatuated with someone, then you are instructed to meditate upon their decaying body as a reminder that their body is not permanent, but will wither with time, turning your mind away from their physical beauty and toward their spiritual beauty, which is permanent. This is the same line of reasoning Pausanias uses. So what happens when someone who loves another for their looks, years later, no longer sees that person the same way and decides they love them no more, since they have changed? Well, because their love was attached to something temporary, their love is temporary, and so, Pausanias continues, the lover will flee. They were just in it for the beauty, yet when the beauty is gone, so are they. Similarly, he warns against loving someone for their possessions, namely their status or wealth. As with beauty, one’s reputation and financial situation are not always going to remain the same. If you love someone, and they lose all their money one day by chance, because money is unreliable and everything can change in a moment, then you will love them no longer; the attachment was to a temporary thing. One’s money is not a part of them; it is external to them. Likewise, the regard of others is fickle. Who knows if someone will retain their reputation? Love must be directed toward the right object. Such material objects are just that, and they lack significant value. A Common lover is immature. He is not emotionally prepared for a committed relationship. He is full of energy, but empty of compassion. He wants passionate, sexual love. But once he wants it no more, he will leave. He is interested in one-night stands, not a devoted romantic relationship. Common love is short-lived.


Next, he explicates Heavenly love. This kind of love, as opposed to the Common, is of the soul and, therefore, righteous. Unlike Common love, Heavenly love is not shallow but deep, in that it is spiritual and mutual: It is spiritual because it is literally of the spirit, the breath, the soul, and it is mutual because it is reciprocated—both lovers are in it for the sake of the other. It is also mutual in the sense Aristotle thought it mutual, namely that the lovers, in entering a romantic pact, agree thenceforth to help perfect each other; that is, they serve both themselves and the other, each aiding the other. Say one lover is trying to form a habit, the other to break one. In this situation, the lovers will love each other while at the same time mutually helping and perfecting themselves. It is two-way. Heavenly love is between two lovers, two subjects, not a lover and a beloved, a subject and an object. Heavenly love is profound, and it reaches to the greatest depths. Temporary and lowly is Common love; permanent and transcendent is Heavenly love. The latter is permanent because it is not of the body, but of character. One’s looks can change very easily, and while one’s character is not exempt from change, it changes far more slowly and deliberately than the body does. Psychologists (and even Socrates will eventually say the same thing) argue that character is not a permanent thing, changing with age much as looks do. For the most part, however, character is a pretty stable, consistent thing, and it takes a lot to change it dramatically. Is it really worth loving someone who is physically attractive if they have a combative, unfriendly personality? In 40 years, will they still look the same as when you first loved them? No. In 40 years, will they still be combative and unfriendly? Yes. As such, a person’s body is not righteous, whereas character, one’s soul, is. Heavenly love is also transcendent. It is transcendent because it steps over the appearance of a person, the outer boundaries, the external face, the artificial construction, and pierces through them, gives insight, sees not outer beauty but inner beauty. Transcendental love loves a person for who they are inside, not outside. It is a love of their essence. And in contrast to the immature Common lover, the Heavenly lover is mature, prepared, and ready. This is a devoted, long-term relationship.


To evaluate Pausanias’ position, let us look at whether his views make sense. Just as he distinguishes between two kinds of love, one short and exciting, one long and content, so the psychologist Elaine Hatfield distinguishes between two types of romantic love: passionate and companionate. The first, passionate love, is sexual and full of intense energy, although it lasts only a short time. This is the kind of love teens have, when they are full of idealism and optimism, expecting great things from a partner; they are excited and will jump too quickly into things in the heat of the moment. This is embodied by the Common Aphrodite. The second, companionate love, is calm and full of compassion. Think not of teens in love, but of a couple who has been married for 20 years. Here, you will see two people deeply in love with each other, neither of whom would leave the other at the drop of a hat, but who are, at their core, devoted to each other, devoted to perfecting each other. They have arguments, but they resolve them. They love, and will continue to love, each other. This is embodied by the Heavenly Aphrodite. It seems Pausanias was spot on! Most often, this is the paradigm that gets labeled “Platonic love.” Plato gets a lot of backlash for his views these days. To “love someone for their personality” has become a universal joke. It is often said facetiously, with a smile on one’s face, meant to be ironic or sarcastic. And as for those who actually mean it—they are met with derision. Consequently, almost nobody really means it when they say it. Yet then again, this is only a fraction of what “Platonic love” truly is.


The next speaker, Aristophanes, is the favorite of many, for his speech is the most remembered, the most entertaining, and, perhaps, the most influential even today. His is the speech on soulmates. Back in the day, relates Aristophanes, man and woman walked alongside a third sex, which was a combination of the two: A half-man, half-woman. It was a single organism, with two of every body part, seeing as it was two people put together, in a perfect, rolling circle, a symbol of perfection and completion, as Nussbaum points out [1]. These humans, composed of two people, were thus twice as powerful, and twice as ambitious. They decided, like the Giants, to attack the gods, which was a bad idea; Zeus promptly split up these dual humanoids. As a result, the two halves went about looking for their other half desperately, hoping to be reunited. Filled with longing and Eros, they wandered sadly, bereaved, dejected, almost to the point of depression. The halves could not function on their own; they needed each other. Since they spent all their time moping, busying themselves with finding their other halves, they were unable to make sacrifices for the gods. Zeus took pity on them and moved their sexual organs to the front to make mating easier. When two soulmates find each other, they immediately embrace, pressing their bodies together in an attempt to become one again, to press themselves into each other. They hug and kiss, holding themselves close, wrapping their arms around the other, then pulling tightly. Yet no matter how hard they try, no matter how hard they embrace each other, they cannot put themselves together again.

It is such reunions as these that impel [lovers] to spend their lives together, although they may be hard put to it to say what they really want with one another, and indeed, the purely sexual pleasures of their friendship could hardly account for the huge delight they take in one another’s company. The fact is that both their souls are longing for a something else—a something to which they can neither of them put a name, and which they can only give an inkling of in cryptic sayings and prophetic riddles (The Symposium, 192c-d).

So what is love? As Aristophanes reports, when lovers are asked this very question, they cannot answer. If you were to ask a teacher what teaching is, you would expect them to know—it is their business. By nature, then, should not lovers, who are held tightly in the grip of love, know in what state they are? Surely, they should. On the contrary, love is such a powerful, binding force, such an irresistible pull, such an enigmatic drive—who could possibly define it while in its throes? Well, to answer the question of what love aims at, Aristophanes proposes the following: Say Hephæstus were to ask the two halves if they wanted to be welded together so as to be inseparable for the rest of their lives, not even “until death do they part” (as they would remain together in the Underworld), a single entity forever. No one would refuse such an offer, for they want, deep down, to be “merged … into an utter oneness with the beloved” (The Symposium, 192e). The idea of soulmates is still popular to this day. Many of us believe we are just walking through life without an aim, a sinking feeling of incompleteness pervading our being, as though there is something more to life, something, someone, out there waiting for us, our other half, who is perfect, who is everything we want them to be, who will make us happy, who will be the missing piece to this jigsaw puzzle we call life, the summum bonum, the most absolutely beautiful person—and it is just a matter of finding them; but until then, we remain incomplete and, therefore, unhappy. This mythological story is at once humorous and enchanting. I really like the idea of hugging as an attempt to bring the other person into oneself, to make oneself complete; it is a creative, thoughtful moral that is poetic in its presentation, and I think it is very powerful. Whether or not this story is true, many of us still believe it, and it is yet another part of “Platonic love.”


Then comes Socrates’ turn. It is his speech which is left out of the everyday conception of “Platonic love,” despite Socrates’ being Plato’s mentor. In the dialogue, Socrates speaks on behalf of Diotima, a woman he met who taught him about the nature of love. What is love, exactly? Love is a desire, and a desire is for something, and if one already has what one desires, then it is not a desire any longer; therefore, love is a longing for something one does not have. What is this something? Is it Aristophanes’ other half? No. Love, says Socrates, is a desire for the Good, with a capital “G,” meaning the highest good, the ultimate good, that from which good things derive their goodness. Hence, what is beautiful is what is good and noble. Everyone wants goodness to an extent. This requires qualification. First, all objects of our desire, be they living things or goals, are good. For example, if I want to write a blog post, if my desire is to write a blog post, then I am aiming at something which, if I investigate further, is essentially good, since it is of benefit to me. Second, everyone, regardless of their disposition, wants the good, whether they know it or not. A doctor and a murderer both seek the good, although we say the latter is errant in his ways, or is ignorant thereof. In other words, even if we do not have an idea of what the Good is, we still want it anyway. It is natural. It is human. Nobody intentionally desires what is bad for them. But what separates desiring from loving, states Diotima, is immortality. Whereas a goal like exercising more often seeks the Good, loving someone means seeking the Good in them, and seeking to possess it forever. It is a strange idea to read. However, what Socrates is saying is that we want the Good forever. We always want to have the Good in our possession—not just today, not just tomorrow, but for all time. When we love someone, we tend to analyze them, parse them into traits, which we then classify as positive or negative. We look at people’s pros and cons. As is our nature, we like good traits and dislike bad traits in people. I like a person for her altruism but dislike her for her stubbornness. So when I say I like “her,” I really mean: I like the Good in her. This is similar to something Pascal wrote 2,000 years after Plato, that we love people not for themselves, but for their qualities. The reason we like good qualities in people is that they are reminiscent of the Good, and what is Good is good for us; a person’s good personality helps us to flourish. Using the previous instance, the altruism of a girl will help me, but her stubbornness will not. Furthermore, because we are mortal and fated to die, and because we are terrified of death, we try to find ways to achieve immortality, at least artificially. We do this by creating something by which we will be remembered. We want a lasting name for ourselves. People do this by two means: having children, so as to carry on the line, to bear one’s name, and creating art (art, here, is to be interpreted broadly as any kind of creation), so as to have a creation which manifests one’s ideas. Before continuing, we can summarize Love in three points: First, love is of the Good and Beautiful (the two are synonymous); second, every desire and goal aims at the same object, the Good; third, love is for creation, be it through children or art, with the goal of longevity.


If the Beautiful is behind all things, and if we desire it so much, then how do we encounter it? What is the true purpose of love? Diotima introduces Socrates to a ladder, or ascent, of love, which leads up to Beauty. The ladder starts at the bottom and ends at the top, rising from particulars to universals, concrete to abstract. Starting with a single, individual body we consider beautiful, we meditate upon it, find everything there is that is beautiful in it. In modern terms, we look at someone we love and find desirable traits, traits valued by our culture, traits that make someone beautiful. Having done this, we can then realize that the body of one person is just as beautiful as the body of another. There is a good message here: Everyone is beautiful in their own way. Each has their own unique beauty. While this person is beautiful for x reasons, that person is beautiful for y reasons, although they are both beautiful in the end. Once we grow accustomed to this, we can grasp that the mind and soul are more noble than the body. We move away from Common love and toward Heavenly love. Beauty is seen as permanent and virtuous. Next, we ascend to ideas, laws, customs, institutions. We learn to see knowledge as beautiful. Finally, once we have seen the Beautiful in all earthly and intellectual things, we can perceive Beauty as such, Beauty itself. The journey upward can be summarized thus:

And the true order of going, or being led by another, to the things of love, is to begin from the beauties of earth and mount upwards for the sake of that other beauty, using these as steps only, and from one going on to two, and from two to all fair forms, and from fair forms to fair practices, and from fair practices to fair notions, until from fair notions he arrives at the notion of absolute beauty, and at last knows what the essence of beauty is (The Symposium, 211c-d).

In the ascent, in other words, we abandon the individual for the absolute. Love is no longer person-centered but idea-centered. The intellect takes over for the eye. The senses are devalued in favor of thought. Instead of the material and lower, we see the Beautiful in the higher and spiritual. Once we have loved the Good, Beauty as such, we can find Beauty in all things. In short, there is no more favoritism. What this means is: No longer do I see beautiful and ugly people, but I only see the Beauty in them. There is no one more beautiful than another, since we all share in the same Beauty. A true lover of Beauty does not discriminate, but rather sees Beauty everywhere, from people to animals to nature. Beauty is no longer temporary but permanent. The lover need not depend on a specific person or artwork to see Beauty, for it is everywhere. Suppose I derive great pleasure from van Gogh’s “Starry Night,” but from no other piece. This is an undeveloped love. However, after I have attained a vision of the Good, I soon find that every artwork is beautiful, not just “Starry Night”; for this reason, I am not dependent on a single beautiful thing to know Beauty. Universal love can be found anywhere once envisioned. And unlike the body, subject to change, Universal Beauty is changeless. Love is the guide up the ladder; it draws us toward the Beautiful through Eros, the daimon of Love. Plato compared “the soul of a philosopher, guileless and true” to “the soul of a lover, who is not devoid of philosophy” (The Phædrus, 249a). The philosopher, or lover of wisdom, is the same in purity as the lover of Beauty; for in wisdom, there is Beauty. What is the Beautiful like? In this quote, Plato describes what the famous Realm of Forms is like: “There abides the very being with which true knowledge is concerned; the colourless, formless, intangible essence, visible only to mind, the pilot of the soul” (The Phædrus, 247c). From this we can gather that the Form of the Good or Beautiful is permanent and unchanging. It remains the same eternally. The Beautiful is absolute, not relative. Things are not “more beautiful” but are either beautiful or not-beautiful. Beauty, lastly, is the same to all things. A statue has as much beauty as does a shoe. It achieves this through instantiation: The partaking of Beauty in particular instances. Explained another way, Beauty instantiates itself: a particular instance of beauty, for example Michelangelo’s “David,” is beautiful precisely because Beauty is inside of it. Love is a form of madness, Plato famously wrote. In a very poetic (and long) passage, Plato illustrates what it is like to be in love:

But he whose initiation is recent, and who has been the spectator of many glories in the other world, is amazed when he sees any one having a godlike face or form, which is the expression of divine beauty; and at first a shudder runs through him, and again the old awe steals over him; then looking upon the face of his beloved as of a god he reverences him, and if he were not afraid of being thought a downright madman, he would sacrifice to his beloved as to the image of a god; then while he gazes on him there is a sort of reaction, and the shudder passes into an unusual heat and perspiration; for, as he receives the effluence of beauty through the eyes, the wing moistens and he warms. And as he warms, the parts out of which the wing grew, and which had been hitherto closed and rigid, and had prevented the wing from shooting forth, are melted, and as nourishment streams upon him, the lower end of the wing begins to swell and grow from the root upwards; and the growth extends under the whole soul—for once the whole was winged. During this process the whole soul is all in a state of ebullition and effervescence,—which may be compared to the irritation and uneasiness in the gums at the time of cutting teeth,—bubbles up, and has a feeling of uneasiness and tickling; but when in like manner the soul is beginning to grow wings, the beauty of the beloved meets her eye and she receives the sensible warm motion of particles which flow towards her, therefore called emotion, and is refreshed and warmed by them, and then she ceases from her pain with joy. But when she is parted from her beloved and her moisture fails, then the orifices of the passage out of which the wing shoots dry up and close, and intercept the germ of the wing; which, being shut up with the emotion, throbbing as with the pulsations of an artery, pricks the aperture which is nearest, until at length the entire soul is pierced and maddened and pained, and at the recollection of beauty is again delighted. And from both of them together the soul is oppressed at the strangeness of her condition, and is in a great strait and excitement, and in her madness can neither sleep by night nor abide in her place by day. And wherever she thinks that she will behold the beautiful one, thither in her desire she runs. And when she has seen him, and bathed herself in the waters of beauty, her constraint is loosened, and she is refreshed, and has no more pangs and pains; and this is the sweetest of all pleasures at the time, and is the reason why the soul of the lover will never forsake his beautiful one, whom he esteems above all (The Phædrus, 251-2).

Anyone who has ever been in love—in other words, all of us—can appreciate the beauty with which Plato speaks here. “If … man’s life is ever worth living,” Diotima confides to Socrates, “it is when he has attained this vision of the very soul of beauty” (The Symposium, 211d).


What are we to make, then, of Platonic love? Despite all its transcendent glory, the ideal of Platonic love has its flaws. A professor of the Classics, Martha Nussbaum criticizes Plato’s account of love on three grounds: Compassion, reciprocity, and individuality.

  1. Compassion: According to Nussbaum, Platonic love lacks compassion. The practices for which Plato calls require that one look down upon “worldly” things as beneath oneself. Bodies, for example, are to be dismissed as gross presentations, renounced instead for mental pleasure. This kind of attitude instills an egotistical superiority. One thinks oneself superior to others, who are reduced to objects of desire; and these people are then devalued. The lover takes precedence. Suffering, too, being a temporary condition, is frowned upon: the lover is expected to take on a Stoical indifference to pain, which is unnecessary. Homeless people, for example, are dismissed as suffering needlessly, instead of contemplating the Forms.
  2. Reciprocity: Platonic love is one-sided. To engage in this kind of love is to be egocentric. Only the self exists, and the opinions and emotions of others are not gauged, but ignored. It does not matter how the other person feels, as long as the lover gets what they want: The Good. It is not like you love someone, and they love you back; rather, it is just you loving someone. In this sense, the beloved is not an end-in-themselves, but a means-to-an-end. You love someone not for their sake, but in order to reach the Good. The agency and autonomy of the beloved are ignored. They cannot act for themselves.
  3. Individuality: Lastly, in pursuing Platonic love, the individual, the beloved, is dropped. When we say we love someone, do we ever consciously think, “I love x because in them is instantiated the Good”? No. We say we love them for who they are. The person with whom we are in love is considered unimportant in the long run, used as a stepping stone to the Good, a stepladder that will be discarded, cast away once it has been climbed. By treating the beloved as a sacrifice to reach the Good, we are, in effect, denying their faults, the things that make them different; i.e., we are denying their uniqueness, their individuality. As Nussbaum jokingly puts it, “‘I’ll love you only to the extent that you exemplify properties that I otherwise cherish.’”[2]

In short, Nussbaum argues that Platonic love is just far too objective, idealistic, and detached to be applicable. This is just one side, though. Others, like Paul Friedländer, argue that Platonic love actually does incorporate the individual beloved, and awards them a higher place. From personal experience, I agree that Platonic love tends to dismiss the beloved; but I do think the idea of Beauty manifest in individuals is quite real. Tell me your experiences in the comments, and whether or not you agree with Plato!


From hence we move to Plotinus, the Egyptian-Roman founder of Neoplatonism, whose spiritual ideas were based on Plato’s theories, and who influenced a nascent Christianity. Although we have covered the argument that Plato’s conception of love is idealistic, looking at Plotinus’ views makes Plato sound like a common-sense realist. Plotinus is even more spiritual than Plato, and even more contemptuous of the physical world, which he viewed as a hindrance. It is recorded that Plotinus constantly remarked that his body was ugly and that he looked forward to being released from it. In one anecdote, his student Porphyry wrote that an artist came to Plotinus’ school because he wanted to make a portrait of Plotinus; but Plotinus turned him away, ashamed to be seen in his body—how ghastly it would be to have a representation of such a hideous thing! Love for Plotinus is a unio mystica, a mystical union, drawing upon similar imagery to that of Aristophanes, but with God, whom he calls “the One.” Beauty lies in symmetry, in wholeness. When it comes to a certain instance of beauty, the whole is both greater than and equal to the sum of its parts—a claim that sounds paradoxical at first. The whole is greater because it partakes in the Beautiful. It is equal because it must be constituted by only what is Beautiful. His reasoning is that all the parts must be beautiful in order for the whole to be Beautiful. Beauty + beauty = Beauty, but beauty + ugly ≠ Beautiful. Therefore, a Beautiful thing must be greater than its parts, but must also be composed of all-beautiful parts. Put together, they all form a harmony in union. Evidently, Plotinus borrows Plato’s theory of instantiation: “[T]he material thing becomes beautiful—by communicating in the thought (Reason, Logos) that flows from the Divine” (The Enneads, I.VI.2). Put another way, a beautiful thing is beautiful because Beauty is in it. If there is no Beauty in it, then it is not beautiful. The parts which make up an artwork are not beautiful in themselves; their beauty depends on their symmetry within an arrangement. The Idea of Beauty is thus imposed on Matter itself. Imagine a blank canvas. It is not beautiful. Then, a bucket of different colors of paint is thrown onto the canvas. In this image, the canvas is matter, and the paint is Beauty. Only when the paint is so arranged on the canvas as to form a harmony does the whole become Beautiful. Plotinus also references Plato’s ascent up the ladder, with a little change:

It [the Realm of Ideas] is to be reached by those who, born with the nature of the lover, are also authentically philosophic by inherent temper; in pain of love towards beauty but not held by material loveliness, taking refuge from that in things whose beauty is of the soul—such things as virtue, knowledge, institutions, law and custom—and thence, rising still a step, reach to the source of this loveliness of the Soul, thence to whatever be above that again, until the uttermost is reached. The First, the Principle whose beauty is self-springing: this attained, there is an end to the pain inassuageable before (The Enneads, V.IX.2).

Just like Plato, Plotinus believes the philosopher is most inclined toward love of the Beautiful. Also, the two agree that love ascends from the soul to virtue to knowledge to customs to Beauty itself. The difference lies in the starting point. For Plato, the lover begins with a person with whom they are in love; for Plotinus, the lover begins by shunning the person, by turning away from all things physical and material, jumping straight to the soul. Why does one jump immediately to the soul? Because the soul, Plotinus claims, is itself beautiful. There is a metaphor of “falling” in Plato and Plotinus, mirroring Adam and Eve’s fall in the Bible: the immortal souls of men lived in the Realm of Forms, only to succumb to temptation and fall into the material world of change and impermanence. This means that, just as Adam and Eve received Wisdom right before the Fall and retained some of it, so the souls of men received a vision of the Beautiful right before the Fall and retained some of it. By falling into the physical world, the soul became impure, ugly. As Plotinus puts it, “[A] soul becomes ugly … by a fall, a descent into the body, into Matter” (The Enneads, I.VI.5). The religious metaphors here are obvious. The soul thus becomes “ugly,” associated with grime and dirt. In my blog about Orphism and its influence on Pythagoreanism, we see the same kind of thinking: The body (σωμα) as a tomb (σημα), the pure trapped in the impure, seeking release, yearning for reunion with the World-soul, or, in this case, the One. Despite being a radical purist, Plotinus is a very wise man with a lot of good things to say, and we should heed him. The following is a much-celebrated excerpt from Plotinus, one read and admired by many who find in it a beautiful and inspiring message, written with much the same elegance as Plato, and considered the best of his writing. In it, he tells us all to look inside ourselves and realize that, deep down, beneath our appearances, we all have an inner beauty. Sometimes, we just need some self-love, and Plotinus reminds us to give ourselves this much-needed assurance. Read it for yourself:

Withdraw into yourself and look. And if you do not find yourself beautiful yet, act as does the creator of a statue that is to be made beautiful: he cuts away here, he smooths there, he makes this line lighter, this other purer, until a lovely face has grown upon his work. So do you also: cut away all that is excessive, straighten all that is crooked, bring light to all that is overcast, labour to make all one glow of beauty and never cease chiselling your statue, until there shall shine out on you from it the godlike splendour of virtue, until you shall see the perfect goodness surely established in the stainless shrine (The Enneads, I.VI.9).


What have we learned today? Well, what we have not learned for certain is what love and beauty are. Despite the brilliance of these thinkers, they are no closer to the truth than we are. As to what love and beauty are—my guess is as good as yours, and that is not a bad thing; I think it is rather a good thing, really, and perhaps it should stay that way. We should all ask ourselves what love and beauty are, because they are essential to a well-lived life. To ask what love and beauty are, and to experience them fully and intimately—this is a part of the examined life. Plato and Plotinus’ ideas have survived for ages and shall continue to influence us in the future. Yet their wisdom is not perfect, and their theories are not flawless either. Their views, as we have seen, are debatably impractical. From soulmates to the Ancient Christians with their agape to modern philosophers like Pascal to contemporary man seeking love in an unloving world, we are all asking the same question as Haddaway: What is love? A most mysterious emotion it is, one we are barely beginning to understand. What is life without love? Without beauty? As soon as we start asking these questions, we are on the way to wisdom. To actively pursue the answers to these questions requires that we all be philosophers. If we want to know beauty and love, we must be lovers of wisdom, philo-sophers.

 

 


[1] Nussbaum, Upheavals of Thought, p. 483
[2] Id., p. 499

 

For further reading: The Greek Thinkers Vol. 2 by Theodor Gomperz (1964)
Upheavals of Thought by Martha Nussbaum (2001)
Plato: An Introduction by Paul Friedländer (1958)
On Plotinus by C. Wayne Mayhall (2004)
The Enneads by Plotinus (1991)
The Symposium by Plato (1973)
The Phædrus by Plato (1973) 

Do Babies Exist?

My friends and I were sitting on the deck one summer afternoon sipping Cokes by the pool while discussing different philosophical matters. It was a hot day, and I was introducing Descartes’ philosophy to them—as any normal person in an everyday conversation does—and explaining why it was important and what it meant for us. I set it up like this: He asked if his whole life were an illusion, a dream, and if there were an Evil Demon that was deceiving him, causing his senses to be misleading. It is impossible, I explained, to distinguish between waking reality and a dream, according to Descartes. However, searching for a first principle, a single starting point of knowledge, he realized he had been thinking this whole time. The process of questioning whether he was in a dream presupposed that there was a questioner who was doing it. This led him to remark, “Cogito, ergo sum,” or “I think, therefore I am.” By doubting all his senses, he was led to the conviction that he could not doubt that he was doubting in the first place; for otherwise, he would not be able to doubt: He would have to exist first before he could be deluded.


After hearing this, my friends seemed pretty convinced, and pondered it a bit. Out of nowhere, one of them said, “Well, babies aren’t self-conscious.” A pause. “So do babies exist?” Taken aback, unprepared for such a response, I readily dismissed the notion, called it absurd, and tried to think of an answer. We began debating whether or not babies knew they existed, or whether they could even think about thinking. Of course, the question itself—do babies exist since they are not self-conscious?—is actually grounded in a misunderstanding: Descartes was not trying to prove his existence; rather, he was trying to prove he had certainty, something undoubtedly true. But for the sake of argument, we entertained the idea. Common sense shouts till it is red in the face, “Obviously, yes, babies exist! Only a madman would doubt their existence. I mean, we see them right in front of us—they’re right there, they exist!”[1]


This prompts the question: If we are conscious of a baby existing, yet they themselves are not conscious of themselves existing, do they exist? Babies are fascinating creatures. They are copies of us, miniature humans who must learn, through trial-and-error, to cope with and understand the world in which they are living. Seeing as they are capable of such amazing cognitive feats as grasping cause-and-effect and acquiring language, investigating their conscious abilities sounded intriguing. A delve into developmental psychology, the study of how humans develop through life, yields interesting insights into this psycho-philosophical problem.


Jean Piaget was a developmental psychologist who studied the development of children throughout the 20th century. Today, his influence is still felt in psychological literature and continues to impact thought regarding childhood development. For years he observed, tested, and took notes on children, from birth to early adulthood, using the data to devise his famous theory of cognitive development, which takes place in four stages: Sensorimotor, preoperational, concrete operational, and formal operational. The first stage, the sensorimotor, lasts from birth to the age of two. During this period, the baby’s life is geared toward adjusting to the world. Babies are “thrown” into this world, to use a Heideggerian term. They are born immediately into life amidst chaos, with all kinds of new stimuli to which to react. Confused, unable to make sense of things, exposed to strange sights and sounds, the baby cries and thrashes about, trying to find some sense of security. It is bombarded all at once by sensations and experiences. It is disoriented. This is a brave new world, and it is full of data that needs to be interpreted and sorted out in the baby’s mind. In order to navigate through the world, the newborn uses its motor skills and physical senses to experience things. The baby interacts with its environment, including people, grabbing with its hands, sucking with its mouth, hearing with its ears, and smelling with its nose. Imagine being in a cave for years, devoid of all sensory information, when, one day, you are let out and, having forgotten what it was like to experience the world, you are overcome by the magnitude of the environment, so you try to relearn as much as possible, greedily taking in everything that you can—well, being in the womb is kind of like being in a cave for the baby, meaning it is doing the same thing: It is getting a grasp of reality by engaging its senses in any way that it possibly can. The baby is an empiricist who delights in its senses as though life were a buffet. Oh, there is something I can touch! Ah, that smells nice, let me smell it! While it cannot yet register these sensations, the infant uses its senses to obtain a primitive understanding. Babies are actively mapping out the world according to their perceptions, simple though they are. According to Piaget, babies eventually learn to pair coordination, knowledge of their body and its movement, with determination. Once they are able to effectively use their body parts in a way that is conducive to their survival, they develop a sense of where these limbs are in relation to each other, called proprioception. This allows them to use determination in regard to this newly acquired coordination. Babies can now direct themselves with autonomy and do something. However, this is a simple form of determination; it is not like the baby has free will and can decide or choose to do this or that. Whereas the baby can move toward a particular object, it cannot decide mentally, “I am going to crawl over to that thing”; it just does it out of pure, unthinking volition.


At three months, a baby can sense emotions and, amazingly, recreate them. Seeing their parents sad, an infant can react with a fitting response, such as being sad themselves. By being able to tell what someone is feeling, the baby can imitate them, showing that the baby has at least a simple recognition of empathy. Around this time also, the baby actively listens to their social scene, picking up on spoken language. It is incredible (in both senses of the word) because it is now that the infant unobtrusively and quietly internalizes and processes everything it hears like a sponge, learning speech cues, such as when to talk and when to pause; the rhythms of speech, including cadence; vocabulary; and nonverbal communication, which makes up the majority of social interaction. Here is a tiny little human just crawling around the house on all fours, crying and eating and going to the bathroom, all the while actually learning how to speak—who could possibly fathom what is going on in that small, undeveloped mind! A little earlier, around two months usually, the baby already shows signs of early speech when it babbles. Nonsense sounds are uttered by the baby, who is trying to imitate speech, but who is not developed enough to reproduce it entirely. Four to five months into development, the baby can understand itself as a self-to-Others, or a self-as-viewed-by-Others. I have my own image of myself, but I understand that I am perceived by other people, who form their own images of me. One study shows that, from four to nine months, the infant has changing patterns of involvement in play. In the earliest stage, the baby will, if it is approached by the parent, play peekaboo. Because they have not yet learned that things exist independently of them in time, babies think that the parent disappears when their face is covered, and are surprised to find they are still there. A few months later, at nine months, the baby is able to take on the role of the initiator who wants to play peekaboo, instead of the responder who will play peekaboo if asked. This shows that babies learn to combine determination with intention (Bruner, 1983).


Just three months later, when the infant is officially one year old, it achieves a self-image. Looking in a mirror, it can recognize itself and form an early identity. Like chimps, babies can now respond to themselves as an actual self in the mirror, noticing, for example, a mark on their forehead, and realizing that it is not on the mirror, but on themselves. Between 14 and 18 months, an infant is able to differentiate an Other’s intentions from their own (Repacholi & Gopnik, 1997). Children like to think in terms of their own desires. If a kid wants a cookie, they act on their desire. Thus, when they are 14-18 months old, they can distinguish Others’ desires as different from their own. Within this period, the baby can also know that it is being imitated by someone else. If a parent mimics something the infant is doing, the infant knows their own behavior is being shown to them. Finally, the 18-month marker designates when the baby begins starting its sentences with the first-person “I.” With a sense of self, the infant is able to roleplay, in which it takes on new identities, or roles, and is able to play “as them.” Second-order emotions, also known as self-conscious emotions, like shame and embarrassment, arise in the child at this time, too. Children possess some semblance of self-consciousness.


After the sensorimotor stage is what Piaget called the preoperational stage, which takes place between the ages of two and seven. It is at this stage that the infant constructs their own world. Through the process of assimilation, the toddler creates mental schemas, mini blueprints conceived in their minds, frameworks by which reality is processed then made sense of, allowing them to structure reality in a way that is useful to them. When a new experience is undergone, it is made to fit the pre-existing schema. Because these schemas are very simple and basic, they are obviously inaccurate, although that is not the point of them; they are not supposed to be innate categories of the mind, as Kant would have thought of them, but early hypotheses made from the little experience gathered by a child. One time, my cousins came over to play video games; we were playing a level in Lego Indiana Jones where we had to drive around on a motorcycle chasing cars. My cousin’s little brother pointed excitedly at the cars zooming down the streets, exclaiming, “Doo-doo!” I hopped on a motorcycle and chased after them, only for him to look at the motorcycle and, again, shout, “Doo-doo!” My cousin and I tried to tell him that a car and a motorcycle were two separate things. In his mind, he saw a moving vehicle with wheels, so he created a mental schema. Anything that fit under that description—a moving vehicle with wheels—would be considered by him to be a “Doo-doo”—in this case, both the car and the motorcycle, despite their being different things. This illustrates that schemas are not always accurate; they are for classifying and categorizing things. Of course, this leads to a new process observed by Piaget: Accommodation. We come to an age where we discover that our schemas are inadequate because they do not fully represent reality. As such, we have a kind of “schematic crisis,” as we are met with an anomaly, something which sticks out, something which does not fit with our prevailing theory. Hence, we must remodel our thinking. Consequently, we are forced to find a way to reconcile the already-existing category with this new piece of data, either by broadening the schema, or by creating a new one altogether. Babies thus learn to make more accurate classifications as they learn new things and create new schemas with which to interpret reality. Once these schemas are built up, the infant is able to engage in organization, through which they order their schemas. Some are judged to be more inclusive or exclusive than others, and so are co-ordinated based thereon. In the case of my cousin’s little brother, he would have to organize his schemas like this: Broadly, there are vehicles, under which we might find cars and motorcycles as types, which can themselves be expanded upon, for each comes in different kinds. This way, reality is structured in levels, or hierarchies, not necessarily in importance, but in generality and specificity. Organization is a synthesis of assimilation and accommodation. All this schematizing segues into the next point, namely that in making sense of the world, we give sense to it.
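For readers who like things spelled out, the assimilation and accommodation cycle can be sketched in a few lines of code. This is only a toy model of the process described above, with class and feature names of my own invention rather than Piaget's, but it shows why the hypothetical "Doo-doo" schema happily swallows both the car and the motorcycle, while a genuine anomaly forces a new schema.

# A toy model of assimilation vs. accommodation (illustrative only, not Piaget's own formalism).

class Schema:
    def __init__(self, name, features):
        self.name = name
        self.features = set(features)   # e.g., {"moves", "has wheels"}

    def assimilates(self, observation):
        # Assimilation: the new experience fits if it shows every feature the schema expects.
        return self.features <= set(observation)


def classify(schemas, observation):
    # Try to assimilate the observation into an existing schema.
    for schema in schemas:
        if schema.assimilates(observation):
            return schema
    # Accommodation: the anomaly forces a new schema.
    new_schema = Schema("schema-" + str(len(schemas) + 1), observation)
    schemas.append(new_schema)
    return new_schema


toddler_schemas = [Schema("Doo-doo", {"moves", "has wheels"})]

car = {"moves", "has wheels", "four wheels"}
motorcycle = {"moves", "has wheels", "two wheels"}
dog = {"moves", "has fur"}

print(classify(toddler_schemas, car).name)         # Doo-doo  (assimilated)
print(classify(toddler_schemas, motorcycle).name)  # Doo-doo  (assimilated, hence the confusion)
print(classify(toddler_schemas, dog).name)         # schema-2 (accommodation: a new schema is created)

Organization, on this picture, would then amount to arranging the resulting schemas into a hierarchy of more and less general ones.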


The preoperational period is characterized by symbolic representation in toddlers. In philosophy, the study of meaning and symbolism is called semiotics, and it is closely related, interestingly, to what babies do. Life is separated into two concepts: Signs and symbols. Signs are fixed things—concrete objects. Symbols are relative meanings—abstract values—usually assigned to signs. While every car I see is always a car, its meaning is not always the same and is liable to change. For some, it can represent, can be symbolic of, freedom, if you are a teen just getting your license; transportation, if it is how you get around; dread, if you hate road trips or have to wait hours during your commute. The point is, everyone sees the same sign, but for everyone the symbol has different meanings. Preoperational toddlers are able, then, to understand objects not just in their literal, concrete sense, but as standing for something, as abstract and meaningful. Babies are not passive, as I have said, but on the contrary, very much, if not entirely, active. By interacting with the world around them, they experiment, learn, and conceptualize. Around three years, the baby is fully capable of speaking, feeling, having motives, and knowing the relation of cause-and-effect.


One of the consequences of Descartes’ Cogito is its resulting solipsism: The thinker, the Cogito, is only able to prove his own existence, whereas Others’ existences are uncertain. Is this a requisite for existence? Is self-certainty a necessity? If so, the case is a difficult one for babies. Controversially, Piaget proposed that babies are egocentric; his theory is widely contested today in psychological circles. The meaning of egocentrism can be guessed by looking carefully at the word’s roots: It means self-centered; however, it is not self-centeredness in the sense of being prideful, selfish, and concerned with oneself, no—it is more closely related to anthropocentric, in the sense that the self is the central point from which all other points are judged or perceived. For this reason, Piaget suggested that infants can only see things through their own perspectives, not through Others’. You may be wondering why I sometimes have been capitalizing “Other.” Philosophically, the problem of egocentrism is closely related to solipsism, resulting in what is called “the problem of Other Minds,” which is the attempt to prove the existence of selves outside of our own, of whose existence we are uncertain, so they are called “Others,” giving them a kind of external, foreign connotation. I digress. Babies, so thought Piaget, are unable to take Others’ perspectives, so they must rely on their own. To do this, they reason from self to Other. Infants’ egocentric tendencies, when combined with their inability to acknowledge objects as existing permanently outside of them, lead to a subject-object dualism, a subjective idealism, in which the self is distinguished and utterly separated from the physical world. It becomes “my” viewpoint, or “your” viewpoint, subjective, relative. As long as I look at an object, a toddler thinks, it exists. And yet, the toddler also has a social self, which it develops through its interactions with other children. Many psychologists have claimed that, by playing, children are able to acknowledge the existence of not just Others, but Others’ emotions. It is evident in roleplaying, where children pretend to be someone they are not and act accordingly, placing themselves within a new self, which they adopt as their own; they interact with the other children, whom they see as someone else, whom they acknowledge and actively engage with, responding to how they are treated and sensing emotions.


A dominant, popular theory that attempts to refute Piaget’s egocentrism is “Theory of Mind” ([ToM] Wellman, 1990). Wellman found that babies develop an awareness of Others at the age of three, when they operate on belief-desire reasoning. Motivation for kids consists of a belief, what they know, and a desire, what they want. A child might be motivated to have a cookie because they know where the cookie jar is, and they are hungry for one. Using this kind of reasoning, the kid attributes their own intentions to another. Looking at his playmate, the toddler assumes, “Well, I want a cookie, and I know where they are, so this kid, like me, because he has the same beliefs and desires as I, must want a cookie, too.” Is it faulty and inaccurate? Wildly. Does it make sense, realistically? Yes. The Theory of Mind is a primitive form of empathy, a kind of empathetic stepping stone. It is simple and selfish, because it assumes that all children have the same beliefs and desires. One often sees this in children trying to console one another: An infant sees another crying, and, because he takes comfort in eating ice cream, believes the other will take comfort in it, too. Critics like Vasudevi Reddy object that Theory of Mind is too detached from actual interaction and ends up attributing one’s own self-certitude to another, resulting in what she calls a “Neo-Cartesianism” of sorts. It promotes solipsistic thinking by denying the existence of an independent thinker with emotions, instead attributing to them one’s own ideas, thereby increasing a toddler’s dualistic thinking.
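To make the belief-desire pattern concrete, here is a minimal sketch, my own toy formalization rather than Wellman's, of how a motivation pairs a belief with a desire, and of the egocentric shortcut by which the three-year-old projects its own mind onto a playmate. The names and the cookie example are merely illustrative.

# A toy rendering of belief-desire reasoning and its egocentric shortcut (illustrative only).

from dataclasses import dataclass

@dataclass
class Mind:
    beliefs: dict   # what the child takes to be true, e.g. {"cookie_location": "cookie jar"}
    desires: set    # what the child wants, e.g. {"cookie"}

def predict_action(mind):
    # Belief-desire reasoning: act when a desire has a matching belief about how to satisfy it.
    if "cookie" in mind.desires and "cookie_location" in mind.beliefs:
        return "go to the " + mind.beliefs["cookie_location"]
    return "do nothing"

def egocentric_attribution(self_mind, other_name):
    # The shortcut: assume the playmate has the same beliefs and desires as oneself,
    # then predict the playmate's behavior from one's own mind.
    return other_name + " will " + predict_action(self_mind)

me = Mind(beliefs={"cookie_location": "cookie jar"}, desires={"cookie"})
print(predict_action(me))                        # go to the cookie jar
print(egocentric_attribution(me, "my playmate")) # my playmate will go to the cookie jar

The shortcoming Reddy points to is visible in the last line: the prediction about the playmate is computed entirely from the child's own beliefs and desires.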


According to Reddy, a baby’s communication with Others already presupposes intersubjectivity, or being involved with people on a personal level. Babies are self-aware to an extent at birth because, the argument goes, the baby is able to distinguish itself from the world around it. To act is to know both the self and the object. It is similar to Fichte’s philosophy in that the Ego becomes aware of itself by recognizing everything that is not the Ego, creating the Non-ego; in other words, it is through the Non-ego—the world—that the Ego knows itself. The world, or Non-ego, is created purely with the intent of being a moral playground for the Ego. Following from this is the idea that the baby, coming into contact with the world, immediately knows it as not-itself, and so uses it as its playground, activating all its senses to learn about reality. If we could not tell the environment apart from ourselves, and we thought ourselves a part of it, how could we act independently of it, with our senses? This is an argument against Freud and Piaget, who both said newborns cannot tell themselves from the world. As a solution to egocentrism, psychologists found that parents play an important role early on. Parents should teach their children from the start to differentiate self from Other. Too much similarity between the baby and parent means more egocentrism later in life, which is harder to unlearn. Reddy’s solution is to avoid Cartesianism and Theory of Mind and instead pursue a second-person perspective, one between I-and-Thou, You-and-I. This way, there is direct access to another’s intentions. Babies, through play, function on this second-person level by directly interacting with their peers. For Piaget, babies achieve consciousness when symbolism and schematism come together as one to create meaningful representations. An understanding of how things fit together and how they function is what Piaget considers consciousness. On the other hand, metacognition, the ability to think about thinking, does not arise until the age of 11, Piaget’s formal operational stage.


The following are milestones in the evolution of a baby’s cognitive abilities, summarized in eight chronological key events:

  1. Coordination
  2. Self vs. non-self
  3. Know special/loved people
  4. Know + respond to name
  5. Self-image
  6. Pointing to objects (symbol)
  7. Use “I” in sentences
  8. Know Other Minds

So, to answer my friend: The question of whether or not babies exist is actually not so straightforward as one might think. It could be argued that babies exist when they are one, when they establish their self-image for the first time, and thus are, in one way or another, conscious of themselves. Or it may be that babies exist once they turn 18 months, and they can use “I,” roleplay, and experience reflexive emotions. Here, babies are aware of themselves as actors, are willing to play with others and take new perspectives, and are able to perceive how they are themselves perceived by others. Yet then again, it is possible that it is only when metacognition is possible, when we are able to doubt that we are doubting, when we are able to posit a hypothetical Evil Demon trying to deceive us all, that we exist—in which case… babies do not exist at all! Do only children and preadolescents and onwards exist? Maybe when we are born, we do not exist, we are in a state of utter nonexistence and non-being, and it is only when we reach 11 that—POOF!—we magically pop into existence.

 


[1] This is obviously a satirical question. Babies do exist. It is more of a thought experiment, or armchair-philosophy problem. I find the comment to be so outrageous that it is funny, and I thought it made for a perfect reason to research whether babies are conscious.

 


For further reading: How Infants Know Minds by Vasudevi Reddy (2008)
Developmental Psychology 8th ed. by David R. Shaffer (2010)
The Secret Language of the Mind by David Cohen (1996)
The Science of the Mind by Owen J. Flanagan, Jr. (1984)

Happiness as Eudæmonia

Happiness, according to psychologist James R. Averill, a Eudaemonist, is a means-to-an-end, contrary to what his predecessor Aristotle thought. After taking into account both survey reports and behavioral observations, he devised a table of happiness (see below). It is a 2×2 table, one axis being “Activation,” the other “Objectivity.” The four types of happiness he identified were joy, equanimity, eudaemonia, and contentment. He narrowed it down to the objective standard of high immersion known as “eudaemonia,” a term for overall well-being that finds its roots in Aristotle’s Nicomachean Ethics. Aristotle wrote that eudaemonia was achieved through activity, as when we are so engaged in doing something, we forget we are doing it, and lose a sense of time—time flies when you’re having fun. As such, happiness for Aristotle is not a typical emotion in that it occurs over periods of time. You cannot always be in a state of eudaemonia. Rather, it can be actively pursued when you immerse yourself in meaningful work. To be happy is not to be happy about or for anything because it is essentially an object-less emotion, a pure feeling. Eudaemonia is distinguished from equanimity by the fact that the latter is the absence of conflict, the former the resolution thereof. Equanimity has been valued by philosophers as a state of total inner peace; on the other hand, eudaemonia is the result of achieving a goal, which necessarily entails conflict, viz. desire vs. intention. When you are confident in your abilities and set realistic goals, when you are able to complete those goals, having overcome conflict, you can achieve happiness. Too many short-term goals means not experiencing enough of what life has to offer, while too many long-term goals means not being accomplished or confident in yourself. The measure of happiness, then, is relative, not absolute, and differs from person to person. What remains absolute, however, is that this sense of achievement can be had privately, by yourself, and publicly, when it is done for your community, family, or close friends. Inherent to eudaemonia, Averill asserts, is purpose: Behind happiness is direction, intention, and devotion. This led him to claim that “Pleasure without purpose is no prescription for happiness,” meaning you should not resort to hedonism to be happy, but must seek pleasure in meaningful activities into which you can immerse yourself.

Averill’s Table of Happiness:

                   Subjective       Objective
High activation:   Joy              Eudaemonia
Low activation:    Contentment      Equanimity
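Read as a lookup, the table says that the kind of happiness depends on two inputs, the level of activation and the standard applied. The few lines below are just an illustrative restatement of the table, not anything from Averill himself.

# Averill's 2x2 classification rewritten as a simple lookup (illustrative only).
averill_happiness = {
    ("high", "subjective"): "joy",
    ("high", "objective"): "eudaemonia",
    ("low", "subjective"): "contentment",
    ("low", "objective"): "equanimity",
}

# High activation measured against an objective standard:
print(averill_happiness[("high", "objective")])  # -> eudaemonia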

 


For further reading: Handbook of Emotions 2nd ed. by Michael Lewis (2000)

“Talking To” vs. “Talking With”

We spend too much time talking to one another—I think it is about time we start talking with one another.


We might add to this talking about another, by which we mean talk that focuses on another person, often in a derogatory way. Here we refer to gossip, which is malicious, narrow, and crude. Unfortunately, it occupies most of our speech. Over half of conversations, I would argue, concern others at one point or another; people are discussed behind their backs, without their knowledge, the unwitting victims of vitriolic verbal venom. Psychologists say this arises from two motives: First, we gossip in order to learn about threats, about who is dominant, as this was important in Neolithic times; second, to compensate for our own self-esteem, or lack thereof. Picture nothing worse than two people scheming together in private: you are the subject of their ridicule and criticism, and you have no knowledge of it as they attack and slander your name and reputation, so that it spreads into rumors, which are accepted prima facie, then used against you—infectious, like a virus, a deadly one.


When we talk about the former, talking to, we mean it in a sense with which we are more comfortable; in fact, it is used colloquially by almost everyone: “I was talking to my boss the other day,” “My friends and I talked to each other on the phone,” or “I love talking to people.” The word “to” is a preposition: used with a verb, it directs the action toward an object. Already, we see a twofold implication. Plainly, the word “toward,” when used in the context of persons, is alarming and carries with it negative connotations. While we can be gracious toward another person, it is rare; we usually hear angry, hateful, or prejudiced toward another person. In other words, the word “toward” means to direct something at someone, like a projectile—which words are. Therefore, we hurl words toward another, which is precisely what “talking to” means. This in itself implies one-way communication. To better illustrate what I am describing, replace to with at. “I was talking at my boss the other day.” While they are different words, the meaning is not changed; rather, the word “to,” seemingly less aggressive and confrontational, is taken as more polite and respectful, despite masking a darker message. Similarly, we say we “give things to people,” as though they are the recipient. Taken this way, “talking to” means delivering words to people. But a gift given is not reciprocated. A delivery is sent to one destination to be received, meaning the interlocutor is the receptacle for the speaker’s words—they are reduced to something which receives, as though they were lifeless. Just as a mailbox is designated for receiving mail, so the person who is being talked to is designated as “something” to receive words. This leads to the second implication of the preposition “to.” Because “to” takes an object, the other person becomes an object—that is, they are objectified, made into an object. The person becomes a mailbox, a mere thing, an object whose only reason for existence is to house mail, to be that which receives words; the person is something into which words are deposited and then left. When we endure something, we “take” it. We take the abuse, take the lecture, take the pain; when we talk to people, we expect them to take our words.


Thus, when we talk to one another, we are not having a conversation. A conversation requires that two people be involved. It involves an exchange of words—not a depositing of them, nor a receiving of them. When we reduce each other to receptacles, things to store our baggage, we leave no room for exchange. Nobody puts mail into a mailbox and expects it to come back to them; so when you talk to someone, you hurl words toward them and expect them to receive them, but not return them. Talking to is depositing. Everyone knows, however, that if you want a response, you cannot just throw your words and expect them to stay there. Accordingly, we must learn to talk with one another, rather than to one another. To talk with is to engage in conversation, in two-sided talk, in which words are passed from one to another. Not hurled or thrown but passed, granted, welcomed, exchanged. Whereas one deposits money in the bank to keep it there, one exchanges money at the bank to get its equal value. Whoever exchanges a 10-dollar bill for ten one-dollar bills gets back the same value they gave. Conversation is an exchange. We converse with. From this we conclude that talking with is exchanging-for-equal-value, by which we mean: What we put in, we get back. This is conversation. This is discussion. This is healthy communication, where both parties are heard, neither prioritized ahead of the other, and where neither is objectified, reduced to an object, but heard out. Everyone’s opinion is heard in talking with, whereas only one is in talking to. I think it is about time we stop talking to one another and start talking with one another.


Such will be a good start to creating a better future.

 

Technology and Social Media: A Polemic

§1


Much gratitude is to be given to our devices—those glorious, wonderful tools at our disposal, which grant us capabilities whereof men centuries ago could only have dreamed, the culmination of years of technology, all combined in a single gadget, be it the size of your lap or hand. What a blessing they are, to be able to connect us to those around the world, to give us access to a wealth of knowledge, and to give longevity to our lives, allowing us to create narratives and tell stories; and yet, how much of a curse they are, those mechanical parasites that latch onto their hosts and deprive them of their vitality, much as a tick does. That phones and computers are indispensable, and further, that social media acts as a necessary sphere that combines the private and public, creating the cybersphere—such is incontrovertible, although they are abused to such an extent that these advantages have been corrupted and have lost their supremacy in the human condition.

§2


Technology is ubiquitous, inescapable, and hardwired into the 21st century so that it is a priori, given, a simple fact of being whose facticity is such that it is foreign to older generations, who generally disdain it, as opposed to today’s youths, who have been, as Heidegger said, thrown into this world, this technologically dominated world, wherein pocket-sized devices—growing bigger by the year—are everywhere, the defining feature of the age, the zeitgeist, that indomitable force that pervades society, not just concretely but abstractly, not just descriptively but normatively. In being-in-the-world, we Millennials and we of Generation X take technology as it is, and accept it as such. To us, technology is present. It is present insofar as it is both at hand and here; by this I mean it is pervasive not just in terms of location but in terms of presence. A fellow student once observed that we youths are like fish born in the water, whereas older generations are humans born on land: Born into our circumstances, as fish, we are accustomed to the water, while the humans, accustomed to the land, look upon us, upon the ocean, and think us strange, pondering, “How can they live like that?”

§3


As per the law of inertia, things tend to persist in their given states. As such, people, like objects, like to resist change. The status quo is a hard thing to change, especially when it is conceived before oneself is. To tell a fellow fish, “We ought to live on the land as our fathers did before us”—what an outlandish remark! Verily, one is likely to be disinclined to change their perspective, but will rather accept it with tenacity, to the extent that it develops into a complacency, a terrible stubbornness that entrenches them further within their own deep-rooted ways. This individual is a tough one to change indeed. What is the case, we say, is what ought to be, and so it becomes the general principle whereupon we take our stand, and anyone who says otherwise is either wrong or ignorant. Accordingly, following what has been said, the youth of today, the future of humanity, accepts technology as its own unquestioningly. As per the law of inertia, things tend to persist in their given states—that is, until an unbalanced force acts upon them.

§4


What results from deeply held convictions is dogmatism. A theme central to all users of devices, I find, is guilt; a discussion among classmates has led me to believe that this emotion, deeply personal, bitingly venomous, self-inflicted, and acerbic, is a product of our technological addictions. Addiction has the awesome power of distorting one’s acumen, a power comparable to that of drugs, inasmuch as it compromises the mind’s faculty of judgment, preventing it from distilling events, from correctly processing experiences, and thereby corrupting our better senses. The teen who is stopped at dinner for being on their phone while eating with their family, or the student who claims to be doing homework, when, in reality, they are playing a game or watching a video—what have they in common? The vanity of a guilty conscience, which would rather be defensive than apologetic. The man of guilt is by nature disposed to remorse, and thus he is naturally apologetic in order to right his wrong; yet today, children are by nature indisposed thereto, and are conversely defensive, as though they are the ones who have been wronged—yes, we youths take great umbrage at being called out, and instead of feeling remorse, instead of desiring to absolve from our conscience our intrinsic guilt, feel that we have nothing from which to absolve ourselves, imputing the disrespect to those who called us out.

§5


Alas, what backward logic!—think how contrary it would be if the thief were to call out the poor householder who caught him. Technology has led to moral bankruptcy. A transvaluation of morals in this case, to use Nietzsche’s terminology, is to our detriment, I would think. Guilt is a reactionary emotion: It is a reaction formed ex post facto, with the intent of further action. To be guilty is to want to justify oneself, for guilt is by definition self-defeating; guilt seeks to rectify itself; guilt never wants to remain guilty, no; it wants to become something else. But technology has reshaped guilt, turning it into an intransitive feeling, often giving way, if at all, to condemnation, seeking not to vindicate itself but to remonstrate, recriminate, retribute, repugn, and retaliate. Through technology, guilt has gone from being passive and reactive to active and proactive, a negative emotion with the goal of worsening things, not placating them. Digital culture has perpetuated this; now, being guilty and remaining so is seen as normal and valuable. Guilt is not something to be addressed anymore. Guilt is to be kept as long as possible. But guilt, like I said, is naturally self-rectifying, so without an output, it must be displaced—in this case, into resentment, resentment directed toward the person who made us feel this way.

§6


—You disrupt me from my device? Shame on you!—It is no good, say you? I ought get off it? Nay, you ought get off me!—You are foolish to believe I am doing something less important than what we are doing now, together, to think it is I who is in the wrong, and consequently, to expect me to thusly put it away—You are grossly out of line—You know naught of what I am doing, you sanctimonious tyrant!—

§7


When asked whether they managed their time on devices, some students replied, quite unsurprisingly, that they did not; notwithstanding, this serves as a frightful example of the extent to which our devices play a role in our lives. (Sadly, all but one student said they actually managed their time.) They were then asked some of the reasons they had social media, to which they replied: To get insights into others’ lives, to de-stress and clear their minds after studying, and to talk with friends. A follow-up question asked if using social media made them happy or sad, the answer to which was mixed: Some said it made them happier, some said it made them sadder. An absurd statement was made by one of the interviewees who, when asked how they managed their time, said they checked their social media at random intervals throughout studying in order to “clear their mind off of things” because their brains, understandably, were tired; another stated they measured their usage by the amount of video game matches played, which, once it was met, signaled them to move on to something else—not something physical, but some other virtual activity, such as checking their social media account. I need not point out the hypocrisy herein.

§8


I take issue with both statements combined, for they complement each other and reveal a sad, distasteful pattern in today’s culture which I shall presently discuss. Common to all students interviewed was the repeated, woebegone usage of the dreaded word “should”:
—”I should try to be more present”—
—”I should put my phone down and be with my friends”—
—”I should probably manage my time more”—

§9


Lo! for it is one thing to be obliged, another to want. Hidden beneath each of these admissions is an acknowledgment of one’s wrongdoing—in a word, guilt. Guilt is inherent in “shoulds” because they represent a justified course of action that was not taken. One should have done this, rather than that. Consequently, the repetition of “should” is vain, a mere placeholder for the repressed guilt, a means of getting rid of some of the weight on one’s conscience; therefore, it, too, the conditional, is as frustrated as the guilt harbored therein.

§10


Another thing with which I take issue is what the two students said about their means of time management. The first said they liked to play games on their computer, and they would take breaks intermittently by going elsewhere, either to their social media or to YouTube to watch videos. No less illogically, the other said they would take breaks by checking their social media, as they had just been concentrating hard. How silly it would be for the drug addict to heal himself with the very thing which plagues him! No rehabilitator treats their circle with alcohol; common sense dictates that stopping a problem with that which is the problem in the first place is nonsense! Such is the case with the culture of today, whose drugs are their devices. In the first place, how exactly does stopping a game and checking some other website constitute a "break"? There is no breach of connection between user and device, so it is not in any sense a "break," but a mere switch from one thing to the next, which is hardly commendable, but foolish forasmuch as it encourages further usage, not less; and as one activity is defined in relation to the next, it follows that it is a cycle, not a regimen, for there is no real resting period, only transition. Real time management would consist of playing a few games, then deciding to get off the computer, get a snack, study, or read; going from one device to another is not management at all. Similarly, regarding the other scenario, studying on one's computer and taking a break by checking one's media is no more effective. One is studying for physics, and after reading several long paragraphs, sets upon learning the vocabulary, committing the jargon to memory, then solving a few problems, but one is thus only halfway through: What now? Tired, drained, yet also proud of what has been accomplished thus far, one decides to check one's social media—only for 30 minutes, of course: just enough time to forget everything, relax, and get ready to study again—this is not the essence of management; nay, it is the antithesis thereof! No state of mind could possibly think this reasonable. If one is tired of studying, which is justifiable and respectable, then one ought to (not should!) take a real break and really manage one's time! Social media is indeed a distraction, albeit of a terrible kind, and not the one we ought to be seeking. Checking a friend's or a stranger's profile and looking through their photos, yearning for an escape, hoping for better circumstances—this is not calming, nor is it productive. A good break, good time management, is closing one's computer and doing something productive. Social media serves only to irritate the brain further after exhaustion and is not healthy; instead, healthy and productive tasks, whose benefits have been proven, ought to be taken up, such as reading, taking a walk, or exercising: A simple search will show that any of these is effective after intense studying, leading to better memory, better focus, and better overall well-being, not to mention the subconscious aspect, by which recently learned information is better processed if put in the back of the mind while one does something else, such as the latter two, which are physical and bring with them both physiological and psychological advantages. In sum, time management consists not in transitioning between devices, but in transitioning between mind- and body-states.

§11


The question arises: Why is spending too much time on our devices a problem? Wherefore, asks the skeptic, is shutting oneself off from the world and retreating into cyberspace, where there are infinite possibilities, a "bad" thing? Do we really need face-to-face relationships or wisdom or ambitions when we can scroll through our media without interference, getting a window into what is otherwise unattainable? Unfortunately, as with many philosophical problems, including the simulation hypothesis, solipsism, and the mind-body problem, no matter what is argued, the skeptic can always refute it. While I or anyone could give an impassioned speech in defense of life and about what it means to be human, it may never be enough to convince the skeptic that there is any worth in real-world experiences. It is true that one could easily eschew worldly intercourse and live a successful life on one's device, establishing an online business, finding that special person online and being in love long distance—what need is there for the real world, for the affairs of everyday men? The philosopher Robert Nozick asks us to consider the Experience Machine: Given the choice, we can either hook ourselves up to a machine that simulates a perfect, ideal, desirable world wherein all our dreams come true and we get everything we want—becoming whatever we have always wanted to become, marrying whomever we have always wanted to marry—yet which is artificial and, again, simulated; or we can remain in the real world, where there are inevitable strifes and struggles, but also triumphs, and where we experience pleasure and pain, happiness and sadness—all of it real, all of it authentic. There is, of course, nothing stopping one from choosing the machine, and the skeptic will still not be swayed; but I think the sanctity of humanity, that which constitutes our humanity, ought never be violated.

§12


What, then, is the greatest inhibition to a healthy, productive digital citizenship? What can we do to improve things? The way I see it, the answer is in the how, not the what. Schools can continue to hold events where they warn students of the dangers of technology, advise them on time management, and educate them about proper usage of technology and online presence; but while these can continue ad infinitum, the one thing that will never change is our—the students'—want to change. Teachers, psychologists, and parents can keep teaching, publishing, and lecturing ever more convincingly and authoritatively, but unless the want to change is instilled in us, I am afeard no progress will be made. Today's generation will continue to dig itself deeper into the technological world. They say the first step in overcoming a bad habit or addiction is to admit you have a problem. As I said earlier, technology just is for us youths, and it always will be henceforth; there will not be a time when there is not technology, meaning it is seen as a given, something essential, something humans have always needed and will continue to need. Technology is a tool, not a plaything. Technology is a utility, not a distraction. Social media is corrupting, not clarifying, nor essential. We have been raised in the 21st century such that we accept technology as a fact, and facts cannot be disproven, so they will remain, planted, their roots reaching deeper into the soil, into the human psyche. Collectively, we have agreed that technology is good, but this is "technology" in its broadest sense, thereby clouding our view of it. We believe our phones and computers are indispensable, that were we to live without them, we would rather die. To be without WiFi is comparable to anxiety: an objectless yearning, an emptiness in our souls. How dependent we have become, we "independent" beings! This is the pinnacle of humanity, and it is still rising! Ortega y Gasset, in the style of Nietzsche, proclaimed, "I see the flood-tide of nihilism rising!"¹ We must recognize technology as a problem before we can reform it and ourselves. A lyric from a song goes, "Your possessions will possess you." Our devices, having become a part of our everyday lives to the extent that we bring them wheresoever we go, have become more controlling of our lives than we are of ourselves, which is a saddening prospect. We must check every update, every message, every notification we receive, lest we miss out on anything! We must miss out on those who care about us, who are right in front of us, in order not to miss out on that brand-new, for-a-limited-time sale! But as long as we keep buying into these notifications, for so long as we refuse to acknowledge our addictions and the problem before us, we will continue to miss out on life and waste moments of productivity—moments which, even if they are only a few minutes each, will, when added up at the end of our lives, turn out to be days, days we missed out on. As my teacher likes to say, "Discipline equals freedom." To wrest ourselves from our computers or phones, we must first discipline ourselves to do so; and to discipline ourselves, we must first acknowledge our problem, see it as one, and want to change. Things tend to persist in their given states—such is the vis inertiæ—until an internal force, the vis viva, wills otherwise.
We bodies animated by the vis viva have the determination and volition to will ourselves otherwise, to counter the inertia of being-in-the-world, of being-online, and thereby to liberate ourselves and awaken, so to speak. We addicts have no autonomy with our devices—we are slaves to them. Until we break out of our complacency, until we recognize our masters and thence affirm our self-consciousness, and until we take a stand and break from our heteronomy, we will remain prisoners, automata, machines under machines. We must gain our freedom ourselves. But we cannot free ourselves if we do not want to be freed, if we want to remain slaves, if we want to remain in shackles, if we want to plug into the machine. A slave who disdains freedom even when freed remains a slave. Consequently, we cannot simply be told to stop spending so much time on our devices, to pay attention to whom or what is in front of us; we must want to do so ourselves. Yet no matter how many times or by whom they are told, today's youth will never realize it unless they realize it themselves. They must make the decision for themselves, which, again, I must stress, must be of their own volition. Until then, it is merely a velleity: a desire to change, but a desire in-itself—nothing more, a wish with no intent to act. It is one thing to say we should spend less time, another that we ought to.

 


¹Ortega y Gasset, The Revolt of the Masses, p. 54

Harper Lee’s Guide to Empathy

In the 21st century, surrounded by technologies that distance us, by worldviews that divide us, and by identities that define us, we do not see a lot of empathy among people. While we see friends and family every day, we never really see them, nor do we acknowledge that they, too, are real people, people who have opinions like us, feelings like us, and perspectives like us. Harper Lee is the author of To Kill a Mockingbird, a novel that itself holds many perspectives, many of which are in conflict with each other. Set in the South of the 1930s, the book takes place during the Great Depression, when many lost their jobs, and during an era of racism, when laws restricted the rights of black people. The protagonist is a girl named Scout who lives in the fictional town of Maycomb with her brother Jem and her father Atticus, an empathetic lawyer. Through interactions with her peers, Scout learns to take others' perspectives and walk in their shoes. In To Kill a Mockingbird, Harper Lee teaches that, in order to take another's perspective and practice empathy, one must understand someone else's thoughts and background, try to relate to them, and then become aware of how the consequences of one's actions affect them.


Before one can truly take another's perspective, Lee argues, one must first seek to understand how someone thinks and where they come from. After hearing about Mr. Cunningham's legal entailment, Scout asks if he will ever pay Atticus back. He replies that the Cunninghams will, just not in money. She asks, "'Why does he pay you like that [with food]?' 'Because that's the only way he can pay me. He has no money… The Cunninghams are country folk, farmers, and the crash hit them the hardest…' As the Cunninghams had no money to pay a lawyer, they simply paid us with what they had" (Lee 27-8). Scout is confused about why the Cunninghams pay "like that" because it is not the conventional way of paying debts. Money is always used in business transactions, yet Atticus allows them to pay through other means. Atticus acknowledges that the Cunninghams are having economic problems. He empathizes with Mr. Cunningham by drawing on his background knowledge, namely that, because Mr. Cunningham is a farmer who gets his money from agriculture, he does not have the means to pay. The Great Depression left many poor and without jobs, so Atticus is easier on Mr. Cunningham; he knows it would be unfair to make him pay when he hardly has any money. Accordingly, Atticus accepts that the Cunninghams are trying their best, and he compromises with them. He willingly accepts anything Mr. Cunningham will give him, since he knows it will come from the heart. For this reason, Atticus can empathize by thinking outside normal conventions to accommodate Mr. Cunningham's situation. Just as Atticus understands the Cunninghams, so Calpurnia empathizes with them when she lectures Scout not to judge them. Jem invites Walter Cunningham from school over to have dinner with him and Scout. Reluctantly, Walter agrees, but once he starts eating, Scout takes issue with his habits; so Calpurnia scolds her. Calpurnia yells, "'There's some folks who don't eat like us… but you ain't called on to contradict 'em at the table when they don't… [A]nd don't you let me catch you remarkin' on their ways like you was so high and mighty!'" (Lee 32-3). Because Scout is not used to the way Walter eats, she immediately judges his way as different from her own, thereby patronizing him. Hence, she is not empathizing, because she is not considering his point of view, only her own. Calpurnia states that not everyone eats like Scout does, showing that she, unlike Scout, does not form generalizations; rather, she rationalizes, recognizing that Walter comes from a different home, with different manners. Since she empathizes with Walter in this way, Calpurnia tells Scout not to "contradict" him, meaning it is rude and unsympathetic not to consider Walter and his background. Furthermore, she warns Scout not to act as though she is "so high and mighty," especially around others who are less fortunate and who differ from her, such as Walter. By criticizing Walter's eating and thereby abashing him, Scout is being sanctimonious, declaring that her way is better than anyone else's. Calpurnia gets mad at Scout for this, as it is egocentric; i.e., Scout is concerned with herself and cannot consider others' perspectives. Consequently, Calpurnia shows empathy by understanding that people have different perspectives, while Scout does not. Both Atticus and Calpurnia are empathetic because, as shown, they actively try to understand other people and selflessly consider their perspectives.


Once a person's way of thinking and past is understood, one is able to see oneself in that other person and make connections with them. One night, Scout, Jem, and Dill sneak off to the Radley house and are scared away, Jem losing his pants in the process. Jem decides to retrieve his pants, regardless of the danger involved. The next morning, he is moody and quiet, and Scout does not know why. Upon some reflection, she says, "As Atticus had once advised me to do, I tried to climb into Jem's skin and walk around in it: if I had gone alone to the Radley Place at two in the morning, my funeral would have been held the next afternoon. So I left Jem alone and tried not to bother him" (Lee 77). Scout follows her father's advice and "climb[s] into Jem's skin," symbolizing that she has taken his perspective and seen life through it. She asks herself the vital question of what it would be like to be Jem; in doing this, she visualizes herself as Jem, visualizes herself doing what he did, and thereby understands him. The first step in empathizing—understanding—allows her to relate to Jem and put herself in his position: She imagines what it would have been like to risk her own life, how she would have felt doing so. As a result, she examines her emotional reaction and projects it onto Jem, relating to him, feeling as he would feel. Had she not tried to understand Jem's position, had she not related to him emotionally, she would never have known why Jem was being moody. Jem's "funeral would have been held the next afternoon," says Scout, realizing why Jem is upset. If she felt that way herself, she would not want anyone bothering her, either, seeing as it is a traumatic event. Scout connects with Jem on an emotional level, empathizing with him. Another instance in which Scout shows empathy by relating is when she connects with Mr. Cunningham. Jem and Scout sneak out at night to find Atticus, who is at the county jail keeping watch over his client, Tom Robinson. As they near him, a mob closes in on Atticus and threatens to kill Robinson, so Scout tries to find a way of civilizing the men and talks to Walter's father. Casting about for conversation, she recalls, "Atticus had said it was the polite thing to talk to people about what they are interested in, not what you were interested in. Mr. Cunningham displayed no interest in his son, so I tackled his entailment once more in a last-ditch effort to make him feel at home" (Lee 205). In this moment, Scout remembers that it is polite to relate to others and consider their views rather than her own. She hereby distances herself from her egocentrism, instead concerning herself with what someone other than herself wants. Empathizing requires that one cross the gorge of disparity, and Scout bridges this gap between self and other to find that she has things in common with Mr. Cunningham, things of which she would never have thought before. Before this connection could occur, Scout had to know his background, which she learned about when talking to Atticus; additionally, she had his son over and learned about him then, giving her something in common to talk about. Since Scout knows Walter, she thinks him a topic to which the two of them can both relate, seeing as Walter is close to his father, creating a strong connection.
However, she notes that he “displayed no interest in his son”; thus, she thinks back further, remembers another thing they have in common, then relates to it in an attempt to “make him feel at home.” The phrase “feel at home” denotes acceptance, belonging, and coziness—being warm and welcome—so Scout, in coming up with certain topics that will be of interest to Mr. Cunningham, seeks to make him feel like he is a welcome person, to put herself in his shoes and consider what he would like to talk about, what would make him feel accepted as it would her. Through these moments in the text, Lee shows that empathy is relating to and identifying with another by removing one’s own position and taking theirs.


Empathy is accomplished when one takes another's perspective in order to know how one's actions will affect them and to consider how those actions would make them feel. In chapter 23, Jem and Scout find out that Atticus has been insulted and threatened by Bob Ewell. They are confused as to why their dad did nothing to retaliate, why he just took it. He tells Jem, "[S]ee if you can stand in Bob Ewell's shoes a minute. I destroyed his last shred of credibility at the trial, if he had any to begin with… [I]f spitting in my face and threatening me saved Mayella Ewell one extra beating, that's something I'll gladly take. He had to take it out on somebody and I'd rather it be me than that houseful of children out there'" (Lee 292-3). Atticus directs Jem to "stand in Bob Ewell's shoes" so that he can understand Ewell's perspective, and therefore how Atticus' actions could have affected him. Knowing Mr. Ewell has many children, and finding a common link therein, Atticus can relate to him, imagining how horrible it would be if his own children were beaten. Bob Ewell, upset over the trial, wants to take out his anger, so he displaces it onto Atticus, which Atticus says is better than his displacing it onto his children. Taking the pacifist route, Atticus avoids exacerbating the situation, aware that fighting back would cause things to worsen, and he steps outside himself to become aware of how his actions would have not just direct effects, but indirect effects as well: Angering Bob Ewell would make him want to physically harm Atticus, and would further encourage him to be more hostile to his children. As such, Atticus takes into account the long-term consequences and empathizes because he is aware of how his actions could avert a disaster. He thinks ahead—to Bob Ewell's children, to his own children—concluding, "'I'd rather it be me than that houseful of children.'" A second example of considering the consequences of one's actions on another takes place when Scout, a couple of years later, reflects on how she treated Arthur "Boo" Radley. At the beginning of chapter 26, Scout is thinking about her life and passes the Radley house, of which she and Jem were always scared, and about which they had always heard rumors. She remembers all the times in the past she, her brother, and their friend played outside, acting out what happened at the house. Pensively, she ponders, "I sometimes felt a twinge of remorse when passing by the old place [Radley house], at ever having taken part in what must have been sheer torment to Arthur Radley—what reasonable recluse wants children peeping through his shutters, delivering greetings on the end of a fishing-pole, wandering in his collards at night?" (Lee 324). Lee uses the word "remorse" here to conjure up feelings of guilt, regret, and shame, all associated with the way Scout feels about her actions. To say she feels a "twinge of remorse" is to say she feels compunction; that is, morally, she feels she has wronged the Radleys and that, looking back, what she did was wrong. She is contrite because she can stand back and objectively evaluate her deeds, deeds she deems unempathetic, considering they were inconsiderate of Arthur.
Having become aware of the weight of her choices, Scout experiences regret, an important emotional reaction because it signifies empathy, insofar as it represents her taking into account how she affected another person—in this case, how she negatively impacted Arthur—which itself requires understanding him and relating to him. This regret, this guilt, is caused by the realization that her past actions were unkind. Again, Scout puts herself in Arthur's shoes, imagining what it would reasonably be like to be a "recluse": Certainly, she affirms, she does not want "children peeping,… delivering greetings,… [or] wandering in [her] collards." This thought process is meant to mirror Arthur's, so Scout is actively relating to and understanding him, ultimately realizing how her conduct impacted him. Her scruples finally tell her that, from the perspective of the solitary Arthur, her behavior had a negative effect. Scout's awareness of the consequences of her actions makes her empathetic, for she has introjected Arthur's perspective. In conclusion, Atticus and Scout exhibit empathy because they both consider how their comportment affects others.


According to Lee, empathy is put into practice when one takes time to learn about another person, makes a personal connection with them, and considers how one's actions will affect them. We are social animals by nature, which means we desire close relationships; unfortunately, most of us seldom recognize the importance of understanding those with whom we have a relationship, leading to inconsiderateness, ignorance, and stereotypes. For such social animals, we all too often neglect the feelings and thoughts of others, even though they are of no less priority than ours. Empathy, therefore, is a vital, indispensable tool in social interaction that helps us connect with others. As communication is revolutionized, worldviews shaken, and identities changed, it is essential that we learn to better understand others and never forget to empathize, lest we lose our humanity.

 


To Kill a Mockingbird by Harper Lee (1982)

Attention and Mindfulness (2 of 2)

Summary of part one: Attention is "the process of focusing conscious awareness, providing heightened sensitivity to a limited range of experience requiring more extensive information processing," and it requires an external stimulus. Research by Colin Cherry (1953), Donald Broadbent (1958), and Anne Treisman (1964) found that we can attend to only one channel at a time, suppressing or attenuating other incoming stimuli, with selection based largely on the physical quality of the sound.


“It is easy to eat without tasting,” says Jon Kabat-Zinn in Coming to Our Senses (p. 118). At first glance, this sentence seems random, out of nowhere, and completely absurd. Of course we taste our food when we eat it! However, Kabat-Zinn argues that while we claim to experience and sense things, we do not truly experience them. His message throughout the book is that we have become out of touch with ourselves, with our senses, our bodies, and the world around us; we fail to experience things for themselves, insofar as we rush through our lives, treating food as "just another meal," hastily consuming it, not really taking the time to taste each individual flavor. When we eat a hamburger, all we taste is hamburger—not meat, lettuce, tomato, etc., but just hamburger. Our meals are prepared and then eaten, but we do not taste them as they should be tasted. Kabat-Zinn states that when attention and intention team up, we are rewarded with connection; from connection, regulation; from regulation, order; and from order, we arrive at ease, contentment. There is an effect called sensory adaptation that we seldom recognize yet is always at work. Constant exposure to an external stimulus builds up our tolerance to it, resulting in the numbing of that sense, to the point that we do not notice the stimulus at all. The fact that others can smell our body odor but we ourselves cannot is an example of this: our odor is constantly present, and the brain, to avoid distraction, builds up tolerance until we no longer smell our own bodies. The purpose of sensory adaptation is to prevent us from becoming entirely distracted. The world is full of smells, sounds, sights, touches, and tastes; imagine if we were exposed to all of them at once—this is why sensory adaptation is necessary. Of course, were we so rapt in studying that all else was ignored, the sound of a car would still interrupt us, since its intensity would overwhelm our senses. While sensory adaptation has helped us biologically, Kabat-Zinn notes that it also works to our disadvantage, particularly in the dampening of our senses, without which we cannot live. Breathing is of especial importance in meditation. It is necessary to all living things, we must remember; yet we take it for granted, repeatedly neglecting it, forgetting to check how we are doing it. If we took a few minutes every day to attend to our breathing, we could all reduce stress, find composure, and, with practice, even lower our heart rate. This applies to all the senses. As Aristotle keenly reminds us, "[O]ur power of smell is less discriminating and in general inferior to that of many species of animals."[1] Humans' sense of smell is weaker than that of most animals, so we rely upon it less. Smell and taste are underrated among the senses, although they are of equal merit. Like breathing, both are taken for granted, appreciated only when we are sick, when we can no longer use them—only then do we wish we could taste and smell again. Just as Kabat-Zinn said, we truthfully eat without tasting. Eating our food, we feel pleasure in the moment; but if we were sick in the same circumstances, we would appreciate our senses that much more; as such, we must live each day as though we were sick.


There are different kinds of meditation, different ways of being mindful. During meditation, you can do a body or sense scan, where you spend a few moments going through your body, focusing on the sensations in a particular part, examining it, then moving on; or you can, for a few minutes at a time, focus on each of your main senses, perhaps using only your ears for a minute, your nose the next. Proprioception is an obscure sense: it is the sensation of each body part in relation to the others. In a body scan, this is most prevalent, when you feel your body in its totality, as a whole, yet are able to focus on one body part. William James, writing about boredom, could just as easily have been writing about this state of meditation:

The eyes are fixed on vacancy, the sounds of the world melt into confused unity, the attention is dispersed so that the whole body is felt, as it were, at once, and the foreground of consciousness is filled, if by anything, by a solemn sense of surrender to the empty passing of time.[2]

Typically, when one meditates, one can either close one's eyes or leave them open, fixed on a certain point, listening to the sounds of the world, acknowledging every part of the body, paying attention to the breath, overcome by a static sense of stillness, neither in the past nor the future, but in the present, simply being, moment to moment. There are two types of attention in meditation: abstract, or inward, attention and sensory, or outward, attention. The former involves impartial introspection, the clearing of the mind, the decluttering of ideas. "This curious state of inhibition can for a few moments be produced by fixing the eyes on vacancy. Some persons can voluntarily empty their minds and 'think of nothing,'" wrote James, describing hypnotism, though inadvertently describing meditation as well.[3] Sensory attention, on the other hand, is simply being attentive to the senses and all incoming stimuli. If you are interested in meditation, there are several exercises that can be done to sharpen your attentiveness, like dhāraṇā, jhāna, and samādhi, or you can practice some brahmavihāras. In dhāraṇā, the meditator is aware of themselves, as a whole and as meditating, and of an object; after dhāraṇā, they move to jhāna, which is awareness of being and of an object; and finally, in samādhi, they find themselves in unity with the object. Samādhi is often translated as "one-pointedness" and refers to pure concentration, pure attention. When in this state, the meditator is in what William James calls voluntary attention. This attention occurs when there is a powerful stimulus, yet you focus on something of less intensity. If you are studying and there is noisy construction outside, focusing on the studying, even though the construction is louder and demands your attention, would be an act of voluntary attention. This state, however, cannot be held indefinitely. As James writes, "[S]ustained voluntary attention is a repetition of successive efforts which bring back [a] topic to the mind."[4] Hence there is no such thing as maintaining voluntary attention, only coming back to it over and over. Brahmavihāras are like reflections upon Buddhist virtues. There are four traditional brahmavihāras: loving-kindness, compassion, joy, and equanimity. Feel free, too, to make your own meditation, where you reflect on something outside the given topics—questions in philosophy, like good and evil, justice, and the like, are some starters.


I briefly mentioned the idea of clearing the mind, of emptying it of ideas, and to that I shall turn again. Thoughts, in Buddhist writings, are treated like clouds, wispy and flowing; they are temporary; sometimes they are clear, sometimes they clump together; sometimes they are sunny, sometimes they are like a storm. Either way, thoughts are not permanent, nor can they harm you in any way. Generally, we ought to be on the lookout for negative thoughts. When they arise, we must simply dismiss them. Thoughts are the fire, and thinking about them the gasoline: thinking about our thoughts merely propagates more of them and makes them worse. It is better to let thoughts pass than to intervene through force. Meditation calls for letting go of all thoughts, good or bad. It is misleading, though, to think that we are trying to get rid of them, or that we are trying to single some thoughts out from others. This is not the case; rather, we must acknowledge that we are thinking and let the thoughts pass. If a positive thought comes, do not perpetuate it; let it pass. If a negative thought comes, do not perpetuate it; let it pass. Another thing to remember is that simply acknowledging that you are thinking is itself being mindful, so you should not get frustrated with yourself for thinking. An important facet of Buddhist psychology is the distinction between perception and conception. Perception is pure sensation, and conception is labeling, to put it simply. Sitting in peace and silence, you hear a sound, process it, identify it as the rustling of the trees and the singing of birds, and continue meditating—such is an act of conception, for hearing a sound is perception, but classifying it, labeling it, is conception. Labeling is necessary for living. Without it, there would be no way to comprehend the world. We would be exposed to a chaotic mess, an overwhelming tidal wave of sensations we could not understand. Almost everything we see and process is conceptualized: this is a tree, that is a plant, this is grass, that is dirt on which I am walking. One is tempted to think of Kant's categories of the mind and the differentiation between phenomena and noumena. Our mind actively shapes our world, grouping things together, creating causal links, imposing spatiotemporal relations, constantly conceiving things. Perception is to noumena as conception is to phenomena. Rarely do we perceive things as they are, as things-in-themselves; rather, we conceive them imperfectly. We need to carry this to meditation, in thought and in sensation. We must try not to classify things by texture, color, or shape, nor judge thoughts by appearance, nor label anything as "good" or "bad." Another danger of thinking is daydreaming, to which all meditators are vulnerable, especially if their eyes are closed. When we doze off, finding comfort and relaxation, following our breath, we might accidentally slip into our fantasies, moving from the external to the internal, where we begin to plan for the future or reminisce about the past. Neither is good. William James warns us, "When absorbed in [passive] intellectual attention we become so inattentive to outer things as to be 'absent-minded,' 'abstracted,' or 'distrait.' All revery or concentrated meditation is apt to throw us into this state."[5] By meditation, James is not referring to it in our sense, but to the act of pondering.
We should not fall into the trap of planning for the future or ruminating on the past, because, as Marcus Aurelius said, "[M]an lives only in the present, in this fleeting instant: all the rest of his life is either past and gone, or not yet revealed."[6] The past is in the past, and there is nothing we can do to change it; wishing we could redo something will not help. And the future has not happened yet, so forming unrealistic expectations will not help either.


“But we do far more than emphasize things, and unite some, and keep others apart. We actually ignore most of the things before us,” notes William James.[7] Though attention is a formidable tool to which we all have access, the art of applying it properly has all but been forgotten by today's society, to its disadvantage. We live in an age where ADD is rampant and more and more kids are diagnosed with it. Further, our technology strips us of our connection to nature, to the world, to each other. We are no longer in touch with ourselves or our senses. With mindfulness and meditation, however, by living in the present and embracing our senses and our lives, we can make our lives meaningful.

 


[1] Aristotle, De Anima II.8, 421a9-10
[2] James, The Principles of Psychology, XI.2, p. 261
[3] Ibid.
[4] Id., XI.6, p. 272
[5] Id., p. 271
[6] Aurelius, Meditations, III.10
[7] James, op. cit., IX.5, p. 184

 

For further reading: Buddhist Psychology Vol. 3 by Geshe Tashi Tsering (2006)
The Principles of Psychology by William James (1990)
Coming to Our Senses by Jon Kabat-Zinn (2005)
Mindfulness by Joseph Goldstein (2016)
Meditations by Marcus Aurelius (2014)
Zen Training by Katsuki Sekida (1985)
De Anima by Aristotle (1990)

Attention and Mindfulness (1 of 2)

Attention is vital to our everyday lives. Some of us are better than others at paying attention, but regardless of skill, we all need it, whether we are learning in class or playing out on the field. In a world that values fast, immediate, instantaneous things, attention is slowly fading away, leaving us disoriented and scattered in a culture where it is easy to be left behind if you are not fast enough. Not enough of us pay attention in our everyday lives, even in the simplest of tasks, failing to appreciate the beauty of life, missing the important things, letting life slip out of our grasp. Through a better understanding of what attention is and how it can be used in mindfulness, I believe we can all live more fulfilling lives.



In psychology, attention refers to "the process of focusing conscious awareness, providing heightened sensitivity to a limited range of experience requiring more extensive information processing."[1] Simply put, attention is the ability to focus your awareness and senses on a particular task, leading to better experience and understanding. In order for this focusing to occur, you need an external stimulus, such as a sound, and an active goal, which is your response to or classification of that stimulus. For example, if you hear a dog bark, the barking is the external stimulus, and your realizing it is a dog that is barking is the active goal. The act of paying attention is not a single process but a combination of three (Posner, 1995): orienting senses, controlling consciousness and voluntary behavior, and maintaining alertness. The first stage, orienting senses, is what happens when your sensory organs are directed to the source of a stimulus. When you hear a sound coming from the left, it is your left ear that will process it first, as it is oriented to the direction from which the sound came. Similarly, when you touch something, your hand comes into direct contact with the object. Depending on which sense the stimulus activates, your cortex suppresses the other sensory organs while focusing on the active one: rarely do you need your eyes to smell something—it is the nose's job to do that. When you orient your senses, you tend to use your superior colliculus, responsible for eye movement; the thalamus, responsible for activating specific sensory systems; and the parietal lobe, which is usually responsible for giving us a sense of direction. The next stage is controlling consciousness and voluntary behavior, in which your brain decides just how much you want to focus on a particular sense. Your pupils, for example, can dilate or constrict depending on the light when you pay attention to something. This second stage's job, therefore, is to control your response to stimuli, and it uses the frontal lobe and basal ganglia, known for their role in controlling thoughts and actions. Third is maintaining alertness, which is indispensable for attention, for its job is to remain focused on a sense and ignore possible distractions. When you maintain alertness, you use different neural patterns in your reticular formation and frontal lobe. Selection has been called "the essence of attention" (Rees et al., 1997).[2] Selective attention is the ability to focus on something important and ignore everything else, whereas selective inattention is the ability to ignore something important and focus on other things; the latter is used most often, either for good, as in diverting stress, or for bad, as in procrastinating.


Imagine you are at a party. You are sitting at a table with your friends, deep in conversation; the speakers are blasting music; there are people dancing; and there is another conversation across the room. Engrossed in the talk, you block out all sound besides your own conversation, when all of a sudden you hear your name being mentioned in the conversation across the room. The Cocktail Party Phenomenon, as it came to be called, was studied by Colin Cherry (1953), who found, startlingly, that not only is most information unconsciously processed, but some of this information, conscious or not, is prioritized above other information. A contemporary of his, Donald Broadbent, developed the Broadbent Filter Model (1958) to attempt to explain why this is so. Fascinated by air traffic controllers, whose job it is to receive multiple incoming messages at once and in mere seconds make quick judgments about which is most important, Broadbent began to study divided attention, "the capacity to split attention or cognitive resources between two or more tasks"[3] (Craik et al., 1996), using a method of testing called dichotic listening, in which a subject puts on a pair of headphones and is played a different message in each ear simultaneously. Broadbent found that only one channel can be understood at a time, while the other is blocked out. He reasoned that there must be a theoretical, Y-shaped divergence in our minds that, when two inputs try to pass, lets one through and blocks access to the other. He said, further, that we have a short-term memory store that keeps track of these channels. The question remained, though: How does the brain decide which channel to let through? In another surprising conclusion, he found that in spoken language, meaning is extracted only after selection; as such, content is not the decisive factor, but the quality of the sound—its loudness, its harshness, the sex of the speaker. A loud, domineering voice, therefore, will be prioritized over a softer, nicer voice, even if the latter carries the more important message. Broadbent later went back and revised his model, stating that priority is based on a combination of the quality of the voice, the content of the words, and prior experience; a later psychologist, Anne Treisman, argued that during the Y-exchange the second channel is not blocked outright but attenuated—turned down rather than shut off—which would explain the Cocktail Party Effect: although you do not consciously hear your name, you still process it.
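To make the contrast between Broadbent's filter and Treisman's attenuation account more concrete, here is a minimal toy sketch in Python—my own illustration, not anything proposed in the original studies. The channel labels, the loudness numbers, and the idea of flagging one's own name as a "salient word" are all invented for demonstration; the only point is that early selection goes by physical quality (loudness), while attenuation lets highly salient content leak through.

```python
# Toy sketch (illustrative only): Broadbent's filter vs. Treisman's attenuation
# in a dichotic-listening / cocktail-party situation. All values are made up.

from dataclasses import dataclass

@dataclass
class Channel:
    label: str        # e.g. which conversation or ear the message comes from
    loudness: float   # physical salience; early selection is based on this
    message: str      # semantic content, processed only after selection

def broadbent_filter(channels):
    """Early selection: only the physically dominant channel passes; the rest are blocked."""
    selected = max(channels, key=lambda c: c.loudness)
    return {selected.label: selected.message}

def treisman_attenuation(channels, salient_words=("your name",)):
    """Attenuation: unattended channels are turned down, not silenced, so highly
    salient content (like one's own name) can still break through."""
    selected = max(channels, key=lambda c: c.loudness)
    heard = {selected.label: selected.message}
    for c in channels:
        if c is not selected and any(w in c.message for w in salient_words):
            heard[c.label] = f"(faint) {c.message}"
    return heard

if __name__ == "__main__":
    party = [
        Channel("friends at the table", loudness=0.9, message="a story about the weekend"),
        Channel("across the room", loudness=0.4, message="...and then your name came up..."),
    ]
    print(broadbent_filter(party))       # only the louder conversation gets through
    print(treisman_attenuation(party))   # the quiet mention of your name also surfaces
```

Running it, the Broadbent version reports only the louder table conversation, while the Treisman version also surfaces the faint mention of your name from across the room—a crude stand-in for why the Cocktail Party Effect favors the attenuation account.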

 


[1] Westen, Psychology: Brain, Mind, & Culture, 2nd ed., p. 395
[2] Ibid., pp. 395-6
[3] Id., pp. 397-8

 

For further reading: Psychology: Brain, Mind, & Culture 2nd ed. by Drew Westen (1999)
Essentials of Psychology by Kendra Cherry (2010)
The Psychology Book by Wade E. Pickren (2014)
The Psychology Book by DK (2012)

Some Experiments on the Recognition of Speech, with One and with Two Ears by Colin Cherry (1953)