Concept of Knowledge Revisited
February 5, 2011
What Is Artificial Intelligence?
By RICHARD POWERS
Urbana, Ill.
IN the category “What Do You Know?”, for $1 million: This four-year-old upstart the size of a small R.V. has digested 200 million pages of data about everything in existence and it means to give a couple of the world’s quickest humans a run for their money at their own game.
The question: What is Watson?
I.B.M.’s groundbreaking question-answering system, running on roughly 2,500 parallel processor cores, each able to perform up to 33 billion operations a second, is playing a pair of “Jeopardy!” matches against the show’s top two living players, to be aired on Feb. 14, 15 and 16. Watson is I.B.M.’s latest self-styled Grand Challenge, a follow-up to the 1997 defeat by its computer Deep Blue of Garry Kasparov, the world’s reigning chess champion. (It’s remarkable how much of the digital revolution has been driven by games and entertainment.) Yes, the match is a grandstanding stunt, baldly calculated to capture the public’s imagination. But barring any humiliating stumble by the machine on national television, it should.
Consider the challenge: Watson will have to be ready to identify anything under the sun, answering all manner of coy, sly, slant, esoteric, ambiguous questions ranging from the “Rh factor” of Scarlett’s favorite Butler or the 19th-century painter whose name means “police officer” to the rhyme-time place where Pelé stores his ball or what you get when you cross a typical day in the life of the Beatles with a crazed zombie classic. And he (forgive me) will have to buzz in fast enough and with sufficient confidence to beat Ken Jennings, the holder of the longest unbroken “Jeopardy!” winning streak, and Brad Rutter, an undefeated champion and the game’s biggest money winner. The machine’s one great edge: Watson has no idea that he should be panicking.
Open-domain question answering has long been one of the great holy grails of artificial intelligence. It is considerably harder to formalize than chess. It goes well beyond what search engines like Google do when they comb data for keywords. Google can give you 300,000 page matches for a search of the terms “greyhound,” “origin” and “African country,” which you can then comb through at your leisure to find what you need.
Asked in what African country the greyhound originated, Watson can tell you in a couple of seconds that the authoritative consensus favors Egypt. But to stand a chance of defeating Mr. Jennings and Mr. Rutter, Watson will have to be able to beat them to the buzzer at least half the time and answer with something like 90 percent accuracy.
When I.B.M.’s David Ferrucci and his team of about 20 core researchers began their “Jeopardy!” quest in 2006, their state-of-the-art question-answering system could solve no more than 15 percent of questions from earlier shows. They fed their machine libraries full of documents — books, encyclopedias, dictionaries, thesauri, databases, taxonomies, and even Bibles, movie scripts, novels and plays.
But the real breakthrough came with the extravagant addition of many multiple “expert” analyzers — more than 100 different techniques running concurrently to analyze natural language, appraise sources, propose hypotheses, merge the results and rank the top guesses. Answers, for Watson, are a statistical thing, a matter of frequency and likelihood. If, after a couple of seconds, the countless possibilities produced by the 100-some algorithms converge on a solution whose chances pass Watson’s threshold of confidence, it buzzes in.
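A rough way to picture the architecture described above: several independent analyzers each score candidate answers, the scores are merged per candidate, and the machine buzzes only when the top candidate's merged confidence clears a threshold. The following Python sketch is a toy illustration of that decision rule, not IBM's DeepQA code; the two analyzer functions, their scores and the 0.7 threshold are invented for the example.

from collections import defaultdict

def keyword_analyzer(clue):
    # Pretend evidence from keyword matching against a document store.
    return {"Egypt": 0.6, "Ethiopia": 0.2}

def taxonomy_analyzer(clue):
    # Pretend evidence from a structured taxonomy of dog breeds.
    return {"Egypt": 0.8, "Morocco": 0.1}

ANALYZERS = [keyword_analyzer, taxonomy_analyzer]
CONFIDENCE_THRESHOLD = 0.7  # hypothetical value

def answer(clue):
    merged = defaultdict(list)
    for analyzer in ANALYZERS:
        for candidate, score in analyzer(clue).items():
            merged[candidate].append(score)
    # Merge by averaging over all analyzers (a missing vote counts as zero);
    # a real system would learn weights for each source of evidence.
    ranked = sorted(
        ((sum(scores) / len(ANALYZERS), cand) for cand, scores in merged.items()),
        reverse=True,
    )
    confidence, best = ranked[0]
    return (best, confidence) if confidence >= CONFIDENCE_THRESHOLD else (None, confidence)

print(answer("In what African country did the greyhound originate?"))
# Prints ('Egypt', 0.7...): the merged confidence clears the threshold, so it buzzes in.

In this toy version, as in the description above, the interesting work is in the merging and ranking: no single analyzer is trusted on its own, and silence is the default whenever the combined evidence falls short.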
This raises the question of whether Watson is really answering questions at all or is just noticing statistical correlations in vast amounts of data. But the mere act of building the machine has been a powerful exploration of just what we mean when we talk about knowing.
Who knows how Mr. Jennings and Mr. Rutter do it — puns cracked, ambiguities resolved, obscurities retrieved, links formed across every domain in creation, all in a few heartbeats. The feats of engineering involved in answering the smallest query about the world are beyond belief. But I.B.M. is betting a fair chunk of its reputation that 2011 will be the year that machines can play along at the game.
Does Watson stand a chance of winning? I would not stake my “Final Jeopardy!” nest egg on it. Not yet. Words are very rascals, and language may still be too slippery for it. But watching films of the machine in sparring matches against lesser human champions, I felt myself choking up at its heroic effort, the size of the undertaking, the centuries of accumulating groundwork, hope and ingenuity that have gone into this next step in the long human drama. I was most moved when the 100-plus parallel algorithms wiped out and the machine came up with some ridiculous answer, calling it out as if it might just be true, its cheerful synthesized voice sounding as vulnerable as that of any bewildered contestant.
It does not matter who will win this $1 million Valentine’s Day contest. We all know who will be champion, eventually. The real showdown is between us and our own future. Information is growing many times faster than anyone’s ability to manage it, and Watson may prove crucial in helping to turn all that noise into knowledge.
Dr. Ferrucci and company plan to sell the system to businesses in need of fast, expert answers drawn from an overwhelming pool of supporting data. The potential client list is endless. A private Watson will cost millions today and requires a room full of hardware. But if what Ray Kurzweil calls the Law of Accelerating Returns keeps holding, before too long, you’ll have an app for that.
Like so many of its precursors, Watson will make us better at some things, worse at others. (Recall Socrates’ warnings about the perils of that most destabilizing technology of all — writing.) Already we rely on Google to deliver to the top of the million-hit list just those pages we are most interested in, and we trust its concealed algorithms with a faith that would be difficult to explain to the smartest computer. Even if we might someday be able to ask some future Watson how fast and how badly we are cooking the earth, and even if it replied (based on the sum of all human knowledge) with 90 percent accuracy, would such an answer convert any of the already convinced or produce the political will we’ll need to survive the reply?
Still, history is the long process of outsourcing human ability in order to leverage more of it. We will concede this trivia game (after a very long run as champions), and find another in which, aided by our compounding prosthetics, we can excel in more powerful and ever more terrifying ways.
Should Watson win next week, the news will be everywhere. We’ll stand in awe of our latest magnificent machine, for a season or two. For a while, we’ll have exactly the gadget we need. Then we’ll get needy again, looking for a newer, stronger, longer lever, for the next larger world to move.
For “Final Jeopardy!”, the category is “Players”: This creature’s three-pound, 100-trillion-connection machine won’t ever stop looking for an answer.
The question: What is a human being?
Richard Powers is the author of the novel “Generosity: An Enhancement.”
http://www.nytimes.com/2011/02/06/opini ... emc=tha212
March 7, 2011
The New Humanism
By DAVID BROOKS
Researchers are coming up with a more accurate view of who we are and are beginning to show how the emotional and the rational are intertwined.
Over the course of my career, I’ve covered a number of policy failures. When the Soviet Union fell, we sent in teams of economists, oblivious to the lack of social trust that marred that society. While invading Iraq, the nation’s leaders were unprepared for the cultural complexities of the place and the psychological aftershocks of Saddam’s terror.
We had a financial regime based on the notion that bankers are rational creatures who wouldn’t do anything stupid en masse. For the past 30 years we’ve tried many different ways to restructure our educational system — trying big schools and little schools, charters and vouchers — that, for years, skirted the core issue: the relationship between a teacher and a student.
I’ve come to believe that these failures spring from a single failure: reliance on an overly simplistic view of human nature. We have a prevailing view in our society — not only in the policy world, but in many spheres — that we are divided creatures. Reason, which is trustworthy, is separate from the emotions, which are suspect. Society progresses to the extent that reason can suppress the passions.
This has created a distortion in our culture. We emphasize things that are rational and conscious and are inarticulate about the processes down below. We are really good at talking about material things but bad at talking about emotion.
When we raise our kids, we focus on the traits measured by grades and SAT scores. But when it comes to the most important things like character and how to build relationships, we often have nothing to say. Many of our public policies are proposed by experts who are comfortable only with correlations that can be measured, appropriated and quantified, and ignore everything else.
Yet while we are trapped within this amputated view of human nature, a richer and deeper view is coming back into view. It is being brought to us by researchers across an array of diverse fields: neuroscience, psychology, sociology, behavioral economics and so on.
This growing, dispersed body of research reminds us of a few key insights. First, the unconscious parts of the mind are most of the mind, where many of the most impressive feats of thinking take place. Second, emotion is not opposed to reason; our emotions assign value to things and are the basis of reason. Finally, we are not individuals who form relationships. We are social animals, deeply interpenetrated with one another, who emerge out of relationships.
This body of research suggests the French enlightenment view of human nature, which emphasized individualism and reason, was wrong. The British enlightenment, which emphasized social sentiments, was more accurate about who we are. It suggests we are not divided creatures. We don’t only progress as reason dominates the passions. We also thrive as we educate our emotions.
When you synthesize this research, you get different perspectives on everything from business to family to politics. You pay less attention to how people analyze the world but more to how they perceive and organize it in their minds. You pay a bit less attention to individual traits and more to the quality of relationships between people.
You get a different view of, say, human capital. Over the past few decades, we have tended to define human capital in the narrow way, emphasizing I.Q., degrees, and professional skills. Those are all important, obviously, but this research illuminates a range of deeper talents, which span reason and emotion and make a hash of both categories:
Attunement: the ability to enter other minds and learn what they have to offer.
Equipoise: the ability to serenely monitor the movements of one’s own mind and correct for biases and shortcomings.
Metis: the ability to see patterns in the world and derive a gist from complex situations.
Sympathy: the ability to fall into a rhythm with those around you and thrive in groups.
Limerence: This isn’t a talent as much as a motivation. The conscious mind hungers for money and success, but the unconscious mind hungers for those moments of transcendence when the skull line falls away and we are lost in love for another, the challenge of a task or the love of God. Some people seem to experience this drive more powerfully than others.
When Sigmund Freud came up with his view of the unconscious, it had a huge effect on society and literature. Now hundreds of thousands of researchers are coming up with a more accurate view of who we are. Their work is scientific, but it directs our attention toward a new humanism. It’s beginning to show how the emotional and the rational are intertwined.
I suspect their work will have a giant effect on the culture. It’ll change how we see ourselves. Who knows, it may even someday transform the way our policy makers see the world.
http://www.nytimes.com/2011/03/08/opini ... emc=tha212
Educating our intellect and emotions and our 9 senses (not 5)
I thought I would share with you a transformative educational reform and suggest you also see the Ken Robinson video on the education paradigm. In particular, note what Mawlana Hazar Imam is saying on this subject too.
"Ken Robinson a renowned educationalist – He advocates transformative educational reform to enable and empower creativity. There appears to be a misplaced fear to let children think freely and choose what is best for them rather than what our current education and learning mindset imposes and thinks or wishes is best for them. For example we want our children to have good academic knowledge, a degree and a career. These wishes are all motivated more by economic benefits and security. They reflect our thinking and our mindset. Therefore they reflect our childrens mindset too. The present mindset traditionally limits or inhibits creativity. Early learning and many nursery schools today have hour or so a week, where children can pick and choose what class they attend. Communications skills are taught to us after the mindset has developed.
http://www.youtube.com/watch?v=iG9CE55wbtY;
The International Baccalaureate (IB) education model acknowledges the need and the challenges, but does not specifically address this.
The five essential elements of an IB education — "concepts, knowledge, skills, attitudes, action" — are incorporated into this framework, "so that students are given the opportunity to: gain knowledge that is relevant and of global significance; develop an understanding of concepts which allows them to make connections throughout their learning; acquire transdisciplinary and disciplinary skills; develop attitudes that will lead to international-mindedness; take action as a consequence of their learning." (Bringing these elements together coherently with out-of-school and IB life-learning experiences is what this proposition offers.)
Teaching and giving knowledge and information to students is part of the challenge of interaction and constructive engagement in their daily lives: doing so with respect, understanding and comfort while at the same time remaining spontaneous and understandable to their reason.
The International Baccalaureate has been operating for 30 years. This niche solution is complementary and necessary.
His Highness the Aga Khan operates over 500 schools, universities and academies of excellence in more than 25 countries. He was invited to give the 10th annual LaFontaine-Baldwin Lecture on 15 October 2010.
He said:
“…institutional reforms will have lasting meaning only when there is a social mindset to sustain them. There is a profound reciprocal relationship between institutional and cultural variables. How we think shapes our institutions. And then our institutions shape us. How we see the past is an important part of this mindset. As we go forward, we hope we can discern more predictably and pre-empt more effectively those conditions which lead to conflict among peoples. And we also hope that we can advance those institutions and those mindsets which foster constructive engagement. The world we seek is not a world where difference is erased, but where difference can be a powerful force for good, helping us to fashion a new sense of cooperation and coherence in our world, and to build together a better life for all”
He places significant importance on changing the mindset to address the critical challenges of creativity, moral authority, pluralism and conflict between peoples arising from a clash of ignorance. The Aga Khan Development Network has entered into an agreement with the International Baccalaureate as part of its education strategy for the next decade or two.
The link to the Aga Khan's visionary Lecture. http://www.akdn.org/Content/1018/His-Hi ... in-Lecture
We are transitioning, as we speak, from the knowledge society into the design age, where access to information and knowledge is both excessive and instant. Developing and fostering a new mindset is critical to successfully negotiating the evolving challenges of the design age.
"Ken Robinson a renowned educationalist – He advocates transformative educational reform to enable and empower creativity. There appears to be a misplaced fear to let children think freely and choose what is best for them rather than what our current education and learning mindset imposes and thinks or wishes is best for them. For example we want our children to have good academic knowledge, a degree and a career. These wishes are all motivated more by economic benefits and security. They reflect our thinking and our mindset. Therefore they reflect our childrens mindset too. The present mindset traditionally limits or inhibits creativity. Early learning and many nursery schools today have hour or so a week, where children can pick and choose what class they attend. Communications skills are taught to us after the mindset has developed.
http://www.youtube.com/watch?v=iG9CE55wbtY;
International Baccalaureate (IB) education model acknowledges the need and the challenges but does not specifically address this.
The five IB education essential elements— “concepts, knowledge, skills, attitudes, action—are incorporated into this framework, so that students are given the opportunity to: gain knowledge that is relevant and of global significance. develop an understanding of concepts which allows them to make connections throughout their learning. acquire transdisciplinary and disciplinary skills. develop attitudes that will lead to international-mindedness. take action as a consequence of their learning.” (Bringing them and ex School and IB life learning experiences together coherently is what this proposition offers)
Teaching and giving knowledge and information to students, is a part of the challenges of interaction and constructive engagement in their daily lives, and to do so with respect, understanding and comfort whilst at the same time remaining spontaneous and understandable to their reason.
International Baccalaureate has been operating for 30 years. This niche solution is complimentary and necessary.
His Highness The Aga Khan operates over 500 schools, universities and academies of excellence in over 25 countries. He was invited and gave the 10th annual La Fontaine Baldwin Lecture on 15th October 2010.
He said
“…institutional reforms will have lasting meaning only when there is a social mindset to sustain them. There is a profound reciprocal relationship between institutional and cultural variables. How we think shapes our institutions. And then our institutions shape us. How we see the past is an important part of this mindset. As we go forward, we hope we can discern more predictably and pre-empt more effectively those conditions which lead to conflict among peoples. And we also hope that we can advance those institutions and those mindsets which foster constructive engagement. The world we seek is not a world where difference is erased, but where difference can be a powerful force for good, helping us to fashion a new sense of cooperation and coherence in our world, and to build together a better life for all”
He places significant importance on changing the mindset to address the critical challenges of cretivitiy, moral authority, pluralism and conflict between people coming from a clash of ignorance. The Aga Khan development network have entered into an agreement with International Baccalaureate as a part of their education strategy for the next decade or two
The link to the Aga Khan's visionary Lecture. http://www.akdn.org/Content/1018/His-Hi ... in-Lecture
We are transitioning as we speak from the knowledge society into the design age where access to information and knowledge is both excessive and instant. Developing and fostering a new mindset is critical to successfully negotiate the evolving challenges of the design age.
Education our Intellect andemotions and our 9 senses (not 5)
I though I will share with you a tranformative educational reform and suggest you also see the Ken Robinson video on the eduction Paradim. In partiicular what Mawlala Hazar Imam is saying on this subject too.
"Ken Robinson a renowned educationalist – He advocates transformative educational reform to enable and empower creativity. There appears to be a misplaced fear to let children think freely and choose what is best for them rather than what our current education and learning mindset imposes and thinks or wishes is best for them. For example we want our children to have good academic knowledge, a degree and a career. These wishes are all motivated more by economic benefits and security. They reflect our thinking and our mindset. Therefore they reflect our childrens mindset too. The present mindset traditionally limits or inhibits creativity. Early learning and many nursery schools today have hour or so a week, where children can pick and choose what class they attend. Communications skills are taught to us after the mindset has developed.
http://www.youtube.com/watch?v=iG9CE55wbtY;
International Baccalaureate (IB) education model acknowledges the need and the challenges but does not specifically address this.
The five IB education essential elements— “concepts, knowledge, skills, attitudes, action—are incorporated into this framework, so that students are given the opportunity to: gain knowledge that is relevant and of global significance. develop an understanding of concepts which allows them to make connections throughout their learning. acquire transdisciplinary and disciplinary skills. develop attitudes that will lead to international-mindedness. take action as a consequence of their learning.” (Bringing them and ex School and IB life learning experiences together coherently is what this proposition offers)
Teaching and giving knowledge and information to students, is a part of the challenges of interaction and constructive engagement in their daily lives, and to do so with respect, understanding and comfort whilst at the same time remaining spontaneous and understandable to their reason.
International Baccalaureate has been operating for 30 years. This niche solution is complimentary and necessary.
His Highness The Aga Khan operates over 500 schools, universities and academies of excellence in over 25 countries. He was invited and gave the 10th annual La Fontaine Baldwin Lecture on 15th October 2010.
He said
“…institutional reforms will have lasting meaning only when there is a social mindset to sustain them. There is a profound reciprocal relationship between institutional and cultural variables. How we think shapes our institutions. And then our institutions shape us. How we see the past is an important part of this mindset. As we go forward, we hope we can discern more predictably and pre-empt more effectively those conditions which lead to conflict among peoples. And we also hope that we can advance those institutions and those mindsets which foster constructive engagement. The world we seek is not a world where difference is erased, but where difference can be a powerful force for good, helping us to fashion a new sense of cooperation and coherence in our world, and to build together a better life for all”
He places significant importance on changing the mindset to address the critical challenges of cretivitiy, moral authority, pluralism and conflict between people coming from a clash of ignorance. The Aga Khan development network have entered into an agreement with International Baccalaureate as a part of their education strategy for the next decade or two
The link to the Aga Khan's visionary Lecture. http://www.akdn.org/Content/1018/His-Hi ... in-Lecture
We are transitioning as we speak from the knowledge society into the design age where access to information and knowledge is both excessive and instant. Developing and fostering a new mindset is critical to successfully negotiate the evolving challenges of the design age.
"Ken Robinson a renowned educationalist – He advocates transformative educational reform to enable and empower creativity. There appears to be a misplaced fear to let children think freely and choose what is best for them rather than what our current education and learning mindset imposes and thinks or wishes is best for them. For example we want our children to have good academic knowledge, a degree and a career. These wishes are all motivated more by economic benefits and security. They reflect our thinking and our mindset. Therefore they reflect our childrens mindset too. The present mindset traditionally limits or inhibits creativity. Early learning and many nursery schools today have hour or so a week, where children can pick and choose what class they attend. Communications skills are taught to us after the mindset has developed.
http://www.youtube.com/watch?v=iG9CE55wbtY;
International Baccalaureate (IB) education model acknowledges the need and the challenges but does not specifically address this.
The five IB education essential elements— “concepts, knowledge, skills, attitudes, action—are incorporated into this framework, so that students are given the opportunity to: gain knowledge that is relevant and of global significance. develop an understanding of concepts which allows them to make connections throughout their learning. acquire transdisciplinary and disciplinary skills. develop attitudes that will lead to international-mindedness. take action as a consequence of their learning.” (Bringing them and ex School and IB life learning experiences together coherently is what this proposition offers)
Teaching and giving knowledge and information to students, is a part of the challenges of interaction and constructive engagement in their daily lives, and to do so with respect, understanding and comfort whilst at the same time remaining spontaneous and understandable to their reason.
International Baccalaureate has been operating for 30 years. This niche solution is complimentary and necessary.
His Highness The Aga Khan operates over 500 schools, universities and academies of excellence in over 25 countries. He was invited and gave the 10th annual La Fontaine Baldwin Lecture on 15th October 2010.
He said
“…institutional reforms will have lasting meaning only when there is a social mindset to sustain them. There is a profound reciprocal relationship between institutional and cultural variables. How we think shapes our institutions. And then our institutions shape us. How we see the past is an important part of this mindset. As we go forward, we hope we can discern more predictably and pre-empt more effectively those conditions which lead to conflict among peoples. And we also hope that we can advance those institutions and those mindsets which foster constructive engagement. The world we seek is not a world where difference is erased, but where difference can be a powerful force for good, helping us to fashion a new sense of cooperation and coherence in our world, and to build together a better life for all”
He places significant importance on changing the mindset to address the critical challenges of cretivitiy, moral authority, pluralism and conflict between people coming from a clash of ignorance. The Aga Khan development network have entered into an agreement with International Baccalaureate as a part of their education strategy for the next decade or two
The link to the Aga Khan's visionary Lecture. http://www.akdn.org/Content/1018/His-Hi ... in-Lecture
We are transitioning as we speak from the knowledge society into the design age where access to information and knowledge is both excessive and instant. Developing and fostering a new mindset is critical to successfully negotiate the evolving challenges of the design age.
March 28, 2011
Tools for Thinking
By DAVID BROOKS
A few months ago, Steven Pinker of Harvard asked a smart question: What scientific concept would improve everybody’s cognitive toolkit?
The good folks at Edge.org organized a symposium, and 164 thinkers contributed suggestions. John McWhorter, a linguist at Columbia University, wrote that people should be more aware of path dependence. This refers to the notion that often “something that seems normal or inevitable today began with a choice that made sense at a particular time in the past, but survived despite the eclipse of the justification for that choice.”
For instance, typewriters used to jam if people typed too fast, so the manufacturers designed a keyboard that would slow typists. We no longer have typewriters, but we are stuck with the letter arrangements of the qwerty keyboard.
Path dependence explains many linguistic patterns and mental categories, McWhorter continues. Many people worry about the way e-mail seems to degrade writing skills. But there is nothing about e-mail that forbids people from using the literary style of 19th-century letter writers. In the 1960s, language became less formal, and now anybody who uses the old manner is regarded as an eccentric.
Evgeny Morozov, the author of “The Net Delusion,” nominated the Einstellung Effect, the idea that we often try to solve problems by using solutions that worked in the past instead of looking at each situation on its own terms. This effect is especially powerful in foreign affairs, where each new conflict is viewed through the prism of Vietnam or Munich or the cold war or Iraq.
Daniel Kahneman of Princeton University writes about the Focusing Illusion, which holds that “nothing in life is as important as you think it is while you are thinking about it.” He continues: “Education is an important determinant of income — one of the most important — but it is less important than most people think. If everyone had the same education, the inequality of income would be reduced by less than 10 percent. When you focus on education you neglect the myriad of other factors that determine income. The differences of income among people who have the same education are huge.”
Joshua Greene, a philosopher and neuroscientist at Harvard University, has a brilliant entry on Supervenience. Imagine a picture on a computer screen of a dog sitting in a rowboat. It can be described as a picture of a dog, but at a different level it can be described as an arrangement of pixels and colors. The relationship between the two levels is asymmetric. The same image can be displayed at different sizes with different pixels. The high-level properties (dogness) supervene the low-level properties (pixels).
Supervenience, Greene continues, helps explain things like the relationship between science and the humanities. Humanists fear that scientists are taking over their territory and trying to explain everything. But new discoveries about the brain don’t explain Macbeth. The products of the mind supervene the mechanisms of the brain. The humanities can be informed by the cognitive sciences even as they supervene them.
If I were presumptuous enough to nominate a few entries, I’d suggest the Fundamental Attribution Error: Don’t try to explain by character traits behavior that is better explained by context.
I’d also nominate the distinction between emotion and arousal. There’s a general assumption that emotional people are always flying off the handle. That’s not true. We would also say that Emily Dickinson was emotionally astute. As far as I know, she did not go around screaming all the time. It would be useful if we could distinguish between the emotionality of Dickinson and the arousal of the talk-show jock.
Public life would be vastly improved if people relied more on the concept of emergence. Many contributors to the Edge symposium hit on this point.
We often try to understand problems by taking apart and studying their constituent parts. But emergent problems can’t be understood this way. Emergent systems are ones in which many different elements interact. The pattern of interaction then produces a new element that is greater than the sum of the parts, which then exercises a top-down influence on the constituent elements.
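To make that mechanism concrete, here is a minimal sketch, invented for this note rather than taken from the column or the Edge entries: each agent follows only a local rule (adopt the majority view among itself and its two neighbors), yet the population settles into contiguous blocks of agreement, a group-level pattern no individual rule mentions and one that then constrains what any individual can do.

import random

# A toy, asynchronous local-majority model on a ring of 60 agents. No rule
# mentions "blocks", yet blocks of agreement (no agent left without a
# like-minded neighbor) emerge and then lock individuals in place.
# Illustrative only; the agents, rule and parameters are invented for this sketch.
random.seed(1)
N = 60
opinions = [random.choice("AB") for _ in range(N)]  # random mix of two views

def majority(i):
    trio = [opinions[(i - 1) % N], opinions[i], opinions[(i + 1) % N]]
    return max(set(trio), key=trio.count)

print("before:", "".join(opinions))
for _ in range(5000):                 # update randomly chosen agents, one at a time
    i = random.randrange(N)
    opinions[i] = majority(i)
print("after: ", "".join(opinions))
# After enough updates every agent agrees with at least one neighbor: the
# random mix has organized itself into contiguous blocks of A's and B's.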
Culture is an emergent system. A group of people establishes a pattern of interaction. And once that culture exists, it influences how the individuals in it behave. An economy is an emergent system. So is political polarization, rising health care costs and a bad marriage.
Emergent systems are bottom-up and top-down simultaneously. They have to be studied differently, as wholes and as nested networks of relationships. We still try to address problems like poverty and Islamic extremism by trying to tease out individual causes. We might make more headway if we thought emergently.
We’d certainly be better off if everyone sampled the fabulous Edge symposium, which, like the best in science, is modest and daring all at once. 
http://www.nytimes.com/2011/03/29/opini ... emc=tha212
*****
March 29, 2011, 1:40 pm
More Tools For Thinking
In Tuesday’s column I describe a symposium over at Edge.org on what scientific concepts everyone’s cognitive toolbox should hold. There were many superb entries in that symposium, and I only had space to highlight a few, so I’d like to mention a few more here.
Before I do, let me just recommend that symposium for the following reasons. First, it will give you a good survey of what many leading scientists, especially those who study the mind and society, are thinking about right now. You’ll also be struck by the tone. There is an acute awareness, in entry after entry, of how little we know and how complicated things are. You’ll come away with a favorable impression of the epistemological climate in this subculture.
Here though, are a few more concepts worth using in everyday life:
Clay Shirky nominates the Pareto Principle. We have the idea in our heads that most distributions fall along a bell curve (most people are in the middle). But this is not how the world is organized in sphere after sphere. The top 1 percent of the population control 35 percent of the wealth. The top 2 percent of Twitter users send 60 percent of the messages. The top 20 percent of workers in any company will produce a disproportionate share of the value. Shirky points out that these distributions are regarded as anomalies. They are not.
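Shirky's point can be checked numerically: draw samples from a bell curve and from a heavy-tailed (Pareto) distribution and compare the share of the total held by the top 1 percent. The snippet below is an illustration, not a model of actual wealth or Twitter data; the parameters (an IQ-like normal and a Pareto shape of 1.16, the value often quoted for the 80/20 rule) are assumptions.

import random

random.seed(42)
N = 100_000

def top_share(values, fraction=0.01):
    # Share of the total held by the top `fraction` of values.
    values = sorted(values, reverse=True)
    k = int(len(values) * fraction)
    return sum(values[:k]) / sum(values)

bell_curve = [random.gauss(100, 15) for _ in range(N)]       # IQ-like scores
heavy_tail = [random.paretovariate(1.16) for _ in range(N)]  # Pareto, shape 1.16

print(f"top 1% share, bell curve: {top_share(bell_curve):.1%}")
print(f"top 1% share, heavy tail: {top_share(heavy_tail):.1%}")
# Typical output: roughly 1-2% for the bell curve versus tens of percent for
# the Pareto draw -- the concentration Shirky says we should stop treating
# as an anomaly.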
Jonathan Haidt writes that “humans are the giraffes of altruism.” We think of evolution as a contest for survival among the fittest. Too often, “any human or animal act that appears altruistic has been explained away as selfishness in disguise.” But evolution operates on multiple levels. We survive because we struggle to be the fittest and also because we are really good at cooperation.
A few of the physicists mention the concept of duality, the idea that it is possible to describe the same phenomenon truthfully from two different perspectives. The most famous duality in physics is wave-particle duality, which states that matter has both wave-like and particle-like properties. Stephon Alexander of Haverford says that these sorts of dualities are more common than you think, beyond, say, the world of quantum physics.
Douglas T. Kenrick nominates “subselves.” This is the idea that we are not just one personality, but we have many subselves that get aroused by different cues. We use very different mental processes to learn different things and, I’d add, we have many different learning styles that change minute by minute.
Helen Fisher, the great researcher into love and romance, has a provocative entry on “temperament dimensions.” She writes that we have four broad temperament constellations. One, built around the dopamine system, regulates enthusiasm for risk. A second, structured around the serotonin system, regulates sociability. A third, organized around the prenatal testosterone system, regulates attention to detail and aggressiveness. A fourth, organized around the estrogen and oxytocin systems, regulates empathy and verbal fluency.
This is an interesting schema to explain temperament. It would be interesting to see others in the field evaluate whether this is the best way to organize our thinking about our permanent natures.
Finally, Paul Kedrosky of the Kauffman Foundation nominates “Shifting Baseline Syndrome.” This one hit home for me because I was just at a McDonald’s and guiltily ordered a Quarter Pounder With Cheese. I remember when these sandwiches were first introduced and they looked huge at the time. A quarter pound of meat on one sandwich seemed gargantuan. But when my burger arrived and I opened the box, the thing looked puny. That’s because all the other sandwiches on the menu were things like double quarter pounders. My baseline of a normal burger had shifted. Kedrosky shows how these shifts distort our perceptions in all sorts of spheres.
There are interesting stray sentences throughout the Edge symposium. For example, one writer notes, “Who would be crazy enough to forecast in 2000 that by 2010 almost twice as many people in India would have access to cell phones than latrines?”
http://brooks.blogs.nytimes.com/2011/03 ... n&emc=tyb1
Metaphors are central to our thought life, and thus are worth a closer look.
April 11, 2011
Poetry for Everyday Life
By DAVID BROOKS
Here’s a clunky but unremarkable sentence that appeared in the British press before the last national election: “Britain’s recovery from the worst recession in decades is gaining traction, but confused economic data and the high risk of hung Parliament could yet snuff out its momentum.”
The sentence is only worth quoting because in 28 words it contains four metaphors. Economies don’t really gain traction, like a tractor. Momentum doesn’t literally get snuffed out, like a cigarette. We just use those metaphors, without even thinking about it, as a way to capture what is going on.
In his fine new book, “I Is an Other,” James Geary reports on linguistic research suggesting that people use a metaphor every 10 to 25 words. Metaphors are not rhetorical frills at the edge of how we think, Geary writes. They are at the very heart of it.
George Lakoff and Mark Johnson, two of the leading researchers in this field, have pointed out that we often use food metaphors to describe the world of ideas. We devour a book, try to digest raw facts and attempt to regurgitate other people’s ideas, even though they might be half-baked.
When talking about relationships, we often use health metaphors. A friend might be involved in a sick relationship. Another might have a healthy marriage.
When talking about argument, we use war metaphors. When talking about time, we often use money metaphors. But when talking about money, we rely on liquid metaphors. We dip into savings, sponge off friends or skim funds off the top. Even the job title stockbroker derives from the French word brocheur, the tavern worker who tapped the kegs of beer to get the liquidity flowing.
The psychologist Michael Morris points out that when the stock market is going up, we tend to use agent metaphors, implying the market is a living thing with clear intentions. We say the market climbs or soars or fights its way upward. When the market goes down, on the other hand, we use object metaphors, implying it is inanimate. The market falls, plummets or slides.
Most of us, when asked to stop and think about it, are by now aware of the pervasiveness of metaphorical thinking. But in the normal rush of events, we often see straight through metaphors, unaware of how they refract perceptions. So it’s probably important to pause once a month or so to pierce the illusion that we see the world directly. It’s good to pause to appreciate how flexible and tenuous our grip on reality actually is.
Metaphors help compensate for our natural weaknesses. Most of us are not very good at thinking about abstractions or spiritual states, so we rely on concrete or spatial metaphors to (imperfectly) do the job. A lifetime is pictured as a journey across a landscape. A person who is sad is down in the dumps, while a happy fellow is riding high.
Most of us are not good at understanding new things, so we grasp them imperfectly by relating them metaphorically to things that already exist. That’s a “desktop” on your computer screen.
Metaphors are things we pass down from generation to generation, which transmit a culture’s distinct way of seeing and being in the world. In his superb book “Judaism: A Way of Being,” David Gelernter notes that Jewish thought uses the image of a veil to describe how Jews perceive God — as a presence to be sensed but not seen, which is intimate and yet apart.
Judaism also emphasizes the metaphor of separateness as a path to sanctification. The Israelites had to separate themselves from Egypt. The Sabbath is separate from the week. Kosher food is separate from the nonkosher. The metaphor describes a life in which one moves from nature and conventional society to the sacred realm.
To be aware of the central role metaphors play is to be aware of how imprecise our most important thinking is. It’s to be aware of the constant need to question metaphors with data — to separate the living from the dead ones, and the authentic metaphors that seek to illuminate the world from the tinny advertising and political metaphors that seek to manipulate it.
Most important, being aware of metaphors reminds you of the central role that poetic skills play in our thought. If much of our thinking is shaped and driven by metaphor, then the skilled thinker will be able to recognize patterns, blend patterns, apprehend the relationships and pursue unexpected likenesses.
Even the hardest of the sciences depend on a foundation of metaphors. To be aware of metaphors is to be humbled by the complexity of the world, to realize that deep in the undercurrents of thought there are thousands of lenses popping up between us and the world, and that we’re surrounded at all times by what Steven Pinker of Harvard once called “pedestrian poetry.” 
http://www.nytimes.com/2011/04/12/opini ... emc=tha212
May 16, 2011
Nice Guys Finish First
By DAVID BROOKS
The story of evolution, we have been told, is the story of the survival of the fittest. The strong eat the weak. The creatures that adapt to the environment pass on their selfish genes. Those that do not adapt become extinct.
In this telling, we humans are like all other animals — deeply and thoroughly selfish. We spend our time trying to maximize our outcomes — competing for status, wealth and mating opportunities. Behavior that seems altruistic is really self-interest in disguise. Charity and fellowship are the cultural drapery atop the iron logic of nature.
All this is partially true, of course. Yet every day, it seems, a book crosses my desk, emphasizing a different side of the story. These are books about sympathy, empathy, cooperation and collaboration, written by scientists, evolutionary psychologists, neuroscientists and others. It seems there’s been a shift among those who study this ground, yielding a more nuanced, and often gentler picture of our nature.
The most modest of these is “SuperCooperators” by Martin Nowak with Roger Highfield. Nowak uses higher math to demonstrate that “cooperation and competition are forever entwined in a tight embrace.”
In pursuing our self-interested goals, we often have an incentive to repay kindness with kindness, so others will do us favors when we’re in need. We have an incentive to establish a reputation for niceness, so people will want to work with us. We have an incentive to work in teams, even against our short-term self-interest because cohesive groups thrive. Cooperation is as central to evolution as mutation and selection, Nowak argues.
But much of the new work moves beyond incentives, narrowly understood. Michael Tomasello, the author of “Why We Cooperate,” devised a series of tests that he could give to chimps and toddlers in nearly identical form. He found that at an astonishingly early age kids begin to help others, and to share information, in ways that adult chimps hardly ever do.
An infant of 12 months will inform others about something by pointing. Chimpanzees and other apes do not helpfully inform each other about things. Infants share food readily with strangers. Chimpanzees rarely even offer food to their own offspring. If a 14-month-old child sees an adult having difficulty — like being unable to open a door because her hands are full — the child will try to help.
Tomasello’s point is that the human mind veered away from that of the other primates. We are born ready to cooperate, and then we build cultures to magnify this trait.
In “Born to Be Good,” Dacher Keltner describes the work he and others are doing on the mechanisms of empathy and connection, involving things like smiles, blushes, laughter and touch. When friends laugh together, their laughs start out as separate vocalizations, but they merge and become intertwined sounds. It now seems as though laughter evolved millions of years ago, long before vowels and consonants, as a mechanism to build cooperation. It is one of the many tools in our inborn toolbox of collaboration.
In one essay, Keltner cites the work of the Emory University neuroscientists James Rilling and Gregory Berns. They found that the act of helping another person triggers activity in the caudate nucleus and anterior cingulate cortex regions of the brain, the parts involved in pleasure and reward. That is, serving others may produce the same sort of pleasure as gratifying a personal desire.
In his book, “The Righteous Mind,” to be published early next year, Jonathan Haidt joins Edward O. Wilson, David Sloan Wilson, and others who argue that natural selection takes place not only when individuals compete with other individuals, but also when groups compete with other groups. Both competitions are examples of the survival of the fittest, but when groups compete, it’s the cohesive, cooperative, internally altruistic groups that win and pass on their genes. The idea of “group selection” was heresy a few years ago, but there is momentum behind it now.
Human beings, Haidt argues, are “the giraffes of altruism.” Just as giraffes got long necks to help them survive, humans developed moral minds that help them and their groups succeed. Humans build moral communities out of shared norms, habits, emotions and gods, and then will fight and even sometimes die to defend their communities.
Different interpretations of evolution produce different ways of analyzing the world. The selfish-competitor model fostered the utility-maximizing model that is so prevalent in the social sciences, particularly economics. The new, more cooperative view will complicate all that.
But the big upshot is this: For decades, people tried to devise a rigorous “scientific” system to analyze behavior that would be divorced from morality. But if cooperation permeates our nature, then so does morality, and there is no escaping ethics, emotion and religion in our quest to understand who we are and how we got this way.
http://www.nytimes.com/2011/05/17/opini ... emc=tha212
Could Conjoined Twins Share a Mind?
By SUSAN DOMINUS
Published: May 25, 2011
This is a lengthy article, but it includes a multimedia feature that summarizes the whole and contains the essential ideas...
http://www.nytimes.com/2011/05/29/magaz ... -mind.html
The article below illuminates the role of intellect in relation to faith, which MHI so often mentions....
June 16, 2011, 9:07 pm
Epistemology and the End of the World
By GARY GUTTING
In the coming weeks, The Stone will feature occasional posts by Gary Gutting, a professor of philosophy at the University of Notre Dame, that apply critical thinking to information and events that have appeared in the news.
Apart from its entertainment value, Harold Camping’s ill-advised prediction of the rapture last month attracted me as a philosopher for its epistemological interest. Epistemology is the study of knowledge, its nature, scope and limits. Camping claimed to know, with certainty and precision, that on May 21, 2011, a series of huge earthquakes would devastate the Earth and be followed by the taking up (rapture) of the saved into heaven. No sensible person could have thought that he knew this. Knowledge requires justification; that is, some rationally persuasive account of why we know what we claim to know. Camping’s confused efforts at Biblical interpretation provided no justification for his prediction. Even if, by some astonishing fluke, he had turned out to be right, he still would not have known the rapture was coming.
The recent failed prediction of the rapture has done nothing to shake the certainty of believers.
Of particular epistemological interest was the rush to dissociate themselves from Camping by Christians who believe that the rapture will occur but specify no date for it. Quoting Jesus’s saying that “of that day and hour no one knows,” they rightly saw their view as unrefuted by Camping’s failed prediction. What they did not notice is that the reasons for rejecting Camping’s prediction also call into question their claim that the rapture will occur at some unspecified future time.
What was most disturbing about Camping was his claim to be certain that the rapture would occur on May 21. Perhaps he had a subjective feeling of certainty about his prediction, but he had no good reasons to think that this feeling was reliable. Similarly, you may feel certain that you will get the job, but this does not make it (objectively) certain that you will. For that you need reasons that justify your feeling.
There are many Christians who are as subjectively certain as Camping about the rapture, except that they do not specify a date. They have a feeling of total confidence that the rapture will someday occur. But do they, unlike Camping, have good reasons behind their feeling of certainty? Does the fact that they leave the date of the rapture unspecified somehow give them the good reason for their certainty that Camping lacked?
An entirely unspecified date has the advantage of making their prediction immune to refutation. The fact that the rapture hasn’t occurred will never prove that it won’t occur in the future. A sense that they will never be refuted may well increase the subjective certainty of those who believe in the rapture, but this does nothing to provide the good reasons needed for objective certainty. After the fact, Camping himself moved toward making his prediction irrefutable, saying that May 21 had been an “invisible judgment day,” a spiritual rather than a physical rapture. He kept to his prediction of a final, physical end of the world on October 21, 2011, but no doubt this prediction will also be open to reinterpretation.
Believers in the rapture will likely respond that talk of good reasons and objective certainty assumes a context of empirical (scientific) truth, and ignores the fact that their beliefs are based not on science but on faith. They are certain in their belief that the rapture will occur, even though they don’t know it in the scientific sense.
But Camping too would claim that his certainty that the rapture would occur on May 21, 2011, was a matter of faith. He had no scientific justification for his prediction, so what could have grounded his certainty if not his faith? But the certainty of his faith, we all agree, was merely subjective. Objective certainty about a future event requires good reasons.
Given their faith in the Bible, believers in the rapture do offer what they see as good reasons for their view as opposed to Camping’s. They argue that the Bible clearly predicts a temporally unspecified rapture, whereas Camping’s specific date requires highly questionable numerological reasoning. But many Christians—including many of the best Biblical scholars—do not believe that the Bible predicts a historical rapture. Even those who accept the traditional doctrine of a Second Coming of Christ, preceding the end of the world, often reject the idea of a taking up of the saved into heaven, followed by a period of dreadful tribulations on Earth for those who are left behind. Among believers themselves, a historical rapture is at best a highly controversial interpretation, not an objectively established certainty.
The case against Camping was this: His subjective certainty about the rapture required objectively good reasons to expect its occurrence; he provided no such reasons, so his claim was not worthy of belief. Christians who believe in a temporally unspecified rapture agree with this argument. But the same argument undermines their own belief in the rapture. It’s not just that “no one knows the day and hour” of the rapture. No one knows that it is going to happen at all.
http://opinionator.blogs.nytimes.com/20 ... ndex.jsonp
June 29, 2011, 2:31 pm
Argument, Truth and the Social Side of Reasoning
By GARY GUTTING
Philosophers rightly think of themselves as experts on reasoning. After all, it was a philosopher, Aristotle, who developed the science of logic. But psychologists have also had some interesting things to say about the subject. A fascinating paper by Dan Sperber and Hugo Mercier has recently generated a lot of discussion.
Reasoning is most problematic when carried out by isolated individuals and most effective when carried out in social groups.
The headline of an article in The Times about the paper — echoed on blogs and other sites — was “Reason Seen More as Weapon Than Path to Truth,” a description that implied that reason is not, as we generally think, directed to attaining truth, but rather to winning arguments. Many readers of the Times article thought that this position amounted to a self-destructive denial of truth. The article itself (though perhaps not the abstract) suggests a more nuanced view, as the authors tried to explain in replies to criticism. In any case, we can develop an interesting view of the relation between argument and truth by starting from the popular reading and criticism of the article.
Sperber and Mercier begin from well-established facts about our deep-rooted tendencies to make mistakes in our reasoning. We have a very hard time sticking to rules of deductive logic, and we constantly make basic errors in statistical reasoning. Most importantly, we are strongly inclined to “confirmation-bias”: we systematically focus on data that support a view we hold and ignore data that count against it.
These facts suggest that our evolutionary development has not done an especially good job of making us competent reasoners. Sperber and Mercier, however, point out that this is true only if the point of reasoning is to draw true conclusions. Fallacious reasoning, especially reasoning that focuses on what supports our views and ignores what counts against them, is very effective for the purpose of winning arguments with other people. So, they suggest, it makes sense to think that the evolutionary point of human reasoning is to win arguments, not to reach the truth.
This formulation led critics to objections that echo traditional philosophical arguments against the skeptical rejection of truth. Do Sperber and Mercier think that the point of their own reasoning is not truth but winning an argument? If not, then their theory is falsified by their own reasoning. If so, they are merely trying to win an argument, and there’s no reason why scientists — who are interested in truth, not just winning arguments — should pay any attention to what they say. Sperber and Mercier seem caught in a destructive dilemma, logically damned if they do and damned if they don’t.
Philosophical thinking has led to this dilemma, but a bit more philosophy shows a way out. The root of the dilemma is the distinction between seeking the truth and winning an argument. The distinction makes sense for cases where someone does not care about knowing the truth and argues only to convince other people of something, whether or not it’s true. But, suppose my goal is simply to know the truth. How do I go about achieving this knowledge? Plato long ago pointed out that it is not enough just to believe what is true. Suppose I believe that there are an odd number of galaxies in the universe and in fact there are. Still, unless I have adequate support for my belief, I cannot be said to know it. It’s just an unsupported opinion. Knowing the truth requires not just true belief but also justification for the belief.
Rational agreement, properly arrived at, is the best possible justification of a claim to truth.
But how do I justify a belief and so come to know that it’s true? There are competing philosophical answers to this question, but one fits particularly well with Sperber and Mercier’s approach. This is the view that justification is a matter of being able to convince other people that a claim is correct, a view held in various ways by the classic American pragmatists (Peirce, James and Dewey) and, in recent years, by Richard Rorty and Jürgen Habermas.
The key point is that justification — and therefore knowledge of the truth — is a social process. This need not mean that claims are true because we come to rational agreement about them. But such agreement, properly arrived at, is the best possible justification of a claim to truth. For example, our best guarantee that stars are gigantic masses of hot gas is that scientists have developed arguments for this claim that almost anyone who looks into the matter will accept.
This pragmatic view understands seeking the truth as a special case of trying to win an argument: not winning by coercing or tricking people into agreement, but by achieving agreement through honest arguments. The important practical conclusion is that finding the truth does require winning arguments, but not in the sense that my argument defeats yours. Rather, we find an argument that defeats all contrary arguments. Sperber and Mercier in fact approach this philosophical view when they argue that, on their account, reasoning is most problematic when carried out by isolated individuals and is most effective when carried out in social groups.
The pragmatic philosophy of justification makes it clear why, even if we start from the popular reading of their article, Sperber and Mercier’s psychological account of reasoning need not fall victim to the claim that it is a self-destructive skepticism. Conversely, the philosophical view gains plausibility from its convergence with the psychological account. This symbiosis is an instructive example of how philosophy and empirical psychology can fruitfully interact.
http://opinionator.blogs.nytimes.com/20 ... n&emc=tya3
July 16, 2011
Books and Other Fetish Objects
By JAMES GLEICK
I GOT a real thrill in December 1999 in the Reading Room of the Morgan Library in New York when the librarian, Sylvie Merian, brought me, after I had completed an application with a letter of reference and a photo ID, the first, oldest notebook of Isaac Newton. First I was required to study a microfilm version. There followed a certain amount of appropriate pomp. The notebook was lifted from a blue cloth drop-spine box and laid on a special padded stand. I was struck by how impossibly tiny it was — 58 leaves bound in vellum, just 2 3/4 inches wide, half the size I would have guessed from the enlarged microfilm images. There was his name, “Isacus Newton,” proudly inscribed by the 17-year-old with his quill, and the date, 1659.
“He filled the pages with meticulous script, the letters and numerals often less than one-sixteenth of an inch high,” I wrote in my book “Isaac Newton” a few years later. “He began at both ends and worked toward the middle.”
Apparently historians know the feeling well — the exhilaration that comes from handling the venerable original. It’s a contact high. In this time of digitization, it is said to be endangered. The Morgan Notebook of Isaac Newton is online now (thanks to the Newton Project at the University of Sussex). You can surf it.
The raw material of history appears to be heading for the cloud. What once was hard is now easy. What was slow is now fast.
Is this a case of “be careful what you wish for”?
Last month the British Library announced a project with Google to digitize 40 million pages of books, pamphlets and periodicals dating to the French Revolution. The European Digital Library, Europeana.eu, well surpassed its initial goal of 10 million “objects” last year, including a Bulgarian parchment manuscript from 1221 and the Rök runestone from Sweden, circa 800, which will save you trips to, respectively, the St. Cyril and St. Methodius National Library in Sofia and a church in Östergötland.
Reporting to the European Union in Brussels, the Comité des Sages (sounds better than “Reflection Group”) urged in January that essentially everything — all the out-of-copyright cultural heritage of all the member states — should be digitized and made freely available online. It put the cost at approximately $140 billion and called this vision “The New Renaissance.”
Inevitably comes the backlash. Where some see enrichment, others see impoverishment. Tristram Hunt, an English historian and member of Parliament, complained in The Observer this month that “techno-enthusiasm” threatens to cheapen scholarship. “When everything is downloadable, the mystery of history can be lost,” he wrote. “It is only with MS in hand that the real meaning of the text becomes apparent: its rhythms and cadences, the relationship of image to word, the passion of the argument or cold logic of the case.”
I’m not buying this. I think it’s sentimentalism, and even fetishization. It’s related to the fancy that what one loves about books is the grain of paper and the scent of glue.
Some of the qualms about digital research reflect a feeling that anything obtained too easily loses its value. What we work for, we better appreciate. If an amateur can be beamed to the top of Mount Everest, will the view be as magnificent as for someone who has accomplished the climb? Maybe not, because magnificence is subjective. But it’s the same view.
Another worry is the loss of serendipity — as Mr. Hunt says, “the scholar’s eternal hope that something will catch his eye.” When you open a book Newton once owned, which you can do (by appointment) in the library of Trinity College, Cambridge, you may see notes he scribbled in the margins. But marginalia are being digitized, too. And I find that online discovery leads to unexpected twists and turns of research at least as often as the same time spent in archives.
“New Renaissance” may be a bit of hype, but a profound transformation lies ahead for the practice of history. Europeans seem to have taken the lead in creating digital showcases; maybe they just have more history to work with than Americans do. One brilliant new resource among many is the London Lives project: 240,000 manuscript and printed pages dating to 1690, focusing on the poor, including parish archives, records from workhouses and hospitals, and trial proceedings from the Old Bailey.
Storehouses like these, open to anyone, will surely inspire new scholarship. They enrich cyberspace, particularly because without them the online perspective is so foreshortened, so locked into the present day. Not that historians should retire to their computer terminals; the sights and smells of history, where we can still find them, are to be cherished. But the artifact is hardly a clear window onto the past; a window, yes, clouded and smudged like all the rest.
It’s a mistake to deprecate digital images just because they are suddenly everywhere, reproduced so effortlessly. We’re in the habit of associating value with scarcity, but the digital world unlinks them. You can be the sole owner of a Jackson Pollock or a Blue Mauritius but not of a piece of information — not for long, anyway. Nor is obscurity a virtue. A hidden parchment page enters the light when it molts into a digital simulacrum. It was never the parchment that mattered.
Oddly, for collectors of antiquities, the pricing of informational relics seems undiminished by cheap reproduction — maybe just the opposite. In a Sotheby’s auction three years ago, Magna Carta fetched a record $21 million. To be exact, the venerable item was a copy of Magna Carta, made 82 years after the first version was written and sealed at Runnymede. Why is this tattered parchment valuable? Magical thinking. It is a talisman. The precious item is a trick of the eye. The real Magna Carta, the great charter of human rights and liberty, is available free online, where it is safely preserved. It cannot be lost or destroyed.
An object like this — a talisman — is like the coffin at a funeral. It deserves to be honored, but the soul has moved on.
James Gleick is the author of “The Information: A History, a Theory, a Flood.”
http://www.nytimes.com/2011/07/17/opini ... emc=tha212
August 18, 2011
The Question-Driven Life
By DAVID BROOKS
Rift Valley, Kenya
We are born with what some psychologists call an “explanatory drive.” You give a baby a strange object or something that doesn’t make sense and she will become instantly absorbed; using all her abilities — taste, smell, force — to figure out how it fits in with the world.
I recently met someone who, though in his seventh decade, still seems to be gripped by this sort of compulsive curiosity. His name is Philip Leakey.
He is the third son of the famed paleoanthropologists Louis and Mary Leakey and the brother of the equally renowned scholar, Richard Leakey. Philip was raised by people whose lives were driven by questions. Parts of his childhood were organized around expeditions to places like Olduvai Gorge where Louis and most especially Mary searched for bones, footprints and artifacts of early man. The Leakeys also tend to have large personalities. Strains of adventurousness, contentiousness, impulsivity and romance run through the family, producing spellbinding people who are sometimes hard to deal with.
Philip was also reared in the Kenyan bush. There are certain people whose lives are permanently shaped by their frontier childhoods. They grew up out in nature, adventuring alone for long stretches, befriending strange animals and snakes, studying bugs and rock formations, learning to fend for themselves. (The Leakeys are the sort of people who, when their car breaks down in the middle of nowhere, manage to fix the engine with the innards of a cow.)
This sort of childhood seems to have imprinted Philip with a certain definition of happiness — out there in the bush, lost in some experiment. Naturally, he wasn’t going to fit in at boarding school.
At 16, he decided to drop out and made a deal with his parents. He would fend for himself if they would hire a tutor to teach him Swahili. Kenya has 42 native tribes, and over the next years Philip moved in with several. He started a series of small businesses — mining, safari, fertilizer manufacturing and so on. As one Kenyan told me, it’s quicker to list the jobs he didn’t hold than the ones he did.
The Leakey family has been prolifically chronicled, and in some of the memoirs Philip comes off as something of a black sheep, who could never focus on one thing. But he became the first white Kenyan to win election to Parliament after independence, serving there for 15 years.
I met him at the remote mountain camp where he now lives, a bumpy 4-hour ride south of Nairobi near the Rift Valley. Leakey and his wife Katy — an artist who baby-sat for Jane Goodall and led a cultural expedition up the Amazon — have created an enterprise called the Leakey Collection, which employs up to 1,200 of the local Maasai, and sells designer jewelry and household items around the world.
The Leakeys live in a mountaintop tent. Their kitchen and dining room is a lean-to with endless views across the valley. The workers sit out under the trees gossiping and making jewelry. Getting a tour of the facilities is like walking through “Swiss Family Robinson” or “Dr. Dolittle.”
Philip has experiments running up and down the mountainside. He’s trying to build an irrigation system that doubles as a tilapia farm. He’s trying to graft fruit trees onto native trees so they can survive in rocky soil. He’s completing a pit to turn cow manure into electricity and plans to build a micro-hydroelectric generator in a local stream.
Leakey and his workers devise and build their own lathes and saws, tough enough to carve into the hard acacia wood. They’re inventing their own dyes for the Leakey Collection’s Zulugrass jewelry, planning to use Marula trees to make body lotion, designing cement beehives to foil the honey badgers. They have also started a midwife training program and a women’s health initiative.
Philip guides you like an eager kid at his own personal science fair, pausing to scratch into the earth where Iron Age settlers once built a forge. He says that about one in seven of his experiments pans out, noting there is no such thing as a free education.
Some people center their lives around money or status or community or service to God, but this seems to be a learning-centered life, where little bits of practical knowledge are the daily currency, where the main vocation is to be preoccupied with some exciting little project or maybe a dozen.
Some people specialize, and certainly the modern economy encourages that. But there are still people, even if only out in the African wilderness, with a wandering curiosity, alighting on every interesting part of their environment.
The late Richard Holbrooke used to give the essential piece of advice for a question-driven life: Know something about something. Don’t just present your wonderful self to the world. Constantly amass knowledge and offer it around.
http://www.nytimes.com/2011/08/19/opini ... emc=tha212
August 31, 2011, 6:05 pm
Happiness, Philosophy and Science
By GARY GUTTING
The Stone is featuring occasional posts by Gary Gutting, a professor of philosophy at the University of Notre Dame, that apply critical thinking to information and events that have appeared in the news.
Philosophy was the origin of most scientific disciplines. Aristotle was in some sense an astronomer, a physicist, a biologist, a psychologist and a political scientist. As various philosophical subdisciplines found ways of treating their topics with full empirical rigor, they gradually separated themselves from philosophy, which increasingly became a purely armchair enterprise, working not from controlled experiments but from common-sense experiences and conceptual analysis.
In recent years, however, the sciences — in particular, psychology and the social sciences — have begun to return to their origin, combining data and hypotheses with conceptual and normative considerations that are essentially philosophical. An excellent example of this return is the new psychological science of happiness, represented, for example, by the fundamental work of Edward Diener.
The empirical basis of this discipline is a vast amount of data suggesting correlations (or lack thereof) between happiness and various genetic, social, economic, and personal factors. Some of the results are old news: wealth, beauty, and pleasure, for example, have little effect on happiness. But there are some surprises: serious illness typically does not make us much less happy, and marriage in the long run is not a major source of either happiness or unhappiness.
The new research has both raised hopes and provoked skepticism. Psychologists such as Sonja Lyubomirsky have developed a new genre of self-help books, purporting to replace the intuitions and anecdotes of traditional advisors with scientific programs for making people happy. At the same time, there are serious methodological challenges, questioning, for example, the use of individuals’ self-reports of how happy they are and the effort to objectify and even quantify so subjective and elusive a quality as happiness.
But the most powerful challenge concerns the meaning and value of happiness. Researchers emphasize that when we ask people if they are happy the answers tell us nothing if we don’t know what our respondents mean by “happy.” One person might mean, “I’m not currently feeling any serious pain”; another, “My life is pretty horrible but I’m reconciled to it”; another, “I’m feeling a lot better than I did yesterday.” Happiness research requires a clear understanding of the possible meanings of the term. For example, most researchers distinguish between happiness as a psychological state (for example, feeling overall more pleasure than pain) and happiness as a positive evaluation of your life, even if it has involved more pain than pleasure. Above all, there is the fundamental question: In which sense, if any, is happiness a proper goal of a human life?
These issues inevitably lead to philosophical reflection. Empirical surveys can give us a list of the different ideas people have of happiness. But research has shown that when people achieve their ideas of happiness (marriage, children, wealth, fame), they often are still not happy. There’s no reason to think that the ideas of happiness we discover by empirical surveys are sufficiently well thought out to lead us to genuine happiness. For richer and more sensitive conceptions of happiness, we need to turn to philosophers, who, from Plato and Aristotle, through Hume and Mill, to Hegel and Nietzsche, have provided some of the deepest insight into the possible meanings of happiness.
Even if empirical investigation could discover the full range of possible conceptions of happiness, there would still remain the question of which conception we ought to try to achieve. Here we have a question of values that empirical inquiry alone is unable to decide without appeal to philosophical thinking.
This is not to say that, as Plato thought, we can simply appeal to expert philosophical opinion to tell us how we ought to live. We all need to answer this question for ourselves. But if philosophy does not have the answers, it does provide tools we need to arrive at answers. If, for example, we are inclined to think that pleasure is the key to happiness, John Stuart Mill shows us how to distinguish between the more sensory and the more intellectual pleasures. Robert Nozick asks us to consider whether we would choose to attach ourselves to a device that would produce a constant state of intense pleasure, even if we never achieved anything in our lives other than experiencing this pleasure.
On another level, Immanuel Kant asks whether happiness should even be a goal of a good human life, which, he suggests, is rather directed toward choosing to do the right thing even if it destroys our happiness. Nietzsche and Sartre help us consider whether even morality itself is a worthy goal of human existence. These essential questions are not empirical.
Still, psychologists understandably want to address such questions, and their scientific data can make an important contribution to the discussion. But to the extent that psychology takes on questions about basic human values, it is taking on a humanistic dimension that needs to engage with philosophy and the other disciplines — history, art, literature, even theology — that are essential for grappling with the question of happiness. (For a good discussion of philosophical views of happiness and their connection to psychological work, see Dan Haybron’s Stanford Encyclopedia article.) Psychologists should recognize this and give up the pretension that empirical investigations alone can answer the big questions about happiness. Philosophers and other humanists, in turn, should be happy to welcome psychologists into their world.
http://opinionator.blogs.nytimes.com/20 ... n&emc=tya1
September 4, 2011, 5:00 pm
What Is Naturalism?
By TIMOTHY WILLIAMSON
Many contemporary philosophers describe themselves as naturalists. They mean that they believe something like this: there is only the natural world, and the best way to find out about it is by the scientific method. I am sometimes described as a naturalist. Why do I resist the description? Not for any religious scruple: I am an atheist of the most straightforward kind. But accepting the naturalist slogan without looking beneath the slick packaging is an unscientific way to form one’s beliefs about the world, not something naturalists should recommend.
What, for a start, is the natural world? If we say it is the world of matter, or the world of atoms, we are left behind by modern physics, which characterizes the world in far more abstract terms. Anyway, the best current scientific theories will probably be superseded by future scientific developments. We might therefore define the natural world as whatever the scientific method eventually discovers. Thus naturalism becomes the belief that there is only whatever the scientific method eventually discovers, and (not surprisingly) the best way to find out about it is by the scientific method. That is no tautology. Why can’t there be things only discoverable by non-scientific means, or not discoverable at all?
Still, naturalism is not as restrictive as it sounds. For example, some of its hard-nosed advocates undertake to postulate a soul or a god, if doing so turns out to be part of the best explanation of our experience, for that would be an application of scientific method. Naturalism is not incompatible in principle with all forms of religion. In practice, however, most naturalists doubt that belief in souls or gods withstands scientific scrutiny.
What is meant by “the scientific method”? Why assume that science only has one method? For naturalists, although natural sciences like physics and biology differ from each other in specific ways, at a sufficiently abstract level they all count as using a single general method. It involves formulating theoretical hypotheses and testing their predictions against systematic observation and controlled experiment. This is called the hypothetico-deductive method.
One challenge to naturalism is to find a place for mathematics. Natural sciences rely on it, but should we count it a science in its own right? If we do, then the description of scientific method just given is wrong, for it does not fit the science of mathematics, which proves its results by pure reasoning, rather than the hypothetico-deductive method. Although a few naturalists, such as W.V. Quine, argued that the real evidence in favor of mathematics comes from its applications in the natural sciences, so indirectly from observation and experiment, that view does not fit the way the subject actually develops. When mathematicians assess a proposed new axiom, they look at its consequences within mathematics, not outside. On the other hand, if we do not count pure mathematics a science, we thereby exclude mathematical proof by itself from the scientific method, and so discredit naturalism. For naturalism privileges the scientific method over all others, and mathematics is one of the most spectacular success stories in the history of human knowledge.
Which other disciplines count as science? Logic? Linguistics? History? Literary theory? How should we decide? The dilemma for naturalists is this. If they are too inclusive in what they count as science, naturalism loses its bite. Naturalists typically criticize some traditional forms of philosophy as insufficiently scientific, because they ignore experimental tests. How can they maintain such objections unless they restrict scientific method to hypothetico-deductivism? But if they are too exclusive in what they count as science, naturalism loses its credibility, by imposing a method appropriate to natural science on areas where it is inappropriate. Unfortunately, rather than clarify the issue, many naturalists oscillate. When on the attack, they assume an exclusive understanding of science as hypothetico-deductive. When under attack themselves, they fall back on a more inclusive understanding of science that drastically waters down naturalism. Such maneuvering makes naturalism an obscure article of faith. I don’t call myself a naturalist because I don’t want to be implicated in equivocal dogma. Dismissing an idea as “inconsistent with naturalism” is little better than dismissing it as “inconsistent with Christianity.”
Still, I sympathize with one motive behind naturalism — the aspiration to think in a scientific spirit. It’s a vague phrase, but one might start to explain it by emphasizing values like curiosity, honesty, accuracy, precision and rigor. What matters isn’t paying lip-service to those qualities — that’s easy — but actually exemplifying them in practice — the hard part. We needn’t pretend that scientists’ motives are pure. They are human. Science doesn’t depend on indifference to fame, professional advancement, money, or comparisons with rivals. Rather, truth is best pursued in social environments, intellectual communities, that minimize conflict between such baser motives and the scientific spirit, by rewarding work that embodies the scientific virtues. Such traditions exist, and not just in natural science.
The scientific spirit is as relevant in mathematics, history, philosophy and elsewhere as in natural science. Where experimentation is the likeliest way to answer a question correctly, the scientific spirit calls for the experiments to be done; where other methods — mathematical proof, archival research, philosophical reasoning — are more relevant it calls for them instead. Although the methods of natural science could beneficially be applied more widely than they have been so far, the default assumption must be that the practitioners of a well-established discipline know what they are doing, and use the available methods most appropriate for answering its questions. Exceptions may result from a conservative tradition, or one that does not value the scientific spirit. Still, impatience with all methods except those of natural science is a poor basis on which to identify those exceptions.
Naturalism tries to condense the scientific spirit into a philosophical theory. But no theory can replace that spirit, for any theory can be applied in an unscientific spirit, as a polemical device to reinforce prejudice. Naturalism as dogma is one more enemy of the scientific spirit.
Timothy Williamson is the Wykeham Professor of Logic at Oxford University, a Fellow of the British Academy and a Foreign Honorary Member of the American Academy of Arts and Sciences. He has been a visiting professor at M.I.T. and Princeton. His books include “Vagueness” (1994), “Knowledge and its Limits” (2000) and “The Philosophy of Philosophy” (2007).
http://opinionator.blogs.nytimes.com/20 ... n&emc=tya1
October 20, 2011
Who You Are
By DAVID BROOKS
Daniel Kahneman spent part of his childhood in Nazi-occupied Paris. Like the other Jews, he had to wear a Star of David on the outside of his clothing. One evening, when he was about 7 years old, he stayed late at a friend’s house, past the 6 p.m. curfew.
He turned his sweater inside out to hide the star and tried to sneak home. A German SS trooper approached him on the street, picked him up and gave him a long, emotional hug. The soldier displayed a photo of his own son, spoke passionately about how much he missed him and gave Kahneman some money as a sentimental present. The whole time Kahneman was terrified that the SS trooper might notice the yellow star peeking out from inside his sweater.
Kahneman finally made it home, convinced that people are complicated and bizarre. He went on to become one of the world’s most influential psychologists and to win the Nobel in economic science.
Kahneman doesn’t actually tell that childhood story in his forthcoming book. “Thinking, Fast and Slow” is an intellectual memoir, not a personal one. The book is, nonetheless, sure to be a major intellectual event (look for an excerpt in The Times Magazine this Sunday) because it superbly encapsulates Kahneman’s research, and the vast tide of work that has been sparked by it.
I’d like to use this column not to summarize the book but to describe why I think Kahneman and his research partner, the late Amos Tversky, will be remembered hundreds of years from now, and how their work helped instigate a cultural shift that is already producing astounding results.
Before Kahneman and Tversky, people who thought about social problems and human behavior tended to assume that we are mostly rational agents. They assumed that people have control over the most important parts of their own thinking. They assumed that people are basically sensible utility-maximizers and that when they depart from reason it’s because some passion like fear or love has distorted their judgment.
Kahneman and Tversky conducted experiments. They proved that actual human behavior often deviates from the old models and that the flaws are not just in the passions but in the machinery of cognition. They demonstrated that people rely on unconscious biases and rules of thumb to navigate the world, for good and ill. Many of these biases have become famous: priming, framing, loss-aversion.
Kahneman reports on some delightful recent illustrations from other researchers. Pro golfers putt more accurately from all distances when putting for par than when putting for birdie because they fear the bogey more than they desire the birdie. Israeli parole boards grant parole to about 35 percent of the prisoners they see, except when they hear a case in the hour just after mealtime. In those cases, they grant parole 65 percent of the time. Shoppers will buy many more cans of soup if you put a sign atop the display that reads “Limit 12 per customer.”
Kahneman and Tversky were not given to broad claims. But the work they and others did led to the reappreciation of several old big ideas:
We are dual process thinkers. We have two interrelated systems running in our heads. One is slow, deliberate and arduous (our conscious reasoning). The other is fast, associative, automatic and supple (our unconscious pattern recognition). There is now a complex debate over the relative strengths and weaknesses of these two systems. In popular terms, think of it as the debate between “Moneyball” (look at the data) and “Blink” (go with your intuition).
We are not blank slates. All humans seem to share similar sets of biases. There is such a thing as universal human nature. The trick is to understand the universals and how tightly or loosely they tie us down.
We are players in a game we don’t understand. Most of our own thinking is below awareness. Fifty years ago, people may have assumed we are captains of our own ships, but, in fact, our behavior is often aroused by context in ways we can’t see. Our biases frequently cause us to want the wrong things. Our perceptions and memories are slippery, especially about our own mental states. Our free will is bounded. We have much less control over ourselves than we thought.
This research yielded a different vision of human nature and a different set of debates. The work of Kahneman and Tversky was a crucial pivot point in the way we see ourselves.
They also figured out ways to navigate around our shortcomings. Kahneman champions the idea of “adversarial collaboration” — when studying something, work with people you disagree with. Tversky had a wise maxim: “Let us take what the terrain gives.” Don’t overreach. Understand what your circumstances are offering.
Many people are exploring the inner wilderness. Kahneman and Tversky are like the Lewis and Clark of the mind.
http://www.nytimes.com/2011/10/21/opini ... emc=tha212
The article below is about levels of meaning that exist in life: there is the ordinary meaning, and then there is meaning within a metaphysical context.
November 28, 2011, 3:55 pm
In the Context of No Context
During my recent blogging hiatus, Will Wilkinson penned a withering post criticizing a recent convert to Christianity who had suggested that atheism can’t supply “meaning” in human life. Arguing that questions of meaning are logically independent of questions about metaphysics, he wrote:
"If you ask me, the best reason to think “life is meaningful” is because one’s life seems meaningful. If you can’t stop “acting as if my own life had meaning,” it’s probably because it does have meaning. Indeed, not being able to stop acting as if one’s life is meaningful is probably what it means for life to be meaningful. But why think this has any logical or causal relationship to the scientific facts about our brains or lifespans? The truth of the proposition “life has meaning” is more evident and secure than any proposition about what must be true if life is to have meaning. Epistemic best practices recommend treating “life has meaning” as a more-or-less self-evident, non-conditional proposition. Once we’ve got that squared away, we can go ahead and take the facts about the world as they come. It turns out our lives are infinitesimally short on the scale of cosmic time. We know that to be true. Interesting! So now we know two things: that life has meaning and that our lives are just a blip in the history of the universe.
This is, I’m confident, the right way to do it. Why think the one fact has anything to do with the other?"
I see Wilkinson’s point, but I don’t think he quite sees the point that he’s critiquing. Suppose, by way of analogy, that a group of people find themselves conscripted into a World-War-I-type conflict — they’re thrown together in a platoon and stationed out in no man’s land, where over time a kind of miniature society gets created, with its own loves and hates, hopes and joys, and of course its own grinding, life-threatening routines. Eventually, some people in the platoon begin to wonder about the point of it all: Why are they fighting, who are they fighting, what do they hope to gain, what awaits them at war’s end, will there ever be a war’s end, and for that matter are they even sure that they’re the good guys? (Maybe they’ve been conscripted by the Third Reich! Maybe their forever war is just a kind of virtual reality created by alien intelligences to study the human way of combat! Etc.) They begin to wonder, in other words, about the meaning of it all, and whether there’s any larger context to their often-agonizing struggle for survival. And in the absence of such context, many of them flirt with a kind of existential despair, which makes the everyday duties of the trench seem that much more onerous, and the charnel house of war that much more difficult to bear.
At this point, one of the platoon’s more intellectually sophisticated members speaks up. He thinks his angst-ridden comrades are missing the point: Regardless of the larger context of the conflict, they know the war has meaning because they can’t stop acting like it has meaning. Even in their slough of despond, most of them don’t throw themselves on barbed wire or rush headlong into a wave of poison gas. (And the ones who do usually have something clinically wrong with them.) Instead, they duck when the shells sail over, charge when the commander gives the order, tend the wounded and comfort the dying and feel intuitively invested in the capture of the next hill, the next salient, the next trench. They do so, this clever soldier goes on, because their immediate context — life-and-death battles, wartime loves and friendships, etc — supplies intense feelings of meaningfulness, and so long as it does the big-picture questions that they’re worrying about must be logically separable from the everyday challenges of being a front-line soldier. If some of the soldiers want to worry about these big-picture questions, that’s fair enough. But they shouldn’t pretend that their worries give them a monopoly on a life meaningfully lived (or a war meaningfully fought). Instead, given how much meaningfulness is immediately and obviously available — right here and right now, amid the rocket’s red glare and the bombs bursting in air — the desire to understand the war’s larger context is just a personal choice, with no necessary connection to the question of whether today’s battle is worth the fighting.
This is a very natural way to approach warfare, as it happens. (Many studies of combat have shown that the bonds of affection between soldiers tend to matter more to cohesion and morale than the grand ideological purposes — or lack thereof — that they’re fighting for.) And it’s a very natural way to approach everyday life as well. But part of the point of religion and philosophy is to address questions that lurk beneath these natural rhythms, instead of just taking our feelings of meaningfulness as the alpha and omega of human existence. In the context of the war, of course the battle feels meaningful. In the context of daily life as we experience it, of course our joys and sorrows feel intensely meaningful. But just as it surely makes a (if you will) meaningful difference why the war itself is being waged, it surely makes a rather large difference whether our joys and sorrows take place in, say, C.S. Lewis’s Christian universe or Richard Dawkins’s godless cosmos. Saying that “we know life is meaningful because it feels meaningful” is true for the first level of context, but non-responsive for the second.
http://douthat.blogs.nytimes.com/2011/1 ... n&emc=tyb1
December 4, 2011, 5:30 pm
Art and the Limits of Neuroscience
By ALVA NOË
What is art? What does art reveal about human nature? The trend these days is to approach such questions in the key of neuroscience.
“Neuroaesthetics” is a term that has been coined to refer to the project of studying art using the methods of neuroscience. It would be fair to say that neuroaesthetics has become a hot field. It is not unusual for leading scientists and distinguished theorists of art to collaborate on papers that find their way into top scientific journals.
Semir Zeki, a neuroscientist at University College London, likes to say that art is governed by the laws of the brain. It is brains, he says, that see art and it is brains that make art. Champions of the new brain-based approach to art sometimes think of themselves as fighting a battle with scholars in the humanities who may lack the courage (in the words of the art historian John Onians) to acknowledge the ways in which biology constrains cultural activity. Strikingly, it hasn’t been much of a battle. Students of culture, like so many of us, seem all too glad to join in the general enthusiasm for neural approaches to just about everything.
What is striking about neuroaesthetics is not so much the fact that it has failed to produce interesting or surprising results about art, but rather the fact that no one — not the scientists, and not the artists and art historians — seems to have minded, or even noticed. What stands in the way of success in this new field is, first, the fact that neuroscience has yet to frame anything like an adequate biological or “naturalistic” account of human experience — of thought, perception, or consciousness.
The idea that a person is a functioning assembly of brain cells and associated molecules is not something neuroscience has discovered. It is, rather, something it takes for granted. You are your brain. Francis Crick once called this “the astonishing hypothesis,” because, as he claimed, it is so remote from the way most people alive today think about themselves. But what is really astonishing about this supposedly astonishing hypothesis is how astonishing it is not! The idea that there is a thing inside us that thinks and feels — and that we are that thing — is an old one. Descartes thought that the thinking thing inside had to be immaterial; he couldn’t conceive how flesh could perform the job. Scientists today suppose that it is the brain that is the thing inside us that thinks and feels. But the basic idea is the same. And this is not an idle point. However surprising it may seem, the fact is we don’t actually have a better understanding of how the brain might produce consciousness than Descartes did of how the immaterial soul would accomplish this feat; after all, at the present time we lack even the rudimentary outlines of a neural theory of consciousness.
What we do know is that a healthy brain is necessary for normal mental life, and indeed, for any life at all. But of course much else is necessary for mental life. We need roughly normal bodies and a roughly normal environment. We also need the presence and availability of other people if we are to have anything like the sorts of lives that we know and value. So we really ought to say that it is the normally embodied, environmentally- and socially-situated human animal that thinks, feels, decides and is conscious. But once we say this, it would be simpler, and more accurate, to allow that it is people, not their brains, who think and feel and decide. It is people, not their brains, that make and enjoy art. You are not your brain, you are a living human being.
We need finally to break with the dogma that you are something inside of you — whether we think of this as the brain or an immaterial soul — and we need finally to take seriously the possibility that the conscious mind is achieved by persons and other animals thanks to their dynamic exchange with the world around them (a dynamic exchange that no doubt depends on the brain, among other things). Importantly, to break with the Cartesian dogmas of contemporary neuroscience would not be to cave in and give up on a commitment to understanding ourselves as natural. It would be rather to rethink what a biologically adequate conception of our nature would be.
But there is a second obstacle to progress in neuroaesthetics. Neural approaches to art have not yet been able to find a way to bring art into focus in the laboratory. As mentioned, theorists in this field like to say that art is constrained by the laws of the brain. But in practice what this is usually taken to come down to is the humble fact that the brain constrains the experience of art because it constrains all experience. Visual artists, for example, don’t work with ultraviolet light, as Zeki reminds us, because we can’t see ultraviolet light. They do work with shape and form and color because we can see them.
Now it is doubtless correct that visual artists confine themselves to materials and effects that are, well, visible. And likewise, it seems right that our perception of works of art, like our perception of anything, depends on the nature of our perceptual capacities, capacities which, in their turn, are constrained by the brain.
But there is a problem with this: An account of how the brain constrains our ability to perceive has no greater claim to being an account of our ability to perceive art than it has to being an account of how we perceive sports, or how we perceive the man across from us on the subway. In works about neuroaesthetics, art is discussed in the prefaces and touted on the book jackets, but never really manages to show up in the body of the works themselves!
Some of us might wonder whether the relevant question is how we perceive works of art, anyway. What we ought to be asking is: Why do we value some works as art? Why do they move us? Why does art matter? And here again, the closest neural scientists or psychologists come to saying anything about this kind of aesthetic evaluation is to say something about preference. But the class of things we like, or that we prefer as compared to other things, is much wider than the class of things we value as art. And the sorts of reasons we have for valuing one art work over another are not the same kind of reasons we would give for liking one person more than another, or one flavor more than another. And it is no help to appeal to beauty here. Beauty is both too wide and too narrow. Not all art works are beautiful (or pleasing for that matter, even if many are), and not everything we find beautiful (a person, say, or a sunset) is a work of art.
Again we find not that neuroaesthetics takes aim at our target and misses, but that it fails even to bring the target into focus.
Yet it’s early. Neuroaesthetics, like the neuroscience of consciousness itself, is still in its infancy. Is there any reason to doubt that progress will be made? Is there any principled reason to be skeptical that there can be a valuable study of art making use of the methods and tools of neuroscience? I think the answer to these questions must be yes, but not because there is no value in bringing art and empirical science into contact, and not because art does not reflect our human biology.
To begin to see this, consider: engagement with a work of art is a bit like engagement with another person in conversation; and a work of art itself can be usefully compared with a humorous gesture or a joke. Just as getting a joke requires sensitivity to a whole background context, to presuppositions and intended as well as unintended meanings, so “getting” a work of art requires an attunement to problems, questions, attitudes and expectations; it requires an engagement with the context in which the work of art has work to do. We might say that works of art pose questions and encountering a work of art meaningfully requires understanding the relevant questions and getting why they matter, or maybe even, why they don’t matter, or don’t matter any more, or why they would matter in one context but not another. In short, the work of art, whatever its local subject matter or specific concerns ― God, life, death, politics, the beautiful, art itself, perceptual consciousness ― and whatever its medium, is doing something like philosophical work.
One consequence of this is that it may belong to the very nature of art, as it belongs to the nature of philosophy, that there can be nothing like a settled, once-and-for-all account of what art is, just as there can be no all-purpose account of what happens when people communicate or when they laugh together. Art, even for those who make it and love it, is always a question, a problem for itself. What is art? The question must arise, but it allows no definitive answer.
For these reasons, neuroscience, which looks at events in the brains of individual people and can do no more than describe and analyze them, may just be the wrong kind of empirical science for understanding art.
Far from its being the case that we can apply neuroscience as an intellectual ready-made to understand art, it may be that art, by disclosing the ways in which human experience in general is something we enact together, in exchange, may provide new resources for shaping a more plausible, more empirically rigorous, account of our human nature.
--------------------------------------------------------------------------------
Alva Noë is a philosopher at CUNY’s Graduate Center. He is the author of “Out of Our Heads: Why You Are Not Your Brain and Other Lessons From The Biology of Consciousness.” He is now writing a book on art and human nature. Noë writes a weekly column for NPR’s 13.7: Cosmos and Culture blog. You can follow him on Twitter and Facebook.
http://opinionator.blogs.nytimes.com/20 ... n&emc=tya1
Philosophy Is Not a Science
By JULIAN FRIEDLAND
The Stone is a forum for contemporary philosophers on issues both timely and timeless.
For roughly 98 percent of the last 2,500 years of Western intellectual history, philosophy was considered the mother of all knowledge. It generated most of the fields of research still with us today. This is why we continue to call our highest degrees Ph.D.’s, namely, philosophy doctorates. At the same time, we live in an age in which many seem no longer sure what philosophy is or is good for anymore. Most seem to see it as a highly abstracted discipline with little if any bearing on objective reality — something more akin to art, literature or religion. All have plenty to say about reality. But the overarching assumption is that none of it actually qualifies as knowledge until proven scientifically.
Yet philosophy differs in a fundamental way from art, literature or religion, as its etymological meaning is “the love of wisdom,” which implies a significant degree of objective knowledge. And this knowledge must be attained on its own terms. Or else it would be but another branch of science.
So what objective knowledge can philosophy bring that is not already determinable by science? It has become increasingly fashionable — even in philosophy — to answer this question with a defiant “none.” For numerous philosophers have come to believe, in concert with the prejudices of our age, that only science holds the potential to solve persistent philosophical mysteries such as the nature of truth, life, mind, meaning, justice, the good and the beautiful.
Thus, myriad contemporary philosophers are perfectly willing to offer themselves up as intellectual servants or ushers of scientific progress. Their research largely functions as a spearhead for scientific exploration and as a balm for making those pursuits more palpable and palatable to the wider population. The philosopher S.M. Liao, for example, argued recently in The Atlantic that we should begin voluntarily bioengineering ourselves to lower our carbon footprints and to become generally more virtuous. And Prof. Colin McGinn, writing recently in The Stone, claimed to be so tired of philosophy being disrespected and misunderstood that he urged that philosophers begin referring to themselves as “ontic scientists.”
McGinn takes the moniker of science as broad enough to include philosophy since the dictionary defines it as “any systematically organized body of knowledge on any subject.” But this definition is so vague that it betrays a widespread confusion as to what science actually is. And McGinn’s reminder that its etymology comes from “scientia,” the ancient Latin word for “knowledge,” only adds to the muddle. For by this definition we might well brand every academic discipline as science. “Literary studies” then become “literary sciences” — sounds much more respectable. “Fine arts” become “aesthetic sciences” — that would surely get more parents to let their kids major in art. While we’re at it, let’s replace the Bachelor of Arts degree with the Bachelor of Science. (I hesitate to even mention such options lest enterprising deans get any ideas.) Authors and artists aren’t engaged primarily in any kind of science, as their disciplines have more to do with subjective and qualitative standards than objective and quantitative ones. And that’s of course not to say that only science can bring objective and quantitative knowledge. Philosophy can too.
The intellectual culture of scientism clouds our understanding of science itself. What’s more, it eclipses alternative ways of knowing — chiefly the philosophical — that can actually yield greater certainty than the scientific. While science and philosophy do at times overlap, they are fundamentally different approaches to understanding. So philosophers should not add to the conceptual confusion that subsumes all knowledge into science. Rather, we should underscore the fact that various disciplines we ordinarily treat as science are at least as — if not more — philosophical than scientific. Take for example mathematics, theoretical physics, psychology and economics. These are predominantly rational conceptual disciplines. That is, they are not chiefly reliant on empirical observation. For unlike science, they may be conducted while sitting in an armchair with eyes closed.
Does this mean these fields do not yield objective knowledge? The question is frankly absurd. Indeed if any of their findings count as genuine knowledge, they may actually be more enduring. For unlike empirical observations, which may be mistaken or incomplete, philosophical findings depend primarily on rational and logical principles. As such, whereas science tends to alter and update its findings day to day through trial and error, logical deductions are timeless. This is why Einstein pompously called attempts to empirically confirm his special theory of relativity “the detail work.” Indeed last September, The New York Times reported that scientists at the European Center for Nuclear Research (CERN) thought they had empirically disproved Einstein’s theory that nothing could travel faster than the speed of light, only to find their results could not be reproduced in follow-up experiments last month. Such experimental anomalies are confounding. But as CERN’s research director Sergio Bertolucci plainly put it, “This is how science works.”
However, 5 plus 7 will always equal 12. No amount of further observation will change that. And while mathematics is empirically testable at such rudimentary levels, it stops being so in its purest forms, like analysis and number theory. Proofs in these areas are conducted entirely conceptually. Similarly with logic, certain arguments are proven inexorably valid while others are inexorably invalid. Logically fallacious arguments can be rather sophisticated and persuasive. But they are nevertheless invalid and always will be. Exposing such errors is part of philosophy’s stock in trade. Thus as Socrates pointed out long ago, much of the knowledge gained by doing philosophy consists in realizing what is not the case.
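The point about validity can be made concrete with a minimal illustrative sketch in Python (not part of Friedland’s column; the helper names and the two-variable setup are assumptions introduced here). A brute-force check over truth assignments confirms that modus ponens is valid under every assignment, while the superficially similar fallacy of affirming the consequent has a counterexample:

    from itertools import product

    def implies(p, q):
        # Material conditional: false only when p is true and q is false.
        return (not p) or q

    def is_valid(premises, conclusion):
        # An argument form is valid iff no assignment of truth values makes
        # every premise true while the conclusion is false.
        for p, q in product([True, False], repeat=2):
            if all(prem(p, q) for prem in premises) and not conclusion(p, q):
                return False
        return True

    # Modus ponens: from "if p then q" and "p", infer "q".
    print(is_valid([implies, lambda p, q: p], lambda p, q: q))   # True

    # Affirming the consequent: from "if p then q" and "q", infer "p".
    print(is_valid([implies, lambda p, q: q], lambda p, q: p))   # False

No further observation could overturn the first verdict or rescue the second, which is the sense in which such results are timeless.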
One such example is Thrasymachus’ claim that justice is best defined as the advantage of the stronger, namely, that which is in the competitive interest of the powerful. Socrates reduces this view to absurdity by showing that the wise need not compete with anyone.
Or to take a more positive example, Wittgenstein showed that an ordinary word such as “game” is used consistently in myriad contrasting ways without possessing any essential unifying definition. Though this may seem impossible, the meaning of such terms is actually determined by their contextual usage. For when we look at faces within a nuclear family, we see resemblances from one to the next. Yet no single trait need be present in every face to recognize them all as members of the family. Similarly, divergent uses of “game” form a family. Ultimately as a result of Wittgenstein’s philosophy, we know that natural language is a public phenomenon that cannot logically be invented in isolation.
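The family-resemblance structure can be pictured with another small illustrative sketch (the trait sets below are invented for the purpose, not Wittgenstein’s own examples): no single trait runs through everything we call a “game,” yet resemblances carry over from one item to the next.

    from functools import reduce

    # Invented trait sets for four things we call "games" (purely illustrative).
    games = {
        "chess":                {"rules", "competition", "skill"},
        "poker":                {"rules", "competition", "luck"},
        "solitaire":            {"rules", "luck", "amusement"},
        "ring-a-ring-o'-roses": {"amusement", "movement", "no winner"},
    }

    # No trait is common to all of them ...
    print(reduce(set.intersection, games.values()))   # set()

    # ... but each overlaps with the next, forming a chain of resemblances.
    names = list(games)
    for a, b in zip(names, names[1:]):
        print(a, "~", b, ":", games[a] & games[b])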
These are essentially conceptual clarifications. And as such, they are relatively timeless philosophical truths.
This is also why jurisprudence qualifies as an objective body of knowledge without needing to change its name to “judicial science,” as some universities now describe it. Though it is informed by empirical research into human nature and the general workings of society, it relies principally on the cogency of arguments from learned experts as measured by their logical validity and the truth value of their premises. If both of these criteria are present, then the arguments are sound. Hence, Supreme Court justices are not so much scientific as philosophical experts on the nature of justice. And that is not to say their expertise does not count as genuine knowledge. In the best cases, it rises to the loftier level of wisdom — the central objective of philosophy.
Though philosophy does sometimes employ thought experiments, these aren’t actually scientific, for they are conducted entirely in the imagination. For example, judges have imagined what might happen if, say, insider trading were made legal. And they have concluded that while it would lower regulatory costs and promote a degree of investor freedom, legalization would imperil the free market itself by undermining honest securities markets and eroding investor confidence. While this might appear to be an empirical question, it cannot be settled empirically without conducting the experiment, which is naturally beyond the reach of jurisprudence. Only legislatures could conduct the experiment by legalizing insider trading. And even then, one could not conduct it completely scientifically without a separate control-group society in which insider trading remained illegal for comparison. Regardless, judges would likely again forbid legalization essentially on compelling philosophical grounds.
Similarly in ethics, science cannot necessarily tell us what to value. Science has made significant progress in helping to understand human nature. Such research, if accurate, provides very real constraints to philosophical constructs on the nature of the good. Still, evidence of how most people happen to be does not necessarily tell us everything about how we should aspire to be. For how we should aspire to be is a conceptual question, namely, of how we ought to act, as opposed to an empirical question of how we do act. We might administer scientific polls to determine the degree to which people take themselves to be happy and what causes they might attribute to their own levels of happiness. But it’s difficult to know if these self-reports are authoritative since many may not have coherent, consistent or accurate conceptions of happiness to begin with. We might even ask them if they find such and such ethical arguments convincing, namely, if happiness ought to be their only aim in life. But we don’t and shouldn’t take those results as sufficient to determine, say, the ethics standards of the American Medical Association, as those require philosophical analysis.
In sum, philosophy is not science. For it employs the rational tools of logical analysis and conceptual clarification in lieu of empirical measurement. And this approach, when carefully carried out, can yield knowledge at times more reliable and enduring than science, strictly speaking. For scientific measurement is in principle always subject to at least some degree of readjustment based on future observation. Yet sound philosophical argument achieves a measure of immortality.
So if we philosophers want to restore philosophy’s authority in the wider culture, we should not change its name but engage more often with issues of contemporary concern — not so much as scientists but as guardians of reason. This might encourage the wider population to think more critically, that is, to become more philosophical.
--------------------------------------------------------------------------------
Julian Friedland is a visiting assistant professor at Fordham University Gabelli School of Business, Division of Law and Ethics. He is editor of “Doing Well and Good: The Human Face of the New Capitalism.” His research focuses primarily on the nature of positive professional duty.
http://opinionator.blogs.nytimes.com/20 ... y_20120406
April 22, 2012, 5:00 pm
The Living Word
By PETER LUDLOW
There is a standard view about language that one finds among philosophers, language departments, pundits and politicians. It is the idea that a language like English is a semi-stable abstract object that we learn to some degree or other and then use in order to communicate or express ideas and perform certain tasks. I call this the static picture of language, because, even though it acknowledges some language change, the pace of change is thought to be slow, and what change there is, is thought to be the hard fought product of conflict. Thus, even the “revisionist” picture of language sketched by Gary Gutting in a recent Stone column counts as static on my view, because the change is slow and it must overcome resistance.
Recent work in philosophy, psychology and artificial intelligence has suggested an alternative picture that rejects the idea that languages are stable abstract objects that we learn and then use. According to the alternative “dynamic” picture, human languages are one-off things that we build “on the fly” on a conversation-by-conversation basis; we can call these one-off fleeting languages microlanguages. Importantly, this picture rejects the idea that words are relatively stable things with fixed meanings that we come to learn. Rather, word meanings themselves are dynamic — they shift from microlanguage to microlanguage.
Shifts of meaning do not merely occur between conversations; they also occur within conversations — in fact conversations are often designed to help this shifting take place. That is, when we engage in conversation, much of what we say does not involve making claims about the world but involves instructing our communicative partners how to adjust word meanings for the purposes of our conversation.
For example, the linguist Chris Barker has observed that many of the utterances we make play the role of shifting the meaning of a term. To illustrate, suppose I am thinking of applying for academic jobs and I tell my friend that I don’t care where I teach so long as the school is in a city. My friend suggests that I apply to the University of Michigan and I reply “Ann Arbor is not a city.” In doing this, I am not making a claim about the world so much as instructing my friend (for the purposes of our conversation) to adjust the meaning of “city” from official definitions to one in which places like Ann Arbor do not count as cities.
Word meanings are dynamic, but they are also underdetermined. What this means is that there is no complete answer to what does and doesn’t fall within the range of a term like “red” or “city” or “hexagonal.” We may sharpen the meaning and we may get clearer on what falls in the range of these terms, but we never completely sharpen the meaning.
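One way to picture a microlanguage is as a predicate whose threshold the conversation itself sharpens. The rough sketch below (the class, the default threshold and the approximate population figures are assumptions introduced here for illustration, not anything from the column) replays Barker’s Ann Arbor example:

    # A toy conversational "microlanguage": the vague predicate "city" is
    # sharpened mid-conversation by a metalinguistic assertion.

    # Rough, illustrative population figures.
    populations = {"Chicago": 2_700_000, "Boston": 650_000, "Ann Arbor": 114_000}

    class Microlanguage:
        def __init__(self, city_threshold=50_000):
            # The default meaning of "city" is left vague and underdetermined.
            self.city_threshold = city_threshold

        def is_city(self, place):
            return populations[place] >= self.city_threshold

        def assert_not_a_city(self, place):
            # "Ann Arbor is not a city" works less like a claim about the world
            # than like an instruction to raise the threshold for this conversation.
            self.city_threshold = max(self.city_threshold, populations[place] + 1)

    lang = Microlanguage()
    print(lang.is_city("Ann Arbor"))     # True under the vague default
    lang.assert_not_a_city("Ann Arbor")  # the meaning of "city" shifts
    print(lang.is_city("Ann Arbor"))     # False for the rest of this conversation
    print(lang.is_city("Chicago"))       # still True

On this picture nothing about Ann Arbor has changed; only the microlanguage has.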
This isn’t just the case for words like “city” but for all words, ranging from words for things, like “person” and “tree,” words for abstract ideas, like “art” and “freedom,” and words for crimes, like “rape” and “murder.” Indeed, I would argue that this is also the case with mathematical and logical terms like “parallel line” and “entailment.” The meanings of these terms remain open to some degree or other, and are sharpened as needed when we make advances in mathematics and logic.
The dynamic lexicon changes the way we look at problems ranging from human-computer interaction to logic itself, but it also has an application in the political realm. Over the last few decades, some important legal scholars and judges — most notably the United States Supreme Court Justice, Antonin Scalia — have made the case that the Constitution is not a living document, and that we should try to get back to understanding the Constitution as it was originally written by the original framers — sometimes this doctrine is called textualism. Scalia’s doctrine says that we cannot do better than concentrate on what the Constitution actually says — on what the words on paper say. Scalia once put this in the form of a tautology: “Words mean what they mean.” In his more cautious formulation he says that “words do have a limited range of meaning, and no interpretation that goes beyond that range is permissible.”
Pretty clearly Scalia is assuming what I have called the static picture of language. But “words mean what they mean” is not the tautology that Scalia seems to think it is. If word meanings can change dramatically during the course of a single conversation how could they not change over the course of centuries? But more importantly, Scalia’s position seems to assume that the original meanings of the words used in the Constitution are nearly fully determined — that the meaning of a term like “person” or phrase like “due process,” as used in the Constitution is fully fleshed out. But is it really determined whether, for example, the term “person” in the Constitution applies to medically viable fetuses, brain dead humans on life support, and, as we will have to ask in the fullness of time, intelligent robots? The dynamic picture says no.
The words used by lawmakers are just as open ended as words used in day-to-day conversation. Indeed, many laws are specifically written so as to be open-ended. But even if that was not the intent, there is no way to close the gap and have the meanings of words fully fleshed out. Technological advances are notorious for exposing the open-endedness of the language in our laws, even when we thought our definitions were airtight. Lawmakers can’t anticipate everything. Indeed, you could make the case that the whole area of patent law just is the problem of deciding whether some new technology should fall within the range of the language of the patent.
Far from being absurd, the idea that the Constitution is a living organism follows from the fact that the words used in writing the Constitution are underdetermined and dynamic and thus “living organisms” in the metaphorical sense in play here. In this respect there is nothing unique about the Constitution. It is a dynamic object because of the simple reason that word meanings are dynamic. Every written document — indeed every word written or uttered — is a living organism.
http://opinionator.blogs.nytimes.com/20 ... y_20120423
Good News: You Are Not Your Brain
Posted: 03/27/2012 7:00 am
By Deepak Chopra, M.D., FACP, and Dr. Rudolph E. Tanzi, Ph.D., Joseph P. and Rose F. Kennedy Professor of Neurology, Harvard Medical School; Director, Genetics and Aging at Massachusetts General Hospital (MGH).
Like a personal computer, science needs a recycle bin for ideas that didn't work out as planned. In this bin would go commuter trains riding on frictionless rails using superconductivity, along with interferon, the last AIDS vaccine, and most genetic therapies. These failed promises have two things in common: They looked like the wave of the future but then reality proved too complex to fit the simple model that was being offered.
The next thing to go into the recycle bin might be the brain. We are living in a golden age of brain research, thanks largely to vast improvements in brain scans. Now that functional MRIs can give snapshots of the brain in real time, researchers can see specific areas of the brain light up, indicating increased activity. On the other hand, dark spots in the brain indicate minimal activity or none at all. Thus, we arrive at those familiar maps that compare a normal brain with one that has deviated from the norm. This is obviously a great boon where disease is concerned. Doctors can see precisely where epilepsy or Parkinsonism or a brain tumor has created damage, and with this knowledge new drugs and more precise surgery can target the problem.
But then overreach crept in. We are shown brain scans of repeat felons with pointers to the defective areas of their brains. The same holds for Buddhist monks, only in their case, brain activity is heightened and improved, especially in the prefrontal lobes associated with compassion. By now there is no condition, good or bad, that hasn't been linked to a brain pattern that either "proves" that there is a link between the brain and a certain behavior or exhibits the "cause" of a certain trait. The whole assumption, shared by 99 percent of neuroscientists, is that we are our brains.
In this scheme, the brain is in charge, having evolved to control certain fixed behaviors. Why do men see other men as rivals for a desirable woman? Why do people seek God? Why does snacking in front of the TV become a habit? We are flooded with articles and books reinforcing the same assumption: The brain is using you, not the other way around. Yet it's clear that a faulty premise is leading to gross overreach.
The flaws in current reasoning can be summarized with devastating force:
1. Brain activity isn't the same as thinking, feeling, or seeing.
2. No one has remotely shown how molecules acquire the qualities of the mind.
3. It is impossible to construct a theory of the mind based on material objects that somehow became conscious.
4. When the brain lights up, its activity is like a radio lighting up when music is played. It is an obvious fallacy to say that the radio composed the music. What is being viewed is only a physical correlation, not a cause.
It's a massive struggle to get neuroscientists to see these flaws. They are king of the hill right now, and so long as new discoveries are being made every day, a sense of triumph pervades the field. "Of course" we will solve everything from depression to overeating, crime to religious fanaticism, by tinkering with neurons and the kinks thrown into normal, desirable brain activity. But that's like hearing a really bad performance of "Rhapsody in Blue" and trying to turn it into a good performance by kicking the radio.
We've become excited by a flawless 2008 article by Donald D. Hoffman, professor of cognitive sciences at the University of California, Irvine. It's called "Conscious Realism and the Mind-Body Problem," and its aim is to show, using logic, philosophy, and neuroscience, that we are not our brains. We are "conscious agents" -- Hoffman's term for minds that shape reality, including the reality of the brain. Hoffman is optimistic that the thorny problem of consciousness can be solved, and that science can find a testable model for the mind. But future progress depends on researchers abandoning their current premise that the brain is the mind. We urge you to read the article in its entirety, but for us, the good news is that Hoffman's ideas show that the tide may be turning.
It is degrading to human potential when the brain uses us instead of vice versa. There is no doubt that we can become trapped by faulty wiring in the brain -- this happens in depression, addictions, and phobias, for example. Neural circuits can seemingly take control, and there is much talk of "hard wiring" by which some activity is fixed and preset by nature, such as the fight-or-flight response. But what about people who break bad habits, kick their addictions, or overcome depression? It would be absurd to say that the brain, being stuck in faulty wiring, suddenly and spontaneously fixed the wiring. What actually happens, as anyone knows who has achieved success in these areas, is that the mind takes control. Mind shapes the brain, and when you make up your mind to do something, you return to the natural state of using your brain instead of the other way around.
It's very good news that you are not your brain, because when your mind finds its true power, the result is healing, inspiration, insight, self-awareness, discovery, curiosity, and quantum leaps in personal growth. The brain is totally incapable of such things. After all, if it is a hard-wired machine, there is no room for sudden leaps and renewed inspiration. The machine simply does what it does. A depressed brain can no more heal itself than a car can suddenly decide to fly. Right now the golden age of brain research is brilliantly decoding neural circuitry, and thanks to neuroplasticity, we know that the brain's neural pathways can be changed. The marvels of brain activity grow more astonishing every day. Yet in our astonishment it would be a grave mistake, and a disservice to our humanity, to forget that the real glory of human existence is the mind, not the brain that serves it.
Deepak Chopra and Rudy Tanzi are the co-authors of the forthcoming book Superbrain: New Breakthroughs for Maximizing Health, Happiness and Spiritual Well-Being (Harmony Books).
http://www.huffingtonpost.com/deepak-ch ... 79446.html
April 30, 2012
Insights From the Youngest Minds
By NATALIE ANGIER
CAMBRIDGE, Mass. — Seated in a cheerfully cramped monitoring room at the Harvard University Laboratory for Developmental Studies, Elizabeth S. Spelke, a professor of psychology and a pre-eminent researcher of the basic ingredient list from which all human knowledge is constructed, looked on expectantly as her students prepared a boisterous 8-month-old girl with dark curly hair for the onerous task of watching cartoons.
The video clips featured simple Keith Haring-type characters jumping, sliding and dancing from one group to another. The researchers’ objective, as with half a dozen similar projects under way in the lab, was to explore what infants understand about social groups and social expectations.
Yet even before the recording began, the 15-pound research subject made plain the scope of her social brain. She tracked conversations, stared at newcomers and burned off adult corneas with the brilliance of her smile. Dr. Spelke, who first came to prominence by delineating how infants learn about objects, numbers and the lay of the land, shook her head in self-mocking astonishment.
“Why did it take me 30 years to start studying this?” she said. “All this time I’ve been giving infants objects to hold, or spinning them around in a room to see how they navigate, when what they really wanted to do was engage with other people!”
Dr. Spelke, 62, is tall and slim, and parts her long hair down the middle, like a college student. She dresses casually, in a corduroy jumper or a cardigan and slacks, and when she talks, she pitches forward and plants forearms on thighs, hands clasped, seeming both deeply engaged and ready to bolt. The lab she founded with her colleague Susan Carey is strewed with toys and festooned with children’s T-shirts, but the Elmo atmospherics belie both the lab’s seriousness of purpose and Dr. Spelke’s towering reputation among her peers in cognitive psychology.
“When people ask Liz, ‘What do you do?’ she tells them, ‘I study babies,’ ” said Steven Pinker, a fellow Harvard professor and the author of “The Better Angels of Our Nature,” among other books. “That’s endearingly self-deprecating, but she sells herself short.”
What Dr. Spelke is really doing, he said, is what Descartes, Kant and Locke tried to do. “She is trying to identify the bedrock categories of human knowledge. She is asking, ‘What is number, space, agency, and how does knowledge in each category develop from its minimal state?’ ”
Dr. Spelke studies babies not because they’re cute but because they’re root. “I’ve always been fascinated by questions about human cognition and the organization of the human mind,” she said, “and why we’re good at some tasks and bad at others.”
But the adult mind is far too complicated, Dr. Spelke said, “too stuffed full of facts” to make sense of it. In her view, the best way to determine what, if anything, humans are born knowing, is to go straight to the source, and consult the recently born.
Decoding Infants’ Gaze
Dr. Spelke is a pioneer in the use of the infant gaze as a key to the infant mind — that is, identifying the inherent expectations of babies as young as a week or two by measuring how long they stare at a scene in which those presumptions are upended or unmet. “More than any scientist I know, Liz combines theoretical acumen with experimental genius,” Dr. Carey said. Nancy Kanwisher, a neuroscientist at M.I.T., put it this way: “Liz developed the infant gaze idea into a powerful experimental paradigm that radically changed our view of infant cognition.”
Here, according to the Spelke lab, are some of the things that babies know, generally before the age of 1:
They know what an object is: a discrete physical unit in which all sides move roughly as one, and with some independence from other objects.
“If I reach for a corner of a book and grasp it, I expect the rest of the book to come with me, but not a chunk of the table,” said Phil Kellman, Dr. Spelke’s first graduate student, now at the University of California, Los Angeles.
A baby has the same expectation. If you show the baby a trick sequence in which a rod that appears to be solid moves back and forth behind another object, the baby will gape in astonishment when that object is removed and the rod turns out to be two fragments.
“The visual system comes equipped to partition a scene into functional units we need to know about for survival,” Dr. Kellman said. Wondering whether your bag of four oranges puts you over the limit for the supermarket express lane? A baby would say, “You pick up the bag, the parts hang together, that makes it one item, so please get in line.”
Babies know, too, that objects can’t go through solid boundaries or occupy the same position as other objects, and that objects generally travel through space in a continuous trajectory. If you claimed to have invented a transporter device like the one in “Star Trek,” a baby would scoff.
Babies are born accountants. They can estimate quantities and distinguish between more and less. Show infants arrays of, say, 4 or 12 dots and they will match each number to an accompanying sound, looking longer at the 4 dots when they hear 4 sounds than when they hear 12 sounds, even if each of the 4 sounds is played comparatively longer. Babies also can perform a kind of addition and subtraction, anticipating the relative abundance of groups of dots that are being pushed together or pulled apart, and looking longer when the wrong number of dots appears.
Babies are born Euclideans. Infants and toddlers use geometric clues to orient themselves in three-dimensional space, navigate through rooms and locate hidden treasures. Is the room square or rectangular? Did the nice cardigan lady put the Slinky in a corner whose left wall is long or short?
At the same time, the Spelke lab discovered, young children are quite bad at using landmarks or décor to find their way. Not until age 5 or 6 do they begin augmenting search strategies with cues like “She hid my toy in a corner whose left wall is red rather than white.”
“That was a deep surprise to me,” Dr. Spelke said. “My intuition was, a little kid would never make the mistake of ignoring information like the color of a wall.” Nowadays, she continued, “I don’t place much faith in my intuitions, except as a starting place for designing experiments.”
These core mental modules — object representation, approximate number sense and geometric navigation — are all ancient systems shared at least in part with other animals; for example, rats also navigate through a maze by way of shape but not color. The modules amount to baby’s first crib sheet to the physical world.
“The job of the baby,” Dr. Spelke said, “is to learn.”
Role of Language
More recently, she and her colleagues have begun identifying some of the baseline settings of infant social intelligence. Katherine D. Kinzler, now of the University of Chicago, and Kristin Shutts, now at the University of Wisconsin, have found that infants just a few weeks old show a clear liking for people who use speech patterns the babies have already been exposed to, and that includes the regional accents, twangs, and R’s or lack thereof. A baby from Boston not only gazes longer at somebody speaking English than at somebody speaking French; the baby gazes longest at a person who sounds like Click and Clack of the radio show “Car Talk.”
In guiding early social leanings, accent trumps race. A white American baby would rather accept food from a black English-speaking adult than from a white Parisian, and a 5-year-old would rather befriend a child of another race who sounds like a local than one of the same race who has a foreign accent.
Other researchers in the Spelke lab are studying whether babies expect behavioral conformity among members of a group (hey, the blue character is supposed to be jumping like the rest of the blues, not sliding like the yellow characters); whether they expect other people to behave sensibly (if you’re going to reach for a toy, will you please do it efficiently rather than let your hand meander all over the place?); and how babies decide whether a novel object has “agency” (is this small, fuzzy blob active or inert?).
Dr. Spelke is also seeking to understand how the core domains of the human mind interact to yield our uniquely restless and creative intelligence — able to master calculus, probe the cosmos and play a Bach toccata as no bonobo or New Caledonian crow can. Even though “our core systems are fundamental yet limited,” as she put it, “we manage to get beyond them.”
Dr. Spelke has proposed that human language is the secret ingredient, the cognitive catalyst that allows our numeric, architectonic and social modules to join forces, swap ideas and take us to far horizons. “What’s special about language is its productive combinatorial power,” she said. “We can use it to combine anything with anything.”
She points out that children start integrating what they know about the shape of the environment, their navigational sense, with what they know about its landmarks — object recognition — at just the age when they begin to master spatial language and words like “left” and “right.” Yet, she acknowledges, her ideas about language as the central consolidator of human intelligence remain unproved and contentious.
Whatever their aim, the studies in her lab are difficult, each requiring scores of parentally volunteered participants. Babies don’t follow instructions and often “fuss out” mid-test, taking their data points with them.
Yet Dr. Spelke herself never fusses out or turns rote. She prowls the lab from a knee-high perspective, fretting the details of an experiment like Steve Jobs worrying over iPhone pixel density. “Is this car seat angled a little too far back?” she asked her students, poking the little velveteen chair every which way. “I’m concerned that a baby will have to strain too much to see the screen and decide it’s not worth the trouble.”
Should a student or colleague disagree with her, Dr. Spelke skips the defensive bristling, perhaps in part because she is serenely self-confident about her intellectual powers. “It was all easy for me,” she said of her early school years. “I don’t think I had to work hard until I got to college, or even graduate school.”
So, Radcliffe Phi Beta Kappa, ho hum. “My mother is absolutely brilliant, not just in science, but in everything,” said her daughter, Bridget, a medical student. “There’s a joke in my family that my mother and brother are the geniuses, and Dad and I are the grunts.” (“I hate this joke,” Dr. Spelke commented by e-mail, “and utterly reject this distinction!”)
Above all, Dr. Spelke relishes a good debate. “She welcomes people disagreeing with her,” said her husband, Elliott M. Blass, an emeritus professor of psychology at the University of Massachusetts. “She says it’s not about being right, it’s about getting it right.”
When Lawrence H. Summers, then president of Harvard, notoriously suggested in 2005 that the shortage of women in the physical sciences might be partly due to possible innate shortcomings in math, Dr. Spelke zestily entered the fray. She combed through results from her lab and elsewhere on basic number skills, seeking evidence of early differences between girls and boys. She found none.
“My position is that the null hypothesis is correct,” she said. “There is no cognitive difference and nothing to say about it.”
Dr. Spelke laid out her case in an acclaimed debate with her old friend Dr. Pinker, who defended the Summers camp.
“I have enormous respect for Steve, and I think he’s great,” Dr. Spelke said. “But when he argues that it makes sense that so many women are going into biology and medicine because those are the ‘helping’ professions, well, I remember when being a doctor was considered far too full of blood and gore for women and their uncontrollable emotions to handle.”
Raising Her Babies
For her part, Dr. Spelke has passionately combined science and motherhood. Her mother studied piano at Juilliard but gave it up when Elizabeth was born. “I felt terribly guilty about that,” Dr. Spelke said. “I never wanted my children to go through the same thing.”
When her children were young, Dr. Spelke often took them to the lab or held meetings at home. The whole family traveled together — France, Spain, Sweden, Egypt, Turkey — never reserving lodgings but finding accommodations as they could. (The best, Dr. Blass said, was a casbah in the Moroccan desert.)
Scaling the academic ranks, Dr. Spelke still found time to supplement her children’s public school education with a home-schooled version of the rigorous French curriculum. She baked their birthday cakes from scratch, staged elaborate treasure hunts and spent many days each year creating their Halloween costumes: Bridget as a cave girl or her favorite ballet bird; her younger brother, Joey, as a drawbridge.
“Growing up in my house was a constant adventure,” Bridget said. “As a new mother myself,” she added, “I don’t know how my mom did it.”
Is Dr. Spelke the master of every domain? It’s enough to make the average mother fuss out.
http://www.nytimes.com/2012/05/01/scien ... h_20120501
May 10, 2012, 9:00 pm
Can Physics and Philosophy Get Along?
By GARY GUTTING
Physicists have been giving philosophers a hard time lately. Stephen Hawking claimed in a speech last year that philosophy is “dead” because philosophers haven’t kept up with science. More recently, Lawrence Krauss, in his book, “A Universe From Nothing: Why There Is Something Rather Than Nothing,” has insisted that “philosophy and theology are incapable of addressing by themselves the truly fundamental questions that perplex us about our existence.” David Albert, a distinguished philosopher of science, dismissively reviewed Krauss’s book: “all there is to say about this [Krauss’s claim that the universe may have come from nothing], as far as I can see, is that Krauss is dead wrong and his religious and philosophical critics are absolutely right.” Krauss — ignoring Albert’s Ph.D. in theoretical physics — retorted in an interview that Albert is a “moronic philosopher.” (Krauss somewhat moderates his views in a recent Scientific American article.)
I’d like to see if I can raise the level of the discussion a bit. Despite some nasty asides, Krauss doesn’t deny that philosophers may have something to contribute to our understanding of “fundamental questions” (his “by themselves” in the above quotation is a typical qualification). And almost all philosophers of science — certainly Albert — would agree that an intimate knowledge of science is essential for their discipline. So it should be possible to at least start a line of thought that incorporates both the physicist’s and the philosopher’s sensibilities.
There is a long tradition of philosophers’ arguing for the existence of God on the grounds that the material (physical) universe as a whole requires an immaterial explanation. Otherwise, they maintain, the universe would have to originate from nothing, and it’s not possible that something come from nothing. (One response to the argument is that the universe may have always existed and so never came into being, but the Big Bang, well established by contemporary cosmology, is often said to exclude this possibility.)
Krauss is totally unimpressed by this line of argument, since, he says, its force depends on the meaning of “nothing” and, in the context of cosmology, this meaning depends on what sense science can make of the term. For example, one plausible scientific meaning for “nothing” is “empty space”: space with no elementary particles in it. But quantum mechanics shows that particles can emerge from empty space, and so seems to show that the universe (that is, all elementary particles and so the things they make up) could come from nothing.
But, Krauss admits, particles can emerge from empty space because empty space, despite its name, does contain virtual fields that fluctuate and can give empty space properties even in the absence of particles. These fields are governed by laws allowing for the “spontaneous” production of particles. Virtual fields, the philosopher will urge, are the “something” from which the particles come. All right, says Krauss, but there is the further possibility that the long-sought quantum theory of gravity, uniting quantum mechanics and general relativity, will allow for the spontaneous production of empty space itself, simply in virtue of the theory’s laws. Then we would have everything — space, fields and particles — coming from nothing.
But, the philosopher says, what about the laws of physics? They are something, not nothing — and where do they come from? Well, says Krauss — trying to be patient — there’s another promising theoretical approach that plausibly posits a “multiverse”: a possibly infinite collection of self-contained, non-interacting universes, each with its own laws of nature. In fact, it might well be that the multiverse contains universes with every possible set of laws. We have the laws we do simply because of the particular universe we’re in. But, of course, the philosopher can respond that the multiverse itself is governed by higher-level laws.
At every turn, the philosopher concludes, there are laws of nature, and the laws always apply to some physical “stuff” (particles, fields, whatever) that is governed by the laws. In no case, then, does something really come from nothing.
It seems to me, however, that this is a case of the philosopher’s winning the battle but losing the war. There is an absolute use of “nothing” that excludes literally everything that exists. In one sense, Krauss is just obstinately ignoring this use. But if Krauss knew more philosophy, he could readily cite many philosophers who find this absolute use — and the corresponding principle that something cannot come from nothing — unintelligible. For an excellent survey of arguments along this line, see Roy Sorensen’s Stanford Encyclopedia article, “Nothingness.”
But even if the question survives the many philosophical critiques of its intelligibility, there have been strong objections to applying “something cannot come from nothing” to the universe as a whole. David Hume, for example, argued that it is only from experience that we know that individual things don’t just spring into existence (there is no logical contradiction in their doing so). Since we have no experience of the universe coming into existence, we have no reason to say that if it has come to be, it must have a cause. Hume and his followers would be entirely happy with leaving the question of a cause of the universe up to empirical science.
While Krauss could appeal to philosophy to strengthen his case against “something cannot come from nothing,” he opens himself to philosophical criticism by simply assuming that scientific experiment is, as he puts it, the “ultimate arbiter of truth” about the world. The success of science gives us every reason to continue to pursue its experimental method in search of further truths. But science itself is incapable of establishing that all truths about the world are discoverable by its methods.
Precisely because science deals only with what can be known, directly or indirectly, by sense experience, it cannot answer the question of whether there is anything — for example, consciousness, morality, beauty or God — that is not entirely knowable by sense experience. To show that there is nothing beyond sense experience, we would need philosophical arguments, not scientific experiments.
Krauss may well be right that philosophers should leave questions about the nature of the world to scientists. But, without philosophy, his claim can only be a matter of faith, not knowledge.
http://opinionator.blogs.nytimes.com/20 ... y_20120511
May 13, 2012, 5:00 pm
Logic and Neutrality
By TIMOTHY WILLIAMSON
Here is an idea many philosophers and logicians have about the function of logic in our cognitive life, our inquiries and debates. It isn’t a player. Rather, it’s an umpire, a neutral arbitrator between opposing theories, imposing some basic rules on all sides in a dispute. The picture is that logic has no substantive content, for otherwise the correctness of that content could itself be debated, which would impugn the neutrality of logic. One way to develop this idea is by saying that logic supplies no information of its own, because the point of information is to rule out possibilities, whereas logic only rules out inconsistencies, which are not genuine possibilities. On this view, logic in itself is totally uninformative, although it may help us extract and handle non-logical information from other sources.
The idea that logic is uninformative strikes me as deeply mistaken, and I’m going to explain why. But it may not seem crazy when one looks at elementary examples of the cognitive value of logic, such as when we extend our knowledge by deducing logical consequences of what we already know. If you know that either Mary or Mark did the murder (only they had access to the crime scene at the right time), and then Mary produces a rock-solid alibi, so you know she didn’t do it, you can deduce that Mark did it. Logic also helps us recognize our mistakes, when our beliefs turn out to contain inconsistencies. If I believe that no politicians are honest, and that John is a politician, and that he is honest, at least one of those three beliefs must be false, although logic doesn’t tell me which one.
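Both of those everyday inferences can be checked mechanically. Here is a small brute-force truth-table sketch in Python (my illustration, not Williamson’s; the helper names and the propositional encoding, restricted to John, are invented for the example). It confirms that “Mary or Mark did it” together with “Mary didn’t” entails “Mark did,” and that the three beliefs about John cannot all be true at once.

from itertools import product

def entails(premises, conclusion, variables):
    # Classical entailment: no assignment makes every premise true
    # while making the conclusion false.
    for values in product([False, True], repeat=len(variables)):
        v = dict(zip(variables, values))
        if all(p(v) for p in premises) and not conclusion(v):
            return False
    return True

def consistent(beliefs, variables):
    # A set of beliefs is consistent iff some assignment satisfies them all.
    return any(
        all(b(dict(zip(variables, values))) for b in beliefs)
        for values in product([False, True], repeat=len(variables))
    )

# "Mary or Mark did it" and "Mary didn't do it" entail "Mark did it".
print(entails(
    [lambda v: v["mary"] or v["mark"], lambda v: not v["mary"]],
    lambda v: v["mark"],
    ["mary", "mark"],
))  # True

# Restricted to John: "if John is a politician, he is not honest",
# "John is a politician", "John is honest" -- jointly inconsistent.
print(consistent(
    [lambda v: (not v["politician"]) or (not v["honest"]),
     lambda v: v["politician"],
     lambda v: v["honest"]],
    ["politician", "honest"],
))  # False

Enumerating assignments in this way is also the sense in which a valid argument “rules out possibilities”: the conclusion follows when no assignment left open by the premises makes it false.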
The power of logic becomes increasingly clear when we chain together such elementary steps into longer and longer chains of reasoning, and the idea of logic as uninformative becomes correspondingly less and less plausible. Mathematics provides the most striking examples, since all its theorems are ultimately derived from a few simple axioms by chains of logical reasoning, some of them hundreds of pages long, even though mathematicians usually don’t bother to analyze their proofs into the most elementary steps.
For instance, Fermat’s Last Theorem was finally proved by Andrew Wiles and others after it had tortured mathematicians as an unsolved problem for more than three centuries. Exactly which mathematical axioms are indispensable for the proof is only gradually becoming clear, but for present purposes what matters is that together the accepted axioms suffice. One thing the proof showed is that it is a truth of pure logic that those axioms imply Fermat’s Last Theorem. If logic is uninformative, shouldn’t it be uninformative to be told that the accepted axioms of mathematics imply Fermat’s Last Theorem? But it wasn’t uninformative; it was one of the most exciting discoveries in decades. If the idea of information as ruling out possibilities can’t handle the informativeness of logic, that is a problem for that idea of information, not for the informativeness of logic.
The conception of logic as a neutral umpire of debate also fails to withstand scrutiny, for similar reasons. Principles of logic can themselves be debated, and often are, just like principles of any other science. For example, one principle of standard logic is the law of excluded middle, which says that something either is the case, or it isn’t. Either it’s raining, or it’s not. Many philosophers and others have rejected the law of excluded middle, on various grounds. Some think it fails in borderline cases, for instance when very few drops of rain are falling, and avoid it by adopting fuzzy logic. Others think the law fails when applied to future contingencies, such as whether you will be in the same job this time next year. On the other side, many philosophers — including me — argue that the law withstands these challenges. Whichever side is right, logical theories are players in these debates, not neutral umpires.
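To make the borderline-case worry concrete, here is a short sketch (mine, not the article’s) contrasting classical two-valued semantics with one common fuzzy semantics, in which “or” takes the maximum of the two truth values and “not p” takes 1 minus the value of p; the 0.5 value for a drizzle is an invented example.

# Classically, "P or not P" comes out true on every assignment.
print(all(p or (not p) for p in [False, True]))  # True

# Under a simple fuzzy semantics (max for "or", 1 - v for "not"),
# a borderline statement with value 0.5 gives excluded middle only 0.5.
raining = 0.5  # a few drops are falling
print(max(raining, 1 - raining))  # 0.5, not 1.0

This is the sense in which fuzzy logicians say the law fails at the borderline; defenders of standard logic reject the degree-valued semantics rather than the law.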
Another debate in which logical theories are players concerns the ban on contradictions. Most logicians accept the ban but some, known as dialetheists, reject it. They treat some paradoxes as black holes in logical space, where even contradictions are true (and false).
A different dispute in logic concerns “quantum logic.” Standard logic includes the “distributive” law, by which a statement of the form “X and either Y or Z” is equivalent to the corresponding statement of the form “Either X and Y or X and Z.” On one highly controversial view of the phenomenon of complementarity in quantum mechanics, it involves counterexamples to the distributive law: for example, since we can’t simultaneously observe both which way a particle is moving and where it is, the particle may be moving left and either in a given region or not, without either moving left and being in that region or moving left and not being in that region. Although that idea has not solved the puzzles of quantum mechanics as its advocates originally hoped, it is yet another case where logical theories were players, not neutral umpires.
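The classical half of that claim is easy to verify by brute force; the quantum-logical challenge cannot be captured this way, since it replaces two-valued assignments with the lattice of closed subspaces of a Hilbert space. The snippet below (again my own sketch, not from the article) simply confirms that the two forms agree on every classical assignment.

from itertools import product

def lhs(x, y, z):
    # "X and either Y or Z"
    return x and (y or z)

def rhs(x, y, z):
    # "Either X and Y or X and Z"
    return (x and y) or (x and z)

# The distributive law holds on all eight classical assignments.
print(all(lhs(x, y, z) == rhs(x, y, z)
          for x, y, z in product([False, True], repeat=3)))  # True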
As it happens, I think that standard logic can resist all these challenges. The point is that each of them has been seriously proposed by (a minority of) expert logicians, and rationally debated. Although attempts were made to reinterpret the debates as misunderstandings in which the two sides spoke different languages, those attempts underestimated the capacity of our language to function as a forum for debate in which profound theoretical disagreements can be expressed. Logic is just not a controversy-free zone. If we restricted it to uncontroversial principles, nothing would be left. As in the rest of science, no principle is above challenge. That does not imply that nothing is known. The fact that you know something does not mean that nobody else is allowed to challenge it.
Of course, we’d be in trouble if we could never agree on anything in logic. Fortunately, we can secure enough agreement in logic for most purposes, but nothing in the nature of logic guarantees those agreements. Perhaps the methodological privilege of logic is not that its principles are so weak, but that they are so strong. They are formulated at such a high level of generality that, typically, if they crash, they crash so badly that we easily notice, because the counterexamples to them are simple. If we want to identify what is genuinely distinctive of logic, we should stop overlooking its close similarities to the rest of science.
Timothy Williamson is the Wykeham Professor of Logic at Oxford University, a Fellow of the British Academy and a Foreign Honorary Member of the American Academy of Arts and Sciences. He has been a visiting professor at M.I.T. and Princeton. His books include “Knowledge and its Limits” (2000) and “The Philosophy of Philosophy” (2007) and, most recently, “Modal Logic as Metaphysics,” which will be published next year.
http://opinionator.blogs.nytimes.com/20 ... y_20120514
Truth, Reality and Religion: On the use of Knowledge and Intellect in Deen and Dunia,
by Mohib Ebrahim
http://ismailimail.wordpress.com/2012/0 ... ilimail%29
Reality Is Flat. (Or Is It?)
By RICHARD POLT
Adopting the reductionism that equates humans with other animals or computers has a serious downside: it wipes out the meaning of your own life.
In a recent essay for The Stone, I claimed that humans are “something more than other animals, and essentially more than any computer.” Some readers found the claim importantly or trivially true; others found it partially or totally false; still others reacted as if I’d said that we’re not animals at all, or that there are no resemblances between our brains and computers. Some pointed out, rightly, that plenty of people do fine research in biology or computer science without reducing the human to the subhuman.
But reductionism is also afoot, often not within science itself but in the way scientific findings get interpreted. John Gray writes in his 2002 British best seller, “Straw Dogs,” “Humans think they are free, conscious beings, when in truth they are deluded animals.” The neurologist-philosopher Raymond Tallis lambastes such notions in his 2011 book, “Aping Mankind,” where he cites many more examples of reductionism from all corners of contemporary culture.
Now, what do I mean by reductionism, and what’s wrong with it? Every thinking person tries to reduce some things to others; if you attribute your cousin’s political outburst to his indigestion, you’ve reduced the rant to the reflux. But the reductionism that’s at stake here is a much broader habit of thinking that tries to flatten reality down and allow only certain kinds of explanations. Here I’ll provide a little historical perspective on this kind of thinking and explain why adopting it is a bad bargain: it wipes out the meaning of your own life.
~~~~~
Over 2,300 years ago, Aristotle argued in his “Physics” that we should try to explain natural phenomena in four different but compatible ways, traditionally known as the four causes. We can identify a moving cause, or what initiates a change: the impact of a cue stick on a billiard ball is the moving cause of the ball’s motion. We can account for some properties of things in terms of what they’re made of (material cause), as when we explain why a balloon is stretchy by pointing out that it’s made of rubber. We can understand the nature or kind of a phenomenon (formal cause), as when we define a cumulus cloud. And we can understand a thing’s function or end (final cause), as when we say that eyes are for seeing.
You’ll notice that the first two kinds of cause sound more modern than the others. Since Galileo, we have increasingly been living in a post-Aristotelian world where talk of “natures” and “ends” strikes us as unscientific jargon — although it hasn’t disappeared altogether. Aristotle thought that final causality applied to all natural things, but many of his final-cause explanations now seem naïve — say, the idea that heavy things fall because their natural end is to reach the earth. Final cause plays no part in our physics. In biology and medicine, though, it’s still at least convenient to use final-cause language and say, for instance, that a function of the liver is to aid in digestion. As for formal cause, every science works with some notion of what kind of thing it studies — such as what an organism is, what an economy is, or what language is.
But do things really come in a profusion of different kinds? For example, are living things irreducibly different from nonliving things? Reductionists would answer that a horse isn’t ultimately different in kind from a chunk of granite; the horse is just a more complicated effect of the moving and material causes that physics investigates. This view flattens life down to more general facts about patterns of matter and energy.
Likewise, reductionists will say that human beings aren’t irreducibly different from horses: politics, music, money and romance are just complex effects of biological phenomena, and these are just effects of the phenomena we observe in nonliving things. Humans get flattened down along with the rest of nature.
Reductionism, then, tries to limit reality to as few kinds as possible. For reductionists, as things combine into more complicated structures, they don’t turn into something that they really weren’t before, or reach any qualitatively new level. In this sense, reality is flat.
Notice that in this world view, since modern physics doesn’t use final causes and physics is the master science, ends or purposes play no role in reality, although talk of such things may be a convenient figure of speech. The questions “How did we get here?” and “What are we made of?” make sense for a reductionist, but questions such as “What is human nature?” and “How should we live?” — if they have any meaning at all — have to be reframed as questions about moving or material physical causes.
Now let’s consider a nonreductionist alternative: there are a great many different kinds of beings, with different natures. Reality is messy and diverse, with lumps and gaps, peaks and valleys.
But what would account for these differences in kind? The traditional Western answer is that there is a highest being who is responsible for giving created beings their natures and their very existence.
Today this traditional answer doesn’t seem as convincing as it once did. As Nietzsche complains in “Twilight of the Idols” (1889), in the traditional view “the higher is not allowed to develop from the lower, is not allowed to have developed at all.” But Darwin has helped us see that new species can develop from simpler ones. Nietzsche abandoned not just traditional creationism but God as well; others find evolution compatible with monotheism. The point for our present purposes is that Nietzsche is opposing not only the view that things require a top-down act of creation, but also reductionists who flatten everything down to the same level; he suggests that reality has peaks and valleys, and the higher emerges from the lower. Some call such a view emergentism.
An emergentist account of reality could go something like this. Over billions of years, increasingly complex beings have evolved from simpler ones. But there isn’t just greater complexity — new kinds of beings emerge, living beings, and new capacities: feeling pleasure and pain, instead of just interacting chemically and physically with other things; becoming aware of other things and oneself; and eventually, human love, freedom and reason. Reality isn’t flat.
Higher beings continue to have lower dimensions. People are still animals, and animals are still physical things — throw me out a window and I’ll follow the law of gravity, with deleterious consequences for my freedom and reason. So we can certainly study ourselves as biological, chemical and physical beings. We can correctly reconstruct the moving causes that brought us about, and analyze our material causes.
However, these findings aren’t enough for a full understanding of what humans are. We must also understand our formal cause — what’s distinctive about us in our evolved state. Thanks to the process of emergence, we have become something more than other animals.
That doesn’t mean we’re all morally excellent (we can become heroic or vile); it doesn’t give us the right to abuse and exterminate other species; and it doesn’t mean humans can do everything better (a cheetah will outrun me and a bloodhound will outsniff me every time). But we’ve developed a wealth of irreducibly human abilities, desires, responsibilities, predicaments, insights and questions that other species, as far as we can tell, approximate only vaguely.
In particular, recognizing our connections to other animals isn’t enough for us to understand ethics and politics. As incomplete, open-ended, partially self-determining animals, we must deliberate on how to live, acting in the light of what we understand about human virtue and vice, human justice and injustice. We will often go astray in our choices, but the realm of human freedom and purposes is irreducible and real: we really do envision and debate possibilities, we really do take decisions, and we really do reach better or worse goals.
As for our computing devices, who knows? Maybe we’ll find a way to jump-start them into reaching a higher level, so that they become conscious actors instead of the blind, indifferent electron pushers they’ve been so far — although, like anyone who’s seen a few science fiction movies, I’m not sure this project is particularly wise.
So is something like this emergentist view right, or is reductionism the way to go?
One thing is clear: a totally flattened-out explanation of reality far exceeds our current scientific ability. Our knowledge of general physics has to be enriched with new concepts when we study complex systems such as a muddy stream or a viral infection, not to mention human phenomena such as the Arab Spring, Twitter or “Glee.” We have to develop new ideas when we look at new kinds of reality.
In principle, though, is reductionism ultimately true? Serious thinkers have given serious arguments on both sides of this metaphysical question. For great philosophers with a reductionist cast of mind, read Spinoza or Hobbes. For brilliant emergentists, read John Dewey or Maurice Merleau-Ponty. Such issues can’t be settled in a single essay.
But make no mistake, reductionism comes at a very steep price: it asks you to hammer your own life flat. If you believe that love, freedom, reason and human purpose have no distinctive nature of their own, you’ll have to regard many of your own pursuits as phantasms and view yourself as a “deluded animal.”
Everything you feel that you’re choosing because you affirm it as good — your career, your marriage, reading The New York Times today, or even espousing reductionism — you’ll have to regard intellectually as just an effect of moving and material causes. You’ll have to abandon trust in your own experience for the sake of trust in the metaphysical principle of reductionism.
That’s what I’d call a bad bargain.
http://opinionator.blogs.nytimes.com/20 ... y_20120817
--------------------------------------------------------------------------------
Richard Polt is a professor of philosophy at Xavier University in Cincinnati. His books include “Heidegger: An Introduction.”
"Towards an Integral Psychology of Islam from Al-Fatiha,
The Opening, to the Gardens of Paradise,"
A doctoral dissertation presentation by Jalaledin Ebrahim
Video at:
http://jalaledin.blogspot.com/
Stone Links: Consider the Octopus
By MARK DE SILVA
Scientific American discusses the claim of a group of prominent researchers on consciousness recently convening at Cambridge: a neocortex is not a precondition of conscious experience. In fact, these researchers believe there are good reasons to suppose that the neural substrates of experiential states, and of emotive states as well, are present in creatures with brains structured very differently from our own. Most interestingly, even some invertebrates—specifically, octopuses—appear to show signs of being conscious. “That does not necessarily mean that you could have a distraught octopus or an elated cuttlefish on your hands,” Katherine Harmon writes. But it does mean we need to think of consciousness as being spread across a wide range of species, and being realizable by a number of different neurological structures.
http://opinionator.blogs.nytimes.com/20 ... y_20120829
Related link....
http://blogs.scientificamerican.com/oct ... claration/
On Thursday, March 14, Khalil Andani (Master’s Candidate, Harvard University) delivered a presentation on the concept of Knowledge (‘ilm) according to Sayyidnā Nāṣir-i Khusraw. This presentation took place during the 17th annual NMCGSA Graduate Symposium held at the University of Toronto.
Khalil’s presentation explores the ideas of knowledge (‘ilm), intellect (‘aql), perception (andar yāftan; idrāk), recognition (ma‘rifah) and inspiration (ta’yīd) in the philosophy of Sayyidnā Nāṣir-i Khusraw and discusses two levels of knowledge – direct intellectual perception (andar yāftan) and conceptual knowledge (taṣawwur). His presentation consists of the following sections:
a) Contextualizing Nasir-i Khusraw
b) The Objects of Knowledge
c) Knowledge as Intellectual Perception
d) Knowledge as Conception
e) Ma‘rifah (Recognition)
f) From Potential Intellect to Actual Intellect
g) Conclusion
Listen to the presentation and view the PowerPoint slideshow video (best viewed in 720p quality):
http://youtu.be/HK-HWEazkzw
via Video: The Concept of Knowledge (‘ilm) in Nasir-i Khusraw’s Philosophy | Ismā‘īlī Gnosis.
Knowledge is soon changed, then lost in the mist, an echo half-heard.
- Gene Wolfe
I remind myself every morning:
Nothing I say this day will teach me anything.
So if I'm going to learn, I must do it by listening.
- Larry King
Don't limit a child to your own learning,
for he was born in another time.
- Rabindranath Tagore
Develop a passion for learning.
If you do, you will never cease to grow.
- Anthony J. D'Angelo
We now accept the fact that learning is a lifelong
process of keeping abreast of change.
And the most pressing task is to teach people how to learn.
- Peter Drucker
It's what you learn after you know it all that counts.
- John Wooden
Reclaiming the Power of Play
Play is the highest form of human activity. At least that’s what Friedrich Nietzsche suggested in “Thus Spoke Zarathustra,” when he described a three-step development of the human spirit. First, the human psyche has the form of a camel because it takes on the heavy burden of cultural duties — ethical obligations, social rank, and the weight of tradition. Next, the camel transforms into a lion, which represents the rebellion of the psyche — the “holy nay” that frees a rule-governed person from slavish obedience to authority. Finally, this negative insurgent phase evolves into the highest level of humanity, symbolized as the playing child — innocent and creative, the “holy yea.” Cue the Richard Strauss music.
As usual with Nietzsche, we can debate the precise meaning of this cryptic simile (e.g., is the child supposed to be the nihilism-defeating Übermensch?), but it’s clear at least that Nietzsche considered play vitally important for humanity. Apart from such a rare paean, however, philosophy has had little interest in play, and where it does take interest it is usually dismissive. For many hard-nosed intellectuals, play stands as a symbol of disorder. Plato’s reproach in “The Republic” of artists as merely playing in the realm of illusion famously set the trend, as did Aristotle’s claim that play (paidia) is simply rest or downtime for the otherwise industrious soul. He calls it a “relaxation of the soul” and dismisses it from the “proper occupation of leisure.”
Leisure, for Aristotle, is serious business. We get our word “scholar” from the Greek word for leisure, skole. It should not be squandered on play, in Aristotle’s view, because play is beneficial only as a break or siesta in our otherwise highbrow endeavors.
The Roman poet Juvenal (circa A.D. 100) used the expression “bread and circuses” to describe the decline of Roman civic duty, in favor of mere amusement. The selfish common people, he scolded, are now happy with diversion and distraction. They care not for the wider Roman destiny because play has distracted them from social consciousness.
To be fair, philosophy has not been completely devoid of proponents of play. Bertrand Russell, in his 1932 essay, “In Praise of Idleness,” offered a positive view of “idleness” and leisure, lamenting “the modern man thinks that everything ought to be done for the sake of something else, and never for its own sake.” He also argued that “the road to happiness and prosperity lies in an organized diminution of work.” If we reduced our workday to four hours, he suggested, we would have the leisure time to think and reflect on every topic, especially the social injustices around us and the manipulations of the state.
So, is play a cultural “cheesecake” that emerged from Homo sapiens’ big-brained adaptations, like language and imagination? Or is it a common feature of animal life? We now know, from animal ethology and affective neuroscience, that play is widely distributed in the mammal class. Juvenile play in mammals is an important means of social engagement that helps animals become familiar with bodies, learn dominance and submission relations, form alliance friendships and experience something that looks a lot like joy. The neuroscientist and “rat tickler” Jaak Panksepp is famous for detailing how rats play, and amazingly how they even “laugh” (with 50 kilohertz ultrasonic chirps). Play is underwritten by an innate brain system, where rough-and-tumble play is motivated and anticipated by spikes in dopamine, while the play itself seems to release pleasurable opioids and oxytocin.
Animal scientists suggest that play evolved as an adaptation for social bonding. Peter Farb’s “Man’s Rise to Civilization” puts forth the theory that play probably increased substantially for early humans when childhoods became safer — as fathers and mothers hit upon a new division of labor through pair-bonding. Safer, stable family structures during the Pleistocene created greater leisure for our big brains to fill with learning, creating and playing. Even more recent hunter-gatherer tribes, like the Shoshone of the Great Basin, enjoyed surprising leisure time because their subsistence labor was carried out so efficiently. (The subject of animal play is nicely summarized in Gordon Burghardt’s 2014 survey in the journal Animal Behavior and Cognition.)
All this suggests that play is also a crucial part of the full life of the human animal, and yet philosophers have said very little about it. Usually, if we see an appreciation of play, it’s an attempt to show its secret utility value — “See, it’s pragmatic after all!” See how playing music makes you smarter at other, more valued forms of thinking, like math, logic or even business strategy? See how play is adaptive for social evolution? All this is true of course, but one also wonders about the uniquely human meaning of play and leisure. Can we consider play and leisure as something with inherent value, independent of their accidental usefulness?
A philosophical thought experiment might help us here. The storied tradition of unrealistic scenarios — like Plato’s “ring of Gyges,” John Rawls’s “original position,” and Harold Ramis’s movie “Groundhog Day” — often help us isolate our hidden values and commitments. So, consider for a moment what life would be like if we did not need to work. Don’t merely imagine retirement, but rather a world after labor itself. Imagine that the “second machine age” (currently underway) brings us a robotic, A.I. utopia, where humans no longer need to work to survive. Buckminster Fuller imagined such a techno-future, saying “the true business of people should be to go back to school and think about whatever it was they were thinking about before somebody came along and told them they had to earn a living.”
The point of this thought experiment is to isolate Aristotle’s question for ourselves. What is the proper occupation of leisure? The very phrase “occupation of leisure” demonstrates the trouble. What is the use of the useless? What would, and what should, we do with our free time? After the world of work, will we have the time, energy and ambition to do philosophy, make art, study history, master languages and make craft beers? Will we play creatively as “holy yea-sayers,” or will we just watch more TV?
In consideration of this, I want to suggest that we divide play into two major categories: active and passive. The passive forms — let’s call them amusements — are indeed suspicious, as they seem to anesthetize the agent and reduce creative engagement. From our “bread and circuses” television culture to Aldous Huxley’s soma culture in “Brave New World,” the passive forms of leisure are cheap pleasures that come at no effort, skill or struggle. On the other hand, active play — everything from sport to music to chess, and even some video games — energizes the agent and costs practice, skill, effort and calories. Even the exploration of conscious inner-space, through artificial or natural means, can be very active. The true cultures of meditation, for example, evidence the rigors of inner-space play.
Philosophy should come out to play. At the very least, we need an epistemology of play (that investigates how play produces knowledge) and an ethics of play (investigating the normative issues of play). We might start with Aristotle, despite his suspicion of it. He saw music, for instance, as a pastime that befitted free and noble people. No apologies or justifications are needed for music, he concluded in his “Politics,” because “to seek for utility everywhere is entirely unsuited to people that are great-souled and free.” I am suggesting that the same charitable logic of civilization be applied to many other forms of play as well.
The stakes for play are higher than we think. Play is a way of being that resists the instrumental, expedient mode of existence. In play, we do not measure ourselves in terms of tangible productivity (extrinsic value), but instead, our physical and mental lives have intrinsic value of their own. It provides the source from which other extrinsic goods flow and eventually return.
When we see an activity like music as merely a “key to success,” we shortchange it and ourselves. Playing a musical instrument is both the pursuit of fulfillment and the very thing itself (the actualizing of potential). Playing, or even listening, in this case, is a kind of unique, embodied contemplation that can feed both the mind and the body.
When we truly engage in such “impractical” leisure activities — with our physical and mental selves — we do so for the pleasure they bring us and others, for the inherent good that arises from that engagement, and nothing else. That’s the “holy yea.”
http://opinionator.blogs.nytimes.com/20 ... d=45305309