TECHNOLOGY AND DEVELOPMENT
Follow the light to wireless haven
LED project may change the way we use energy
By David Wylie,
Canwest News Service
August 21, 2009
Imagine a world where refrigerator lights beam information to your cellphone telling you what to buy during your next trip to the grocery store.
That's one of the ideas researchers at the University of California's Riverside campus are investigating during a five-year, multimillion-dollar project into using energy-sipping, cheap LED lights to create wireless connections, even as they illuminate rooms.
However, inventions like clever fridge lights barely scratch the surface of the kinds of changes that breakthroughs in the field could bring.
Researchers say the advent of wireless communications through lighting could change the way we build homes and offices in the same dramatic way elevators allowed architects to design skyscrapers.
Srikanth Krishnamurthy, a computer science and engineering professor at the university and one of the project's team members, said buildings may become more open-concept, with space between the walls and ceiling for light to creep through. That way, a wireless connection could be maintained from room to room.
"The thing today is we don't care how lights are placed," he said. "The lighting has to be revisited because now these devices, these LEDs will have to communicate with each other. . . . The planning itself is going to be somewhat different.
"You cannot just put lights where you want. You have to place them in a way so that wherever you go, you can use your computer or your phone."
He said the project, dubbed UCLight, is all about being green.
"The whole idea here is to decrease energy costs," he said. "These LED lighting systems are very low power."
Krishnamurthy said lighting is already essential, so using it as a pathway for devices to communicate -- rather than conventional Wi-Fi -- is a logical way to save energy.
Krishnamurthy said researchers have already found that by slightly increasing the intensity of LEDs, light can be used to establish wireless communications. The increase in visible light is not noticeable to the eye, he added.
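What Krishnamurthy describes is a form of visible light communication: data bits ride on small, rapid changes in LED intensity that a photodiode can pick up but the eye averages out. Below is a minimal sketch of that idea using simple on-off keying; the brightness values and threshold are illustrative assumptions, not UCLight specifications.

    # Minimal sketch of LED intensity modulation (on-off keying).
    # The LED always stays lit; a bit '1' nudges brightness up by a
    # small delta a photodiode can detect but the eye cannot.

    BASE = 0.80      # nominal brightness (0..1), illustrative value
    DELTA = 0.05     # imperceptible increase used to signal a '1'

    def modulate(bits):
        """Map a bit string to a sequence of brightness levels."""
        return [BASE + DELTA if b == "1" else BASE for b in bits]

    def demodulate(levels):
        """Recover bits by thresholding the received brightness."""
        threshold = BASE + DELTA / 2
        return "".join("1" if level > threshold else "0" for level in levels)

    message = "1011001"
    received = demodulate(modulate(message))
    assert received == message
    print("decoded:", received)

In a real system the modulation would run at kilohertz rates or faster, which is why the flicker is invisible even though the data keeps flowing.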
"In the next five years we will really know how good this technology is," he said.
UCR electrical engineering Prof. Zhengyuan Xu, who is the director of the new research centre investigating LEDs, said the ability to build extremely cheap communication and navigation systems using the current lighting infrastructure would be a boon for places such as airports and hospitals, where conventional Wi-Fi can interfere with sensitive equipment.
© Copyright (c) The Calgary Herald
http://www.calgaryherald.com/story_prin ... 0&sponsor=
Scientist hatches plan for 'chickensaurus'
Chicken embryo reverse engineered
By Ken Meaney, Canwest News Service, August 26, 2009
It's no Jurassic Park, but research by a Montreal paleontologist could lead to the development of chicken embryos with dinosaur-like features within five years.
Hans Larsson of McGill University is working to produce chicken embryos with features the dinosaur descendants share with their gigantic ancestors of millions of years ago -- a longer tail, teeth and clawed fingers, for instance.
The goal is to understand and illustrate evolutionary mechanisms in birds--and by extension dinosaurs, humans and all other animal life on the planet.
"We're not going to hatch a T. rex or something," Larsson said, chuckling. "It's purely staying on anatomy and using the existing genome. I mean even if in a crazy world one of these were to get out and 'rampage' in the city -- a chicken-sized animal that had similar kinds of anatomy, if they bred they would produce an ordinary chicken."
The idea came up in a conversation with a colleague, Jack Horner, a renowned paleontologist and technical adviser on the Jurassic Park movies.
Birds, being direct descendants of dinosaurs, have some developmental mechanisms of dinosaurs within their embryology, Larsson said.
Horner suggested he "turn the (developmental) dial backwards a bit" to demonstrate how evolution works -- what the mechanisms of the dinosaur-to-bird transition are, Larsson said.
What Larsson is doing is looking for what he calls "switchpoints" in the chicken embryo's development -- periods when teeth or claws, for instance, appear and then fade away as it grows. By manipulating the chicken's development at those points, he figures to give it characteristics of its long-dead relatives.
Funding for the research comes from the Natural Sciences and Engineering Research Council of Canada.
Larsson doesn't foresee major ethical issues in using the technique to look at embryonic development. He said producing a hatchling would be different.
"For sure there are (issues) once it hatches and is viable on its own. With the embryo, there are ethical issues there, as well. But the kinds of manipulations we are doing are very standard across the board of experimental embryology. Anything from sea urchins to mice are being experimented on in very similar ways," he said.
But he said there would be no scientific reason to produce an animal that would essentially be a chickensaurus.
© Copyright (c) The Calgary Herald
http://www.calgaryherald.com/story_prin ... 7&sponsor=
August 28, 2009
Editorial Notebook
Time to Be Afraid of the Web?
By EDUARDO PORTER
Internet users used to comfort themselves by thinking that to become victims of the pirates of the Web, they had to frequent the online porn circuit or respond to an e-mail from the widowed wife of the former central bank governor of Nigeria. The idea was that one had to do something naughty to get caught in the wrongdoers’ net, or at least go for a late-night stroll in the rough end of town.
But the conceit has become untenable. Two years ago, engineers at Google reported that about 10 percent of millions of Web pages they analyzed engaged in “drive-by downloads” of malware. Google today has about 330,000 Web sites listed as malicious, up from about 150,000 a year ago.
Earlier this month, the Justice Department charged a 28-year-old from Miami and a couple of Russians with stealing 130 million credit card numbers from one of the largest payment processing companies in the world, which should know how to protect its computers from hackers. And last week, McAfee, the maker of antivirus software, reported that fans searching for Hollywood gossip and memorabilia faced a high risk of getting caught up by online bad guys.
Searching for the actress Jessica Biel, who won an achievement award at the Newport Beach Film Festival in 2006 and ranked in third place on Maxim magazine’s Hot 100 list last year, is most dangerous, with a 1 in 5 chance of landing at a Web site that tested positive for spyware, adware, spam, phishing, viruses or other noxious stuff. Searches of Beyoncé, Britney Spears and even Tom Brady of the New England Patriots are risky, too, according to McAfee. More than 40 percent of Google search results for “Jennifer Aniston screen savers” contained viruses, including one called FunLove.
Perhaps cybercops will respond more aggressively to Internet threats as they spread to the more wholesome parts of the Web, like police forces that leave crime alone in the poor parts of town but snap into action when it seeps into middle-class neighborhoods. McAfee, to no one’s surprise, suggests that we buy McAfee software.
But with more and more information about people’s credit cards, browsing histories and identities sloshing around online, I wonder whether this will do. A few months ago, I nervously created my first Facebook page with the minimum necessary information to view pictures posted by old friends.
I returned to the page a few days later to discover that somehow it had found out both the name of my college and my graduation class, displaying them under my name. I have not returned since. In the back of my mind, I fear a 28-year-old hacker and a couple of Russians have gathered two more facts about me that I would rather they didn’t have. And it’s way too late to take my life offline.
http://www.nytimes.com/2009/08/28/opini ... nted=print
Technology gives voice to non-verbal people
Hot breath used to trigger computer
By Linda Nguyen and Bradley Bouzane, Canwest News Service, August 31, 2009
Sometimes it's not what you say that is important, but how heated the conversation is, according to one Canadian researcher.
A University of Toronto professor says he has developed technology that will allow people with severe speech impairments to communicate using infrared cameras.
It is believed to be the first time the technology has been harnessed to translate the heat from someone's mouth into speech.
"This is something I've researched for many years and we have such a large population of people--and many don't realize--that are aware and alert of what's going on around them, but they have no means of interacting with their world," engineering Prof. Tom Chau said Sunday. "We're just trying to open up a world of possibilities for these people."
Chau said the heat that radiates from the human body is an indicator of the message people are trying to get across.
"The human body is an emitter of radiation, and the radiation that's emitted can be measured," Chau said in a recent news release.
"In the face, there's a complex network of blood vessels. When you experience different emotions, there's different flow of blood through the face and this causes temperature changes we can measure noninvasively using a thermal camera."
He said he came up with the idea when working with a 26-year-old patient who was unable to speak or move his head or limbs.
"A couple of weeks ago, the individual said his first word in his life, in his 26 years," Chau said. "This was in the lab and his mother was there. He was typing letter by letter (using his mouth as a switch). He typed m-u-t-h-e-r (sic).His mom realized he was saying mother and she just broke down in tears. It was a dramatic moment. It has become such a liberating technology for this individual."
Chau said that an infrared camera can measure someone's mouth movement by comparing the temperatures that are emitted.
"We monitor the facial temperature distribution . . . and when he opens his mouth, because there's warm air inside the oral cavity, it actually shows up as a hot spot," Chau said. "We can map out his mouth and detect when he's opening and closing his mouth and translate that through software into basically a mouse click on a computer. So he's now able to type on the computer using an onscreen keyboard."
Chau said the young man is able to type about eight to 10 words per minute.
The technology is essentially directed at speech and language pathologists and occupational therapists, who primarily deal with non-verbal individuals, but Chau added that "potentially (school) teachers as well might be able to tap into this."
Chau said the lab model he and his team of graduate students have worked with for the last two years cost around $100,000 to produce.
A home version, which Chau said could be available for "late fall," would cost about $2,000, but its distribution once it's launched will be limited.
© Copyright (c) The Calgary Herald
http://www.calgaryherald.com/story_prin ... 1&sponsor=
September 1, 2009
After the Transistor, a Leap Into the Microcosm
By JOHN MARKOFF
YORKTOWN HEIGHTS, N.Y. — Gaze into the electron microscope display in Frances Ross’s laboratory here and it is possible to persuade yourself that Dr. Ross, a 21st-century materials scientist, is actually a farmer in some Lilliputian silicon world.
Dr. Ross, an I.B.M. researcher, is growing a crop of mushroom-shaped silicon nanowires that may one day become a basic building block for a new kind of electronics. Nanowires are just one example, although one of the most promising, of a transformation now taking place in the material sciences as researchers push to create the next generation of switching devices smaller, faster and more powerful than today’s transistors.
The reason that many computer scientists are pursuing this goal is that the shrinking of the transistor has approached fundamental physical limits. Increasingly, transistor manufacturers grapple with subatomic effects, like the tendency for electrons to “leak” across material boundaries. The leaking electrons make it more difficult to know when a transistor is in an on or off state, the information that makes electronic computing possible. They have also led to excess heat, the bane of the fastest computer chips.
The transistor is not just another element of the electronic world. It is the invention that made the computer revolution possible. In essence it is an on-off switch controlled by the flow of electricity. For the purposes of computing, when the switch is on it represents a one. When it is off it represents a zero. These zeros and ones are the most basic language of computers.
For more than half a century, transistors have gotten smaller and cheaper, following something called Moore’s Law, which states that circuit density doubles roughly every two years. This was predicted by the computer scientist Douglas Engelbart in 1959, and then described by Gordon Moore, the co-founder of Intel, in a now-legendary 1965 article in Electronics, the source of Moore’s Law.
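Written out, the rule of thumb is density(year) = density(1965) x 2^((year - 1965) / 2). A quick back-of-the-envelope calculation, using the rounded two-year doubling period quoted in the article rather than Moore's exact 1965 wording:

    def moore_factor(start_year, end_year, doubling_years=2.0):
        """Growth factor in circuit density assuming a fixed doubling period."""
        return 2 ** ((end_year - start_year) / doubling_years)

    # From the 1965 article to this 2009 story: 44 years, 22 doublings.
    print(f"{moore_factor(1965, 2009):,.0f}x")   # roughly 4,194,304x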
More....
http://www.nytimes.com/2009/09/01/scien ... &th&emc=th
Hologram images made touchable
By Chris Meyers, Reuters, September 18, 2009
Imagine a light switch or a book that appears only when you need it -- Japanese scientists are one step closer to making the stuff of sci-fi films into reality after creating a hologram that can also be felt.
"Up until now, holography has been for the eyes only, and if you'd try to touch it, your hand would go right through," said Hiroyuki Shinoda, professor at Tokyo University and one of the developers of the technology.
"But now we have a technology that also adds the sensation of touch to holograms."
Holograms--three-dimensional images--are commonly found on credit cards, DVDs and CDs to prevent forgery, and larger scale holograms have been used in entertainment.
By using ultrasonic waves, the scientists have developed software that creates pressure when a user's hand "touches" a hologram that is projected.
To track a user's hand, the researchers use control sticks from Nintendo's popular Wii gaming system that are mounted above the hologram display area.
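Put together, the system described here is a control loop: track the hand, test whether it intersects the projected shape, and if so focus ultrasonic pressure at the contact point. The sketch below illustrates that loop under stated assumptions; the hand position is taken as given from the Wii-remote tracking, and the ultrasound driver is a hypothetical stand-in for the real hardware.

    from dataclasses import dataclass

    @dataclass
    class Sphere:
        """Stand-in for a projected holographic object."""
        center: tuple
        radius: float

        def contains(self, point):
            return sum((p - c) ** 2 for p, c in zip(point, self.center)) <= self.radius ** 2

    def feedback_step(hand_position, hologram, ultrasound):
        """One iteration: emit focused pressure only while the hand is 'touching'."""
        if hologram.contains(hand_position):
            ultrasound.focus_at(hand_position)   # hypothetical driver call
        else:
            ultrasound.idle()

    class FakeUltrasound:
        def focus_at(self, point): print("pressure at", point)
        def idle(self): pass

    ball = Sphere(center=(0.0, 0.0, 0.2), radius=0.05)
    feedback_step((0.0, 0.0, 0.22), ball, FakeUltrasound())  # inside -> pressure
    feedback_step((0.3, 0.0, 0.20), ball, FakeUltrasound())  # outside -> nothing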
The technology has so far been tested with relatively simple objects, although the researchers have more practical plans, including virtual switches at hospitals, for example, and other places where contamination by touch is an issue.
Shinoda also said the technology could be used to replace other physical objects, making it economical and environmentally friendly.
© Copyright (c) The Calgary Herald
http://www.calgaryherald.com/story_prin ... 4&sponsor=
****
Implanted 'eye tooth' restores woman's sight
Agence France-Presse, September 18, 2009
Sharron Thornton can see again after undergoing surgery.
Photograph by: Joe Raedle, AFP-Getty Images, Agence France-Presse
A 60-year-old U.S. grandmother, blind for nearly a decade, has recovered her sight after surgeons implanted a tooth in her eye as a base to hold a tiny plastic lens, her doctors said Wednesday.
Sharron (Kay) Thornton, from the southern U.S. state of Mississippi, lost her sight in 2000 when she came down with a case of Stevens-Johnson syndrome, a rare disease that scarred her cornea, according to the University of Miami's Bascom Palmer Eye Institute.
For patients whose bodies reject a transplanted or artificial cornea, this procedure "implants the patient's tooth in the eye to anchor a prosthetic lens and restore vision," said Thornton's surgeon, Victor Perez.
In the procedure--which was pioneered in Italy, but was a first in the U.S.--the medical team extracted Thornton's canine or "eye tooth" and surrounding bone, shaved and sculpted it, and drilled a hole into it to insert an optical cylinder lens.
"We take sight for granted, not realizing that it can be lost at any moment," the grateful patient said. "This truly is a miracle."
© Copyright (c) The Calgary Herald
http://www.calgaryherald.com/story_prin ... 9&sponsor=
Moon missions find evidence of water
Reuters, September 24, 2009
Three separate missions examining the moon have found clear evidence of water there, apparently concentrated at the poles and possibly formed by the solar wind.
The reports, to be published in the journal Science on Friday, show the water may be actively moving around, forming and reforming as particles mixed up in the dust on the surface of the moon.
Carle Pieters of Brown University in Rhode Island and colleagues reviewed data from India's Chandrayaan-1 mission --India's first mission to the moon--and found spectrographic evidence of water. The water seems thicker closer to the poles, they reported.
"When we say 'water on the moon,'we are not talking about lakes, oceans or even puddles. Water on the moon means molecules of water and hydroxyl (hydrogen and oxygen) that interact with molecules of rock and dust specifically in the top millimetres of the moon's surface," Pieters said in a statement.
Jessica Sunshine of the University of Maryland and colleagues used infrared mapping from the Deep Impact spacecraft to show water all over the moon, while Roger Clark of the U. S. Geological Survey and colleagues used a spectrometer--which breaks down light waves to analyze elements and chemicals reflecting them -- from the Cassini spacecraft to identify water.
"These reports of lunar surface water coincide with intense interest in water at the poles of the Moon," Paul Lucey of the University of Hawaii, who was not involved in the research, wrote in a commentary.
"There may be much 'wetter' regions to be discovered far from the sites that have been sampled to date," Lucey added.
"It is also possible that rare water-bearing minerals previously observed in lunar samples, but argued to be terrestrial contamination, might be indigenous. Perhaps the most valuable result of these new observations is that they prompt a critical re-examination of the notion that the Moon is dry. It is not."
© Copyright (c) The Calgary Herald
http://www.calgaryherald.com/story_prin ... 8&sponsor=
October 5, 2009
Op-Ed Contributors
Thumbs on the Wheel
By MARK A. SHIFFRIN and AVI SILBERSCHATZ
PRESIDENT Obama has forbidden federal employees from texting while driving. The federal Transportation Department plans to do the same for commercial-truck and Interstate-bus drivers. And support is building in Congress for legislation that would require states to outlaw texting or e-mailing while driving. Such distractions cause tens of thousands of deaths each year.
But the way to stop people from using cellphones while driving is not to make it a crime. Too many drivers value convenience more than safety and would assume they wouldn’t get caught. A more effective approach is to get telecommunications companies to tweak technology to make it difficult or impossible to text and drive.
When a cellphone is used in a moving car, its signal must be handed off from one cell tower to the next along the route. This process tells the service provider that the phone is in motion. Cellphone towers could be engineered to not transmit while a phone is traveling. After a phone had stopped moving for a certain amount of time — three minutes, maybe — it would be able to transmit again.
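Carrier-side, the mechanism the authors propose could be implemented as a simple state machine: every tower handoff marks the phone as moving, and messaging is re-enabled only after a quiet period. A rough sketch under those assumptions; the three-minute window comes from the paragraph above, and everything else is illustrative rather than any carrier's actual API.

    STATIONARY_SECONDS = 3 * 60   # the "three minutes, maybe" from the op-ed

    class TextingGate:
        """Blocks texting while tower handoffs suggest the phone is moving."""
        def __init__(self):
            self.last_handoff = None

        def record_handoff(self, now):
            self.last_handoff = now

        def texting_allowed(self, now):
            if self.last_handoff is None:
                return True
            return (now - self.last_handoff) >= STATIONARY_SECONDS

    gate = TextingGate()
    gate.record_handoff(now=0)             # phone moved between towers at t=0
    print(gate.texting_allowed(now=60))    # False: still presumed moving
    print(gate.texting_allowed(now=200))   # True: quiet for more than 3 minutes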
Another solution would be to install hardware in cars and software in cellphones that would disable some phone functions when cars are moving. It would be the electronics equivalent of putting a guard on a knife handle or a grill over the blades of a fan.
This would, of course, affect passengers in moving cars as well as drivers. The inconvenience would arguably be worth it. But it is also easy to imagine technology that would allow only passengers to use their phones — by tethering them to devices, placed on the passenger side of the car, that would override the system.
Just as the text function could be disabled from a moving vehicle, so could the talk function be limited — at least when used without hands-free operating technology like Bluetooth. Given the evidence suggesting that even hands-free operation is dangerously distracting to drivers, we may need to ask whether all cellphone use should be technologically impeded in moving cars. There is nothing unreasonable in expecting drivers to park before making calls.
While texting behind the wheel is a problem today, innovations may give rise to other risky behaviors within a few years, if not months. The best solutions will come not from lawmakers plugging holes in the dike, but from the engineers finding ways to make products safer.
Mark A. Shiffrin, a lawyer, is a former consumer protection commissioner for Connecticut. Avi Silberschatz is the chairman of the computer science department at Yale.
http://www.nytimes.com/2009/10/05/opini ... nted=print
Google moves in on city streets
By Valerie Berenyi, Calgary Herald
October 8, 2009
The view from the street, top to bottom: Calgary's Stephen Avenue Walk, Erlton district, Kensington area and downtown Banff.
Photograph by: Courtesy, Google Maps, Calgary Herald
Google's controversial Street View, a free feature on Google Maps, went live Wednesday with images of Calgary and other Canadian cities, allowing the world to come snoop through our town and virtually poke around our neighbourhoods.
Street View gives Internet users a 360-degree view as it would be seen by someone driving or walking down the street. Users can zoom in for close-ups, although faces and licence plates are intentionally blurred.
Judging from the conversations overheard as people tried the tool -- "Let's check out where the boss lives," "Who's that outside the strip club?" and "There's my house!" -- Street View is a shiny, new Internet toy that's sure to delight some, unsettle others and bring out the voyeurs.
Google cars with roof-mounted cameras have been a common sight on Canadian streets since the company began filming here in 2007. Now you can also "explore" Vancouver, Whistler, Squamish, Banff, Kitchener-Waterloo, Toronto, Ottawa, Montreal, Quebec City and Halifax, along with cities in 13 other countries.
http://www.calgaryherald.com/story_prin ... 5&sponsor=
NASA crashes spacecraft into moon crater in search of water
By Peter Henderson, Reuters
October 10, 2009
Two U.S. spacecraft were crashed into a lunar crater on Friday, but scientists said it was too early to say whether the mission to search for supplies of water on the moon had been a success.
NASA, which is hoping to find sufficient quantities of water to use as fuel for space exploration, said it could take two months to make a conclusive assessment of what was found.
A two-ton empty rocket stage slammed into the eternally dark Cabeus crater near the moon's south pole at 4:31 a.m. Pacific time, intended to throw up a plume of spray from any ice that was there.
Instruments on a second craft, which flew through the plume and hit close to the same spot four minutes later, as well as a lunar orbiter and telescopes on Earth, captured data that could show whether there was ice there.
Video transmitted back from the trailing craft did not show, as hoped, the eruption of debris, but infrared devices showed a hot flash that indicated a crater about 18 to 20 metres wide.
"We didn't see a big splashy plume like we wanted to see," said Michael Bicay, director of science at the National Aeronautics and Space Administration's Ames Research Center.
Scientists did not know whether there had been no plume or if it could not be seen in the Internet-quality video shown as the craft crashed.
The $79 million program, a bargain by space exploration standards, could help change views of the moon.
Recent signs of water have upended ideas of the lunar surface as barren and unchanging, and evidence of ice would also suggest new possibilities for space travel.
"Water is essentially energy," scientist Victoria Friedensen said on NASA TV. "It can be used to make fuel."
Three studies released last month found clear evidence of water on the moon, but the skein of water bound with dust that was disclosed then was extremely thin.
"It's not enough to be of any economic importance," said NASA Lunar Science Institute director David Morrison.
Hidden in the Cabeus crater near the pole, out of sunlight, could be soil concentrations of two per cent to three per cent ice that would be usable. "You're going into a place where the sun hasn't shined for a billion years," Morrison said.
Video from the trailing spacecraft gave the sense of the fast approaching crash as craters edged with light grew larger and larger.
"I was blown away by how long this little spacecraft lasted," Tony Colaprete, the mission's principal investigator, told a news conference.
He said it got good spectroscopic data, which would show what elements were in the crater and how they were changed by the heat of the first impact.
© Copyright (c) The Calgary Herald
http://www.calgaryherald.com/story_prin ... 5&sponsor=
October 30, 2009, 6:35 am
Virtopsies
Virtual autopsies.
New Scientist magazine recently reported on the creation of a robot that performs “virtual autopsies”:
A team of forensic pathologists at the University of Bern in Switzerland reckon it could make autopsies more accurate and also less distressing for families.
The researchers are already pioneers of virtual autopsies, or “virtopsies,” which use non-invasive imaging of a body inside and out rather than the radical post-mortem surgery typically used to determine cause of death.
Now they are using a robot, dubbed Virtobot, to carry out parts of that process, making it more reliable – and standardised.
Their virtopsies combine 3D imaging of a body’s surface with a CT scan of its interior anatomy. The result is a faithful, high-resolution virtual double of the corpse … This double can be used to accurately determine what killed someone. And it’s a more tactful approach: only needle biopsies are used to sample tissues, leaving a body essentially undamaged.
Virtobot has already performed 52 virtopsies “in real cases, including 26 road deaths, 10 by impacts from a blunt object, six knifings, five shootings, and two throttlings.” Yet not everyone is convinced:
However, the president of the UK’s Royal College of Pathologists, Peter Furness, says that much longer term comparisons of virtopsies with conventional procedures are still needed. “The circumstances where this might be valuable are not well defined, the reliability of the approach is unclear and the cost can be considerable,” he says, adding that studies to work out just when a conventional autopsy is essential are under way.
http://schott.blogs.nytimes.com/2009/10 ... 8ty&emc=ty
November 14, 2009
Water Found on Moon, Researchers Say
By KENNETH CHANG
There is water on the Moon, scientists stated unequivocally on Friday.
“Indeed yes, we found water,” Anthony Colaprete, the principal investigator for NASA’s Lunar Crater Observation and Sensing Satellite, said in a news conference. “And we didn’t find just a little bit. We found a significant amount.”
The confirmation of scientists’ suspicions is welcome news to explorers who might set up home on the lunar surface and to scientists who hope that the water, in the form of ice accumulated over billions of years, holds a record of the solar system’s history.
The satellite, known as Lcross (pronounced L-cross), crashed into a crater near the Moon’s south pole a month ago. The 5,600-miles-per-hour impact carved out a hole 60 to 100 feet wide and kicked up at least 26 gallons of water.
“We got more than just a whiff,” Peter H. Schultz, a professor of geological sciences at Brown University and a co-investigator of the mission, said in a telephone interview. “We practically tasted it with the impact.”
For more than a decade, planetary scientists have seen tantalizing hints of water ice at the bottom of these cold craters where the sun never shines. The Lcross mission, intended to look for water, was made up of two pieces — an empty rocket stage to slam into the floor of Cabeus, a crater 60 miles wide and 2 miles deep, and a small spacecraft to measure what was kicked up.
For space enthusiasts who stayed up, or woke up early, to watch the impact on Oct. 9, the event was anticlimactic, even disappointing, as they failed to see the anticipated debris plume. Even some high-powered telescopes on Earth like the Palomar Observatory in California did not see anything.
The National Aeronautics and Space Administration later said that Lcross did indeed photograph a plume but that the live video stream was not properly attuned to pick out the details.
The water findings came through an analysis of the slight shifts in color after the impact, showing telltale signs of water molecules that had absorbed specific wavelengths of light. “We got good fits,” Dr. Colaprete said. “It was a unique fit.”
The scientists also saw colors of ultraviolet light associated with molecules of hydroxyl, consisting of one hydrogen and one oxygen, presumably water molecules that had been broken apart by the impact and then glowed like neon signs.
In addition, there were squiggles in the data that indicated other molecules, possibly carbon dioxide, sulfur dioxide, methane or more complex carbon-based molecules. “All of those are possibilities,” Dr. Colaprete said, “but we really need to do the work to see which ones work best.”
Remaining in perpetual darkness like other craters near the lunar poles, the bottom of Cabeus is a frigid minus 365 degrees Fahrenheit, cold enough that anything at the bottom of such craters never leaves. These craters are “really like the dusty attic of the solar system,” said Michael Wargo, the chief lunar scientist at NASA headquarters.
The Moon was once thought to be dry. Then came hints of ice in the polar craters. In September, scientists reported an unexpected finding that most of the surface, not just the polar regions, might be covered with a thin veneer of water.
The Lcross scientists said it was not clear how all the different readings of water related to one another, if at all.
The deposits in the lunar craters may be as informative about the Moon as ice cores from Earth’s polar regions are about the planet’s past climates. Scientists want to know the source and history of whatever water they find. It could have come from the impacts of comets, for instance, or from within the Moon.
“Now that we know that water is there, thanks to Lcross, we can begin in earnest to go to this next set of questions,” said Gregory T. Delory of the University of California, Berkeley.
Dr. Delory said the findings of Lcross and other spacecraft were “painting a really surprising new picture of the Moon; rather than a dead and unchanging world, it could be in fact a very dynamic and interesting one.”
Lunar ice, if bountiful, not only gives future settlers something to drink, but could also be broken apart into oxygen and hydrogen. Both are valuable as rocket fuel, and the oxygen would also give astronauts air to breathe.
NASA’s current exploration plans call for a return of astronauts to the Moon by 2020, for the first visit since 1972. But a panel appointed in May recently concluded that trimmings of the agency’s budget made that goal impossible. One option presented to the Obama administration was to bypass Moon landings for now and focus on long-duration missions in deep space.
Even though the signs of water were clear and definitive, the Moon is far from wet. The Cabeus soil could still turn out to be drier than that in deserts on Earth. But Dr. Colaprete also said that he expected that the 26 gallons were a lower limit and that it was too early to estimate the concentration of water in the soil.
http://www.nytimes.com/2009/11/14/scien ... ?th&emc=th
November 20, 2009
Op-Ed Contributor
The Wet Side of the Moon
By WILLIAM S. MARSHALL
Moffett Field, Calif.
PICTURE a habitat atop a hill in warm sunlight on the edge of a crater near the south pole of the Moon. There are metal ores in the rocks nearby and ice in the shadows of the crater below. Solar arrays are set up on the regolith that covers the Moon’s surface. Humans live in sealed, cave-like lava tubes, protected from solar flares and sustained by large surface greenhouses. Imagine the Moon as the first self-sustainable human settlement away from Earth and a high-speed transportation hub for the solar system.
We can finally begin to think seriously about establishing such a self-sufficient home on the Moon because last week, NASA announced that it had discovered large quantities of water there.
While we have known for decades that the Moon had all the raw chemicals necessary for sustaining life, we believed they were trapped in rocks and thus difficult to extract. The discovery of plentiful lunar water is of tremendous importance to humanity and our long-term survival.
There have been 73 missions, manned and unmanned, to the Moon, and understanding its chemical composition, particularly finding water, has always been a priority. So why haven’t we seen significant amounts of water until now?
The answer lies in the Moon’s rotation. Unlike Earth, which rotates on a significant tilt to the Sun, the Moon is barely tilted at all. At the poles, some hills remain in permanent sunlight while some troughs are always in shadow. When water lands in sunny spots, perhaps carried by comets or asteroids, the water transforms directly into gas; if it lands in shadow, the water freezes and can remain indefinitely. The lack of light explains why spectrometers — instruments that can be used for remote water detection but rely on reflected light to do so — never picked up on the water.
This changed last month, when NASA shot a satellite into a permanently shadowed region on the Moon’s surface, throwing a plume of material containing water up out of the shadow.
From the perspective of human space exploration, that water is the most important scientific discovery since the ’60s. We can drink it, grow food with it and breathe it — by separating the oxygen from the hydrogen through a process called electrolysis. These elements can even be used to fuel rocket engines. (Discovering water on Mars was not quite as significant because the major hurdle to establishing permanent settlements there is the eight-month journey.)
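As a back-of-the-envelope illustration of that chemistry (electrolysis splits water as 2 H2O -> 2 H2 + O2), the Python sketch below estimates how a given mass of water divides between hydrogen and oxygen. The 26-gallon Lcross figure quoted earlier is reused purely for scale; this arithmetic is mine, not the author's.

    # Rough mass budget for electrolysing water: 2 H2O -> 2 H2 + O2.
    # By mass, water is roughly 2/18 hydrogen and 16/18 oxygen.
    LITRES_PER_GALLON = 3.785
    water_gallons = 26                             # figure quoted earlier, used only for scale
    water_kg = water_gallons * LITRES_PER_GALLON   # about 1 kg per litre of water

    hydrogen_kg = water_kg * (2.0 / 18.0)
    oxygen_kg = water_kg * (16.0 / 18.0)

    print(f"{water_kg:.0f} kg of water -> about {hydrogen_kg:.1f} kg of H2 and {oxygen_kg:.1f} kg of O2")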
Creating a permanent lunar habitat is important primarily for our species’ survival. Humanity needs more than one home because, with all our eggs in one basket, we are at risk of low-probability but high-consequence catastrophes like asteroid strikes, nuclear war or bioterrorism.
But it would also lead to valuable technological and other advancements. Consider the side-effects of the Apollo program: it drove the development of small computers, doubled the number of doctoral students in science and math in about a decade and marked a new stage in relations between the Americans and Soviets.
Imagine what we could learn from living on the Moon permanently. On its far side, shielded from the Earth’s radio noise, there is a quiet zone perfect for radio astronomy — which allows us to see objects we can’t from Earth. Out of necessity we could develop bacteria to extract resources directly from the regolith — a useful technology for Earth as well. And an international venture could open a new era of global cooperation.
Almost as surprising as NASA’s announcement is the lack of attention it has received. Thirty years ago, a development like this would have been heralded as one of humanity’s greatest discoveries. Perhaps the indifference is partly because of the disappointment of astronomers, amateur and professional, who tried to watch NASA’s October blast through their telescopes, but couldn’t see the plume. Or perhaps it’s a symptom of our age, that the problems that bedevil us on Earth limit our interest in other worlds — just when we need them (and the inspiration they offer) most.
William S. Marshall is a staff scientist with the Universities Space Research Association based at the NASA Ames Research Center.
http://www.nytimes.com/2009/11/20/opini ... nted=print
Swiss doctors develop incision-free autopsies
3-D scanner detects up to 80 per cent of causes of death
Reuters, November 26, 2009
A team of Swiss doctors is conducting about 100 autopsies a year without cutting open bodies, instead using devices including an optical 3-D scanner that can detect up to 80 per cent of the causes of death.
Michael Thali, a professor at the University of Bern, and his colleagues have developed a system called "virtopsy," which since 2006 has been used to examine all sudden deaths or those of unnatural causes in the Swiss capital.
The U.S. military at Dover Air Force Base is using a more limited version for autopsies on soldiers, he said.
"Without opening the body, we can detect 60 to 80 per cent of the injuries and causes of death," Thali explained, standing beside the white cylindrical CT scanner in his laboratory.
The advantages of virtual autopsies are that digital, permanent records are created that can be shared via the Internet, Thali said.
During an autopsy, which takes about 30 minutes, the deceased is placed on an examining table and the surface scanner, just larger than a shoebox and suspended from a robotic arm, traces along the body's contours.
Two technicians then use computers to evaluate the findings.
"At the moment, here in Bern is the only place worldwide which is combining the surface scanning with CT magnetic resonance scanning and post-mortem angiography and post-mortem biopsy," Thali said, explaining that the total installation cost more than $2 million.
The CT scanner makes images of skeletal injuries and damage to the brain, while the magnetic scanner produces finer images of soft tissue, Thali said. Angiography visualizes the inside of blood vessels.
"That's the big advantage; because you don't have to destroy the body you can see projectiles in 3-D and can do the analysis," Thali said of the system's use to the U.S. military.
The 3-D imaging began in the mid-1990s, but the post-mortem biopsy device -- which uses a needle to extract cells -- has been in his lab for only six months, he said.
Although there was little initial interest in the project, Thali said he and his 16 colleagues were now receiving queries from places such as Australia and Scandinavia.
Despite their strengths, Thali said virtual autopsies were unlikely to replace the scalpel variety any time soon.
"At the moment the regular autopsy, which is a very old procedure, is still the gold standard."
© Copyright (c) The Calgary Herald
http://www.calgaryherald.com/story_prin ... 0&sponsor=
There is a related video linked at:
http://www.nytimes.com/2009/12/10/scien ... &th&emc=th
December 10, 2009
Collider Sets Record, and Europe Takes U.S.’s Lead
By DENNIS OVERBYE
Tiny spitfires of energy blossomed under the countryside outside Geneva late Tuesday night, heralding the arrival of a new European particle collider as the biggest, baddest physics machine in the world.
Scientists said that the new Large Hadron Collider, a 17-mile loop underneath the Swiss-French border, had accelerated protons to energies of 1.2 trillion electron volts apiece and then crashed them together, eclipsing a record for collisions held by an American machine, the Tevatron, at the Fermi National Accelerator Laboratory in Illinois.
Officials at CERN, the European Organization for Nuclear Research, which built the collider, said that the collisions lasted just a few minutes as a byproduct of testing, and that the Champagne was still on ice in Geneva. But in conjunction with other recent successes, those tiny fireballs displaced American physicists as the leaders in the art of banging subatomic particles together to see what nature is made of.
The collider first boosted a beam of protons to the new energy record of 1.2 trillion electron volts on Nov. 29 without making collisions; CERN hopes to achieve sustained collisions at that energy within a week. In the future, as the collider ramps up to seven trillion electron volts, the dateline for physics discoveries will be Geneva, not Batavia, Ill., the home of Fermilab.
That future, physicists say, includes not only the sheen of announcing exotic particles and strange dimensions, but also the ancillary rewards of increased technological competence and innovation that spring from the pursuit of esoteric knowledge. The World Wide Web, lest anyone forget, was invented by particle physicists at CERN. Detectors developed for physics experiments are now used in medical devices like PET scans, and it was the industrial-scale production of superconducting magnets for the Tevatron that made commercial magnetic resonance imagers possible, said Young-Kee Kim, deputy director of Fermilab.
It is all very fine to worry about the value of the dollar. But what about the value of the proton?
“Particle accelerators and detectors (initially with the bold and innovative ideas and technologies) have touched our lives in many ways and I have no doubt that this will continue,” Dr. Kim wrote in an e-mail message.
Those spinoffs now will invigorate the careers and labs of Europe, not the United States, pointed out Steven Weinberg, a physicist at the University of Texas in Austin, who won the Nobel Prize for work that will be tested in the new collider. Americans will work at CERN, but not as leaders, he said in an e-mail interview.
“There is also a depressing symbolism,” he added, “in the fact that the hottest new results in fundamental physics will for decades not be coming from our country.”
This moment has been inevitable since fall 1993, when Congress canceled a behemoth project in Texas known as the Superconducting Supercollider, after estimated costs rose to $11 billion. That accelerator, designed at 54 miles and 20 trillion electron volts, would have been working by now and would have had an even greater reach for new physics than Europe’s machine. American physicists have reacted to the Large Hadron Collider with a mixture of excitement, good sportsmanship and wistfulness.
The United States has not exactly been shut out of the action at the new collider, as Dr. Kim pointed out. It contributed $531 million to the project, and about 1,700 of the 10,000 scientists who work on the giant particle detectors in the collider tunnel are Americans, the largest of any national group. (Italians are next.)
Thanks in part to delays with the CERN collider and other problems that will keep it from performing up to snuff for the next couple of years, she said, Fermilab’s Tevatron is still in the lead in the hunt for one of the collider’s main quarries, the Higgs boson, a particle that is thought to imbue other particles with mass.
In the meantime, Fermilab is investing $53 million from the federal stimulus package in a “Project X” to make more intense proton beams, which in turn could be used to make beams of the strange ghostlike particles called neutrinos. The lab is also going into cosmology. Other physics labs, like Brookhaven on Long Island and the Stanford Linear Accelerator Center, have converted their accelerators into powerful X-ray sources, which can be used to plumb the properties and structures of molecules in work that led to this year’s Nobel Prize in chemistry.
For CERN, the Fermilab-topping collisions will be only the end of the beginning of a 15-year, $10 billion quest to recreate laws and particles that prevailed just after the Big Bang, when the universe was less than a trillionth of a second old.
Particle colliders get their magic from Einstein’s equation of mass and energy. The more energy that these machines can pack into their little fireballs, in effect the farther back in time they can go, and the smaller and smaller things they can see.
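To put such numbers in perspective (an illustration of mine, not from the article): a collision energy can be re-expressed as an equivalent mass through E = mc^2, or as an equivalent temperature through E ~ k_B T, which is why higher-energy collisions probe hotter, earlier moments of the universe. The Python sketch below does both conversions for the 1.2 TeV and 7 TeV figures mentioned above.

    # Convert collider energies to an equivalent mass (E = m c^2)
    # and an equivalent temperature (E ~ k_B * T).
    EV_TO_JOULE = 1.602176634e-19   # joules per electron volt
    C = 2.99792458e8                # speed of light, m/s
    K_B = 1.380649e-23              # Boltzmann constant, J/K

    for tev in (1.2, 7.0):
        energy_j = tev * 1e12 * EV_TO_JOULE
        mass_kg = energy_j / C ** 2
        temperature_k = energy_j / K_B
        print(f"{tev} TeV ~ {mass_kg:.2e} kg of mass-energy, ~ {temperature_k:.1e} K")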
The first modern accelerator, the cyclotron built by Ernest Lawrence at the University of California, Berkeley, in 1932, was a foot in diameter and boosted protons to just 1.25 million electron volts.
CERN, a 20-nation consortium, grew from the ashes of World War II and has provided a template for other pan-European organizations like the European Space Agency and the European Southern Observatory. With a budget and dues set by treaty, CERN enjoys a long-term stability that is the envy of American labs. For decades, CERN and Fermilab leapfrogged each other building bigger and bigger machines, but the game ended when the supercollider was canceled.
Despite the lack of competition, CERN’s collider has not had a bump-free ride. In 2007 the housing around one magnet exploded during a pressure test, necessitating the removal and redesign of nine 80-foot magnet assemblies. In September 2008, the junction between two magnets vaporized, shutting down the project for a year.
Testing revealed that the collider is riddled with thousands of defective electrical joints and dozens of underperforming magnets that will keep it from reaching its full potential until an overhaul scheduled for 2011. When it starts doing real physics after the holidays, the collider will be running at half power.
The collider was designed to investigate what happens at energies and temperatures so high that the reigning theory of particle physics called the Standard Model breaks down. In effect, the new machine’s job is to “break” the Standard Model and give physicists a glimpse of something deeper and more profound.
The future of particle physics could depend on whether the Large Hadron Collider finds anything.
If it yields nothing, in the words of the CERN physicist John Ellis, it would mean that theorists have been talking rubbish for the last 35 years. Actually, he used a stronger word.
http://www.nytimes.com/2009/12/10/scien ... &th&emc=th
December 15, 2009
Books on Science
A Deluge of Data Shapes a New Era in Computing
By JOHN MARKOFF
THE FOURTH PARADIGM
Data-Intensive Scientific Discovery. Edited by Tony Hey, Stewart Tansley and Kristin Tolle. Microsoft Research. 252 pages.
In a speech given just a few weeks before he was lost at sea off the California coast in January 2007, Jim Gray, a database software pioneer and a Microsoft researcher, sketched out an argument that computing was fundamentally transforming the practice of science.
Dr. Gray called the shift a “fourth paradigm.” The first three paradigms were experimental, theoretical and, more recently, computational science. He explained this paradigm as an evolving era in which an “exaflood” of observational data was threatening to overwhelm scientists. The only way to cope with it, he argued, was a new generation of scientific computing tools to manage, visualize and analyze the data flood.
In essence, computational power created computational science, which produced the overwhelming flow of data, which now requires a computing change. It is a positive feedback loop in which the data stream becomes the data flood and sculpts a new computing landscape.
In computing circles, Dr. Gray’s crusade was described as, “It’s the data, stupid.” It was a point of view that caused him to break ranks with the supercomputing nobility, who for decades focused on building machines that calculated at picosecond intervals.
He argued that government should instead focus on supporting cheaper clusters of computers to manage and process all this data. This is distributed computing, in which a nation full of personal computers can crunch the pools of data involved in the search for extraterrestrial intelligence, or protein folding.
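As a loose illustration of that "split the data, farm out the pieces" idea (a toy sketch of my own, unrelated to any actual SETI or protein-folding software), the Python snippet below divides a dataset into work units and processes them in parallel; real volunteer-computing systems ship such units over the Internet to thousands of machines.

    # Toy sketch of distributed work: split data into units, process them in parallel.
    from multiprocessing import Pool

    def analyse(chunk):
        # Stand-in for the real analysis of one work unit.
        return sum(x * x for x in chunk)

    if __name__ == "__main__":
        data = list(range(1_000_000))
        chunk_size = 100_000
        work_units = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]

        with Pool() as pool:
            partial_results = pool.map(analyse, work_units)

        print("combined result:", sum(partial_results))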
The goal, Dr. Gray insisted, was not to have the biggest, fastest single computer, but rather “to have a world in which all of the science literature is online, all of the science data is online, and they interoperate with each other.” He was instrumental in making this a reality, particularly for astronomy, for which he helped build vast databases that wove much of the world’s data into interconnected repositories that have created, in effect, a worldwide telescope.
Now, as a testimony to his passion and vision, colleagues at Microsoft Research, the company’s laboratory that is focused on science and computer science, have published a tribute to Dr. Gray’s perspective in “The Fourth Paradigm: Data-Intensive Scientific Discovery.” It is a collection of essays written by Microsoft’s scientists and outside scientists, some of whose research is being financed by the software publisher.
The essays focus on research on the earth and environment, health and well-being, scientific infrastructure and the way in which computers and networks are transforming scholarly communication. The essays also chronicle a new generation of scientific instruments that are increasingly part sensor, part computer, and which are capable of producing and capturing vast floods of data. For example, the Australian Square Kilometre Array of radio telescopes, CERN’s Large Hadron Collider and the Pan-Starrs array of telescopes are each capable of generating several petabytes of digital information each day, although their research plans call for the generation of much smaller amounts of data, for financial and technical reasons. (A petabyte of data is roughly equivalent to 799 million copies of the novel “Moby Dick.”)
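The Moby-Dick comparison is easy to sanity-check. Assuming the plain text of the novel runs to roughly 1.25 megabytes (an assumed size, not a figure from the essays), a petabyte does indeed come out near 800 million copies:

    # Sanity check of the "petabyte ~ 799 million copies of Moby-Dick" comparison.
    PETABYTE_BYTES = 1e15
    MOBY_DICK_BYTES = 1.25e6   # assumed size of the plain-text novel, ~1.25 MB

    copies = PETABYTE_BYTES / MOBY_DICK_BYTES
    print(f"about {copies / 1e6:.0f} million copies per petabyte")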
“The advent of inexpensive high-bandwidth sensors is transforming every field from data-poor to data-rich,” Edward Lazowska, a computer scientist and director of the University of Washington eScience Institute, said in an e-mail message. The resulting transformation is occurring in the social sciences, too.
“As recently as five years ago,” Dr. Lazowska said, “if you were a social scientist interested in how social groups form, evolve and dissipate, you would hire 30 college freshmen for $10 an hour and interview them in a focus group.”
“Today,” he added, “you have real-time access to the social structuring and restructuring of 100 million Facebook users.”
The shift is giving rise to a computer science perspective, referred to as “computational thinking” by Jeannette M. Wing, assistant director of the Computer and Information Science and Engineering Directorate at the National Science Foundation.
Dr. Wing has argued that ideas like recursion, parallelism and abstraction taken from computer science will redefine modern science. Implicit in the idea of a fourth paradigm is the ability, and the need, to share data. In sciences like physics and astronomy, the instruments are so expensive that data must be shared. Now the data explosion and the falling cost of computing and communications are creating pressure to share all scientific data.
“To explain the trends that you are seeing, you can’t just work on your own patch,” said Daron Green, director of external research for Microsoft Research. “I’ve got to do things I’ve never done before: I’ve got to share my data.”
That resonates well with the emerging computing trend known as “the cloud,” an approach being driven by Microsoft, Google and other companies that believe that, fueled by the Internet, the shift is toward centralization of computing facilities.
Both Microsoft and Google are hoping to entice scientists by offering cloud services tailored for scientific experimentation. Examples include Worldwide Telescope from Microsoft and Google Sky, intended to make a range of astronomical data available to all.
Similar digital instruments are emerging in other fields. In one chapter, “Toward a Computational Microscope for Neurobiology,” Eric Horvitz, an artificial intelligence researcher for Microsoft, and William Kristan, a neurobiologist at the University of California, San Diego, chart the development of a tool they say is intended to help understand the communications among neurons.
“We have access to too much data now to understand what’s going on,” Dr. Horvitz said. “My goal now is to develop a new kind of telescope or microscope.”
By imaging the ganglia of leeches being studied in Dr. Kristan’s laboratory, the researchers have been able to identify “decision” cells, responsible for summing up a variety of inputs and making an action, like crawling. Someday, Dr. Horvitz hopes to develop the tool into a three-dimensional display that makes it possible to overlay a set of inferences about brain behavior that can be dynamically tested.
The promise of the shift described in the fourth paradigm is a blossoming of science. Tony Hey, a veteran British computer scientist now at Microsoft, said it could solve a common problem of poor use of graduate students. “In the U.K.,” Dr. Hey said, “I saw many generations of graduate students really sacrificed to doing the low-level IT.”
The way science is done is changing, but is it a shift of the magnitude that Thomas Kuhn outlined in “The Structure of Scientific Revolutions”?
In his chapter, “I Have Seen the Paradigm Shift, and It Is Us,” John Wilbanks, the director of Science Commons, a nonprofit organization promoting the sharing of scientific information, argues for a more nuanced view of the data explosion.
“Data is not sweeping away the old reality,” he writes. “Data is simply placing a set of burdens on the methods and the social habits we use to deal with and communicate our empiricism and our theory.”
http://www.nytimes.com/2009/12/15/scien ... nted=print
Scientists map genes of two types of cancer
Lung, skin tumours catalogued
By Kate Kelland, Reuters
December 17, 2009
Scientists have identified all the changes in cells of two deadly cancers to produce the first entire cancer gene maps and say the findings mark a "transforming moment" in their understanding of the disease.
The studies by international scientists and Britain's Wellcome Trust Sanger Institute are the first comprehensive descriptions of tumour cell mutations and lay bare all the genetic changes behind melanoma skin cancer and lung cancer.
"What we are seeing today is going to transform the way that we see cancer," Mike Stratton of the Sanger Institute's cancer genome project told a briefing in London. "We have never seen cancer revealed in this form before."
The scientists sequenced all the DNA from both tumour tissue and normal tissue from a melanoma patient and a lung cancer patient using a technology called massively parallel sequencing. By comparing the cancer sequences with the healthy ones, they were able to pick up all the changes specific to cancer.
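Conceptually, that comparison is a subtraction: changes present in the tumour sequence but absent from the patient's normal tissue are the candidate cancer-specific mutations. The Python sketch below is a drastically simplified, hypothetical illustration of that set-difference idea; real pipelines work from billions of raw sequencing reads with statistical callers, and the positions shown are invented.

    # Simplified illustration of somatic mutation calling as a set difference:
    # keep variants present in the tumour but absent from normal tissue.
    tumour_variants = {("chr1", 12345, "A>T"), ("chr2", 67890, "G>A"), ("chr3", 11111, "T>C")}
    normal_variants = {("chr3", 11111, "T>C")}   # inherited variant, not cancer-specific

    somatic = tumour_variants - normal_variants
    for chrom, pos, change in sorted(somatic):
        print(f"candidate somatic mutation: {chrom}:{pos} {change}")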
The lung tumour carried more than 23,000 mutations and the melanoma had more than 33,000.
Peter Campbell, also of the Sanger Institute, said the lung cancer study suggests a typical smoker develops one mutation for every 15 cigarettes smoked and the damage starts with the first puff. Lung cancer kills around one million people worldwide each year and 90 per cent of cases are caused by smoking.
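Taking those two figures at face value, a rough consistency check (my arithmetic, not the researchers') links the mutation count to a smoking history measured in decades:

    # Back-of-the-envelope check: ~23,000 mutations at one mutation per 15 cigarettes.
    mutations = 23_000
    cigarettes_per_mutation = 15
    pack_per_day = 20

    cigarettes = mutations * cigarettes_per_mutation
    years = cigarettes / pack_per_day / 365
    print(f"~{cigarettes:,} cigarettes, roughly {years:.0f} years of pack-a-day smoking")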
"These catalogues of mutations are telling us about how the cancer has developed -- so they will inform us on prevention -- and they include all the drivers, which tell us about the processes that are disrupted in the cancer cell which we can try and influence through our treatments," Stratton said.
But the scientists said identifying all the drivers -- the mutations that cause cells to become cancerous -- would take far more work and it could be several years yet before any new targets are found for the development of new cancer drugs.
"Somewhere among the mutations we have found lurk those that drive the cells to become cancerous," said Andy Futreal, who worked on the research published in the Nature journal. "Tracking them down will be our major challenge for the next few years."
Scientists have identified some genetic mutations linked to cancers -- mutations of a gene called BRAF are found in melanoma and new drugs to block its cancer-causing activity are in development. Drugs such as Roche AG's Herceptin and AstraZeneca's Iressa also target tumour cells that carry specific mutations.
Stratton said the aim now was to produce genetic maps of all types of cancer. There are more than 100 cancers in all, and each genome mapping process requires several months of work and costs tens of thousands of dollars.
© Copyright (c) The Calgary Herald
http://www.calgaryherald.com/story_prin ... 2&sponsor=
Hubble telescope detects oldest galaxies
Agence France-Presse, January 6, 2010
The refurbished Hubble space telescope has set a new distance record by discovering the oldest galaxies ever seen, dating back 13 billion years, or 600 to 800 million years after the Big Bang, NASA said Tuesday.
The never-seen-before galaxies are key to interpreting the development of the first stars and the formation of the first galaxies that later evolved into galaxies like our own Milky Way that now populate the universe, the space agency said.
The age and masses of the galaxies were calculated by combining new data from Hubble -- the first space telescope was refurbished by a shuttle mission in May -- and images from NASA's Spitzer space telescope, the agency said.
"The masses are just one per cent of those of the Milky Way," explains astronomic researcher Ivo Labbe of the Carnegie Observatories.
"To our surprise, the results show that these galaxies existed at 700 million years after the Big Bang and must have started forming stars hundreds of millions of years earlier, pushing back the time of the earliest star formation in the universe," he added.
"With the rejuvenated Hubble and its new instruments, we are now entering uncharted territory that is ripe for new discoveries," says team co-leader Garth Illingworth, of the University of California, Santa Cruz.
Hubble underwent repair during a shuttle mission in May that left it with a new camera and spectrograph as well as fixed and spruced up scientific instruments.
The repair job marked the end of NASA's human missions to the 19-year-old but beloved Hubble. Launched in 1990, the telescope was repaired and upgraded in 1993, 1997, 1999, 2002 and 2009.
The latest and final upgrade has extended the life of Hubble another five years.
© Copyright (c) The Calgary Herald
http://www.calgaryherald.com/story_prin ... 1&sponsor=
******
Amazing slideshow of the images from the Hubble
Audio slideshow: Hubble's first 20 years
The world's most famous space telescope has been peering into some of the deepest recesses of the universe for two decades - and is now celebrating its 20th birthday.
Take a look at some of the sights it has seen in that time with Professor Alec Boksenberg from the Institute of Astronomy in Cambridge - who was on the European team that helped build Hubble.
http://news.bbc.co.uk/2/hi/science/nature/8638263.stm
There is a related video linked at:
http://www.nytimes.com/2010/01/07/techn ... ?th&emc=th
January 7, 2010
Driven to Distraction
Despite Risks, Internet Creeps Onto Car Dashboards
By ASHLEE VANCE and MATT RICHTEL
LAS VEGAS — To the dismay of safety advocates already worried about driver distraction, automakers and high-tech companies have found a new place to put sophisticated Internet-connected computers: the front seat.
Technology giants like Intel and Google are turning their attention from the desktop to the dashboard, hoping to bring the power of the PC to the car. They see vast opportunity for profit in working with automakers to create the next generation of irresistible devices.
This week at the Consumer Electronics Show, the neon-drenched annual trade show here, these companies are demonstrating the breadth of their ambitions, like 10-inch screens above the gearshift showing high-definition videos, 3-D maps and Web pages.
The first wave of these “infotainment systems,” as the tech and car industries call them, will hit the market this year. While built-in navigation features were once costly options, the new systems are likely to be standard equipment in a wide range of cars before long. They prevent drivers from watching video and using some other functions while the car is moving, but they can still pull up content as varied as restaurant reviews and the covers of music albums with the tap of a finger.
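The speed lockout described above amounts to a simple policy check; the sketch below is a hypothetical illustration (the feature names and threshold are invented, not any carmaker's actual system).

# Hypothetical sketch of a speed-based feature lockout, as described above.
# The feature list and the zero-speed rule are assumptions for illustration.

LOCKED_WHILE_MOVING = {"video_playback", "web_browsing", "text_entry"}

def feature_allowed(feature: str, speed_kmh: float) -> bool:
    """Allow a dashboard feature only when the vehicle is stationary,
    while leaving features such as maps or reviews always available."""
    if feature in LOCKED_WHILE_MOVING:
        return speed_kmh == 0.0
    return True

print(feature_allowed("video_playback", 50.0))      # False: car is moving
print(feature_allowed("restaurant_reviews", 50.0))  # True: still available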
Safety advocates say the companies behind these technologies are tone-deaf to mounting research showing the risks of distracted driving — and to a growing national debate about the use of mobile devices in cars and how to avoid the thousands of wrecks and injuries this distraction causes each year.
February 9, 2010
Editorial
A New Space Program
President Obama has called for scrapping NASA’s once-ambitious program to return astronauts to the Moon by 2020 as a first step toward reaching Mars. That effort, begun by former President George W. Bush, is behind schedule and its technology increasingly outdated.
Mr. Obama is instead calling on NASA to develop “game-changing” technologies to make long-distance space travel cheaper and faster, a prerequisite for reaching beyond the Moon to nearby asteroids or Mars. To save money and free the agency for more ambitious journeys, the plan also calls for transferring NASA’s more routine operations — carrying astronauts to the International Space Station — to private businesses.
If done right, the president’s strategy could pay off handsomely. If not, it could be the start of a long, slow decline from the nation’s pre-eminent position as a space-faring power.
We are particularly concerned that the White House has not identified a clear goal — Mars is our choice — or set even a notional deadline for getting there. The National Aeronautics and Space Administration and Congress need to keep the effort focused and adequately financed.
The most controversial element of the president’s plan is his proposal to scrap NASA’s mostly Moon-related technology programs that have been working to develop two new rockets, a new space capsule, a lunar landing capsule and systems for living on the lunar surface. Those efforts have been slowed by budgetary and technical problems. And at the current rate, the Moon landing would likely not occur until well after 2030. The technologies that looked reasonable when NASA first started the program in 2005 have already begun to look dated.
A lunar expedition would be of some value in learning how to live on the Martian surface but would not help us learn how to descend through Mars’ very different atmosphere or use that planet’s atmospheric resources effectively. Nor would it yield a rich trove of new scientific information or find new solutions for the difficulties of traveling deeper into space.
The president’s proposal calls for developing new technologies to make long-distance space travel possible: orbiting depots that could refuel rockets in space, lessening the weight they would have to carry from the ground; life-support systems that could operate indefinitely without resupply from Earth; new engines, propellants and materials for heavy-lift rockets; and advanced propulsion systems that could enable astronauts to reach Mars in a matter of weeks instead of roughly a year using chemical rockets.
Leaping to new generations of technology is inherently hard and NASA’s efforts may not bear fruit in any useful time period. To increase the odds of success, Congress may want to hold the agency’s feet to the fire and require that a specified percentage of its budget be devoted to technology development.
The idea of hiring private companies to ferry astronauts and cargo to the space station is also risky and based on little more than faith that the commercial sector may be able to move faster and more cheaply than NASA. The fledgling companies have yet to prove their expertise, and the bigger companies often deliver late and over budget.
If they fail or fall behind schedule, NASA would have to rely on Russia or other foreign countries to take its astronauts and cargoes aloft. That is a risk worth taking. NASA has relied on the Russians before, when its shuttle fleet was grounded for extensive repairs. It would seem too expensive for NASA to compete with a new rocket designed to reach low-Earth orbit — far better to accelerate development of a heavier-lift rocket needed for voyages beyond, as NASA now intends.
The new plan for long-distance space travel also needs clear goals and at least aspirational deadlines that can help drive technology development and make it clear to the world that the United States is not retiring from space exploration but rather is pushing toward the hardest goal within plausible reach.
We believe the target should be Mars — the planet most like Earth and of greatest scientific interest.
Many experts prefer a flexible path that would have astronauts first travel to intermediate destinations: a circle around the Moon to show the world that we can still do it; a trip to distant points where huge telescopes will be deployed and may need servicing; a visit to an asteroid, the kind of object we may some day need to deflect lest it collide with Earth. That makes sense to us so long as the goal of reaching Mars remains at the forefront.
At this point, the administration’s plans to reorient NASA are only a proposal that requires Congressional approval to proceed. Already many legislators from states that profit from the current NASA program are voicing opposition.
Less self-interested colleagues ought to embrace the notion of a truly ambitious space program with clear goals that stir all Americans’ imaginations and challenge this country’s scientists to think far beyond the Moon.
http://www.nytimes.com/2010/02/09/opini ... nted=print
February 14, 2010
Do-It-Yourself Genetic Engineering
By JON MOOALLEM
IT ALL STARTED with a brawny, tattooed building contractor with a passion for exotic animals. He was taking biology classes at City College of San Francisco, a two-year community college, and when students started meeting informally early last year to think up a project for a coming science competition, he told them that he thought it would be cool if they re-engineered cells from electric eels into a source of alternative energy. Eventually the students scaled down that idea into something more feasible, though you would be forgiven if it still sounded like science fiction to you: they would build an electrical battery powered by bacteria. This also entailed building the bacteria itself — redesigning a living organism, using the tools of a radical new realm of genetic engineering called synthetic biology.
A City College team worked on the project all summer. Then in October, five students flew to Cambridge, Mass., to present it at M.I.T. and compete against more than 1,000 other students from 100 schools, including many top-flight institutions like Stanford and Harvard. City College offers courses in everything from linear algebra to an introduction to chairside assisting (for aspiring dental hygienists), all for an affordable $26 a credit. Its students were extreme but unrelenting underdogs in the annual weekend-long synthetic-biology showdown. The competition is called iGEM: International Genetically Engineered Machine Competition.
The team’s faculty adviser, Dirk VandePol, went to City College as a teenager. He is 41, with glasses, hair that flops over his forehead and, frequently, the body language of a man who knows he has left something important somewhere but can’t remember where or what. While the advisers to some iGEM teams rank among synthetic biology’s leading researchers, VandePol doesn’t even teach genetic engineering. He teaches introductory human biology — “the skeletal system and stuff,” he explained — and signed on to the team for the same reason that his students did: the promise of this burgeoning field thrills him, and he wanted a chance to be a part of it. “Synthetic biology is the coolest thing in the universe,” VandePol told me, with complete earnestness, when I visited the team last summer.
The first thing to understand about the new science of synthetic biology is that it’s not really a new science; it’s a brazen call to conduct an existing one much more ambitiously. For almost 40 years, genetic engineers have been decoding DNA and transplanting individual genes from one organism into another. (One company, for example, famously experimented with putting a gene from an arctic flounder into tomatoes to make a variety of frost-resistant tomatoes.) But synthetic biologists want to break out of this cut-and-paste paradigm altogether. They want to write brand-new genetic code, pulling together specific genes or portions of genes plucked from a wide range of organisms — or even constructed from scratch in a lab — and methodically lacing them into a single set of genetic instructions. Implant that new code into an organism, and you should be able to make its cells do and produce things that nothing in nature has ever done or produced before.
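One way to picture that "lacing together" of parts into a single set of instructions is as plain string assembly; in the sketch below, every part name and sequence is an invented placeholder, not a real genetic part.

# Toy illustration of composing a synthetic genetic construct from parts,
# in the spirit of the paragraph above. All names and sequences are
# invented placeholders, not real genetic parts.

parts = {
    "promoter":         "TTGACATATAAT",      # hypothetical promoter motifs
    "rbs":              "AGGAGG",            # hypothetical ribosome binding site
    "gene_of_interest": "ATGAAATTTGGGTAA",   # placeholder coding sequence
    "terminator":       "GCGCGCTTTTTT",      # placeholder terminator
}

def assemble(order):
    """Concatenate named parts into one synthetic construct."""
    return "".join(parts[name] for name in order)

construct = assemble(["promoter", "rbs", "gene_of_interest", "terminator"])
print(len(construct), construct)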
As commercial applications for this kind of science materialize and venture capitalists cut checks, the hope is that synthetic biologists can engineer new, living tools to address our most pressing problems. Already, for example, one of the field’s leading start-ups, a Bay Area company called LS9, has remade the inner workings of a sugar-eating bacterium so that its cells secrete a chemical compound that is almost identical to diesel fuel. The company calls it a “renewable petroleum.” Another firm, Amyris Biotechnologies, has similarly tricked out yeast to produce an antimalarial drug. (LS9, backed by Chevron, aims to bring its product to market in the next couple of years. Amyris’s drug could be available by the end of this year, through a partnership with Sanofi-Aventis.) Stephen Davies, a synthetic biologist and venture capitalist who served as a judge at iGEM, compares the buzz around the field to the advent of steam power during the Victorian era. “Right now,” he says, “synthetic biology feels like it might be able to power everything. People are trying things; kettles are exploding. Everyone’s attempting magic right and left.”
Genetic engineers have looked at nature as a set of finished products to tweak and improve — a tomato that could be made into a slightly better tomato. But synthetic biologists imagine nature as a manufacturing platform: all living things are just crates of genetic cogs; we should be able to spill all those cogs out on the floor and rig them into whatever new machinery we want. It’s a jarring shift, making the ways humankind has changed nature until now seem superficial. If you want to build a bookcase, you can find a nice tree, chop it down, mill it, sand the wood and hammer in some nails. “Or,” says Drew Endy, an iGEM founder and one of synthetic biology’s foremost visionaries, “you could program the DNA in the tree so that it grows into a bookshelf.”
More.....
http://www.nytimes.com/2010/02/14/magaz ... nted=print
February 23, 2010
Computers Turn Flat Photos into 3-D Buildings
By JOHN MARKOFF
Rome wasn’t built in a day, but in cyberspace it might be.
Computer science researchers at the University of Washington and Cornell University are deploying a system that will blend teamwork and collaboration with powerful graphics algorithms to create three-dimensional renderings of buildings, neighborhoods and potentially even entire cities.
The new system, PhotoCity, grew from the original work of a Cornell computer scientist, Noah Snavely, who, while working on his Ph.D. dissertation at the University of Washington, developed a set of algorithms that generated three-dimensional models from unstructured collections of two-dimensional photos.
The original project was dubbed Photo Tourism and it has since been commercialized as Microsoft’s Photosynth service, making it possible for users to upload collections of photos that can then be viewed in a quasi three-dimensional montage with a Web browser.
However, Photosynth collections are generally limited to dozens or hundreds of photos. The researchers wanted to push — or “scale” — their technology to be able to handle tens of thousands or even millions of photos. They also wanted to use computer processing power to transform the photos into true three-dimensional images, or what they refer to as a “dense point cloud.”
The visualization technology is already able to quickly process large collections of digital photos of an object like a building and render ghostly and evocative three-dimensional images. To do this they use a three-stage set of algorithms that begins by creating a “sparse point cloud” with a batch of photos, renders it as a denser image, capturing much of the original surface texture of the object, and then renders it in three dimensions.
To improve the quality of their rendering capabilities, the researchers plan to integrate their computing system with a social game that will permit competing teams to add images where they are most needed to improve the quality of the visual models.
The PhotoCity game is already being played by teams of students at the University of Washington and Cornell, and the researchers plan to open it to the public in an effort to collect three-dimensional renderings in cities like New York and San Francisco. Contestants will be able to use either an iPhone application that uses the phone’s camera, or upload collections of digital images.
In adopting what is known as a social computing or collective intelligence model, they are extending an earlier University of Washington research effort that combined computing and human skills to create a video game about protein folding.
The game, Foldit, was released in May 2008, allowing users to augment computing algorithms, solving visual problems where humans could find better solutions than computers. The game quickly gained a loyal following of amateur protein folders who became addicted to the challenges that bore a similarity to solving a Rubik’s Cube puzzle.
The emergence of such collaborative systems has great promise for harnessing the creative abilities of people in tandem with networked computers, said Peter Lee, a Defense Advanced Research Projects Agency program manager who recently organized a team-based contest to use the Internet to quickly locate a series of red balloons hidden around the United States.
“The obvious thing to do is to try to mobilize a lot of people and get them to go out and take snapshots that contribute to this 3-D reconstruction,” he said. “But maybe if enough people are involved someone will come up with a better idea of how to go about doing this.”
Indeed, it was J. C. R. Licklider, a legendary official at the Defense Advanced Research Projects Agency, who was a pioneer in proposing the idea of a “man-computer symbiosis.” While at Darpa, Dr. Licklider financed a series of research projects that led directly to the modern personal computer and today’s Internet.
To entice volunteers, the researchers have created a Web site: photocitygame.com. Anyone who wants to be a “custodian” of a particular building or place can begin by uploading pictures of the site. To maintain control they will need to be part of the group that contributes the most photos, in a capture-the-flag-like competition.
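The capture-the-flag rule amounts to awarding custodianship to whichever team has contributed the most photos of a site; a minimal sketch, with invented team names and counts, might look like this.

# Minimal sketch of the capture-the-flag custodianship rule described above:
# the team that has uploaded the most photos of a building "owns" it.
# Team names and counts are invented for illustration.
from collections import Counter

def custodian(photo_contributions):
    """Return the team that has uploaded the most photos of a site."""
    counts = Counter(photo_contributions)
    team, _ = counts.most_common(1)[0]
    return team

uploads = ["Huskies", "Big Red", "Huskies", "Huskies", "Big Red"]
print(custodian(uploads))  # Huskies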
“One of the nice things for the players is they can own the points they create, whether it’s a building or a collection of buildings,” said Kathleen Tuite, a University of Washington graduate student and a computer graphics researcher who is one of the designers of PhotoCity. She said the researchers were considering the idea of offering real world prizes that would create incentives similar to Geocaching, the popular Internet GPS game.
“Eventually, the goal is to create a game without boundaries, that expands to fill the world,” Dr. Snavely said. “ For now, we’re focused on the scale of a college campus, or the heart of a city.”
http://www.nytimes.com/2010/02/23/scien ... nted=print
High-tech fair features gadgets that operate on brain waves
Mind-reading computers turn heads
By Richard Carter, Agence France-Presse, March 5, 2010
http://www.calgaryherald.com/story_prin ... 2&sponsor=
An exhibitor plays pinball using a device to measure neurons in the brain at the CeBIT fair, in Hannover, Germany. Some 4,157 companies from 68 countries are displaying their latest gadgets at the fair.
Photograph by: Daniel Mihailescu, Getty Images, Agence France-Presse
Devices allowing people to write letters or play pinball using just the power of their brains have become a major draw at the world's biggest high-tech fair.
Huge crowds at the CeBIT fair gathered around a man sitting at a pinball table, wearing a cap covered in electrodes attached to his head, who controlled the flippers with great proficiency without using hands.
"He thinks: left-hand or right-hand and the electrodes monitor the brain waves associated with that thought, send the information to a computer, which then moves the flippers," said Michael Tangermann, from the Berlin Brain Computer Interface.
But the technology is much more than a fun gadget; it could one day save your life. Scientists are researching ways to monitor motorists' brain waves to improve reaction times in a crash.
In an emergency stop situation, the brain activity kicks in on average around 200 milliseconds before even an alert driver can hit the brake.
There is no question of braking automatically for a driver -- "we would never take away that kind of control," Tangermann said.
"However, there are various things the car can do in that crucial time -- tighten the seat belt, for example," he added.
Using this brain-wave monitoring technology, a car can also tell whether the driver is drowsy, potentially warning him or her to take a break.
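To put the 200-millisecond head start in perspective, the distance it buys is easy to work out; the vehicle speed in the sketch below is assumed for the example.

# How much distance a 200 ms head start buys, per the figure quoted above.
# The vehicle speed is assumed for illustration.

LEAD_TIME_S = 0.200  # brain activity precedes the brake press by ~200 ms

def distance_saved(speed_kmh: float) -> float:
    """Metres travelled during the lead time at the given speed."""
    speed_ms = speed_kmh / 3.6
    return speed_ms * LEAD_TIME_S

print(f"{distance_saved(100):.1f} m at 100 km/h")  # about 5.6 m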
At the g.tec stall, visitors watched a man with a similar "electrode cap" sit in front of a screen with a large keyboard, with letters flashing in an ordered sequence.
The user concentrates hard when the chosen letter flashes and the brain waves stimulated at this exact moment are registered by the computer and the letter appears on the screen. The technology takes a long time at present -- it took the man around four minutes to write a five-letter word -- but researchers hope to speed it up in the near future.
Another device allows users to control robots by brain power. The small box has lights flashing at different frequencies at the four points of the compass.
The user concentrates on the corresponding light, depending on whether he wants the robot to move up, down, left or right, and the brainwaves generated by viewing that frequency are monitored and the robot is controlled.
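This is the idea behind steady-state visual evoked potential (SSVEP) interfaces: each direction flickers at its own frequency, and the EEG echoes whichever frequency the user is watching. A minimal detector, with assumed flicker frequencies and a synthetic signal, could look like this.

# Minimal sketch of the frequency-tagged control described above (SSVEP).
# The flicker frequencies, sample rate and synthetic EEG are assumptions
# for illustration only.
import numpy as np

FS = 250  # sample rate in Hz (assumed)
DIRECTIONS = {8.0: "up", 10.0: "down", 12.0: "left", 15.0: "right"}

def detect_direction(eeg: np.ndarray) -> str:
    """Pick the command whose flicker frequency dominates the EEG spectrum."""
    spectrum = np.abs(np.fft.rfft(eeg))
    freqs = np.fft.rfftfreq(len(eeg), d=1.0 / FS)
    best = max(DIRECTIONS, key=lambda f: spectrum[np.argmin(np.abs(freqs - f))])
    return DIRECTIONS[best]

t = np.arange(2 * FS) / FS                                  # two seconds of signal
eeg = np.sin(2 * np.pi * 12.0 * t) + 0.3 * np.random.randn(t.size)
print(detect_direction(eeg))  # "left" -- the 12 Hz light was being watched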
The technology is being perfected for physically disabled people, who can communicate and operate other devices using their brain.
"In future, people will be able to control wheelchairs, open doors and turn on their televisions with their minds," said Clemens Holzner from g. tec.
The CeBIT runs until Saturday.
© Copyright (c) The Calgary Herald
Researchers create 3-D invisibility cloak
Bump on gold surface concealed by photonic crystal
Agence France-Presse, March 19, 2010
European researchers have taken the world a step closer to fictional wizard Harry Potter's invisibility cape after they made an object disappear using a three-dimensional "cloak," a study published Thursday in the U.S.-based journal Science showed.
Scientists from Karlsruhe Institute of Technology in Germany and Imperial College London used the cloak, made using photonic crystals with a structure resembling piles of wood, to conceal a small bump on a gold surface, they wrote in Science.
"It's kind of like hiding a small object underneath a carpet -- except this time the carpet also disappears," they said.
"We put an object under a microscopic structure, a little like a reflective carpet," said Nicholas Stenger, one of the researchers who worked on the project.
"When we looked at it through a lens and did spectroscopy, no matter what angle we looked at the object from, we saw nothing. The bump became invisible," said Stenger.
The cloak they used to make the microscopic bump disappear was composed of special lenses that work by bending light waves to suppress the light scattered from the bump, the study says.
The invisibility cloak was minute, measuring 100 microns by 30 microns -- one micron being one-thousandth of a millimetre -- and the bump it hid was 10 times smaller, said Stenger.
The researchers are working now to recreate the disappearing bump, but on a larger scale. However, Stenger said Harry Potter's invisibility cape would not be hanging in would-be wizards' wardrobes in the near future.
"Theoretically, it would be possible to do this on a large scale, but technically, it's totally impossible with the knowledge we have now," he said.
© Copyright (c) The Calgary Herald
http://www.calgaryherald.com/story_prin ... 1&sponsor=
Scientists celebrate Big Bang simulation
Project could alter our views of the universe
By Robert Evans, Reuters, with files from Canwest News Service; March 31, 2010
http://www.calgaryherald.com/story_prin ... 8&sponsor=
A graphic showing a subatomic collision at full power is seen in the control room of CERN's Large Hadron Collider near Geneva, on Tuesday. One scientist called it "the breakthrough moment we have all been waiting for."
Photograph by: Denis Balibouse, Reuters
Physicists smashed subatomic particles into each other with record energy on Tuesday, creating thousands of mini-Big Bangs like the primeval explosion that gave birth to the universe 13.7 billion years ago.
Scientists and engineers in control rooms across the sprawling European Centre for Nuclear Research (CERN) near Geneva burst into applause as the $9.4-billion project to probe the origins of the cosmos scored its first big success.
Their cheers were echoed by physicists from Tokyo to Toronto.
"This opens the door to a totally new era of discovery," said Sergio Bertolucci, CERN's director of research. "It is a step into the unknown where we will find things we thought were there and perhaps things we didn't know existed."
"It just shows what we can do in pushing knowledge forward on where we came from, how the early universe evolved," said CERN director general Rolf Heuer, speaking, like Bertolucci, on a video relay from Tokyo.
Isabel Trigger and her husband Rob McPherson, key players on the Canadian team involved with the unprecedented international experiment, were ecstatic to see the Large Hadron Collider project finally smashing subatomic particles.
"It really is fantastic," Trigger said from the TRIUMF national physics at the University of B.C. after Tuesday's collision in Europe.
"This is the breakthrough movement we have all been waiting for," said McPherson, a professor at University of Victoria and principal investor of the Canadian team that helped design, build and commission the most complicated machine ever.
Colourful images of the collisions at the centre of the LHC, which will continue for over a decade, were flashed onto screens across CERN.
CERN scientists say the images reflect what happened a fraction of a second after the Big Bang as matter and energy was spewed out, leading to the formation of galaxies, stars and planets, and eventually the appearance of life.
Over the coming months and years, 10,000 researchers in laboratories around the globe, as well as at CERN, will analyze the huge volumes of data that will be produced from billions of LHC particle collisions to see how that happened.
Among the constituents of the universe they hope to track down are the invisible dark matter thought to make up 25 per cent of the cosmos, a particle dubbed the Higgs boson that gives mass to matter, and perhaps new dimensions to add to the four already known.
"These are the known unknowns, but there are unknown unknowns out there which could make us radically revise our view of how the universe works," Bertolucci said.
After two efforts earlier in the day were aborted due to technical glitches, the LHC slammed beams of particles together at a collision energy of 7 TeV, or 7 million million electron volts.
This was about 3 1/2 times the previous record for a particle accelerator. The particle beams were travelling at a fraction under the speed of light when they hit each other in a tunnel 100 metres under the Swiss-French border.
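A 7 TeV collision corresponds to two beams of 3.5 TeV each, and the claim that the protons travelled at "a fraction under the speed of light" follows from basic relativity; the sketch below uses the standard proton rest energy.

# Checking "a fraction under the speed of light" from the 3.5 TeV per-beam
# energy implied by the 7 TeV collisions described above.
import math

PROTON_REST_ENERGY_GEV = 0.938  # standard value for the proton
beam_energy_gev = 3500.0        # 3.5 TeV per beam, 7 TeV per collision

gamma = beam_energy_gev / PROTON_REST_ENERGY_GEV  # Lorentz factor
beta = math.sqrt(1.0 - 1.0 / gamma**2)            # speed as a fraction of c

print(f"gamma ~ {gamma:.0f}, v/c ~ {beta:.9f}")
# gamma ~ 3731, v/c ~ 0.999999964 -- indeed a fraction under light speed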
© Copyright (c) The Calgary Herald
There is a related video linked at:
http://www.nytimes.com/2010/04/27/opini ... ?th&emc=th
April 27, 2010
Editorial
View of the Sun
There has always been something miraculous about transmissions from space — those thin datastreams trickling toward Earth from research spacecraft. Over the years, the transmissions have grown more and more robust, richer in information, giving us dazzling and detailed panoramas of Saturn’s moons and rings and the surface of Mars. But there has never really been anything like the Solar Dynamics Observatory. Launched in mid-February, it is shipping a torrent of data our way from its orbit above the Sun.
Last week, the Goddard Space Flight Center took the craft online, releasing the first videos and still images it shot. The quality of these images is extraordinary, 10 times the resolution of high-definition television, according to NASA.
We have seen the surface of the Sun before, but never with this clarity. Every 10 seconds, the satellite photographs the solar disk in eight different wavelengths, and what emerges — even in these earliest images — is both stirring and disorienting. The Sun is the most constant object in our lives, but what we see in these videos is a livid, roiling star, mottled and seething on every wavelength. It is a thing of intense, disturbing beauty.
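The cadence quoted above implies a remarkable image count; a quick estimate follows, with the per-image pixel count assumed from the "10 times the resolution of high-definition television" comparison.

# Rough image-count and data-volume estimate from the cadence quoted above
# (eight wavelengths every 10 seconds). The per-image pixel count is an
# assumption based on "10 times the resolution of high-definition television".
SECONDS_PER_DAY = 24 * 60 * 60
images_per_day = (SECONDS_PER_DAY / 10) * 8
megapixels_per_image = 1920 * 1080 * 10 / 1e6  # ~20.7 MP, assumed

print(f"{images_per_day:,.0f} images per day")  # 69,120
print(f"~{images_per_day * megapixels_per_image / 1e6:,.1f} terapixels per day")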
The Solar Dynamics Observatory follows on the work of other important solar projects, including the Solar and Heliospheric Observatory and the twin satellites of the project known as Stereo, for Solar Terrestrial Relations Observatory, which has its own iPhone app. For the next five years and more, this new satellite will be studying the patterns of solar energy that affect life on Earth, like the solar storm that lit up the aurora borealis in the past few weeks and can disrupt navigation and communications. And, in a sense, it creates a new solar effect, which is the ability of humans to peer directly into the most familiar of stars and realize how alien it is.
*****
April 26, 2010
The Search for Genes Leads to Unexpected Places
By CARL ZIMMER
Edward M. Marcotte is looking for drugs that can kill tumors by stopping blood vessel growth, and he and his colleagues at the University of Texas at Austin recently found some good targets — five human genes that are essential for that growth. Now they’re hunting for drugs that can stop those genes from working. Strangely, though, Dr. Marcotte did not discover the new genes in the human genome, nor in lab mice or even fruit flies. He and his colleagues found the genes in yeast.
“On the face of it, it’s just crazy,” Dr. Marcotte said. After all, these single-cell fungi don’t make blood vessels. They don’t even make blood. In yeast, it turns out, these five genes work together on a completely unrelated task: fixing cell walls.
Crazier still, Dr. Marcotte and his colleagues have discovered hundreds of other genes involved in human disorders by looking at distantly related species. They have found genes associated with deafness in plants, for example, and genes associated with breast cancer in nematode worms. The researchers reported their results recently in The Proceedings of the National Academy of Sciences.
More....
http://www.nytimes.com/2010/04/27/scien ... &th&emc=th
May 14, 2010
Goddess English of Uttar Pradesh
By MANU JOSEPH
Mumbai, India
A FORTNIGHT ago, in a poor village in Uttar Pradesh, in northern India, work began on a temple dedicated to Goddess English. Standing on a wooden desk was the idol of English — a bronze figure in robes, wearing a wide-brimmed hat and holding aloft a pen. About 1,000 villagers had gathered for the groundbreaking, most of them Dalits, the untouchables at the bottom of India’s caste system. A social activist promoting the study of English, dressed in a Western suit despite the hot sun and speaking as if he were imparting religious wisdom, said, “Learn A, B, C, D.” The temple is a gesture of defiance from the Dalits to the nation’s elite as well as a message to the Dalit young — English can save you.
A few days later, the Internet Corporation for Assigned Names and Numbers, a body that oversees domain names on the Web, announced a different kind of liberation: it has taken the first steps to free the online world from the Latin script, which English and most Web addresses are written in. In some parts of the world, Web addresses can already be written in non-Latin scripts, though until this change, all needed the Latin alphabet for country codes, like “.sa” for Saudi Arabia. But now that nation, along with Egypt and the United Arab Emirates, has been granted a country code in the Arabic alphabet, and Russia has gotten a Cyrillic one. Soon, others will follow.
Icann calls it a “historic” development, and that is true, but only because a great cliché has finally been defeated. The Internet as a unifier of humanity was always literary nonsense, on par with “truth will triumph.”
The universality of the Latin script online was an accident of its American invention, not a global intention. The world does not want to be unified. What is the value of belonging if you belong to all? It is a fragmented world by choice, and so it was always a fragmented Web. Now we can stop pretending — but that doesn’t mean this is a change worth celebrating.
Many have argued that the introduction of domain names and country codes in non-Latin scripts will help the Web finally reach the world’s poor. But it is really hard to believe that what separates an Egyptian or a Tamil peasant from the Internet is the requirement to type in a few foreign characters. There are far greater obstacles. It is even harder to believe that all the people who are demanding their freedom from the Latin script are doing it for humanitarian reasons. A big part of the issue here is nationalism, and the East’s imagination of the West as an adversary. This is just the latest episode in an ancient campaign.
A decade ago I met Mahatma Gandhi’s great-grandson, Tushar Gandhi, a jolly, endearing, meat-eating man. He was distraught that the Indians who were creating Web sites were choosing the dot-com domain over the more patriotic dot-in. He was trying to convince Indians to flaunt their nationality. He told me: “As long as we live in this world, there will be boundaries. And we need to be proud of what we call home.”
It is the same sentiment that is now inspiring small groups of Indians to demand top-level domain names (the suffix that follows the dot in a Web address) in their own native scripts, like Tamil. The Tamil language is spoken in the south Indian state of Tamil Nadu, where I spent the first 20 years of my life, and where I have seen fierce protests against the colonizing power of Hindi. The International Forum for Information Technology in Tamil, a tech advocacy and networking group, has petitioned Icann for top-level domain names in the Tamil script. But if it cares about increasing the opportunities available to poor Tamils, it should be promoting English, not Tamil.
There’s no denying that at the heart of India’s new prosperity is a foreign language, and that the opportunistic acceptance of English has improved the lives of millions of Indians. There are huge benefits in exploiting a stronger cultural force instead of defying it. Imagine what would have happened if the 12th-century Europeans who first encountered Hindu-Arabic numerals (0, 1, 2, 3) had rejected them as a foreign oddity and persisted with the cumbersome Roman numerals (IV, V). The extraordinary advances in mathematics made by Europeans would probably have been impossible.
But then the world is what it is. There is an expression popularized by the spread of the Internet: the global village. Though intended as a celebration of the modern world’s inclusiveness, it is really an accurate condemnation of that world. After all, a village is a petty place — filled with old grudges, comical self-importance and imagined fears.
Manu Joseph, the deputy editor of the Indian newsweekly OPEN, is the author of the forthcoming novel “Serious Men.”
http://www.nytimes.com/2010/05/16/opini ... ?th&emc=th
Synthetic genome inspires both awe and apprehension
By Margaret Munro, Canwest News Service
May 21, 2010
In a feat of genetic engineering heralded by philosophic quotations and dark fears of a Frankenstein future, a team of scientists in a Maryland laboratory have brought to life the world's first synthetic cells.
The microbes -- a tiny clump of blue cells -- came to life about a month ago. They are controlled by a chromosome made by a team led by maverick geneticist Craig Venter, who has dreamed of creating artificial life for 15 years.
Venter and his colleagues have now accomplished the feat and inscribed their names -- along with a few lines of philosophy -- in the life-giving chromosome.
"We ended up with the world's first synthetic cell [that is] powered and controlled totally by a synthetic chromosome, made from four bottles of chemicals," Venter said.
The genetic whiz, who is also working with some of the world's biggest companies to try to put synthetic microbes to work, has taken to describing life as "a software process" that can be "booted up."
"It's certainly changed my views of the definition of life and how life works," said Venter, who unveiled his synthetic cells Thursday in the journal Science.
He described the cells as "the first self-replicating species we've had on the planet whose parent is a computer."
Praise and worry
The creation is inspiring both awe and angst.
"It is a remarkable technological feat," said University of Toronto bioengineer Elizabeth Edwards.
"It's paradigm-shifting," said University of Calgary bioethicist and biochemist Gregor Wolbring, adding the fast-moving field of synthetic biology is ushering in "cyber" cells and life.
It could be as "transformative" as the computer revolution, said Andrew Hessel, of the Pink Army Cooperative, an Alberta-based initiative promoting do-it-yourself bioengineering.
Hessel said Venter deserves the Nobel Prize for his pioneering work in creating "a new branch on the evolutionary tree" -- one where humans shape and control new species.
While Hessel foresees great things, others see looming disaster.
Historic breakthrough
The arrival of the synthetic microbes "should be a wake-up call that a technological step-change of historic and alarming proportions has now occurred," said Pat Mooney, the Ottawa-based executive director of the watchdog agency ETC Group, which follows Venter's work closely.
"Like splitting the atom or cloning Dolly, the world is now going to have to deal with the social, economic and political fallout from commercially-driven scientific hubris in ways we can't yet imagine," said Mooney.
He raises the spectre of "new forms of living pollution and bioweapons" and said Venter's partnership with companies such as BP and ExxonMobil "threatens biodiversity on a large scale."
Synthetic microbes and cells could consume large amounts of plant life as feedstock for the next generation of biofuels and bio-based chemicals, said Mooney, who is calling for a moratorium on synthetic biology until oversight mechanisms are in place.
Mooney and his colleagues say the synthetic microbes should not be allowed out of the lab.
Venter's ambitions
Venter has big plans for synthetic life, and has filed for patents on some of the techniques his team is using. He is collaborating with Exxon Mobil to create algae to capture carbon dioxide and construct hydrocarbons "to try to replace taking oil out of the ground."
New organisms could be designed to make chemicals and "food substances" and clean up water supplies, he said, noting that the most immediate application is a project to speed up flu-vaccine production.
"We are entering a new era," said Venter. "We are limited mostly by our imaginations."
His team of 20 scientists at the J. Craig Venter Institute in Maryland is said to have spent close to $40 million on the project to provide "proof of principle for producing cells based upon genome sequences designed in the computer."
They had hoped to bring their synthetic chromosome to life years ago, but Venter said they ran into serious "roadblocks" stringing the DNA together and getting the chromosome to work.
They designed the chromosome from a DNA sequence for a simple species of bacteria. It was stored in a computer and used as instructions for assembly of the chromosome, which was made of four compounds that are the basic building blocks of DNA.
Venter likens it to building something out of Lego pieces. They made short bits of the DNA and then inserted them into yeast cells, where DNA-repair enzymes linked the strings together. After three rounds of assembly, they had a synthetic genome that was more than a million building blocks long.
Then they inserted the synthetic chromosome into cells of naturally occurring bacteria. One tiny mistake in the synthetic genome caused a weeks-long delay, but last month the genome "booted up" some recipient cells. It took control of the cells, which began to replicate and generated a colony of blue bacteria -- a blue marker turns on in cells using the new genome.
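The "Lego pieces" description amounts to hierarchical assembly: short synthetic fragments are joined into larger and larger pieces over successive rounds until a single genome-length sequence remains. The toy sketch below mimics only that bookkeeping; the fragment length and join size are illustrative assumptions, not the Venter team's actual laboratory protocol.

```python
# Toy illustration of hierarchical assembly: join short fragments into
# progressively larger pieces over successive rounds until one sequence remains.
# Purely illustrative; not the actual laboratory procedure.
from typing import List

def assemble_round(fragments: List[str], pieces_per_join: int) -> List[str]:
    """One 'round' of assembly: join consecutive fragments in groups."""
    return [
        "".join(fragments[i:i + pieces_per_join])
        for i in range(0, len(fragments), pieces_per_join)
    ]

def hierarchical_assembly(fragments: List[str], pieces_per_join: int = 10) -> str:
    """Repeat assembly rounds until a single sequence is left."""
    round_number = 0
    while len(fragments) > 1:
        round_number += 1
        fragments = assemble_round(fragments, pieces_per_join)
        print(f"round {round_number}: {len(fragments)} pieces")
    return fragments[0]

# Example: 1,000 short fragments of 1,080 bases each, joined ten at a time
# (illustrative numbers chosen to land near a million bases in three rounds).
fragments = ["ACGT" * 270 for _ in range(1000)]
genome = hierarchical_assembly(fragments, pieces_per_join=10)
print(f"final length: {len(genome):,} bases")
```

With these numbers, three rounds of ten-at-a-time joins produce a single sequence just over a million bases long, in line with the article's description.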
Venter's team has been publishing reports on their progress over the years, and observers such as Edwards in Toronto note that they haven't created a truly synthetic life form because the genome was inserted into existing cells.
"It is important to note that they synthesized a genome, not a whole cell," said Edwards.
But she said it's still a remarkable accomplishment as they have devised "clever ways of assembling and manipulating large molecules of DNA without breaking them up."
And as Venter notes, the artificial genome took control of the cells to create what he calls "synthetic cells."
Venter said "it's pretty stunning" to replace the DNA software in a cell. "The cell instantly starts reading that new software, starts making a whole different set of proteins, and within a short while all the characteristics of the first species disappear and a new species emerges from the software that controls that cell going forward."
He said the new synthetic cells are based on a "minor" pathogen that can infect goats. He said they tried to eliminate the disease-causing genes.
'Cannot' escape lab
"It will not grow outside of the laboratory unless it is deliberately injected or sprayed into a goat," he said, noting the project had an extensive bioethical review before it began.
The synthetic chromosome includes several "watermarks" that make it clear that it was made in a lab, including the scientists' names and three quotations -- "adding a little philosophy into the genetic code," said Venter.
As synthetic biology advances, he said, more sophisticated containment systems will develop, such as "suicide" genes -- genetic fail-safes that would limit the organisms' life spans or kill them off if they should leave a controlled environment.
"There are a number of approaches we and other labs are developing to guarantee absolute containment," he said.
From simple to complex
While this first creation is a simple bacterium, he expects to make more complex synthetic cells.
"Higher animals, multi-cellular systems are, I think, projects for the much more distant future," said Venter.
While some critics want a moratorium, Edwards said the work is "not really any more concerning than the kinds of DNA manipulations one can already do.
"And it is great that this kind of research is done openly so that we can have intelligent dialogue about what it means," she said.
---
SPELLING IN DNA
The new synthetic genome includes designed segments of DNA that use the genetic "alphabet" of genes and proteins to spell out words and phrases.
In addition to the new code, the new genome includes a web address to send e-mails to -- if you manage to read the code -- plus the names of 46 authors and other key contributors, and three quotations (a toy illustration of how text can be spelled in DNA follows the quotations):
"To live, to err, to fall, to triumph, to recreate life out of life."
--James Joyce
"See things not as they are, but as they might be."
--Felix Adler, ethical philosopher, as quoted in the Robert Oppenheimer biography, American Prometheus.
"What I cannot build, I cannot understand."
--Richard Feynman, physicist
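The sidebar does not reveal the cipher the researchers used, so the following is a deliberately invented one: a toy table that maps each letter (and a space) to a three-base codon, simply to show how a phrase can be written into DNA and read back out. Nothing in this mapping reflects the actual watermark scheme.

```python
# Toy "watermark" cipher: map each letter to an arbitrary three-base codon.
# The mapping is invented for illustration; it is NOT the cipher the Venter
# team actually used.
import string

BASES = "ACGT"
ALPHABET = string.ascii_uppercase + " "                     # 27 symbols
CODONS = [a + b + c for a in BASES for b in BASES for c in BASES]  # 64 codons
ENCODE = dict(zip(ALPHABET, CODONS))
DECODE = {codon: letter for letter, codon in ENCODE.items()}

def to_dna(text: str) -> str:
    """Spell a message as a run of three-base codons."""
    return "".join(ENCODE[ch] for ch in text.upper() if ch in ENCODE)

def from_dna(dna: str) -> str:
    """Read the message back, three bases at a time."""
    return "".join(DECODE[dna[i:i + 3]] for i in range(0, len(dna), 3))

watermark = to_dna("WHAT I CANNOT BUILD I CANNOT UNDERSTAND")
print(watermark)
print(from_dna(watermark))
```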
© Copyright (c) The Vancouver Sun
http://www.vancouversun.com/technology/ ... story.html
May 27, 2010
The Earth’s Secrets, Hidden in the Skies
By DANIEL N. BAKER
BOULDER, Colo.
ONE of the greatest advances in space technology has been the military’s Global Positioning System satellites, which provide remarkably accurate navigation information for everything from smart phones and cars to pet collars.
But the navigational data is only one part of the program’s mission. The Nuclear Detonation Detection System, an array of sensors also on board the satellites, watches the world for nuclear explosions. In the process, it collects mounds of environmental data which, in the hands of climate scientists, could add greatly to our understanding of global warming.
Unlike the G.P.S. information, however, much of the detection system data is hidden behind bureaucratic walls by national security agencies, which treat it as classified, even though it isn’t, and even though there’s no compelling national security reason to do so.
The history of the G.P.S. system shows the impact satellite data can have on commercial and scientific progress. Since it was first made publicly available in the 1980s, G.P.S. has revolutionized industries from telecommunications to agriculture. Estimates place its economic value in the tens to hundreds of billions of dollars each year. And that’s not counting its impact on everyday activities like hiking, boating and golf.
Then there’s the science: using the G.P.S. radio waves that travel through the earth’s atmosphere, researchers can better understand its temperature, density, water content and other properties, data that is critical to work on climate change and pollution.
Meanwhile, in the process of watching for a nuclear detonation, the detection system’s sensors — designed to observe visible light, high-frequency radio waves, X-rays, gamma rays and other data that might point to a nuclear explosion — stream an amazing array of data on powerful lightning strikes, space hazards like meteoroids and man-made debris and severe solar and space weather events.
It’s a daily trove of scientifically useful data that is not duplicated by any other sensor systems, military or civilian. True, other agencies collect similar data; sadly, it’s not nearly as comprehensive or global as the detection system’s information.
Unless a nuclear explosion takes place, the data has no immediate relevance to national security. Yet bureaucratic inertia has kept in place the presumption that because some of the data might be sensitive, all of it has to be protected; as a result, a thicket of paperwork and procedures deters all but the most resourceful and patient scientists from gaining access to it.
Making the data more available would be remarkably simple. The Departments of Energy and Defense, which operate the satellites’ detection functions, should apply the same standards used for G.P.S.: All but the most sensitive data is disseminated automatically, so that anyone in the world can tap into the flow of information beaming down from the satellites.
Opening this data would have many benefits. It could, for example, improve meteorologists’ ability to monitor destructive weather like “super” thunderstorms, hurricanes and solar storms, which can disable the electric power grid.
It would also allow scientists and engineers at national laboratories like Los Alamos, Sandia and Lawrence Livermore to greatly expand their research on climate change and other critical topics. While some scientists can already get access to the data, current restrictions mean they can’t easily share it. Making the data truly public would allow full peer review of their findings, leading to higher-quality research.
Much as America’s scientific leadership and policy of open inquiry did wonders for its prestige during the cold war, making most of the detection system data available to the global public would show friends, allies and adversaries that the United States is willing to use even its most advanced defense assets for the betterment of humanity.
American taxpayers support a truly remarkable monitoring system whose information could significantly improve our health, security and well-being. We should use this hidden treasure to make the world a better and safer place.
Daniel N. Baker is a professor of astrophysical and planetary sciences and the director of the Laboratory for Atmospheric and Space Physics at the University of Colorado at Boulder.
http://www.nytimes.com/2010/05/28/opini ... ?th&emc=th
June 10, 2010
Mind Over Mass Media
By STEVEN PINKER
Truro, Mass.
NEW forms of media have always caused moral panics: the printing press, newspapers, paperbacks and television were all once denounced as threats to their consumers’ brainpower and moral fiber.
So too with electronic technologies. PowerPoint, we’re told, is reducing discourse to bullet points. Search engines lower our intelligence, encouraging us to skim on the surface of knowledge rather than dive to its depths. Twitter is shrinking our attention spans.
But such panics often fail basic reality checks. When comic books were accused of turning juveniles into delinquents in the 1950s, crime was falling to record lows, just as the denunciations of video games in the 1990s coincided with the great American crime decline. The decades of television, transistor radios and rock videos were also decades in which I.Q. scores rose continuously.
For a reality check today, take the state of science, which demands high levels of brainwork and is measured by clear benchmarks of discovery. These days scientists are never far from their e-mail, rarely touch paper and cannot lecture without PowerPoint. If electronic media were hazardous to intelligence, the quality of science would be plummeting. Yet discoveries are multiplying like fruit flies, and progress is dizzying. Other activities in the life of the mind, like philosophy, history and cultural criticism, are likewise flourishing, as anyone who has lost a morning of work to the Web site Arts & Letters Daily can attest.
Critics of new media sometimes use science itself to press their case, citing research that shows how “experience can change the brain.” But cognitive neuroscientists roll their eyes at such talk. Yes, every time we learn a fact or skill the wiring of the brain changes; it’s not as if the information is stored in the pancreas. But the existence of neural plasticity does not mean the brain is a blob of clay pounded into shape by experience.
Experience does not revamp the basic information-processing capacities of the brain. Speed-reading programs have long claimed to do just that, but the verdict was rendered by Woody Allen after he read “War and Peace” in one sitting: “It was about Russia.” Genuine multitasking, too, has been exposed as a myth, not just by laboratory studies but by the familiar sight of an S.U.V. undulating between lanes as the driver cuts deals on his cellphone.
Moreover, as the psychologists Christopher Chabris and Daniel Simons show in their new book “The Invisible Gorilla: And Other Ways Our Intuitions Deceive Us,” the effects of experience are highly specific to the experiences themselves. If you train people to do one thing (recognize shapes, solve math puzzles, find hidden words), they get better at doing that thing, but almost nothing else. Music doesn’t make you better at math, conjugating Latin doesn’t make you more logical, brain-training games don’t make you smarter. Accomplished people don’t bulk up their brains with intellectual calisthenics; they immerse themselves in their fields. Novelists read lots of novels, scientists read lots of science.
The effects of consuming electronic media are also likely to be far more limited than the panic implies. Media critics write as if the brain takes on the qualities of whatever it consumes, the informational equivalent of “you are what you eat.” As with primitive peoples who believe that eating fierce animals will make them fierce, they assume that watching quick cuts in rock videos turns your mental life into quick cuts or that reading bullet points and Twitter postings turns your thoughts into bullet points and Twitter postings.
Yes, the constant arrival of information packets can be distracting or addictive, especially to people with attention deficit disorder. But distraction is not a new phenomenon. The solution is not to bemoan technology but to develop strategies of self-control, as we do with every other temptation in life. Turn off e-mail or Twitter when you work, put away your BlackBerry at dinner time, ask your spouse to call you to bed at a designated hour.
And to encourage intellectual depth, don’t rail at PowerPoint or Google. It’s not as if habits of deep reflection, thorough research and rigorous reasoning ever came naturally to people. They must be acquired in special institutions, which we call universities, and maintained with constant upkeep, which we call analysis, criticism and debate. They are not granted by propping a heavy encyclopedia on your lap, nor are they taken away by efficient access to information on the Internet.
The new media have caught on for a reason. Knowledge is increasing exponentially; human brainpower and waking hours are not. Fortunately, the Internet and information technologies are helping us manage, search and retrieve our collective intellectual output at different scales, from Twitter and previews to e-books and online encyclopedias. Far from making us stupid, these technologies are the only things that will keep us smart.
Steven Pinker, a professor of psychology at Harvard, is the author of “The Stuff of Thought.”
http://www.nytimes.com/2010/06/11/opini ... 0HDraBsJ0w
*****
June 11, 2010
Friends, Neighbors and Facebook
By CHARLES M. BLOW
Mister Rogers would be so disappointed in me.
Aside from the people who live in my building, I know the name of only one person who lives on my block: Roger Cohen, a Times colleague.
I want to blame it on the fact that I’m absolutely awful with names and can be quite socially awkward. But that has ever been thus. Then I thought that maybe it was a city thing, but that explanation goes but so far. I’m actually beginning to believe that it’s bigger than me, bigger than my block, bigger than this city. I increasingly believe that less neighborliness is becoming intrinsic to the modern American experience — a most unfortunate development.
A report issued Wednesday by the Pew Research Center found that only 43 percent of Americans know all or most of their neighbors by name. Twenty-nine percent know only some, and 28 percent know none. (Oh, my God! When Roger dashes off to Paris this summer, I’ll become a “none.”)
Yet I have thousands of “friends” and “followers” on the social-networking sites in which I vigorously participate. (In real life, I maintain a circle of friends so small that I could barely arrange a circle.) Something is wrong with this picture.
I am by no means a woe-is-us, sky-is-falling, evil-is-the-Internet type. In fact, I think that a free flow of information has led to greater civic engagement. Yippee! However, I am very much aware that social networks are rewiring our relationships and that our keyboard communities are affecting the attachments in our actual ones.
For instance, a Pew report issued in November 2009 and entitled “Social Isolation and New Technology” found that “users of social networking services are 26 percent less likely to use their neighbors as a source of companionship.”
And a May study by researchers at the University of Michigan found that “college kids today are about 40 percent lower in empathy than their counterparts of 20 or 30 years ago.” The reason? One factor could be social networking. As one researcher put it, “The ease of having ‘friends’ online might make people more likely to just tune out when they don’t feel like responding to others’ problems, a behavior that could carry over offline.”
Furthermore, an article in The New York Times on Thursday laid out new research that revealed that “feelings of hurt, jealousy and competition are widespread” among children of parents who obsess over cellphones, instant messaging and Twitter at the expense of familial engagement.
There’s no need to pine for a return to the pre-Facebook, cardigan-swaddled idealism of Mister Rogers and his charming “neighbors” and “friends,” but it is important for us to remember that tangible, meaningful engagement with those around us builds better selves and stronger communities. I should post that on Twitter.
http://www.nytimes.com/2010/06/12/opini ... ?th&emc=th