
mix150.com MIX150 DOWNLOAD GAMES PLAYSTATION RIP FILMS

Saturday, October 22, 2011


The Electric Insurance Company has unveiled its new mobile app, titled Ever On, which will allow its customers to access a number of self-service features through their Apple smartphone or tablet device.
The app is designed for ease of use and lets policyholders quickly submit an auto claim, exchange policy information with another person, check the status of their coverage, call for roadside assistance, and schedule an appointment for glass repair, among many other features.

It also includes a particularly innovative feature that helps customers locate an electric vehicle charging station. According to the vice president of sales and marketing for Electric Insurance Company, electric vehicles are becoming increasingly common on the roads in the United States, and General Electric will soon have a sizeable fleet of them. "As a GE affiliate company and provider of its fleet insurance, it's a given that we should offer an easy way to find a charging facility."
As Electric Insurance will be installing its own electric vehicle charging station in 2012, the app will also soon provide directions to that location. The station will be called the GE WattStation, and it will be accessible both to the company's policyholders and to the public in general.
Tom Stumpek, the CIO at Electric Insurance, has focused on emphasizing the development of apps for mobile devices. He said that being able to provide partners and policyholders with mobile business solutions is an essential strategy in the industry. He added that anytime tools available online give them the opportunity to take action exactly when it is required, and give the insurer a meaningful service advantage.

Friday, October 21, 2011

Better 'photon loops' may be key to computer and physics advances

ScienceDaily (Aug. 22, 2011) — Surprisingly, transmitting information-rich photons thousands of miles through fiber-optic cable is far easier than reliably sending them just a few nanometers through a computer circuit. However, it may soon be possible to steer these particles of light accurately through microchips because of research performed at the Joint Quantum Institute of the National Institute of Standards and Technology (NIST) and the University of Maryland, together with Harvard University.

The scientists behind the effort say the work not only may lead to more efficient information processors on our desktops, but also could offer a way to explore a particularly strange effect of the quantum world known as the quantum Hall effect in which electrons can interfere with themselves as they travel in a magnetic field. The corresponding physics is rich enough that its investigation has already resulted in three Nobel Prizes, but many intriguing theoretical predictions about it have yet to be observed.

The advent of optical fibers a few decades ago made it possible for dozens of independent phone conversations to travel long distances along a single glass cable by, essentially, assigning each conversation to a different color — each narrow strand of glass carrying dramatic amounts of information with little interference.

Ironically, while it is easy to send photons far across a town or across the ocean, scientists have a harder time directing them to precise locations across short distances — say, a few hundred nanometers — and this makes it difficult to employ photons as information carriers inside computer chips.

"We run into problems when trying to use photons in microcircuits because of slight defects in the materials chips are made from," says Jacob Taylor, a theoretical physicist at NIST and JQI. "Defects crop up a lot, and they deflect photons in ways that mess up the signal."

These defects are particularly problematic when they occur in photon delay devices, which slow the photons down to store them briefly until the chip needs the information they contain. Delay devices are usually constructed from a single row of tiny resonators, so a defect among them can ruin the information in the photon stream. But the research team perceived that using multiple rows of resonators would build alternate pathways into the delay devices, allowing the photons to find their way around defects easily.
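To make the routing idea concrete, here is a minimal Python sketch (an illustration only, not the team's actual model): it treats the delay device as a grid of resonators, marks one resonator as defective, and checks whether light can still find a path from input to output. The grid sizes and the simple nearest-neighbour hopping are assumptions made purely for illustration.

```python
# Illustrative sketch only: models the delay line as a grid of resonators and
# checks whether photons can route around defective sites. This is a toy
# connectivity argument, not the physics in the Nature Physics paper.
from collections import deque

def has_path(rows, cols, defects):
    """Return True if a defect-avoiding path exists from the left edge to the right edge."""
    defects = set(defects)
    start = [(r, 0) for r in range(rows) if (r, 0) not in defects]
    queue, seen = deque(start), set(start)
    while queue:
        r, c = queue.popleft()
        if c == cols - 1:
            return True
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and (nr, nc) not in defects and (nr, nc) not in seen:
                seen.add((nr, nc))
                queue.append((nr, nc))
    return False

defects = [(0, 4)]                   # one bad resonator in column 4
print(has_path(1, 10, defects))      # single row: False -- the defect blocks the line
print(has_path(4, 10, defects))      # multiple rows: True -- photons can detour around it
```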

As delay devices are a vital part of computer circuits, the alternate-pathway technique may help overcome obstacles blocking the development of photon-based chips, which are still a dream of computer manufacturers. While that application would be exciting, lead author Mohammad Hafezi says the prospect of investigating the quantum Hall effect with the same technology also has great scientific appeal.

"The photons in these devices exhibit the same type of interference as electrons subjected to the quantum Hall effect," says Hafezi, a research associate at JQI. "We hope these devices will allow us to sidestep some of the problems with observing the physics directly, instead allowing us to explore them by analogy."


Story Source:

The above story is reprinted (with editorial adaptations by ScienceDaily staff) from materials provided by National Institute of Standards and Technology (NIST).

Journal Reference:

Mohammad Hafezi, Eugene A. Demler, Mikhail D. Lukin, Jacob M. Taylor. Robust optical delay lines with topological protection. Nature Physics, 2011; DOI: 10.1038/NPHYS2063



View the original article here

Feeding the five thousand -- or was it three? Researchers claim most crowd estimations are unreliable

ScienceDaily (Aug. 26, 2011) — The public should view crowd estimation with skepticism, say the authors of a study published in Significance, the magazine of the Royal Statistical Society and the American Statistical Association, as they suggest more reliable alternatives to current estimating methods.

Estimates of crowd sizes vary greatly, and the success of an event is often measured by the size of the crowd. Organizers of the 2007 "Stop the War" demonstration in London reported crowds of 60,000, whereas the police reported just 10,000. The US Government's estimate of the crowds at Obama's inauguration ceremony was 1.8 million, while other estimates were much less, closer to one million. "In the absence of any accurate estimation methods, the public are left with a view of the truth colored by the beliefs of the people making the estimates," claims Professor Paul Yip, of the University of Hong Kong, one of the authors of the study.

Such a huge discrepancy in estimates is currently not unusual and suggests the use of crowd sizes as a political tool. Larger crowd sizes are a means of recruiting others to the cause, and it is more difficult for the authorities to ignore demands. "The authorities are sometimes put in a difficult position," says Yip. "It is important to highlight the shortcomings of existing estimating methods."

In this latest study, the authors reveal several more accurate, more reliable methods of estimating crowd sizes; current approaches carry a wide margin of error even when the estimator is genuinely seeking the truth. The authors recommend that organizers and authorities use an area × density estimating method for static crowds, which reduces the margin of error to less than 10%. Furthermore, they have devised an entirely new method of reliably estimating mobile crowds: two inspection points, not too close together and with one near the end, are placed along the route, and the number of participants passing each is counted. Applying this two-inspection-point method to the Hong Kong 1 July march (a demonstration of widely varying claimed size and great political sensitivity) every year since 2003 has yielded more reliable estimates.
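As a rough illustration of the two approaches described above, the Python sketch below applies an area × density calculation to a static crowd and a simple flow-rate tally at a single inspection point for a marching crowd. The numbers and helper names are invented for illustration; the published method additionally compares counts at two inspection points to correct for people joining or leaving the route.

```python
# Hedged sketch of the two estimating ideas; the figures and the simple
# flow-rate formula are illustrative assumptions, not the authors' exact method.

def static_crowd_estimate(area_m2, density_per_m2):
    """Area x density estimate for a static crowd."""
    return area_m2 * density_per_m2

def marching_crowd_estimate(count_per_minute, duration_minutes):
    """Rough count at one inspection point: average flow rate times duration.
    The published method compares two such points to correct for people who
    join or leave along the route."""
    return count_per_minute * duration_minutes

# A 20,000 m^2 square packed at 2 people per square metre:
print(static_crowd_estimate(20_000, 2.0))      # 40000.0

# A march passing an inspection point at ~300 people/minute for 3 hours:
print(marching_crowd_estimate(300, 180))       # 54000
```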

"It is important to rectify the myth of counting people. The public would be better served by estimates less open to political bias. Our study shows that crowd estimates with a margin of error of less than 10% can be achieved with the proposed method," Yip concludes.


Story Source:

The above story is reprinted (with editorial adaptations by ScienceDaily staff) from materials provided by Wiley-Blackwell, via AlphaGalileo.

Journal Reference:

Ray Watson, Paul Yip. How many were there when it mattered? Estimating the sizes of crowds. Significance, September 2011: 104-107 DOI: 10.1111/j.1740-9713.2011.00502.x



View the original article here

Thursday, October 20, 2011

Apps for Education

The TED (Technology, Entertainment, Design) podcasts are some of the most intellectually stimulating presentations available to the world today. With the TED mobile apps, you can view talks from notable speakers on topics across technology, entertainment, design, business, science, global issues, and more. Watch Amazon.com founder Jeff Bezos' Princeton University address, for example, or Wired editor Chris Anderson's exploration of the long tail of technology. The apps are available in the iPhone, Windows Mobile, and Android markets, and also as an HTML5 mobile-browser version.


View the original article here

Tactile technology for video games guaranteed to send shivers down your spine

ScienceDaily (Aug. 10, 2011) — A new tactile technology developed at Disney Research, Pittsburgh (DRP), called Surround Haptics, makes it possible for video game players and film viewers to feel a wide variety of sensations, from the smoothness of a finger being drawn against skin to the jolt of a collision.

The technology is based on rigorous psychophysical experiments and new models of tactile perception. Disney will demonstrate Surround Haptics Aug. 7-11 at the Emerging Technology Exhibition at SIGGRAPH 2011, the International Conference on Computer Graphics and Interactive Techniques in Vancouver.

In the demonstration, developed in collaboration with Carnegie Mellon University and others, the technology will enhance a high-intensity driving simulator game created with Disney's Black Rock Studio. With players seated in a chair outfitted with inexpensive vibrating actuators, Surround Haptics will enable them to feel road imperfections and objects falling on the car, sense skidding, braking and acceleration, and experience ripples of sensation when cars collide or jump and land.

"Although we have only implemented Surround Haptics with a gaming chair to date, the technology can be easily embedded into clothing, gloves, sports equipment and mobile computing devices," said Ivan Poupyrev, senior research scientist at DRP, who invented and developed Surround Haptics with Ali Israr, also of DRP. "This technology has the capability of enhancing the perception of flying or falling, of shrinking or growing, of feeling bugs creeping on your skin. The possibilities are endless."

The DRP researchers have accomplished this feat by designing an algorithm for controlling an array of vibrating actuators in such a way as to create "virtual actuators" anywhere within the grid of actuators. A virtual actuator, Poupyrev said, can be created between any two physical actuators; the user has the illusion of feeling only the virtual actuator.

As a result, users don't feel the general buzzing or pulsing typical of most haptic devices today, but can feel discrete, continuous motions such as a finger tracing a pattern on skin.

The phenomenon of phantom sensations created by actuators has been known for more than 50 years, but its use in tactile displays has been limited because of an incomplete understanding of control mechanisms. DRP researchers were able to develop their control algorithm by systematically measuring users' ability to feel physical actuators vs. virtual actuators under a variety of stimulation levels. They then developed control models that were validated by further psychophysical experiments.
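The underlying phantom-sensation trick can be illustrated with a small sketch. The energy-based interpolation below is a commonly used textbook model for placing a virtual actuator between two physical ones by splitting the drive intensity; the calibrated control models Disney derived from its psychophysical experiments are not described in the article, so treat this only as an approximation of the idea.

```python
import math

def virtual_actuator(intensity, beta):
    """Split a desired intensity between two physical actuators so the user
    perceives a single 'virtual' actuator at fractional position beta (0..1)
    between them. Energy-based interpolation is a common model for phantom
    tactile sensations; Disney's calibrated control models are not public here."""
    a1 = math.sqrt(1.0 - beta) * intensity
    a2 = math.sqrt(beta) * intensity
    return a1, a2

# Sweep the virtual actuator smoothly from physical actuator A to actuator B:
for beta in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(beta, virtual_actuator(1.0, beta))
```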

In addition to enhancing user experiences with interactive games, movies and music, Surround Haptics' underlying technology promises to provide new tactile means of communication for the blind, emergency workers, vehicle operators, athletes and others.


Story Source:

The above story is reprinted (with editorial adaptations by ScienceDaily staff) from materials provided by Carnegie Mellon University.



View the original article here

'Iron Man' suit tech within grasp: B.C. scientist


High-tech robotic suits, similar to the one portrayed in the 2008 Hollywood blockbuster Iron Man, could become a reality within 30 years, says a University of Victoria neuroscientist.


Paul Zehr said that while the technology will have practical benefits, like helping people with spinal-cord injuries walk, it will also have military applications.


So Zehr, the author of the just-released book Inventing Iron Man: The Possibility of a Human Machine, said now's the time to talk about the practical and moral implications of the technology.


"I think the main conclusions are that we have to think in advance about some of the places we are headed to here," said Zehr in an interview.


He said if researchers are going to create technology allowing people to control "suits of armour" with their minds, then safeguards will have to be put in place to ensure those machines can't be taken over and controlled by others.


Society will also have to address moral questions related to military, he added.


"I don't know? Is that where we want to go with society as well when it comes to warfare?"


While the suit worn by Iron Man's protagonist Tony Stark is still decades away, society is "on the trajectory" for the technology, said Zehr.


A handful of companies around the world, including Japan's CYBERDYNE Inc., California's Berkeley Bionics, New Zealand's REX Bionics and Israel's Argo Medical Technologies Ltd., already sell much-bulkier robotic exoskeletons for people with spinal-cord injuries.


Massachusetts-based Raytheon Company released a second-generation exoskeleton last September, an "honest-to-goodness Iron Man Suit" hailed by Time magazine as one of the 50 best inventions of 2010.


The Raytheon Company is also developing a robotic suit for the U.S. military.


But Zehr said these exoskeletons do not work by interfacing directly with the brain.


Researchers are only able to control things like computer cursors and wheelchairs with the human mind, thanks to electrodes implanted in the brain, he said.


Jaimie Borisoff, a rehabilitative engineer who played on Canada's Paralympic basketball team, said society is still a long way away from the technology presented in the movie Iron Man.


"There's not a technology yet available that certainly I would be interested in using myself right now other than for research purposes," said Borisoff.


Borisoff said there are still many practical issues to deal with and one of those is energy supply for exoskeletons and suits. "If it's a difficulty in something that is big and heavy as a car, imagine what it would be in something that is small as a suit," he said of the power-supply problem.


He said researchers and scientists are also still debating whether robots or regeneration of the spinal cord is the best way to deal with spinal-cord injuries.


Borisoff said there's no real answer to the question.


Zehr will deliver a free talk, Inventing Iron Man: Where is the Line Between Human and Machine, Sept. 20 at B.C.'s centre for spinal cord research and treatment at Vancouver General Hospital.


Zehr said he hopes people come away from the discussion impressed by the wonders of the nervous system.


View the original article here

E-textiles get fashion upgrade with memory-storing fiber

ScienceDaily (Sep. 26, 2011) — In the future when you upgrade your computer, you may also be upgrading your wardrobe as researchers create novel new textiles that pull double-duty as fabrics and electronics.

The integration of electronics into textiles is a burgeoning field of research that may soon enable smart fabrics and wearable electronics. Bringing this technology one step closer to fruition, Jin-Woo Han and Meyya Meyyappan at the Center for Nanotechnology at NASA Ames Research Center in Moffett Field, Calif., have developed a new flexible memory fabric woven together from interlocking strands of copper and copper-oxide wires. At each juncture, or stitch along the fabric, a nanoscale dab of platinum is placed between the fibers. This "sandwich structure" at each crossing forms a resistive memory circuit. Resistive memory has received much attention due to the simplicity of its design.

As described in the AIP's journal AIP Advances, the copper-oxide fibers serve as the storage medium because they are able to change from an insulator to a conductor simply by applying a voltage. The copper wires and the platinum layers serve as the bottom and top electrodes, respectively. This design easily lends itself to textiles because it naturally forms a crossbar memory structure where the fibers intersect. The researchers developed a reversible, rewritable memory system that was able to retain information for more than 100 days.
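Conceptually, the woven fabric behaves like a crossbar memory: selecting one copper "row" fiber and one "column" fiber addresses a single crossing, whose resistance state stores a bit. The toy Python model below illustrates only that addressing scheme; the class and method names are invented, and nothing here models the actual device physics.

```python
# Toy model of the woven crossbar: each copper/copper-oxide crossing stores one
# bit as a low- or high-resistance state. Purely illustrative of the addressing
# scheme, not a device-level simulation.

class CrossbarMemory:
    def __init__(self, rows, cols):
        self.state = [[0] * cols for _ in range(rows)]   # 0 = high resistance (insulating)

    def write(self, row, col, bit):
        """'Apply a voltage' across one warp/weft pair to set or reset that crossing."""
        self.state[row][col] = bit

    def read(self, row, col):
        """Sense the resistance at a crossing: 1 = conducting, 0 = insulating."""
        return self.state[row][col]

fabric = CrossbarMemory(4, 4)
fabric.write(2, 3, 1)
print(fabric.read(2, 3), fabric.read(0, 0))   # 1 0
```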

In this proof-of-concept design, the copper wires were one millimeter thick, though smaller diameter wire would allow for an increase in memory density and a reduction in weight. In practical applications, e-textiles would need to integrate a battery or power generator, sensors, and a computational element, as well as a memory structure. Taken together, an e-textile could potentially detect biomarkers for various diseases, monitor vital signs of the elderly or individuals in hostile environments, and then transmit that information to doctors.


Story Source:

The above story is reprinted (with editorial adaptations by ScienceDaily staff) from materials provided by American Institute of Physics, via EurekAlert!, a service of AAAS.

Journal Reference:

Jin-Woo Han, M. Meyyappan. Copper oxide resistive switching memory for e-textile. AIP Advances, 2011; 1 (3): 032162 DOI: 10.1063/1.3645967



View the original article here

Wednesday, October 19, 2011

Researchers use liquid crystal to replace space motors

ScienceDaily (Sep. 22, 2011) — Researchers at the Institute of Electronics, Communications and Information Technology (ECIT) and the Northern Ireland Semiconductor Research Centre (NISRC) at Queen's University Belfast have devised a way to eliminate the need for motors in spaceborne radiometers by incorporating liquid crystals in their Frequency Selective Surface (FSS) antenna arrays.

The project has attracted funding of £0.5 million from the European Space Agency (ESA) and £100,000 from economic development agency, Invest Northern Ireland. It is expected to result in significant weight savings in satellite payloads and greatly reduced power consumption in weather monitoring instruments.

The technology has other important potential applications as well. These include eradicating the attenuation of mobile phone signals passing through energy efficient glass and creating buildings that can be locked down to block radio signals at the flick of a switch.

The innovative ECIT project addresses frequencies ranging from millimeter wave up to 1 THz. Measuring radiation in this waveband is a key technique used to study Earth's atmosphere to improve global weather forecasting and understanding of climate change.

Current generation remote sensing radiometers that collect this data incorporate a turntable-mounted mirror operated by an electric motor to calibrate the instrument before each scan by directing their field of view between cold and ambient targets.

The ECIT/NISRC research team however has devised a technique for making such motors redundant. This involves sandwiching layers of liquid crystals between the FSS's metalized quartz layers to act as an electronically controlled shutter. Applying a small voltage to the structure then enables the radiometer to be switched from calibration mode to signal detection mode without mechanical components.

The team believes that using this technique to replace the motor and turntable could produce potential weight savings of 10 per cent per radiometer. It would also greatly reduce power consumption requirements as a motor represents a radiometer's single biggest power requirement.

Prototypes are being built at Queen's University's Northern Ireland Semiconductor Research Centre with ESA support and the devices are expected to be used in space missions from 2025 onwards.


Story Source:

The above story is reprinted (with editorial adaptations by ScienceDaily staff) from materials provided by Queen's University, Belfast, via AlphaGalileo.



View the original article here

Bell launches LTE network in Ontario cities

An LTE-capable USB turbo stick went on sale in some Bell stores Wednesday. Phones and tablets are expected later in the year. (CBC)

Bell launched its next-generation high-speed wireless network in areas of Toronto, Mississauga, Hamilton, Kitchener Waterloo and Guelph Wednesday.


The Ontario network, based on a technology called long-term evolution (LTE), "will deliver amazing data access speeds — at least three times faster than the Bell HSPA+ network originally launched less than three years ago," said Wade Oosterman, president of Bell Mobility in a statement ahead of Wednesday's launch.


The company said it expects typical speeds for the network to be between 12 and 25 megabits per second. Bell's HSPA+ network offers typical speeds of 3.5 to 8 megabits per second and 7 to 14 megabits per second in areas where it offers HSPA+ with dual carrier technology.
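A quick sanity check of Oosterman's "at least three times faster" claim, using the typical-speed figures quoted above and comparing LTE against the originally launched HSPA+ network (not the dual-carrier variant):

```python
# Quick arithmetic check of the speed claims quoted above (all figures in Mbit/s).
lte = (12, 25)            # Bell LTE typical range
hspa_plus = (3.5, 8)      # original HSPA+ typical range

low_ratio = lte[0] / hspa_plus[0]    # 12 / 3.5
high_ratio = lte[1] / hspa_plus[1]   # 25 / 8
print(round(low_ratio, 1), round(high_ratio, 1))   # 3.4 3.1 -- both at least 3x, consistent with the claim
```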


The company is referring to its LTE network as 4G, the same name it has been applying to its existing HSPA+ networks.


The first device able to use the new network will be the LTE Sierra Wireless U313 Turbo Stick, which went on sale at "select Bell stores" Wednesday. Bell said LTE-capable smartphones and tablets will be available later in the year.


Devices will switch to the next fastest speed available when outside LTE areas, Bell said. A news release claimed it is the "first and only Canadian 4G network" to allow that.


However, Rogers indicates on its website that its LTE users will also be able to connect with HSPA+ or EDGE technologies where there is no LTE coverage.


Rogers launched its LTE network in Ottawa in July and is currently the only LTE provider in Canada other than Bell.


Rogers expects to launch LTE in Toronto at the end of September.


Bell said it will launch LTE in additional Canadian markets, starting with urban areas, later this year and through 2012.


It said the timing of its rollout in rural areas depends on the outcome of Industry Canada's 700 megahertz spectrum auction.


The 700 megahertz band of airwaves, formerly used by analog TV, was freed up when Canada made the switch to digital TV on Aug. 31. Bell and other large wireless carriers want to be able to bid in an open auction. However, there is a possibility that part of the spectrum could be set aside for newer wireless entrants to boost competition, as happened in the last auction, which led to the launch of new wireless services from Wind Mobile, Mobilicity, Public Mobile and Vidéotron. Industry Canada has not yet indicated what rules it will set for the auction.


View the original article here

A contemporary 'copier' of a XIX century mural

ScienceDaily (Aug. 31, 2011) — There was a time when, lacking skyscrapers from which to view great vistas, it was fashionable to paint panoramic murals. These were mural paintings on paper that represented anything from a city's monuments to countryside landscapes, including hunting scenes and other events. One of the decisive moments of this artistic expression took place in France in the first third of the XIX century, with xylography (wood engraving).

This is the period that Alfredo Gil has taken as reference in order to recreate these paintings -- but with more modern techniques, looking for a balance between faithfulness and efficacy. His PhD thesis, defended at the University of the Basque Country, is entitled Adapting engraving systems applied to panoramic wall paintings on paper in the first third of the XIX century.

Mr Gil studied the development of printing systems for paper wall paintings, paying particular attention to xylography. He has also worked to emulate this technique with more modern and rapid ones, while preserving the quality and attributes of those times gone by. To this end, he opted for serigraphy, the modern system that, according to the researcher, best emulates the panoramic xylographies of the first third of the XIX century. He combined this technique with the use of the computer in the first stages of the engraving, thanks to which he has reconstructed panoramas much more rapidly and with equal precision. Finally, the methodology was applied to the mural Monuments de Paris, painted in 1814.

Xylography, until the end of the XIX century

As explained in the thesis, panoramas of this nature were traditionally printed using xylography. This involved carving a figure on a plank of wood and impregnating the resulting relief with ink, in such a way that the outline of the figure could be printed or stamped on paper or another medium. In the case of panoramas, due to their great size, complexity and need to superimpose colours, 3,500 blocks of wood and 90 colours could be used for just one work. With industrialisation other techniques arose which enabled the reproduction of a greater number of hues, but the quality of the image obtained with a xylographic matrix continued to be high, and so the old technique withstood such changes. It was not until the arrival of serigraphy, at the end of the XIX century, that results started to rank equally.

With serigraphy, printing is carried out using a mesh, whereby the ink is blocked at certain areas, depending on the final picture desired. Mr Gil opted for this technique in order to reproduce a 1.5 x 1.5 metre fragment of the Monuments de Paris xylographic panorama; although with the prior aid of a computer reconstruction. The researcher undertook tests with more modern techniques such as digital printing, but he stresses that it is serigraphy that most faithfully reproduces the quality of the original xylography technique. In fact, he came to the conclusion that the new technologies improve on the old as regards optimising the pre-impression, but that, regarding the printing, none has managed to achieve the quality and characteristics of the panoramas of the early XIX century as has serigraphy.

Digital printing -- totally different

Looking more closely at the comparative study, Mr Gil explained that, at first sight, results from xylographic and serigraphic printing are the same. But if binocular magnifying glasses are used, one can see that xylography provides a uniform printing, with a compact although textured surface; while serigraphy presents a net of points -- due to the mesh technique used. Nevertheless, at a visual level, this difference is imperceptible.

On the other hand, the case of digital inks is quite different. With digital printing, the variation of hues depends on the concentration of points: the closer they are together, the greater the density of colour. Digital inks give an increasingly enhanced image quality, but not of the type that emulates a panoramic paper painted using xylography. In the words of Mr Gil, the differences between xylography and digital inks are ostensible, given that the colouring material has a density considerably inferior in the latter case, the thickness and polychromic layer being much less.


Story Source:

The above story is reprinted (with editorial adaptations by ScienceDaily staff) from materials provided by Elhuyar Fundazioa.



View the original article here

Tuesday, October 18, 2011

Dinosaur feathers found in Alberta amber

Feathers believed to be from dinosaurs have been found beautifully preserved in Alberta amber.


The primitive, hair-like feathers known as protofeathers likely belonged to theropods — dinosaurs similar to tiny Tyrannosaurus rexes — that roamed the swampy forests of Alberta 80 million years ago, said Alexander P. Wolfe, a University of Alberta earth sciences professor who co-authored the research published Thursday in Science.

The protofeathers don't look like feathers from any modern bird, but are similar to those seen in fossils of theropods. (Science/AAAS)

"Protofeathers aren't known from any modern, existing groups of birds and therefore the most obvious interpretation is that they belong to dinosaurs," he said.


Theropods, which are thought to be closely related to modern birds, were already known to have feathers, based on features surrounding fossils found in China. But a lot of details were lost in the fossilization process.


"The feathers get altered, they get substituted by minerals and you can't see any of the detail," Wolfe said.


"With amber, it's different. We actually have the actual object.… we actually have this protofeather for the first time in the flesh."


The feathers are preserved down to the pigments that show what colour they are and microscopic details of their structure.


Based on the fact that the protofeathers were just single filaments or clumps of filaments, just two centimetres long, the researchers concluded "these had nothing to do with flight," Wolfe said.


Instead, he believes they were used to keep the dinosaurs warm.

A theropod fossil from China shows bristle-like feathers on the head, neck, back and tail. However, many of the details have been obscured by the fossilization process. (Nanjing Institute/Associated Press)

The protofeathers were among a wide range of feathers found in Alberta amber specimens by Ryan McKellar, a researcher who recently completed his PhD under Wolfe's supervision. McKellar was initially interested in insects, but stumbled upon some very bird-like feathers while sorting through amber from the Royal Tyrrell Museum and the University of Alberta's collection, Wolfe said.


He decided to keep an eye out for other feathers. After sorting through around 4,000 chunks of amber, each less than two centimetres in diameter, he had collected a wide range, from the protofeathers to more complex feathers from the same time period that were most certainly from birds.


Some were downy "like the kind you have in your pillow," Wolfe said. Others look like modern flight feathers. Some also had special features found in diving birds such as grebes.


Wolfe, an expert in amber chemistry, said such birds likely shared the same ecosystem as the dinosaurs — a steamy, "very buggy" coastal forest similar to Florida's Everglades, dominated by cypress and cedar-like trees. The remains of the forest were compressed into coal deposits in Alberta where the amber samples were found.


Wolfe said now that the new research, including photographs, has been published, he hopes researchers in other parts of the world where feathered dinosaur fossils have been found will start keeping an eye out for dinosaur feathers in amber. He also hopes to do a biochemical analysis on the proteins in the feathers.


View the original article here

Colloidal quantum dots: Performance boost for next-generation solar cell technology

ScienceDaily (Sep. 19, 2011) — Researchers from the University of Toronto (U of T), the King Abdullah University of Science & Technology (KAUST) and Pennsylvania State University (Penn State) have created the most efficient solar cell ever made based on colloidal quantum dots (CQD).

The discovery is reported in the latest issue of Nature Materials.

Quantum dots are nanoscale semiconductors that capture light and convert it into an energy source. Because of their small scale, the dots can be sprayed on to flexible surfaces, including plastics. This enables the production of solar cells that are less expensive to produce and more durable than the more widely-known silicon-based version. In the work highlighted by the Nature Materials paper, the researchers demonstrate how the wrappers that encapsulate the quantum dots can be shrunk to a mere layer of atoms.

"We figured out how to shrink the passivating materials to the smallest imaginable size," states Professor Ted Sargent, corresponding author on the work and holder of the Canada Research Chair in Nanotechnology at U of T.

A crucial challenge for the field has been striking a balance between convenience and performance. The ideal design is one that tightly packs the quantum dots together. The greater the distance between quantum dots, the lower the efficiency.

However the quantum dots are usually capped with organic molecules that add a nanometer or two. When working on a nanoscale, that is bulky. Yet the organic molecules have been an important ingredient in creating a colloid, which is a substance that is dispersed in another substance. This allows the quantum dots to be painted on to other surfaces.

To solve the problem, the researchers have turned to inorganic ligands, which bind the quantum dots together while using less space. The result is the same colloid characteristics but without the bulky organic molecules.

"We wrapped a single layer of atoms around each particle. As a result, they packed the quantum dots into a very dense solid," explains Dr. Jiang Tang, the first author of the paper who conducted the research while a post-doctoral fellow in The Edward S. Rogers Department of Electrical & Computer Engineering at U of T.

The team showed the highest electrical currents, and the highest overall power conversion efficiency, ever seen in CQD solar cells. The performance results were certified by an external laboratory, Newport, that is accredited by the US National Renewable Energy Laboratory.

"The team proved that we were able to remove charge traps -- locations where electrons get stuck -- while still packing the quantum dots closely together," says Professor John Asbury of Penn State, a co-author of the work.

The combination of close packing and charge trap elimination enabled electrons to move rapidly and smoothly through the solar cells, thus providing record efficiency.

"This finding proves the power of inorganic ligands in building practical devices," states Professor Dmitri Talapin of The University of Chicago, who is a research leader in the field. "This new surface chemistry provides the path toward both efficient and stable quantum dot solar cells. It should also impact other electronic and optoelectronic devices that utilize colloidal nanocrystals. Advantages of the all-inorganic approach include vastly improved electronic transport and a path to long-term stability."

"At KAUST we were able to visualize, with incredible resolution on the sub-nanometer length scale, the structure and composition of this remarkable new class of materials," states Professor Aram Amassian of KAUST, a co-author on the work.

"We proved that the inorganic passivants were tightly correlated with the location of the quantum dots; and that it was this new approach to chemical passivation, rather than nanocrystal ordering, that led to this record-breaking colloidal quantum dot solar cell performance," he adds.

As a result of the potential of this research discovery, a technology licensing agreement has been signed by U of T and KAUST, brokered by MaRS Innovations (MI), which will enable the global commercialization of this new technology.

"The world -- and the marketplace -- need solar innovations that break the existing compromise between performance and cost. Through U of T's, MI's, and KAUST's partnership, we are poised to translate exciting research into tangible innovations that can be commercialized," said Sargent.


Story Source:

The above story is reprinted (with editorial adaptations by ScienceDaily staff) from materials provided by University of Toronto, via EurekAlert!, a service of AAAS.

Journal Reference:

Jiang Tang, Kyle W. Kemp, Sjoerd Hoogland, Kwang S. Jeong, Huan Liu, Larissa Levina, Melissa Furukawa, Xihua Wang, Ratan Debnath, Dongkyu Cha, Kang Wei Chou, Armin Fischer, Aram Amassian, John B. Asbury, Edward H. Sargent. Colloidal-quantum-dot photovoltaics using atomic-ligand passivation. Nature Materials, 2011; DOI: 10.1038/nmat3118



View the original article here

Copper film could lower touch screen, LED and solar cell costs

ScienceDaily (Sep. 28, 2011) — Copper nanowires may be coming to a little screen near you. These new nanostructures have the potential to drive down the costs of displaying information on cell phones, e-readers and iPads, and they could also help engineers build foldable electronics and improved solar cells, according to new research.

Duke chemist Ben Wiley and his graduate student have developed a technique to organize copper atoms in water to form long, thin, non-clumped nanowires. The nanowires are then transformed into transparent, conductive films and coated onto glass or plastic.

The new research shows that the copper nanowire films have the same properties as those currently used in electronic devices and solar cells, but are less expensive to manufacture. The results were published online Sept. 23 in Advanced Materials.

The films that currently connect pixels in electronic screens are made of indium tin oxide, or ITO. It is highly transparent, which lets screens transmit images clearly. But the ITO film must be deposited from a vapor in a process that is a thousand times slower than newspaper printing, and, once the ITO is in the device, it cracks easily. Indium is also an expensive, relatively scarce metal, costing as much as $800 per kilogram.

These problems have driven worldwide efforts to find less expensive materials that can be coated or printed like ink at much faster speeds to make low-cost, transparent conducting films, Wiley said.

One alternative to an ITO film is to use inks containing silver nanowires. The first cell phone with a screen made from silver nanowires will be on the market this year. But silver, like indium, is still relatively expensive at $1400 per kilogram.

Copper, on the other hand, is a thousand times more abundant than indium or silver, and about 100 times less expensive, costing only $9 per kilogram.
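Taking the per-kilogram prices quoted in the article at face value, the cost gap works out roughly as claimed (the "100 times" figure is an order-of-magnitude statement):

```python
# Price figures quoted in the article, in USD per kilogram.
prices = {"indium": 800, "silver": 1400, "copper": 9}

print(round(prices["indium"] / prices["copper"]))   # 89  -- roughly the "about 100 times" claim vs. indium
print(round(prices["silver"] / prices["copper"]))   # 156 -- even larger gap vs. silver
```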

In 2010, Wiley and his graduate student Aaron Rathmell showed that it was possible to form a layer of copper nanowires on glass to make a transparent conducting film.

But at that time, the performance of the film was not good enough for practical applications because the wires clumped together. The new way of growing the copper nanowires and coating them on glass surfaces eliminates the clumping problem, Wiley said.

He and Rathmell also created the new copper nanowires to maintain their conductivity and form when bent back and forth 1,000 times. In contrast, ITO films' conduction and structure break after a few bends.

Wiley said the low cost, high performance and flexibility of copper nanowires make them a natural choice for use in the next generation of displays and solar cells. He co-founded a company called NanoForge Corp in 2010 to manufacture copper nanowires for commercial applications.

In early 2011, NanoForge received a $45,000 North Carolina IDEA grant for refinement and scale-up of the manufacturing process of copper nanowires, and it is now filling orders.

With continuing development, copper nanowires could be in screens and solar cells in the next few years, which could lead to lighter and more reliable displays and also to making solar energy more competitive with fossil fuels, Wiley said.


Story Source:

The above story is reprinted (with editorial adaptations by ScienceDaily staff) from materials provided by Duke University.

Journal Reference:

Rathmell, A. R. and Wiley, B. J. The Synthesis and Coating of Long, Thin Copper Nanowires to Make Flexible, Transparent Conducting Films on Plastic Substrates. Advanced Materials, 2011. DOI: 10.1002/adma.201102284



View the original article here

Monday, October 17, 2011

Wi-Fi coming to Vancouver streets, parks

Calgary-based Shaw Communications is teaming up with Cisco to blanket Vancouver and other western cities with wireless internet access — but it won't be free.

By spring of 2012, Shaw plans to let customers tap into Shaw's broadband backbone from several thousand urban wireless access points in places such as sporting venues and transit hubs.

Victoria has already experimented with small-scale wireless service, and New Westminster has offered Wi-Fi access for up to 30 users in a pilot project in places such as Queen's Park and the public library.

As early as 2005, Vancouver's city council talked about offering free Wi-Fi, but has since backed off. Officials said it would be too expensive.

But even with a company stepping in to build the infrastructure, some city representatives think there should be regulation over the service.

Vancouver Coun. Geoff Meggs doesn't want to see several companies cluttering up the landscape.

"If they're going to be there, I think Ottawa — as the regulator of telecommunications — will have to try to make sure there's not a lot of wasted investment with one firm putting putting Wi-Fi on top of another firm," Meggs said.

Vancouver Non-Partisan Association candidate Mike Klassen said the city shouldn't stall any longer: it should get onboard with wireless providers now.

"From the NPA stand-point, this is a no-brainer. We want to be able to bring this innovative type of technology and use it to benefit our citizens and the taxpayers," Klassen said.

He said Shaw's plan opens the door to exciting new opportunities, especially in more remote places like parks.

"You could do things in Stanley Park: maps, information, even purchases of everything. We do parking now, but that's barely scraping the surface of what we could possibly do," Klassen said.

In its press release, Shaw Communications stated that part of the reason for providing wide-ranging Wi-Fi access is due to the near-ubiquity of Wi-Fi enabled smart phones.

The Western Canada Shaw Wi-Fi network will use Cisco technology, and is set to begin deployment in late 2011.

Shaw said it is anticipating being ready for customers next spring.

With files from the CBC's Tim Weekes

View the original article here

Recent trends show recession's effect on US advanced technology exports

ScienceDaily (Sep. 6, 2011) — U.S. exports of advanced technology products (ATP) fared better than other non-advanced technology exports during the recent U.S. recession, says a new report from the National Science Foundation (NSF).

NSF's National Center for Science and Engineering Statistics reports that ATP exports fell from $270 billion in 2008 to $245 billion in 2009. But this 9 percent drop was less than half the decline of non-ATP exports.

ATP exports embody new or leading edge breakthroughs in science and technology including drugs developed from gene therapy, nuclear resonance imaging, micro integrated circuits, robotics, advanced materials, and civilian and military satellites.

The finding results from U.S. Census Bureau data on ATP trade in the 10 technology areas defined by the agency, with a focus on four of them: aerospace, electronics, information and communications technology, and life science. Together these four areas accounted for 85 percent of U.S. ATP exports in 2010.

NSF's report, titled "U.S. Exports of Advanced Technology Products Declined Less Than Other U.S. Exports in 2009," also says U.S. ATP exports recovered in 2010 but lagged behind the growth of non-ATP exports. In 2010, U.S. ATP exports improved by 11 percent as compared to 2009, but other types of U.S. exports expanded at 23 percent, twice the rate of ATP exports.

The InfoBrief also describes U.S. ATP trade with selected major economies and regions. It says the 2009 decline in U.S. ATP exports was steeper for those destined for Asia than for those destined for the European Union and the North American Free Trade Agreement zone.

Exports to Asia fell from $94 billion in 2008 to $79 billion in 2009; at 15 percent, it was the largest decline among the three regions. According to NSF's report, the decline was driven by a drop in electronics exports to Asia, the most important export market for U.S. advanced technology products, and a drop in information and communications technology exports.
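The percentage declines follow directly from the dollar figures in the report. Because the figures above are rounded to the nearest billion, the computed decline for Asia comes out near 16 percent rather than the 15 percent the report cites:

```python
def percent_change(before, after):
    """Percentage change from one year's exports to the next."""
    return 100.0 * (after - before) / before

# Dollar figures from the report, in billions (rounded), so outputs are approximate.
print(round(percent_change(270, 245)))   # -9  : total ATP exports, 2008 -> 2009
print(round(percent_change(94, 79)))     # -16 : ATP exports to Asia (the report cites ~15%)
```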

During the period of 2008-2009, U.S. ATP exports to Japan, South Korea and Taiwan declined steeply (between 19 and 29 percent), but exports to China saw little change. In 2010, however, US exports to Asia grew faster than US exports to the other two regions.


Story Source:

The above story is reprinted (with editorial adaptations by ScienceDaily staff) from materials provided by National Science Foundation.



View the original article here

Planet with 2 suns discovered

The planet is similar to Saturn, but denser, and orbits the two small stars every 229 days. The smaller of the two stars is just a fifth the mass of the sun. (JPL-Caltech/NASA)

A planet that orbits two suns at the same time — just like Tatooine in Star Wars — has been found for the first time.


"The discovery is stunning," said Alan Boss, co-author of the paper describing the findings, in a statement. "Once again, what used to be science fiction has turned into reality."


Boss is a researcher at the Carnegie Institution for Science in Washington, D.C. The study was published online Thursday in Science, and NASA was to hold a news conference at 2 p.m. ET.


The research was led by Laurance Doyle, a researcher at the SETI Institute, a non-profit research group in Mountain View, Calif., that describes itself as dedicated to exploring the origin, nature and prevalence of life in the universe. His group specializes in looking for eclipsing binaries — systems of two stars that orbit each other and eclipse each other every time one passes in front of the other.


The researchers combed through data collected by the Kepler telescope, which is focused on the part of the sky containing the constellations Cygnus and Lyra and is designed to look for planets outside our solar system. It does so by precisely measuring the brightness of stars and any dimming caused by an orbiting planet passing in front of the star.


Doyle's team identified 2,000 eclipsing binaries, then carefully looked at them one at a time.


"My eye was drawn to some extra eclipses that occurred out of sequence," Doyle said in a podcast interview with Science. Some were caused by a third star — that is, the two-star system turned out to be a three star system.


But in one case, the two stars were very small — just 20 per cent and 69 per cent as massive as the sun respectively — and the extra "eclipses" dimmed the stars only very slightly, indicating that the object that caused the dips was planet-sized.


Based on careful measurements, the team figured out that the two stars orbit each other every 41 days, and the Saturn-sized planet orbits the two stars every 229 days. The planet is denser than Saturn, suggesting that it contains rock as well as gas, and its distance from the centre of its orbit is about 70 per cent of the distance between the Earth and the sun.
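Those numbers are mutually consistent. A rough check with Kepler's third law, using the two stellar masses and the 229-day period from the article (and neglecting the planet's own mass), gives an orbital radius of about 0.7 astronomical units, roughly 70 per cent of the Earth-sun distance:

```python
# Rough consistency check with Kepler's third law: a^3 = (M1 + M2) * P^2,
# with a in AU, total mass in solar masses, and P in years (planet mass neglected).
total_mass = 0.20 + 0.69          # the two stars, in solar masses (from the article)
period_years = 229 / 365.25       # the planet's orbital period

semi_major_axis = (total_mass * period_years ** 2) ** (1 / 3)
print(round(semi_major_axis, 2))  # ~0.7 AU, matching the "about 70 per cent" figure above
```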


The planet is thought to be cold and not habitable.


Kepler was launched into space in March 2009. As of February, the telescope had found more than 1,200 "planet candidates" or data suggesting possible planets.


View the original article here

Sunday, October 16, 2011

Gold nanowires in engineered patches enhance electrical signaling and contraction

ScienceDaily (Sep. 26, 2011) — A team of physicians, engineers and materials scientists at Children's Hospital Boston and the Massachusetts Institute of Technology have used nanotechnology and tiny gold wires to engineer cardiac patches, with cells all beating in time, that could someday help heart attack patients.

As reported online by Nature Nanotechnology on September 25, the addition of gold wires to the engineered heart tissue make it electrically conductive, potentially improving on existing cardiac patches. Such patches are starting to go into clinical trials for heart patients.

"If you don't have the gold nanowires, and you stimulate the cardiac patch with an electrode, the cells will beat only right where you're stimulating," says senior investigator Daniel Kohane, MD, PhD, of the Laboratory for Biomaterials and Drug Delivery at Children's Hospital Boston. "With the nanowires, you see a lot of cells contracting together, even when the stimulation is far away. That shows the tissue is conducting."

After incubation, the patches studded with the gold nanowires were thicker and their heart muscle cells better organized. When stimulated with an electrical current, the cells produced a measurable spike in voltage, and electrical communication between adjacent bundles of cardiac cells was markedly improved. In contrast, only a negligible current passed through patches lacking the wires, and cells beat only in isolated clusters.

Kohane thinks the nanowire technology could be applied to the engineering of any electrically excitable tissue, including tissue in the brain and spinal cord. Gold was chosen because it is a good conductor, is easy to fabricate, is well tolerated by the body, and scientists have a lot of experience working with it.

The wires average 30 nanometers thick and 2-3 microns long, just barely visible to the naked eye.

Since testing has so far been done only in cell cultures, the team plans to do further experiments to see how well the cardiac patches function in live animal models, and to get a better understanding of how exactly the nanowires are enhancing electrical signaling and contraction.

Kohane believes the gold fibers help because they're long enough to cross the scaffolding material that holds the cells, which may otherwise act as a barrier to electrical conduction. In addition, the experiments showed enhanced production of troponin I, a protein involved in muscle calcium binding and contraction, and connexin-43, a protein involved in electrical coupling between cells that is believed to play a critical role in the development of the heart's architecture and in the synchronized contraction of the heart.

The study was funded by the National Institutes of Health and the American Heart Association. The paper's co-first authors were Tal Dvir, PhD, and Brian Timko, PhD, both of the Department of Chemical Engineering, Massachusetts Institute of Technology, and the Laboratory for Biomaterials and Drug Delivery at Children's Hospital Boston.


Story Source:

The above story is reprinted (with editorial adaptations by ScienceDaily staff) from materials provided by Children's Hospital Boston, via EurekAlert!, a service of AAAS.

Journal Reference:

Tal Dvir, Brian P. Timko, Mark D. Brigham, Shreesh R. Naik, Sandeep S. Karajanagi, Oren Levy, Hongwei Jin, Kevin K. Parker, Robert Langer & Daniel S. Kohane. Nanowired three-dimensional cardiac patches. Nature Nanotechnology, 25 September 2011. DOI: 10.1038/nnano.2011.160



View the original article here

Greenpeace 'genius' turns media spectacle into action

Greenpeace's 40-year history is studded with outrageous stunts, from acting as human shields protecting whales to hanging protest slogans from iconic landmarks. Love them or hate them, those spectacles have turned Greenpeace into one of the most recognizable names in the global environmental movement.


“That is the genius that Greenpeace has always had, with scaling buildings and going out whaling and putting themselves between the harpoons and the whales — getting themselves into the news media,” says Douglas Macdonald, senior lecturer at the Centre for the Environment at the University of Toronto.


"We wouldn’t see the kind of action by governments that we've had in the last 40 years if you hadn’t had that kind of pressure applied by the environmental movement."

Children push a giant soccer ball in front of the office of Polish Prime Minister Donald Tusk in Warsaw, Poland, on June 28, 2011. The ball was part of a Greenpeace rally held to encourage the government to support environmental issues when it takes over the rotating European Union presidency later this week. Czarek Sokolowski/Associated Press

Sept. 15, 1971, is cited as the beginning of Greenpeace, the day a group of anti-nuclear activists in Vancouver called the Don’t Make a Wave Committee chartered a ship with the aim of heading off underground nuclear tests by the U.S. government on the remote Alaskan island of Amchitka. In anticipation of the protest, the vessel, Phyllis Cormack, was renamed Greenpeace, a term coined by activist Bill Darnell.


The ship was ultimately blocked by the U.S. Coast Guard before it could reach Amchitka, and the scheduled tests went ahead as planned. But the protest aroused significant public interest in the group, which was renamed Greenpeace International in 1972.


While Greenpeace continued its anti-nuclear efforts in the early 1970s — protesting French nuclear tests in the Moruroa Atoll in French Polynesia, for example — by the end of the decade it had expanded its mandate to include whaling, toxic waste and the commercial seal hunt.


Today, Greenpeace draws attention to almost every environmental issue conceivable. Earlier this year, it launched the Dirty Laundry campaign, which urges major clothing manufacturers to stop the use of hazardous chemicals in their apparel.


"We've become more professional in our approach to campaigns. I think part of our success with Greenpeace has been our ability to change with the times," explains Bruce Cox, Greenpeace Canada's executive director.


To many in the environmental movement that is a signal that Greenpeace has compromised its ideals. Some activists cite the group's willingness to enter into negotiations with multinational corporations, like paper giant Kimberly-Clark, as an example of this. Cox disagrees.


"For every person that says we've sold out, I tell you there are 10 that say we are a radical organization that can't be dealt with."

Greenpeace activists dressed up as climate change refugees urge the European Union to reduce its CO2 gas emission by up to 30 per cent during a demonstration in front of the parliament building of Budapest, Hungary, on June 29, 2011. Bela Szandelszky/Associated Press

Greenpeace claims not to have had a single founder, but its foundations were largely laid out in Vancouver by U.S. Navy veteran Jim Bohlen and Irving and Dorothy Stowe, who were one-time members of the Sierra Club Canada.


Greenpeace is wholly funded by individual donors and foundations, and had 2008 revenues of approximately 196 million euros.


Headquartered in Amsterdam, the Netherlands, the organization has 28 regional offices covering 45 countries. The current executive director is South African-born Kumi Naidoo, who oversees a staff of 2,400 and 15,000 volunteers.


Unlike many environmental groups, which adopt a democratic approach and often don’t even have a designated leader, Greenpeace is a top-down organization, “basically modelling their internal decision-making on corporations or the military,” says Macdonald. This has allowed them to co-ordinate their protest actions and environmental strategy with great efficiency, he says.


What also distinguishes Greenpeace from other non-governmental organizations is that it has been much less willing to enter into the negotiation process with governments, says Macdonald.

Greenpeace activists form a human banner with a message of 'Detox' in Bangkok on July 14, 2011. They were demanding Adidas and Nike remove toxic chemicals from their products in Thailand after a year of investigation into toxic water pollution in China. Sakchai Lalit/Associated Press

“The general tactic that they used was raising the issue through media awareness, appeals for public support, staking out a position that might be a more hard-line position but never compromising that position. The environmental movement on the whole gets its strength from having someone on the outside who wasn’t in the room negotiating, and the implicit message always was: ‘If you don’t negotiate with us, the compromisers, you’ll have to deal with the more extreme — namely Greenpeace.’”


Over the years, Greenpeace has claimed victory on a number of issues, from France’s decision to halt nuclear testing in the South Pacific to the European Union’s resolution to phase out drift net fishing.


Cox believes you can't achieve anything without talking to big polluters and what he calls "resource extractors," though bringing them to the table sometimes takes a little arm-twisting.


"Sometimes, in order to make those companies ... in order to bring those governments to the table, you have to turn up the heat. And that might require direct action and at times breaking the law."


Greenpeace's strategies remain as extreme as ever. This past June, Naidoo spent four days in a Greenland prison after illegally climbing an oil platform owned by the Scottish oil company Cairn Energy, a stunt intended to bring attention to Greenpeace’s “Go Beyond Oil” campaign.


“Its modus operandi and its function is to get issues into the news media,” says Macdonald. “Of course, some people are going to think it’s too extreme, they shouldn’t be doing these things, but I don’t think that limits its effectiveness, because its effectiveness doesn’t flow from popular support.


“Where you cross that line is going from non-violence to a willingness to engage in violent tactics. But Greenpeace has never crossed that line.”


Algorithm developed to improve remote electrocardiography

ScienceDaily (Aug. 23, 2011) — Today, someone in a remote village in India can run an electrocardiogram (ECG) on a loved one having a potential heart attack via a smartphone and send it to a doctor in New Delhi for analysis.

Mobile technology is already bringing health care to places it has never been able to reach. However, there is still room for error that can lead to misdiagnosis.

Xiaopeng Zhao, assistant professor in the Department of Mechanical, Aerospace and Biomedical Engineering at the University of Tennessee, Knoxville, is working to eliminate these errors. Zhao and his team of graduate and undergraduate students and physicians have developed an award-winning algorithm that improves the effectiveness of ECGs.

The ECG is the most commonly performed screening tool for a variety of cardiac abnormalities. However, it is estimated that about 4 percent of all ECGs are taken with misplaced electrodes, leading to faulty diagnoses and mistreatments.

Zhao's algorithm examines interferences that result from electrode misplacement and disturbances, including patient motion and electromagnetic noise. Unlike conventional algorithms used to evaluate ECGs, Zhao's is more reliable because it is based on a matrix that simultaneously tests for the irregular patterns caused by such interferences. Instead of a typical "yes-no" classification result, Zhao's algorithm produces a more informative A-F letter grade for the ECG, indicating specific weaknesses in the recording. The algorithm also makes recommendations as to where to accurately place the electrodes.
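The published algorithm itself is not spelled out in the story, but the underlying idea -- run several independent interference checks on a recording and map the results to a letter grade -- can be sketched in a few lines of Python. The feature tests, thresholds, and grading scale below are hypothetical stand-ins for illustration, not Zhao's method.

import numpy as np

def grade_ecg(signal, fs=500):
    """Return a letter grade (A-F) for a single-lead 10-second ECG array."""
    checks = {
        # Flat or disconnected lead: almost no variation in the trace.
        "flatline": np.std(signal) < 0.01,
        # Baseline wander from patient motion: large low-frequency drift.
        "baseline_wander": np.ptp(np.convolve(signal, np.ones(fs) / fs, mode="same")) > 0.5,
        # Mains interference: strong 45-65 Hz content relative to total power.
        "powerline_noise": _band_power(signal, fs, 45, 65)
                           / (_band_power(signal, fs, 0, fs / 2) + 1e-12) > 0.2,
        # Clipping against the amplifier rails.
        "saturation": np.mean(np.abs(signal) > 0.98 * np.max(np.abs(signal))) > 0.05,
    }
    failures = int(sum(checks.values()))
    return "ABCDF"[min(failures, 4)], checks

def _band_power(signal, fs, lo, hi):
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    return np.sum(spectrum[(freqs >= lo) & (freqs <= hi)])

Each failed check pushes the grade down one letter; the per-check results double as the "which electrode or interference is the problem" feedback described above.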

Zhao's team has implemented the algorithm in a Java program, which can be installed and run on a smartphone. The program takes only a split second to execute on a smartphone and assess a 10-second ECG. That speed is key in situations where a second can mean the difference between life and death.

The goal is for users in remote areas to be able to know which ECGs are accurate to decrease misdiagnoses and ultimately save lives. The algorithm is also helpful in intensive care units where medical staff may be overworked, as well as for novice health professionals.

"There is a large population that does not receive good health care because they live in rural communities," said Zhao. "This algorithm helps to bring the doctor to their home through the help of mobile phone technology. We hope our invention brings their health care quality more in line with that of the developed world by reducing errors and improving the quality of ECGs."

The algorithm recently won the top spots in the PhysioNet Challenge 2011 -- first, first and third places. Sponsored by the National Institutes of Health, PhysioNet and the annual Computing in Cardiology conference jointly host a series of challenge problems that are either unsolved or not well solved. A new challenge topic has been announced each year since 2000, aiming to stimulate work on important clinical problems and to foster rapid progress toward their solution.


Story Source:

The above story is reprinted (with editorial adaptations by ScienceDaily staff) from materials provided by University of Tennessee at Knoxville, via EurekAlert!, a service of AAAS.


Saturday, October 15, 2011

Tiny oxygen generators boost effectiveness of anticancer treatment

ScienceDaily (Aug. 31, 2011) — Researchers have created and tested miniature devices that are implanted in tumors to generate oxygen, boosting the killing power of radiation and chemotherapy.

The technology is designed to treat solid tumors that are hypoxic at the center, meaning the core contains low oxygen levels.

"This is not good because radiation therapy needs oxygen to be effective," said Babak Ziaie, a Purdue University professor of electrical and computer engineering and biomedical engineering. "So the hypoxic areas are hard to kill. Pancreatic and cervical cancers are notoriously hypoxic. If you generate oxygen you can increase the effectiveness of radiation therapy and also chemotherapy."

The new "implantable micro oxygen generator" is an electronic device that receives ultrasound signals and uses the energy to generate a small voltage to separate oxygen and hydrogen from water - a chemical operation called water electrolysis.

"We are putting these devices inside tumors and then exposing the tumors to ultrasound," Ziaie said. "The ultrasound energy powers the device, generating oxygen.

The devices were created at the Birck Nanotechnology Center in the university's Discovery Park. Purdue researchers are working with Song-Chu (Arthur) Ko, an assistant professor of clinical radiation oncology at the Indiana University School of Medicine.

Researchers have tested the devices in pancreatic tumors implanted in mice, showing that the devices generated oxygen and that treated tumors shrank faster than tumors without the devices. The devices are slightly less than one centimeter long and are inserted into tumors with a hypodermic biopsy needle.

"Most of us have been touched by cancer in one way or another," Ziaie said. "My father is a cancer survivor, and he went through many rounds of very painful chemotherapy. This is a new technology that has the potential to improve the effectiveness of such therapy."

Findings are detailed in a research paper appearing online this month in IEEE Transactions on Biomedical Engineering. The paper was written by research assistant professor Teimour Maleki, doctoral students Ning Cao and Seung Hyun Song, Ko and Ziaie.

"The implantable mini oxygen generator project is one of 11 projects the Alfred Mann Institute for Biomedical Development at Purdue University (AMIPurdue) has sponsored," Ziaie said. "AMIPurdue has been instrumental in providing the development funding of roughly $500,000 on this project. And beyond funding, the AMIPurdue team has also helped us with market research, physician feedback, industry input, as well as intellectual property and regulatory strategy. We have been able to accomplish a great deal in a short time due to the collaborative effort with AMIPurdue."

A patent application has been filed for the design.

Future work may focus on redesigning the device to make it more practical for manufacturing and clinical trials.


Story Source:

The above story is reprinted (with editorial adaptations by ScienceDaily staff) from materials provided by Purdue University.

Journal Reference:

T. Maleki, N. Cao, S. Song, C. Kao, S.-C. Ko, B. Ziaie. An Ultrasonically-Powered Implantable Micro Oxygen Generator (IMOG). IEEE Transactions on Biomedical Engineering, 2011; DOI: 10.1109/TBME.2011.2163634


Cull 70,000 Gulf seals, fisheries group says

A fishing industry council is recommending that up to 70,000 grey seals should be killed to test the hypothesis that the mammals are preventing the recovery of cod stocks in the southern Gulf of St. Lawrence.

The recommendation is part of a Fisheries Resource Conservation Council report released Thursday.

It's estimated that more than 100,000 grey seals forage in the southern Gulf from New Brunswick to Nova Scotia. The FRCC is recommending that more than two-thirds of them be culled.

"It has been estimated that the number of grey seals foraging the southern Gulf would have to be reduced by 70 per cent, to approximately 31,000 animals, in order to reduce the natural mortality of cod to a level sufficiently low to allow for a measurable increase in the southern Gulf cod stock," says the council’s report.

The report says an estimated 350,000 grey seals live in the Sable Island area, southeast of Nova Scotia.

Harp and hooded seals also live in the Gulf area. Harp seals, which are the primary target of the annual seal hunt in Atlantic Canada, are estimated to have a population of approximately 9.1 million in Eastern Canada.

The FRCC gives advice to the federal fisheries minister.

The council wrote its report called Towards Recovered and Sustainable Groundfish Fisheries in Eastern Canada after holding 27 open consultations in communities across Atlantic Canada, Quebec and Nunavut.

It also reviewed 26 written briefs submitted to the council and held discussions with biologists and managers from the Department of Fisheries and Oceans.


Graphene's shining light could lead to super-fast Internet

ScienceDaily (Aug. 31, 2011) — Writing in the journal Nature Communications, a collaboration between the Universities of Manchester and Cambridge, which includes Nobel Prize winning scientists Professor Andre Geim and Professor Kostya Novoselov, has discovered a crucial recipe for improving characteristics of graphene devices for use as photodetectors in future high-speed optical communications.

By combining graphene with metallic nanostructures, they show a twenty-fold enhancement in harvesting light by graphene, which paves the way for advances in high-speed internet and other communications.

By putting two closely-spaced metallic wires on top of graphene and shining light on this structure, researchers previously showed that this generates electric power. This simple device presents an elementary solar cell.

More importantly for applications, such graphene devices can be incredibly fast -- tens and potentially hundreds of times faster than communication rates in the fastest internet cables -- owing to the unique nature of electrons in graphene: their high mobility and high velocity.

The major stumbling block to practical applications for these otherwise very promising devices has so far been their low efficiency. The problem is that graphene -- the thinnest material in the world -- absorbs only about 3% of the light falling on it, with the rest passing through without contributing to the electrical power.

The Manchester researchers have solved the problem by combining graphene with tiny metallic structures, specially arranged on top of graphene.

These so-called plasmonic nanostructures have dramatically enhanced the optical electric field felt by graphene and effectively concentrated light within the one-atom-thick carbon layer.

By using the plasmonic enhancement, the light-harvesting performance of graphene was boosted by twenty times, without sacrificing any of its speed. The future efficiency can be improved even further.

Dr Alexander Grigorenko, an expert in plasmonics and a leading member of the team, said: "Graphene seems a natural companion for plasmonics. We expected that plasmonic nanostructures could improve the efficiency of graphene-based devices but it has come as a pleasant surprise that the improvements can be so dramatic."

Professor Novoselov added: "The technology of graphene production matures day-by-day, which has an immediate impact both on the type of exciting physics which we find in this material, and on the feasibility and the range of possible applications.

"Many leading electronics companies consider graphene for the next generation of devices. This work certainly boosts graphene's chances even further."

Professor Andrea Ferrari, from the Cambridge Engineering Department, who led the Cambridge effort in the collaboration, said: "So far, the main focus of graphene research has been on fundamental physics and electronic devices.

"These results show its great potential in the fields of photonics and optoelectronics, where the combination of its unique optical and electronic properties with plasmonic nanostructures, can be fully exploited, even in the absence of a bandgap, in a variety of useful devices, such as solar cells and photodetectors."

Graphene is a novel two-dimensional material which can be seen as a monolayer of carbon atoms arranged in a hexagonal lattice.

It is a wonder material that possesses a large number of unique properties and is currently considered in many new technologies.


Story Source:

The above story is reprinted (with editorial adaptations by ScienceDaily staff) from materials provided by University of Manchester, via EurekAlert!, a service of AAAS.

Journal Reference:

T.J. Echtermeyer, L. Britnell, P.K. Jasnos, A. Lombardo, R.V. Gorbachev, A.N. Grigorenko, A.K. Geim, A.C. Ferrari, K.S. Novoselov. Strong plasmonic enhancement of photovoltage in graphene. Nature Communications, 2011; 2: 458 DOI: 10.1038/ncomms1464


Build music with blocks: Audio d-touch

ScienceDaily (Aug. 26, 2011) — Researchers at the University of Southampton have developed a new way to generate music and control computers.

"Grab a block and add a base beat, turn a block to speed up the high hat and we have a new way to generate music through controlling the computer," said Dr Enrico Costanza at the University's ECS -- Electronics and Computer Science, who is launching Audio d-touch (25 August).

Audio d-touch, which is based on Dr Costanza's research into tangible user interfaces, or TUIs, gives physical control in the immaterial world of computers. It uses a standard computer and a webcam. Using simple computer vision techniques, physical blocks are tracked on a printed board; the positions of the blocks then determine how the computer samples and reproduces sound.
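The released software is the place to look for the real implementation, but the basic loop -- grab webcam frames, find blocks on the printed board, and turn their positions into sound parameters -- can be sketched roughly as below. This is not the Audio d-touch code; it assumes OpenCV (cv2) is installed, and the position-to-sound mapping is an arbitrary example.

import cv2

cap = cv2.VideoCapture(0)                       # default webcam
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Dark blocks on a light printed board become white blobs in the mask.
    _, mask = cv2.threshold(gray, 80, 255, cv2.THRESH_BINARY_INV)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        if cv2.contourArea(c) < 500:            # ignore specks of noise
            continue
        x, y, w, h = cv2.boundingRect(c)
        cx, cy = x + w / 2, y + h / 2
        tempo = 60 + 120 * cx / frame.shape[1]   # left-right position -> BPM
        level = 1.0 - cy / frame.shape[0]        # up-down position -> volume
        # A real system would feed tempo/level into an audio engine here.
        print(f"block at ({cx:.0f},{cy:.0f}) -> tempo {tempo:.0f} BPM, level {level:.2f}")
    cv2.imshow("board", mask)
    if cv2.waitKey(1) == 27:                     # Esc to quit
        break
cap.release()
cv2.destroyAllWindows()

The actual system uses printed visual markers rather than plain blobs, which is what lets it tell individual blocks apart and track their orientation.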

"As more of our world moves into the electronic; records to mp3s, books to eBooks, we loose the satisfying richness of touching physical objects like paper and drumsticks," said Dr Costanza.

"Our Audio d-touch system allows people to set up and use tangible interfaces in their own home, office or recording studio, or wherever else they like," said Dr Costanza. This is the first time that anyone has developed a free application like this."

Audio d-touch is more than just for play: TUIs are an alternative to virtual worlds. Human-Computer Interaction researchers are investigating ways to move away from the online, purely digital world and rediscover the richness of our sense of touch.

Dr Costanza has developed Audio d-touch over several years: "Our aim is to advance the field by gaining insight into how tangible interfaces can be used in the real world. We are keen to have more people download Audio d-touch and give us feedback so that we can improve it."

All that is needed is a regular computer equipped with a web-cam and a printer. The user creates physical interactive objects and attaches printed visual markers recognized by Audio d-touch. The software platform is open and can be extended for applications beyond music synthesis.

A video of the system and the software is available at: http://d-touch.org/


Story Source:

The above story is reprinted (with editorial adaptations by ScienceDaily staff) from materials provided by University of Southampton.


Working On Netbooks

In general, netbooks are best used as secondary computing devices and travel laptops. Their small size and weight are great for mobility, but netbooks' lower processing capabilities mean working on them can be more difficult: opening applications on a netbook takes more patience than on more robust systems, multitasking can cripple the system, and processor-intensive applications like graphics editing aren't really suited to netbooks. Netbooks' small screens and less-than-full-sized keyboards may also make working on them for long periods less comfortable than working on full-sized laptops or desktop PCs.

Still, netbooks are becoming mainstream due to their long battery lives, multiple connectivity options (wireless-n and, often, mobile broadband), and great portability.

Some netbooks are considered business-class, and a few offer customizable or above-average specs (higher-resolution display, solid state drive (SSD), Windows 7 Home Premium instead of Windows 7 Starter, more memory, etc.). These higher-end netbooks can outshine an outdated laptop or desktop PC, depending on the kind of work you plan to do.

If you add a second or external monitor for when you are at home and an external DVD drive if needed for your work, you may not even notice any limitations in the netbook model type.

Netbooks are ideal for activities like surfing the Web and using programs that don't require a lot of processing power, such as office applications. If you have a job that primarily involves the following types of tasks, a netbook may well be enough for your business needs:

- Doing online research
- Checking and responding to emails
- Working on basic office documents: spreadsheets, word processing documents, and presentations
- Using web-based applications for collaborating on documents, CRM, appointment scheduling, and more
- Attending virtual meetings via video conferencing and IM solutions
- Pretty much any web-based or cloud computing purpose

A few tips for working on a netbook:

Temper your expectations for what you can do on a netbook and how it works. Programs are a little slower to open than you may be used to, and you may not be able to open more than a handful of programs at the same time without slowing your system down.

If you have a choice of operating system, go for Windows 7 (preferably Windows 7 Home Premium). Windows 7 is faster (and loads more quickly) than previous Windows versions, and new features like the program dock in the taskbar make working on a small computer easier.

Customize your program toolbars to save screen real estate. Netbook screens aren't very tall, so for more vertical space, tweak your toolbar buttons and display only those toolbars you need for your work task (toolbar options are usually found in the View menu). You might not need the Bookmarks Toolbar in Firefox, for example.

Follow important mobile security precautions. Netbooks are prime targets for thieves and also very easy to lose.


Friday, October 14, 2011

Mobile phone data help track populations during disasters

ScienceDaily (Aug. 31, 2011) — Mobile phone positioning data can be used to monitor population movements during disasters and outbreaks, according to a study recently published in PLoS Medicine. The study, conducted by Linus Bengtsson and colleagues from the Karolinska Institute, Sweden and Columbia University, USA, finds that reports on the location of populations affected and in need of assistance can be generated within hours of receiving data.

Population movements after disasters make it difficult to deliver essential relief assistance to the right places and at the right scale. In this geospatial analysis, Bengtsson and colleagues investigate whether position data from mobile phone SIMs (subscriber identity modules) can be used to estimate the magnitude and trends of population movements. The authors collaborated with Digicel, the largest mobile phone operator in Haiti, to retrospectively follow the positions of 1.9 million SIMs in Haiti before and after the January 2010 earthquake, and found that the estimates of population movements using SIM cards were more accurate than ad hoc estimates generated immediately after the earthquake. The authors then tracked population movements by SIM positioning during the first few days of the cholera outbreak that occurred following the earthquake, showing that these estimates of population movements could be generated within 12 hours of receiving SIM positioning data.
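In essence, the method compares where each SIM was usually seen before the disaster with where it was seen afterwards, then aggregates those changes into displacement estimates. A minimal sketch of that aggregation, assuming a table of observations with hypothetical column names (sim_id, date, area for the tower's administrative area), might look like this:

import pandas as pd

quake_date = pd.Timestamp("2010-01-12")

def modal_area(obs):
    """Most frequently observed area per SIM within the given period."""
    return obs.groupby("sim_id")["area"].agg(lambda s: s.mode().iloc[0])

def displacement_matrix(obs: pd.DataFrame) -> pd.DataFrame:
    before = modal_area(obs[obs["date"] < quake_date])
    after = modal_area(obs[obs["date"] >= quake_date])
    moves = pd.concat([before.rename("origin"), after.rename("destination")],
                      axis=1).dropna()
    # Counts of SIMs by (origin, destination) pair, e.g. how many left the capital.
    return moves.value_counts().unstack(fill_value=0)

The real analysis has to handle SIMs that go silent, multi-SIM users, and uneven tower coverage, which is part of why the authors stress establishing data-sharing relationships with operators before an emergency.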

Their findings show that routinely collected data on the movements of active SIM cards in a disaster-affected nation can provide estimates of the magnitude, distribution, and trends in population displacement, and that the method can be used for close to real-time monitoring of population movements during an infectious disease outbreak. Results of the study also suggest that this method could provide estimates on area-specific population sizes and could lead to important improvements in the allocation of relief supplies.

The authors say: "We recommend establishing relations with mobile phone operators prior to emergencies as well as implementing and further evaluating the method during future disasters."

However, this approach may not be effective in all situations, since disasters can destroy mobile phone towers and some areas have sparse network coverage. Additionally, mobile use may be lower in some population groups such as children or the elderly.

In an accompanying perspective article, Peter Gething of the University of Oxford, United Kingdom and Andrew Tatem from the University of Florida, USA, both uninvolved in the study, discuss the potential impact of mobile phone positioning data on responses to disaster. They highlight challenges that must be addressed if use of this technology for disaster response planning is to develop, including how to assess cross-border population movements and the need for protocols to protect the privacy of data, saying: "Bengtsson and colleagues have demonstrated a valuable proof-of-concept of the use of phone data in disaster response, but substantial further work will likely be required before operational usage becomes common."

Gething and Tatem continue: "While millions continue to be adversely affected by natural disasters, in an increasingly connected world where mobile phone ownership is becoming ubiquitous, these data will likely become a valuable component of the disaster response toolbox. Bengtsson and colleagues have taken the first step towards this full potential being realised."

The Swedish National Board of Health and Welfare supported the project financially.


Story Source:

The above story is reprinted (with editorial adaptations by ScienceDaily staff) from materials provided by Public Library of Science, via EurekAlert!, a service of AAAS.

Journal Reference:

Linus Bengtsson, Xin Lu, Anna Thorson, Richard Garfield, Johan von Schreeb. Improved Response to Disasters and Outbreaks by Tracking Population Movements with Mobile Phone Network Data: A Post-Earthquake Geospatial Study in Haiti. PLoS Medicine, 2011; 8 (8): e1001083 DOI: 10.1371/journal.pmed.1001083


Solve WiFi-No-Internet

One of the more perplexing and annoying wi-fi problems is having a strong wireless signal but still no Internet connection. Unlike issues like not having a wireless connection or dropped wireless signals, when you have a strong wireless signal, all indicators seem to be saying everything's OK -- and yet you can't connect to the Internet or, sometimes, other computers on your network.

Here's what to do about this common problem:

If the issue occurs on your home network, log in to the wireless router's administration page (directions will be in your manual; most routers' admin pages are at an address like http://192.168.2.1). From the main page or in a separate "network status" section, check whether your Internet connection is actually up. You can also go to the router itself and look at the status indicator lights -- there should be a blinking or steady light for the Internet connection. If your Internet connection is down, unplug the modem and the router, wait a few minutes, and plug them back in. If this doesn't restore your service, contact your Internet Service Provider (ISP) for assistance, since the problem is likely on their end.
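If you'd rather check from the command line than from the router's status page, a few lines of Python can tell the two failure modes apart. The router address below is the example used above (substitute your own gateway), and this is a rough diagnostic sketch rather than a definitive test.

import socket

def reachable(host, port, timeout=3):
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

router_ok = reachable("192.168.2.1", 80)     # the router's admin page
internet_ok = reachable("8.8.8.8", 53)       # a well-known public DNS server

if not router_ok:
    print("Can't even reach the router: check the wireless link itself.")
elif not internet_ok:
    print("Router is up but the Internet side is down: power-cycle modem/router or call your ISP.")
else:
    print("Basic connectivity is fine: the problem is likely DNS or a captive portal.")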

If you're using a wi-fi hotspot (at a hotel, cafe, or airport, for example), you may think you can check your email (e.g., in Outlook) once you have a wireless connection signal. Most hotspots, however, require you to first open a browser and view their landing page where you'll have to agree to their terms and conditions before using the service (some will also require you to pay for the access). This holds true whether you're using a laptop or a smartphone or other portable device to access a public wireless network.
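You can also detect a hotspot's landing page programmatically: fetch a plain-HTTP page and see whether the request gets redirected somewhere unexpected. A rough sketch follows; the probe URL is just an example, and some portals return their own page without redirecting, so the check isn't foolproof.

import urllib.request

PROBE = "http://example.com/"

def behind_captive_portal(probe=PROBE, timeout=5):
    try:
        with urllib.request.urlopen(probe, timeout=timeout) as resp:
            # If the final URL is not where we asked to go, a portal intervened.
            return not resp.geturl().startswith(probe)
    except OSError:
        return None  # no connectivity at all; can't tell

if behind_captive_portal():
    print("Open a browser and accept the hotspot's terms before using email apps.")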

Some operating systems (like Windows XP) won't warn you if you put in the wrong wireless security code. Although your laptop may show you have a strong wireless signal, if the wrong key is input, the router will refuse to properly communicate with your device. Re-input the security key (you can right-click on the icon in the status bar and click Disconnect, then try again). If you're at a public wi-fi hotspot, make sure you have the correct security code from the hotspot provider.

A similar problem occurs if the router or access point has MAC address filtering set up. MAC addresses (Media Access Control addresses) identify individual pieces of networking hardware. Routers and access points can be configured to allow only certain MAC addresses -- i.e., specific devices -- to authenticate with them. If the network you're connecting to has this filtering set up (e.g., on a corporate or small business network), you'll need to have the MAC address of your computer's or device's network adapter added to the permission list.

Changing your DNS servers, which translate domain names into actual web server addresses, from your ISP's to a dedicated DNS service -- such as OpenDNS -- can add more connection reliability and also speed up your Internet access. Enter the DNS addresses manually in your router's configuration pages.
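To confirm that DNS, rather than the connection itself, is the culprit, compare name resolution against a direct connection to a known IP address. A small standard-library sketch (the hostname and addresses are examples):

import socket

def dns_works(name="www.google.com"):
    try:
        socket.gethostbyname(name)   # uses the system's configured DNS servers
        return True
    except socket.gaierror:
        return False

def raw_ip_works(ip="8.8.8.8", port=53, timeout=3):
    try:
        with socket.create_connection((ip, port), timeout=timeout):
            return True
    except OSError:
        return False

if raw_ip_works() and not dns_works():
    print("Connectivity is fine but DNS is failing: try OpenDNS "
          "(208.67.222.222 / 208.67.220.220) in the router's settings.")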

These are the most common fixes for wireless problems. Another source that can help you diagnose and solve wireless issues on your laptop is this pretty technical Laptop Internet Connectivity flowchart from the Laptop Repair Workbook.

(Note: This article is also available in a PDF version for saving to your computer for reference before going on the road. If you need further help or want to discuss wi-fi or other mobile computing topics, feel free to visit our forum.)



Breakthrough could double wireless capacity with no new towers

ScienceDaily (Sep. 7, 2011) — The days of waiting for smartphones to upload video may be numbered. Rice University engineering researchers have made a breakthrough that could allow wireless phone companies to double throughput on their networks without adding a single cell tower.

Rice's new "full-duplex" technology allows wireless devices like cell phones and electronic tablets to both "talk" and "listen" to wireless cell towers on the same frequency -- something that requires two frequencies today.

"Our solution requires minimal new hardware, both for mobile devices and for networks, which is why we've attracted the attention of just about every wireless company in the world," said Ashutosh Sabharwal, professor of electrical and computer engineering at Rice. "The bigger change will be developing new wireless standards for full-duplex. I expect people may start seeing this when carriers upgrade to 4.5G or 5G networks in just a few years."

In 2010, Sabharwal and Rice colleagues Melissa Duarte and Chris Dick published the first paper showing that full-duplex was possible. That set off a worldwide race to demonstrate that the technology could actually be used in a real network. This summer, Sabharwal and Rice's Achaleshwar Sahai and Gaurav Patel set new performance records with a real-time demo of the technology that produced signal quality at least 10 times better than any previously published result.

"We showed that our approach could support higher throughput and better link reliability than anything else that's been demonstrated, which is a plus for wireless carriers," Sabharwal said. "On the device side, we've shown that we can add full duplex as an additional mode on existing hardware. Device makers love this because real estate inside mobile devices is at a premium, and it means they don't have to add new hardware that only supports full duplex."

To explain why full-duplex wireless was long thought impossible for wireless networks, Sabharwal uses the analogy of two people standing far apart inside an otherwise empty arena. If each shouts to the other at the same time, neither can hear what the other is saying. The easy solution is to have only one person speak at a time, and that's what happens on two-way radios where only one person may speak at a given time. Cell phones achieve two-way communications by using two different frequencies to send and listen.

Rice's team overcame the full-duplex hurdle by employing an extra antenna and some computing tricks. In the shouting analogy, the result is that the shouter cannot hear himself, and therefore hears the only other sound in the arena -- the person shouting from far away.

"We send two signals such that they cancel each other at the receiving antenna -- the device ears," Sabharwal said. "The canceling effect is purely local, so the other node can still hear what we're sending."

He said the cancellation idea is relatively simple in theory and had been proposed some time ago. But no one had figured a way to implement the idea at low cost and without requiring complex new radio hardware.

"We repurposed antenna technology called MIMO, which are common in today's devices," Sabharwal said. "MIMO stands for 'multiple-input multiple-output' and it uses several antennas to improve overall performance. We took advantage of the multiple antennas for our full-duplex scheme, which is the main reason why all wireless carriers are very comfortable with our technology."

Sabharwal said Rice is planning to roll its full-duplex innovations into its "wireless open-access research platform," or WARP. WARP is a collection of programmable processors, transmitters and other gadgets that make it possible for wireless researchers to test new ideas without building new hardware for each test. Sabharwal said adding full-duplex to WARP will allow other researchers to start innovating on top of Rice's breakthrough.

"There are groups that are already using WARP and our open-source software to compete with us," he said. "This is great because our vision for the WARP project is to enable never-before-possible research and to allow anyone to innovate freely with minimal startup effort."

Sabharwal's team has gone one step further and achieved asynchronous full-duplex too -- that is, one wireless node can start receiving a signal while it is in the midst of transmitting. Asynchronous transmission is important for carriers wishing to maximize traffic on their networks, and Rice's team is the first to demonstrate the technology.

"We've also developed a preliminary theory that explains why our system is working the way that it is," Sabharwal said. "That's also important for carriers and device makers, because engineers aren't likely to implement something like this without a clear understanding of fundamental tradeoffs."

Rice's research has been funded by the National Science Foundation, the Roberto Rocca Education Program and Xilinx Incorporated.


Story Source:

The above story is reprinted (with editorial adaptations by ScienceDaily staff) from materials provided by Rice University.

Journal Reference:

Achaleshwar Sahai, Gaurav Patel, Ashutosh Sabharwal. Pushing the limits of Full-duplex: Design and Real-time Implementation. arXiv, 4 Jul 2011
