Kevin Lepton

Neuroscience and Art: How the Brain Responds

Emerging Technology
Aug 05, 2015
 

Advances in cognitive neuroscience have increasingly engaged a wider audience. Simple paradigms and traditional experiments run in controlled laboratory settings have contributed significantly to our understanding of brain function, but a fuller picture of cognition calls for comparably complex, realistic environments. Drawing on an electroencephalography (EEG) based brain-computer interface (BCI) experiment approved by the Research Ethics Board at Baycrest and the University of Toronto, let us dig deeper into how neuroscience and art intersect in the brain.

 

The Experiment

The experiment explored a person's ability to rapidly learn to control their brain states in a complex environment. Together with an art-exhibition-related criterion, this objective guided the entire experimental design. Participants received only several minutes of controlled neurofeedback, a much shorter period than typical neurofeedback training experiments. The hypothesis was that neurofeedback effects can be detected early in training and that a large sample size would provide sufficient statistical power to reveal them.
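To make the statistical-power reasoning concrete, here is a minimal sketch in Python using statsmodels. The effect size of d = 0.2 and the paired t-test design are illustrative assumptions, not figures taken from the study; the point is simply that a subtle effect requires a large sample to detect reliably.

# Illustrative only: sample size needed to detect a small neurofeedback effect.
# The effect size d = 0.2 is a hypothetical value, not one reported by the study.
from statsmodels.stats.power import TTestPower

analysis = TTestPower()  # paired (one-sample) t-test power analysis
n_required = analysis.solve_power(effect_size=0.2, alpha=0.05, power=0.8)
print(f"Participants needed for 80% power at d = 0.2: {n_required:.0f}")
# A small effect pushes the required sample well into the hundreds, which is
# why a short training protocol leans on many participants rather than long sessions.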

The study noted that the aesthetic sophistication and technological maturity of virtual reality, gaming and multimedia have positioned these platforms as suitable partners for neuroscience. It also observed that EEG has expanded beyond the lab through BCI technology and therapeutic neurofeedback interventions, as well as through products such as wearable devices for self-optimization, self-monitoring and neurogaming. In short, neurofeedback protocols based on brain-computer interfaces show real promise for attention, learning and creativity.

BCI applications also benefit when a person learns to modulate their brain activity in as little time as possible. Learning is associated with structural and functional changes in the brain, and although re-organization is continuous at the synaptic scale, large-scale effects take time to manifest. Sensory stimulation protocols, for example, have produced persistent re-organization of coupling between distributed brain areas after stimulation. With regard to cognitive performance, even individual neurofeedback training sessions have been found to mediate significant changes.

 

Findings in Detail

The researchers found interesting global patterns of correlation between brain data and demographic variables, regardless of condition. They reached this result by folding together all condition-specific relative spectral power (RSP) measurements, pooling each participant's data across conditions. Headset type was treated as a nuisance variable.
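For readers unfamiliar with the measure, relative spectral power is simply the power in a frequency band divided by the total power of the signal. Below is a minimal Python sketch using NumPy and SciPy; the 256 Hz sampling rate, the alpha band (8-12 Hz) and the synthetic signal are assumptions for illustration, not parameters from the study.

# Minimal sketch: relative spectral power (RSP) of an EEG-like signal.
import numpy as np
from scipy.signal import welch

fs = 256                                   # sampling rate in Hz (assumed)
t = np.arange(0, 10, 1 / fs)               # 10 seconds of data
eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(t.size)  # synthetic signal

freqs, psd = welch(eeg, fs=fs, nperseg=2 * fs)   # power spectral density

alpha = (freqs >= 8) & (freqs <= 12)             # alpha band mask (assumed band)
rsp_alpha = np.trapz(psd[alpha], freqs[alpha]) / np.trapz(psd, freqs)
print(f"Relative alpha power: {rsp_alpha:.2f}")  # fraction of total power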

Neurofeedback also had significant effects on relaxation and concentration, depending on the subjects' conditions. Participants were found to learn to modulate their relative spectral power for both states. Based on these results, the authors hypothesized that early (yet subtle) changes in brain activity accompany a short neurofeedback training protocol and would be detectable with a larger sample.

 

Conclusion

Both the novel and the confirmatory findings from the experiment provide a proof of concept for a new neuroscience research framework. By combining brain-computer interfaces, art and performance, we can now ask questions about complex, real-life social cognition that are otherwise inaccessible in laboratory settings. The authors conclude that the traditional approach to studying the mind discounts a central feature of the brain: it is intrinsically subjective. This opens interesting new avenues for neuroscience research into the sociability, complexity and individuality of the human mind.

 

Relevant External Link

http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0130129

 

Future Intel Supercomputer Is an Argonne Conclusion

Future Computers
Apr 17, 2015
 

On April 9, 2015, the United States Department of Energy announced an award intended to further promote U.S. leadership in exascale computing. Under the Collaboration of Oak Ridge, Argonne, and Livermore (CORAL) initiative, the DOE will invest $200 million to develop a new supercomputer and install it at the Argonne Leadership Computing Facility at Argonne National Laboratory.

The next-generation machine, called "Aurora", will be built by supercomputer giant Cray. The company has become known in recent years for its lucrative government and commercial contracts and for developing supercomputers with world-renowned chip maker Intel.

Aurora is remarkable not just because of the amount of money being put into it, but also because of its impressive specifications and breathtaking potential. As one of the most powerful pre-exascale supercomputers ever created, it is expected to reach a peak performance of 180 petaflops, making it one of the fastest computing machines ever built.

If that figure doesn't sound impressive, consider that an average modern computer, depending on its hardware, can achieve up to 2,600 gigaflops. Two of the world's current supercomputers, the Sequoia machine at the National Nuclear Security Administration and Titan at Oak Ridge National Laboratory, have peak performances of 20 petaflops and 27 petaflops, respectively.
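To put those figures on a single scale, here is a back-of-the-envelope comparison in Python using only the numbers quoted above (1 petaflop = 1,000,000 gigaflops):

# Rough ratios between the cited machines and the 2,600-gigaflop desktop figure.
desktop_gflops = 2_600
machines_pflops = {"Sequoia": 20, "Titan": 27, "Aurora (peak)": 180}

for name, pflops in machines_pflops.items():
    ratio = pflops * 1_000_000 / desktop_gflops
    print(f"{name}: roughly {ratio:,.0f} times the desktop figure")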

Aurora will be built around Intel's third-generation Xeon Phi processor, code-named Knights Hill. Although the chip is still in development and Intel has said little about it, the system is expected to deliver breakthrough performance, support a massive range of applications and prove more power-efficient than today's supercomputers. It is also designed to be highly scalable and adaptable, paving the way for new scientific discoveries with global impact.

Aurora is the third and final pre-exascale-class system funded under the CORAL initiative. Earlier, the Department of Energy announced it was investing around $325 million to develop state-of-the-art supercomputers for two other laboratories. Oak Ridge National Laboratory is set to receive Summit, which could theoretically reach 150 to 300 peak petaflops, in 2017. In the same year, Lawrence Livermore National Laboratory will get its own supercomputer, Sierra, with a peak performance of 100 petaflops.

Aurora is scheduled to be installed at the Argonne Leadership Computing Facility in 2018. Before then, Intel and Argonne will collaborate on an interim system, named Theta, to arrive in 2016. Theta will help the ALCF community transition their programs and applications to the new technology and ensure that no important data is lost when Aurora is rolled out.

Argonne National Laboratory's new supercomputer will mainly be used to boost the performance of computing applications valuable to the Department of Energy and other agencies. It will also be open to the wider scientific community, helping attract the country's best researchers to Argonne and advancing fields such as materials science, biological science, renewable energy and transportation efficiency.

 

Caltech Scientists Finally Discover A Simple Way to Make Graphene

Emerging Technology
Mar 26, 2015
 


Scientists have long known that graphene exists. After all, anyone who has drawn with a pencil has left traces of it on the page. Graphene is a one-atom-thick crystal of carbon, about a million times thinner than a human hair and some 200 times stronger than steel. The problem was that no one knew how to extract it from graphite.

This is where two Russian-born scientists come in. Andre Geim and Konstantin Novoselov are researchers at the University of Manchester, and during one of their "Friday night experiments" (informal sessions, unrelated to their day-to-day work, meant to keep their curiosity alive and generate new ideas) they accidentally isolated graphene with the help of Scotch tape.

The pair wrote a three-page paper describing what they had just discovered. It was rejected by Nature – twice – but eventually got published in the journal Science in 2004.

Since then, researchers all over the world have devoted time to studying this fantastic material that is as pliable as rubber and can stretch to 120% of its length. They also found that the material is a good conductor of heat and electricity.

Six years after Geim and Novoselov published their paper, they were awarded the 2010 Nobel Prize in Physics. The material they isolated was lauded as "a wonder material" and one that "could change the world." Researchers from various fields (medicine, chemistry, physics, electrical engineering) have since come together to study it.

As a result, the number of graphene-related patents has risen. The UK Intellectual Property Office alone reports a jump from 3,018 in 2011 to 8,416 at the beginning of 2013. Samsung and Sungkyunkwan University in Korea, Zhejiang University in China and IBM in the US lead in patent applications.

The possibilities for graphene-based products are endless: bendable computer screens, long-life batteries, very fast microcomputers, and more. The catch was that making the material took a long time and required very high temperatures. This is the problem that Caltech staff scientist David Boyd addressed with yet another accidental discovery.

Boyd had been having no luck creating graphene by exposing methane to a heated copper surface. One day a phone call distracted him, and he left the copper heating longer than usual. When he returned, he found that graphene had formed: the extra heating had removed a key impurity.

Basically, what used to take about 10 hours and a very high temperature to do can now be accomplished in around five minutes and at a lower temperature.

The discovery opens up a world of possibilities for graphene-based products. As Boyd told the Pasadena Star-News, "You could imagine something crazy. You could wrap a building in graphene to keep it from falling over."

 

External Resource

http://www.huffingtonpost.com/2015/03/19/better-graphene-making-process-breakthrough_n_6891226.html