Here in DL1, Robert Alexander and Yohei Kanehara have been working together on some really interesting research about sound in space, specifically the Sun! Here are a few quick questions and answers with Yohei about his research.
Q: Why did you choose to work on this UROP project?
A: I chose to work on this UROP project because I am fascinated by sound, which stems from my background in electronic music production and sound engineering.
Q: Who are you working with?
A: I am currently working closely with Robert Alexander, who is a PhD candidate in Design Science. The project is also overseen by Dr. Jason Gilbert. Both are involved in research as part of the Solar and Heliospheric Research Group.
Q: What does “sonification” mean?
A: Sonification refers to the representation of data through sound.
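For instance, one common sonification technique is parameter mapping, where each data value is mapped onto a sound parameter such as pitch. A minimal sketch in Python (the `to_pitches` helper and the pitch range are illustrative assumptions, not part of the project's actual toolchain):

```python
def to_pitches(values, low_hz=220.0, high_hz=880.0):
    """Map each data value linearly onto a pitch range.

    This is parameter-mapping sonification: higher data values
    become higher frequencies, so trends in the data are heard
    as rising or falling pitch.
    """
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1.0  # avoid division by zero for flat data
    return [low_hz + (v - lo) / span * (high_hz - low_hz) for v in values]

print(to_pitches([0, 5, 10]))  # → [220.0, 550.0, 880.0]
```

Here a steadily rising series becomes a steadily rising tone sequence, so a listener can follow the shape of the data without looking at a plot.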
Q: Why did you choose the sound samples that you are working with?
A: For this particular assignment, I was given the freedom to analyze any data set. However, a large amount of data is required to successfully audify a data set, so I chose to analyze data from one of NASA’s databases, since it contains an array of large data sets. I sampled data from the GENESIS space probe, which collected data on the solar wind from 2001 to 2004. I chose to analyze the solar wind’s proton temperature, since it indicates the solar wind type. I sampled data from November 2001 to April 2004.
Q: How did you collect your samples?
A: The data samples that I am working with came from NASA’s database, which can be accessed online here:
Q: What is the goal of this research project?
A: The goal of this research project is to use various sonification techniques to explore solar data, in order to isolate interesting moments in the data and to gain new perspectives that may not have been possible through visual analysis alone.
Q: What techniques or technologies are you using to develop your research?
A: The main software packages we are using are Max/MSP, MATLAB, iZotope RX, and Logic Pro. Max/MSP and MATLAB are used to audify the data, while iZotope RX and Logic Pro are used to analyze it. For this project, I used an audifier in MATLAB to transcribe the data set into sound. After listening to the audio file on repeat several times, I opened it in iZotope RX, which creates a spectrogram of the audio. After analyzing the audio’s spectral content, I opened the file in Logic and processed the sound by passing it through filters, slowing the playback, and so on, which gave rise to new conclusions.
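Audification, the first step in this pipeline, treats each data point directly as one audio sample. The project itself did this with a MATLAB audifier; the sketch below is a stand-alone Python equivalent using only the standard library, and the `audify` helper, output filename, and synthetic sine-wave data are all illustrative assumptions:

```python
import math
import struct
import wave

def audify(samples, path, rate=44100):
    """Audify a 1-D data series: rescale it to [-1, 1] and write
    each data point as one sample in a 16-bit mono WAV file.

    The playback rate controls how much data you hear per second:
    at 44.1 kHz, 441,000 data points become 10 seconds of audio.
    """
    lo, hi = min(samples), max(samples)
    span = (hi - lo) or 1.0  # avoid division by zero for flat data
    scaled = [2.0 * (s - lo) / span - 1.0 for s in samples]
    with wave.open(path, "wb") as wav:
        wav.setnchannels(1)     # mono
        wav.setsampwidth(2)     # 16-bit PCM
        wav.setframerate(rate)
        frames = b"".join(struct.pack("<h", int(s * 32767)) for s in scaled)
        wav.writeframes(frames)

# Stand-in data: 441,000 points of a slow sine, i.e. 10 s of audio.
data = [math.sin(i / 50.0) for i in range(441_000)]
audify(data, "solar_wind.wav")
```

The resulting WAV file can then be opened in a spectrogram tool such as iZotope RX, or filtered and slowed down in a DAW, exactly as described above.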
Q: What about this research have you found most fascinating?
A: The amount of detail that is discernible in a small bit of audio has fascinated me. The audio files that I am working with are, on average, only about 10 seconds long. However, if an audio file is looped and listened to repeatedly, many audio artifacts can be detected in that short amount of audio.
Q: Where is this work leading you?
A: This work is leading to a UROP symposium in April where I will present my findings from throughout the year and share my UROP experience.
Q: What are some of the conclusions you’ve come to?
A: For this project, I identified two potential coronal mass ejection (CME) events occurring in May and July of 2002. After further investigation, I confirmed that these were indeed CME events.
Q: What is something you have learned from your UROP experience that you would like to share with others?
A: I would like others to become more aware of the field of sonification. I think it is an overlooked practice in scientific research. Combined with visual data analysis, I believe sonification can provide a useful narrative for data sets.
Yohei’s presentation, pictured here, was on some of his most recent data:
Congrats Yohei! Keep up the great work!