What started out as a slow day ended up turning out really well.

Two turning points:

1) I finished my stellar population! All 216 files are neatly saved to a directory and are ready for use!

2) Fixed my data structure! Turned it into a FITS file and started plotting.

So I was feeling kind of off in the morning. I was slowly working on this data structure, didn't really understand my code, and kept running into different errors. But with Beth's help, I got it up and running and started tinkering with the data. I made histograms for max, min, and time, as well as a couple of different error plots, which are neatly stored in a file. Learned the handy !p.multi trick and used it to neatly format my graphs. Beth asked for a histogram of the minimum, but all of the minimum values are at 0 (this is with 10,000 stars), so the histogram was pointless (and unplottable, for that matter). I calculated the maximum of our max values, and did the same for the min, the average, and the average time our calculation took.
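The actual work here is in IDL, but the bookkeeping is easy to sketch. Here's a rough Python/NumPy stand-in (the Poisson field is just a placeholder for one simulated, smoothed star field -- the names and shapes are made up) showing the per-run max/min/mean summaries described above:

```python
import numpy as np

rng = np.random.default_rng(0)

def field_stats(n_runs=100, n_stars=10_000):
    """Record the max, min, and mean of each of n_runs toy fields."""
    maxima, minima, means = [], [], []
    for _ in range(n_runs):
        # stand-in for one simulated, smoothed star field
        field = rng.poisson(lam=n_stars / 100.0, size=(10, 10))
        maxima.append(field.max())
        minima.append(field.min())
        means.append(field.mean())
    return np.array(maxima), np.array(minima), np.array(means)

maxima, minima, means = field_stats()
# summaries across all runs, like the max-of-max described above
print("max of maxima:", maxima.max())
print("min of minima:", minima.min())
print("mean of means:", means.mean())
```

Histograms of `maxima` and `means` are then one call each in whatever plotting tool is at hand; the degenerate all-zeros minimum histogram is exactly the unplottable case mentioned above.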

A few things to check:

- time error plot (I think the values should be smaller)

- median vs. average plot

Also have to update my LaTeX doc with the past week's work.

Feeling good!

## Wednesday, June 30, 2010

## Tuesday, June 29, 2010

With lots of help from Beth, I fixed my function and got rid of my hard coding. Fixed my smoothing filter, and toyed around with different smoothing lengths, different multiples of the exponential scale length, pixel size, etc. It is so cool! Especially being able to look at the spatial smoothing section of the Walsh (2008) paper and understand what they're talking about. Really interested in the comparison between low and high multiples of the exponential scale length and different smoothing lengths. Interested to compare these along with other variables...which brings me to my next task: setting up a data structure that computes these fields N times and records the max, min, average, etc.
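For reference, an exponential smoothing filter like the one described above can be sketched outside of IDL, too. This is a hedged Python/NumPy version -- a kernel shaped like exp(-r/h), truncated at some multiple of the scale length -- which is my reading of the kind of filter being tuned here, not the actual code used:

```python
import numpy as np

def exp_kernel(h_pix, n_scale=3):
    """2D exponential kernel exp(-r/h), truncated at n_scale scale lengths."""
    r_max = int(np.ceil(n_scale * h_pix))
    y, x = np.mgrid[-r_max:r_max + 1, -r_max:r_max + 1]
    k = np.exp(-np.hypot(x, y) / h_pix)
    return k / k.sum()  # normalize so smoothing conserves total counts

def smooth(image, h_pix, n_scale=3):
    """Brute-force convolution of a small image with the exponential kernel."""
    k = exp_kernel(h_pix, n_scale)
    pad = k.shape[0] // 2
    padded = np.pad(image, pad)  # zero-pad the borders
    out = np.zeros_like(image, dtype=float)
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            out[i, j] = np.sum(padded[i:i + k.shape[0], j:j + k.shape[1]] * k)
    return out
```

Changing `h_pix` (the smoothing length in pixels) and `n_scale` (the truncation multiple) is the same experiment as varying the smoothing length and the number of exponential scale lengths above.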

So I just started setting up my empty structure. Going okay so far, but there's lots of debugging in the near future. I was having issues with what I thought was the fltarr command, so I set all of the fields to 0.0 to try to debug, but that's not turning out so well either. I'll get it tomorrow or later today, though. Looking forward to having a large data set to work with.

things to do:

- debug function

- update LaTeX doc with Koposov info (already updated with Walsh info)

- finish stellar population

## Monday, June 28, 2010

### Simulating fields

Today and last Friday, I've been working on my next major task: creating and using a function that simulates random uniform fields. The function generates N random x- and y-coordinates that represent a random distribution of stars. Though it sounds simple, I've spent quite some time and patience on it. Last Friday and early this morning, I had problems just getting data points to show up on my plot -- I kept getting a blank, dark graph. The problem turned out to be that I'd neglected the difference between arcmin and degrees. Fixed that in the procedure, but then didn't fix the same thing in the function...but it worked after that! I originally wanted to make my function/procedure as general as possible, so that someone could put in inputs spanning a wide range and it would still work, but for now, it's pretty much set to data points spanning a -0.5 to 0.5 range.
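The core of such a function is tiny. Here's a Python/NumPy sketch of the same idea (the real version is an IDL function; the names here are made up):

```python
import numpy as np

rng = np.random.default_rng(42)

def fake_field(n_stars, half_width_deg=0.5):
    """Generate n_stars random (x, y) positions, uniform over
    [-half_width_deg, +half_width_deg] in each coordinate."""
    x = rng.uniform(-half_width_deg, half_width_deg, n_stars)
    y = rng.uniform(-half_width_deg, half_width_deg, n_stars)
    return x, y

x, y = fake_field(1000)
# the arcmin/degree trap mentioned above: 1 degree = 60 arcmin, so a
# half-width of 0.5 deg is 30 arcmin -- mixing the two silently shrinks
# or blows up the field by a factor of 60
```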

After plotting the function, I applied an exponential filter and smoothed the plot. I spent some time tinkering around with the different filter parameters, changing how my filter smooths the data, but didn't end up seeing very dramatic results. Going to look at it with Beth tomorrow morning and hopefully change a few things.

Spent some time finishing up my stellar population -- 2/3 of the way done! Hoping to finish it by the time I leave work today. Also added a few more sections to my LaTeX document so I don't forget what I've been working on (though this blog makes sure that doesn't happen!).

things to work on:

- finish updating latex document

- keep working on fake field function and improve filter

- finish stellar pop

-------

update since 1 hour ago:

I might cry:

I realized that all of the combinations with al = 0 I'd done for Bess + 2MASS instead of SDSS, so I deleted all of them and redid them. Just finished redoing them all, then realized I'd used filter 1 instead of filter 4. Just deleted all of those, too. Back to having only 1/3 of my stellar population done. Something for tomorrow morning!

## Thursday, June 24, 2010

Accomplished my previously set goals for today. My M31 functions work great. Tested them out with data from And I, And III, as well as And X, and they seem to be in tip-top shape. Also re-started my stellar population database. 86 down!

Next focus is looking at the angular sizes and exponential scale length used when finding dwarfs. We'll be looking at And XIX specifically as it was much larger than the others. The question of the moment, is if angular size was changed (especially, if it was larger) would more dwarfs have been found? Hopefully this will become clearer to me after reading the Koposov (2008) paper.

### update from yesterday - everything is functional!

Gotta love the wordplay. Fixed my fraction of light function. Now, you can input any absolute magnitude, and it will return a percentage of light visible in that range. Also created functions for my distance calculations, so you can input RA, DEC, and distance from MW, and you will get the distance from M31. Did the same for my error calculation.
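The distance function described above is straightforward to sketch. This is a hedged Python stand-in (the real version is IDL); the M31 position and distance are standard literature values I've filled in as placeholders (RA ~10.68 deg, Dec ~+41.27 deg, d ~780 kpc), not necessarily the values in our table:

```python
import numpy as np

# Assumed M31 parameters (placeholders, not from the post):
M31_RA, M31_DEC, M31_DIST = 10.68, 41.27, 780.0  # deg, deg, kpc

def to_cartesian(ra_deg, dec_deg, dist_kpc):
    """Convert RA/Dec (degrees) and heliocentric distance to Cartesian x, y, z."""
    ra, dec = np.radians(ra_deg), np.radians(dec_deg)
    return np.array([dist_kpc * np.cos(dec) * np.cos(ra),
                     dist_kpc * np.cos(dec) * np.sin(ra),
                     dist_kpc * np.sin(dec)])

def dist_from_m31(ra_deg, dec_deg, dist_kpc):
    """Distance from a satellite to M31's center, in kpc."""
    return np.linalg.norm(to_cartesian(ra_deg, dec_deg, dist_kpc)
                          - to_cartesian(M31_RA, M31_DEC, M31_DIST))

# sanity check: M31 itself is at distance 0
print(dist_from_m31(M31_RA, M31_DEC, M31_DIST))  # -> 0.0
```

A quick consistency check: a point at M31's exact RA/Dec but 20 kpc farther away should come out at exactly 20 kpc.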

Also went to a lecture on Neanderthals and genetic differences between them and modern humans. Neat stuff.

Things I'm going to work on today:

- checking random known distances to ensure I correctly coded my functions

- add header to all of my procedures and functions that will be shared

- work on updating our stellar population from Dotter (aiming to be done by the end of the week)

- Hess diagrams sometime in the future?

## Tuesday, June 22, 2010

After a great group meeting, I moved on to creating a function and using the interpol command to fill in values between the data points I have. Slightly disappointed, as I had trouble with my R-input. I ended up just typing in a series of integers spanning my x-range. I wanted to write a command that wasn't manual, so it'd be easier to change later on -- something to fix tomorrow. Also had a few issues with the function part as well. Have to read up on that for tomorrow.
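IDL's interpol and NumPy's np.interp do essentially the same linear interpolation, so the fix for the hand-typed grid is easy to sketch in Python (toy data, made-up names): build the output grid from the data itself rather than typing the values manually.

```python
import numpy as np

# tabulated data: y sampled at irregular x values
x_data = np.array([0.0, 1.0, 2.5, 4.0, 7.0])
y_data = np.array([1.0, 2.0, 2.0, 3.5, 5.0])

# build the interpolation grid from the data instead of typing integers
# by hand, so it adapts automatically if the data range changes
x_grid = np.linspace(x_data.min(), x_data.max(), 50)
y_interp = np.interp(x_grid, x_data, y_data)
```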

To end the day on a good note, I did my first 2 Dotter isochrones/luminosity functions. 2 down, 106 to go! Tomorrow morning, I hope to have figured out this interpol business and have that function down.

## Monday, June 21, 2010

### distance calculation = done! On to bigger and better projects...

Finished with my distance calculation and error! Spent the morning checking my equations and verifying my distances. Also updated my LaTeX doc with my equations and notes.

Met up with Beth to discuss upcoming projects. My next task is to take Dartmouth stellar evolution data to model the fraction of visible light from stars in the dwarfs versus their magnitudes. Since then, I've started my first read file and am formatting the data into a table I can use in IDL to create a plot. Looking forward to having this plot by tomorrow afternoon if all goes well!

(reminder: put notes about parameter errors in LaTeX)

## Friday, June 18, 2010

### so close to finishing this error calc!

Worked on latex document, entered all the calculations I've done these past few weeks, described some of the tasks I've worked on, etc.

Also re-ran that data calculation Beth wanted....forgot about degrees conversion.

Then I took a look at my distance error calculation -- and realized how much work I still had to do on it. Originally, I assumed the main uncertainty came only from the distance to the MW. After talking it over with Maya, I realized I needed to include the errors from the RA and DEC values. So I took the largest errors, assigned those to the rest of the RA/DEC values, and re-ran the fitsread files. My error calculation now has four main steps: 1) finding the errors for the x, y, z coordinates; 2) combining the errors from the dwarf coordinates with the errors from the M31 coordinates; 3) combining all of the x, y, z errors into one error; and 4) propagating that combined error through the final square root. As of now, I have all of my formulas written up; I just have to re-program them. I'm confident about the accuracy of steps 2-4 of my uncertainty calculation, but I want to re-check my partial derivatives in Mathematica before I enter them into my .pro file. Until Monday!
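For steps 2-4, the standard propagation rules are compact. A hedged Python sketch of the generic formulas (not the actual .pro code): independent 1-sigma errors combine in quadrature, and an error propagates through a square root as sigma_u / (2 sqrt(u)).

```python
import numpy as np

def combine_in_quadrature(*sigmas):
    """Steps 2-3: combine independent 1-sigma errors: sqrt(sum of squares)."""
    return np.sqrt(sum(s**2 for s in sigmas))

def error_after_sqrt(u, sigma_u):
    """Step 4: if d = sqrt(u), then sigma_d = sigma_u / (2 * sqrt(u))."""
    return sigma_u / (2.0 * np.sqrt(u))

# e.g. combining two coordinate errors, then propagating through a sqrt
sigma_combined = combine_in_quadrature(0.3, 0.4)   # 0.5
```

The partial-derivative step (step 1) is the piece worth double-checking in Mathematica; the quadrature and square-root rules above are mechanical once those derivatives are right.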

## Thursday, June 17, 2010

Ran the distance calculation in IDL with the correct parameter values! It's looking great! Two errors (And IX and And XX) look kind of high, so that's something to work on. The rest of the errors look reasonable, though, so I don't think it's a problem with the uncertainty equation... (note: go over that error equation with Beth and see if it is correct)

Also did a mini calculation for Beth. She wanted our parameter values converted into galactic coordinates and then sorted...it was a pretty simple task, and I was excited that I understood how to do it without any significant problems...then I realized that I'd messed up my SORT command and sorted only the b column. Oops! But with Gail's help I got it in the end.
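For the record, the same pitfall exists in NumPy (a toy sketch, not the IDL): sorting one column by itself breaks the row correspondence, while sorting by an index reorders every column together.

```python
import numpy as np

# toy "table": galactic longitude l and latitude b for a few objects
names = np.array(["A", "B", "C", "D"])
l = np.array([120.0, 45.0, 300.0, 10.0])
b = np.array([-20.0, 35.0, 5.0, -60.0])

# the pitfall: np.sort(b) orders b alone and detaches it from its rows.
# argsort returns an index that reorders every column consistently:
order = np.argsort(b)
names_sorted, l_sorted, b_sorted = names[order], l[order], b[order]
```

IDL's SORT works the same way -- it returns indices, and each column must be subscripted with them.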

At a point where I'm not sure how I'm going to spend the rest of the afternoon...there's always trying to find those last 7 CSB data points. I'm slightly worried about finding those. Not sure where they're hiding, but after extensive searching I still don't have them....

^^update: found a value for Leo T (I actually had it the entire time, but forgot to enter it). Pretty sure there aren't values for Pisces II and Segue III...but the Andromeda ones should have CSB values, especially because there was a Keck survey...hm.

## Wednesday, June 16, 2010

### success!

Yay! Finally debugged my distance formula earlier this afternoon. After a mad series of calculations (that ended up killing the batteries on Maya's calculator -- oops!), and after figuring out my radian/degree errors, it is correct as far as I can tell. Once Maya puts the new values in the table and I re-run IDL, it will be finished!

Also started my first LaTeX document. I'm psyched to have all of our work in one place, formatted nicely, at the end of the summer.

A slight downside: for And XII, XIII, XV and XVI, Leo T, Segue II and III, and Pisces II, I couldn't find central surface brightnesses. For And XII, there was a value in other units -- but when I converted it using their reported data, it was still off by 0.2, so I figured it'd be better to keep searching. Something to keep working on.

other things:

- go over errors for distances

- re-run IDL with the complete set of correct values (don't forget!)

- continue with new latex doc

- Hess diagrams

## Tuesday, June 15, 2010

I thought I could get through the rest of the central surface brightness values pretty quickly. I went through the data to consolidate sources and make them more consistent. Between that, adding other values here and there, and documenting everything, it took about half the day. But just 6 more CSB values needed...

Spent the rest of my time working on my distance equation...kept thinking that I had fixed it, but it still isn't looking right. I think I'll have it by tomorrow though.

Also heard a neat lecture from an ocean engineering prof who gave an impromptu talk this morning. Cool stuff.

things to do:

- finish CSB table

- fix equation

- Hess diagrams

## Friday, June 4, 2010

### End of week 2

Feeling really good about today. I spent the morning finding central surface brightness values for the galaxies -- I have about 70% of them. Added some error values to our parameter measurements along the way. Also fixed my M31 equation. But I'm most proud of my error calculation. Using equations from the error reading we did this week, I put together what I think is a correct uncertainty formula. I was kind of daunted by the error calculation, especially the computer programming aspect of it, but I think I've done it (and without many error messages!). I now have a nice table with the galaxies listed with their xyz coordinates, distances and distance errors. Going to spend the rest of the afternoon searching for more central surface brightness values!

For next week:

- Gather more central surface brightness values

- Correct where function for M31 distances in order to separate them from MW satellites

- Go over error formula with Beth

- Figure out why some MW distances are close to M31's (possibly a calculation error?)

- add a column to our table documenting how these values were found (eek...kind of forgot about this until just now...will get on that pronto)

Until Monday!

## Thursday, June 3, 2010

### M31 Distance Calculation

I spent today working on a calculation to determine the distances from the Andromeda satellites to the center of the Andromeda galaxy. The morning consisted of a complicated method for getting this distance. After working at it for quite some time, I realized I should just convert everything into Cartesian coordinates. After lunch, Beth helped me get on track with this. I spent the afternoon doing a few sample calculations, and then entering the functions into our FITS binary table. As of now, I have the values for the x, y, and z coordinates, along with the distance and distance error values (with all angles converted into radians), in neat columns in a new file. Surprisingly, when the distance was calculated, some dwarfs other than the And satellites had values up to 1.9 (MW satellites should have a value of 0)...gotta take a closer look at that. Also.....

Complications to work out:

- format when name of dwarf is included in table

- the distance error for the last two values (check and see if error is that large, or if it is a calculation flaw)

- where command (know that it is different from regular old code, but needs tweaking)

Hopefully by tomorrow, this table will be in good form. Maybe add it to big data table (or subset of it)...? We'll see. Next week, I hope to make a plot or two using these distances -- any blatant errors should pop up then.

Also for tomorrow:

- Look up central surface brightness values (turn to Mateo and Martin for majority of them)

- Finish up readings

## Wednesday, June 2, 2010

Started out working on the data table, and fixed the little reference confusions from yesterday. From there went on to work with the luminosity comparison I've been doing. I'm very happy to report that they're all in a neat little table. I also played around with the where command and sorted my data. After so many error messages, I was very proud of my data sorting and my final luminosity comparison table.

After lunch, I talked to Beth about the distance calculation I'm going to be working on. It's a calculation to determine distances for the M31 and MW satellites, so that we can analyze each M31 satellite's distance from the center of the galaxy. Though it seems slightly daunting, I'm not too worried at this point. I have a feeling that while it may be frustrating for the next few days, it will be very satisfying by the end. Also talked about binary tables with Beth and Maya.

Also, I'm proud to say that every galaxy we have listed in our data table has RA and DEC values. Yay! I kind of feel like I'm just constantly adding bits and pieces to this data table; it feels really great to say that all of the RA and DEC values are in the table and accounted for.

things to do:

- go over luminosities and the source of the systematic error

- work on the new calculation

- readings (2 journal articles and ch. 3 in the book)

## Tuesday, June 1, 2010

From there, I went through our references and made sure that each value was actually calculated in the paper we cite, so that we're citing the source directly. Almost all of them were direct references -- yay! But there's a complicated chain of references from Roychowdhury to Begum, and then from Begum to Karachentsev. Gotta figure that out. Along those lines, I also have to replace the Martin (2) reference with the Begum (2008) citation. (Also, check and see if our subscription to the Royal Astronomical Society journals expired.)

I liked the team meeting. It was nice to hear what everyone else was up to. Also just a nice way to reflect on the work and progress of the past week.

things for tomorrow, reference-wise:

- enter in data from Martin (2008) and Mateo (1998) -- with this, we'll have data for the majority of galaxies in our chart, even if they aren't the most recent

- replace Martin(2) ref with Begum (2008), and see if e and pos. angle are included in Begum article

- go over Karachentsev refs along with Begum string of refs

outside of citation stuff:

- go over luminosities: take the log and print the values, to further compare calculated and gathered lums

- go over where command

- distance calculation with beth (excited for first sig calculation!)

- read articles Beth gave us

- read ch. 3 of book
