Analyzing Data
Analysis Modules
Two helpful libraries are our modules and analysis tools.
git clone git@github.com:caterpillarproject/modules.git # Python 2.7+
git clone git@github.com:caterpillarproject/analysis.git # Python 2.7+
Add these to your PYTHONPATH environment variable, e.g. for .cshrc add:
setenv PYTHONPATH /path/to/modules:$PYTHONPATH
setenv PYTHONPATH /path/to/analysis:$PYTHONPATH
Lastly, you will need to install asciitable, h5py and pandas.
These tools aren't critical, but they may make your life a lot easier.
Halo Catalogs
Once you have the modules ready, you can load a ROCKSTAR catalogue for a given snapshot simply as:
rscat = htils.load_rscat(hpath,319,verbose=True) # snapshot = 319 (z = 0)
Once you do this, you will have access to the following methods:
def __init__(self, dir, snap_num, version=2, sort_by='mvir', base='halos_', digits=2, AllParticles=False):
def get_particles_from_halo(self, haloID):
# @param haloID: id number of halo. Not its row position in matrix
# @return: a list of particle IDs in the Halo
def get_subhalos_from_halo(self,haloID):
#Retrieve subhalos only one level deep.
#Does not get sub-sub halos, etc.
def get_subhalos_from_halos(self,haloIDs):
#Returns an array of pandas data frames of subhalos, one data frame
#for each host halo. Returns only the first level of subhalos.
def get_subhalos_from_halos_flat(self,haloIDs):
#Returns a flattened pandas data frame of all subhalos within
#the hosts given by haloIDs. Returns only first level of subhalos.
def get_hosts(self):
# Get host halo frame only
def get_subs(self):
# Get subhalo frame only
def get_all_subs_recurse(self,haloID):
# Retrieve all subhalos: sub and sub-sub, etc.
# just need mask of all subhalos, then return data frame subset
def get_all_subhalos_from_halo(self,haloID):
# Retrieve all subhalos: sub and sub-sub, etc.
# return pandas data frame of subhalos
def get_all_sub_particles_from_halo(self,haloID):
#returns int array of particle IDs belonging to all substructure
#within host of haloID
def get_all_particles_from_halo(self,haloID):
#returns int array of all particles belonging to haloID
def get_all_num_particles_from_halo(self,haloID):
# Get the actual number of particles 'total_npart' from halo as opposed to 'npart'.
# mainly for versions less than 7
def get_block_from_halo(self, snapshot_dir, haloID, blockname, allparticles=True):
# quickly load a block (HDF5 block) of particles belonging to a halo.
# e.g. you want particle positions for haloid = 10 (use blockname="pos")
# this works fastest on snapshots ordered by id and requires import readsnapHDF5_greg
def H(self):
#returns hubble parameter for rockstar run
def get_most_gravbound_particles_from_halo(self,snapshot_dir, haloID):
# Gets most bound particles just based on potential energy for specific halo ID
def get_most_bound_particles_from_halo(self, snapshot_dir, haloID):
# Gets most bound particles for halo based on pot. energy and kin. energy
# if the potential block does not exist, it is calculated assuming a spherical halo
def getversion(self):
# returns the version of ROCKSTAR the run was performed with;
# this includes versions made by Alex Ji, Greg Dooley & Brendan Griffen
One workflow might look as follows; here is an example of some of these methods in action:
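As a minimal sketch (the haloutils import path, the placeholder hpath and the assumption that the data frame index is the ROCKSTAR halo ID are all assumptions; the catalogue methods themselves are the ones listed above):

import haloutils as htils   # assumed import path for load_rscat
import numpy as np

hpath = "/path/to/caterpillar/halo_run"              # placeholder run directory
rscat = htils.load_rscat(hpath, 319, verbose=True)   # snapshot 319 (z = 0)

hosts = rscat.get_hosts()                 # host haloes only
hostID = hosts['mvir'].idxmax()           # most massive host; index assumed to be the halo ID

subs = rscat.get_subhalos_from_halo(hostID)          # first-level subhaloes
allsubs = rscat.get_all_subhalos_from_halo(hostID)   # sub- and sub-sub-haloes, etc.
pids = rscat.get_particles_from_halo(hostID)         # particle IDs bound to the host

print("first-level subhaloes: %i" % len(subs))
print("all subhaloes: %i" % len(allsubs))
print("particles in host: %i" % np.size(pids))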
Merger Trees
Similarly, the merger tree catalogues (once loaded) have a number of their own functions.
These can be used in the following example:
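A rough sketch only: load_mtc, the Trees container and getMainBranch are assumed names, so check the merger tree module in the modules repository for the exact interface. Continuing from the workflow above:

mtc = htils.load_mtc(hpath, haloids=[hostID])   # hpath and hostID as in the workflow sketch above
tree = mtc.Trees[0]                             # tree rooted at the chosen host
mainbranch = tree.getMainBranch(0)              # main progenitor branch, walking back from z = 0
print(mainbranch['mvir'])                       # 'mvir' field name is assumed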
A function is also available to find the descendant branch of any halo in the merger tree catalogue. You can use it as follows:
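As an illustrative stand-in for that function (the name below is hypothetical, and the 'id' and 'desc_id' field names assume consistent-trees style tree rows stored in a numpy structured array):

import numpy as np

def get_descendant_branch(tree, haloID):
    # follow a halo forward in time by chasing its descendant pointer;
    # 'id' and 'desc_id' are assumed field names (consistent-trees style)
    branch = []
    row = tree[tree['id'] == haloID]
    while len(row) > 0:
        branch.append(row[0])
        if row['desc_id'][0] < 0:       # -1 marks the end of the branch
            break
        row = tree[tree['id'] == row['desc_id'][0]]
    return np.array(branch)

# usage sketch: branch = get_descendant_branch(tree.data, haloID)  # the .data attribute is an assumption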
Particle Data
If you want the Gadget header:
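A minimal sketch, assuming the readsnapHDF5 reader shipped in the modules repository; the snapshot base path below is illustrative:

import readsnapHDF5 as rsg

snapbase = hpath + "/outputs/snapdir_319/snap_319"   # illustrative snapshot base path
header = rsg.snapshot_header(snapbase)
print(header.hubble)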
Be sure to divide the relevant quantities (pos, rvir etc.) by header.hubble. See the Gadget section in the sidebar for more information on the header and block types available.
If you want to get the positions of all the particles for a specific halo (or any block):
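For example, using the get_block_from_halo method listed above (the snapshot directory path is illustrative):

snapshot_dir = hpath + "/outputs/snapdir_319"                  # directory holding the snapshot files
pos = rscat.get_block_from_halo(snapshot_dir, hostID, "pos")   # one row of x, y, z per particle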
If you want to read in the entire block, use the following:
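A sketch using the readsnapHDF5 reader again; the four-character Gadget block tags and the parttype keyword follow the usual conventions but should be checked against the reader in the modules repository:

allpos = rsg.read_block(snapbase, "POS ", parttype=1)   # positions of all type-1 (dark matter) particles
allids = rsg.read_block(snapbase, "ID  ", parttype=1)   # matching particle IDs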
If you want the block values for just a selection of particle ids:
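For instance, masking the full snapshot against the particle IDs of a halo; allids and allpos are from the previous block, and the matching is done with numpy's in1d:

import numpy as np

pids = rscat.get_all_particles_from_halo(hostID)   # IDs of every particle in the host
mask = np.in1d(allids, pids)                       # boolean mask over the full snapshot
pos_subset = allpos[mask]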