This is the GIMIC simulation with dark matter particles included. The volume shows the HI gas together with how the dark matter moves and is shaped.
I have been working on displaying the dark matter from the HDF5 data sets used in my astronomy work. Over the last couple of days, I got the loading from HDF5 working, along with the translation into my local coordinates. The volume is converted from point coordinates to voxel coordinates, then scaled and clamped to the bounding box size. This has to be done because the camera slowly zooms in over the course of the animation, and some of the gas / dark matter can leave the visible volume.
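The point-to-voxel step can be sketched roughly like this (the function name and the min/max-corner representation of the bounding box are my own assumptions; the particle coordinates themselves would come out of the HDF5 file, e.g. via h5py):

```python
import numpy as np

def points_to_voxels(points, box_min, box_max, grid_size):
    """Map particle coordinates into voxel indices, clamped to the grid.

    points:    (N, 3) array of particle positions
    box_min:   lower corner of the (possibly zoomed-in) bounding box
    box_max:   upper corner of the bounding box
    grid_size: number of voxels along each axis
    """
    points = np.asarray(points, dtype=float)
    box_min = np.asarray(box_min, dtype=float)
    extent = np.asarray(box_max, dtype=float) - box_min
    # Scale positions into [0, grid_size) voxel coordinates.
    voxels = (points - box_min) / extent * grid_size
    # Clamp, so particles drifting outside the zoomed box stick to the edge
    # instead of indexing outside the volume.
    return np.clip(voxels.astype(int), 0, grid_size - 1)
```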
The first result was encouraging, but wasn’t what I expected. In the following image, the green points are the dark matter volume displayed in Drishti.
The problem was that Drishti handles dynamic range poorly. Its internal range of values is limited to an unsigned char (0-255). While this keeps memory usage down, I wish there were an option to use short int, or even float: in the image above, all the dark matter points are physically there, but the dynamic range is too high for them to display accurately. This may have implications for how my other volumes are being displayed as well. I realised that dark matter in this simulation doesn’t have a mass associated with it, and doesn’t share mass between voxels either. To check that the points were actually there, I fired up stereo2 and took an image of the raw points.
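A quick way to see why the faint voxels vanish: linearly rescaling a high-dynamic-range volume to an unsigned char quantises everything below roughly 1/255 of the maximum to zero. A minimal illustration (the density values are made up):

```python
import numpy as np

# Densities spanning several orders of magnitude, as in the dark matter volume.
density = np.array([1e-4, 1e-2, 1.0, 100.0])

# Linear scaling to unsigned char: only the brightest voxels survive.
linear = (density / density.max() * 255).astype(np.uint8)
# The two faintest voxels quantise to 0 and disappear from the render.
```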
So the points are there; my code and Drishti just weren’t displaying them properly. I fixed the dynamic range issue by setting any voxel that contains a dark matter point to 1. The end result is shown below, and is what I expected. Hopefully I can get some awesome movies going with the dark matter as well.
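The fix amounts to a binary occupancy volume. A sketch (the function name is mine, and the voxel indices are assumed to be precomputed as above):

```python
import numpy as np

def mark_occupied(volume_shape, voxel_indices):
    """Binary occupancy volume: any voxel containing a particle is set to 1.

    voxel_indices: (N, 3) integer array of voxel coordinates.
    """
    vol = np.zeros(volume_shape, dtype=np.uint8)
    ix, iy, iz = voxel_indices.T
    vol[ix, iy, iz] = 1  # occupied voxels all get the same full value
    return vol
```

Since every occupied voxel gets the same value, the dynamic range problem disappears entirely.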
Just thought I should mention that the SVN repository of my astronomy work is now live at http://owls-h1.sourceforge.net
For now, here is an image showing the 2D occlusion smoothing of the OWLS volume. This method helps to identify large scale structures within the volume.
The following reports are both about the visualisation pipeline that I developed when working for ICRAR (International Centre for Radio Astronomy Research).
The first report was my final report for my internship with iVec. It describes the process I went through designing the algorithm: reading from HDF5 files, the file structures, the Smoothed Particle Hydrodynamics code, saving data to volumetric files, 2D occlusion, and periodic boundary conditions. It also discusses alternative visualisation methods such as cube maps, fisheye and spherical projections for use on curved screens (domes).
Download the pdf here – Visualising Galaxy Simulations
The second report was completed as the final project for my degree. It describes additions to the GIMIC Visualisation Pipeline, including exporting to the native file formats of Drishti and POVray, rendering images in parallel, tracking code, stereoscopic imaging, RGB volumes, cross-platform support, and parallel vs distributed benchmarking. This was my first report written in LaTeX, and I think it gives a much more professional look than the first report.
Download the pdf here – Extending the Capabilities of the GIMIC Visualisation Pipeline
Just a quick update: the video in this post shows some of the work I have been doing for the International Centre for Radio Astronomy Research. The video consists of 951 frames; each frame is a 3D volume calculated from the GIMIC data set. The volumes are then rendered from Drishti with a rainbow colour table.
From there, the 951 images are overlaid with the colour table and text in each corner using ImageMagick. The z value shows the redshift for that frame, and the L value is the bounding box length in one direction (all sides of the cube are the same length). The simulation zooms in slowly on the dataset as the video progresses. Each frame takes about 40 minutes to calculate; frames are calculated in parallel on the Cognac supercomputer, or the Nereus Cloud platform.
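I used ImageMagick for this step, but the same per-frame compositing can be sketched in Python with PIL (the function name, label positions, and formatting are my own; units on L are omitted since they aren't stated here):

```python
from PIL import Image, ImageDraw

def annotate_frame(frame, colour_bar, redshift, box_length):
    """Paste the colour bar and write redshift / box-length labels on a frame."""
    out = frame.copy()
    # Colour bar pasted into the bottom-right corner.
    out.paste(colour_bar, (out.width - colour_bar.width,
                           out.height - colour_bar.height))
    draw = ImageDraw.Draw(out)
    # Redshift and box length labels in the top-left corner.
    draw.text((10, 10), "z = %.2f" % redshift, fill=(255, 255, 255))
    draw.text((10, 30), "L = %.1f" % box_length, fill=(255, 255, 255))
    return out
```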
The images are then combined into a movie with ffmpeg using the following command:
ffmpeg -f image2 -i c_hidensity%d.png -sameq -r 25 -s 1024x1024 output_video.avi
Here is the final output as a video:
Also, here is a spherical projection of the OWLS dataset rendered with POVray.
I often need to transform cube-face images into a cube map or skybox, so I have written this Python program to create them quickly and easily.
- Python 2.6 or higher
- Python Imaging Library (PIL)
- Images need to be the same size and square to avoid problems
Images need to be named with the following convention:
- f – front
- r – right
- l – left
- t – top
- b – back / behind
- d – down
For example – when your image files are named f_image.png, r_image.png, l_image.png, and so on, the proper command would be
python image2skybox.py %c_image.png
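For reference, the core of the stitching can be sketched like this with PIL (the 4x3 horizontal-cross layout is an assumption on my part; the program's actual output layout may differ):

```python
from PIL import Image

# Cell positions for each face in a 4x3 horizontal-cross skybox sheet
# (letters follow the naming convention above).
LAYOUT = {"t": (1, 0), "l": (0, 1), "f": (1, 1),
          "r": (2, 1), "b": (3, 1), "d": (1, 2)}

def assemble_skybox(faces):
    """faces: dict mapping face letter -> square PIL.Image, all the same size."""
    size = next(iter(faces.values())).width
    sheet = Image.new("RGB", (4 * size, 3 * size))
    for letter, (cx, cy) in LAYOUT.items():
        # Paste each face into its cell of the cross layout.
        sheet.paste(faces[letter], (cx * size, cy * size))
    return sheet
```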
Here is an example output of this program:
I am currently working as an iVec intern on the Visualising Galaxy Simulations project. The project involves taking cosmological simulation data stored in .hdf5 files and converting particle position, mass, and HI mass data into a volumetric 3D density representation of HI hydrogen gas. I am using Smoothed Particle Hydrodynamics to calculate the density contributions of nearby points.
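The SPH density estimate can be sketched as a kernel-weighted sum over nearby particles. A minimal version (this uses the standard cubic spline kernel with support radius h; the kernel, smoothing lengths, and neighbour search in the actual pipeline may differ):

```python
import numpy as np

def cubic_spline_w(r, h):
    """3D cubic spline SPH kernel W(r, h), zero beyond r = h."""
    q = r / h
    sigma = 8.0 / (np.pi * h**3)  # normalisation so W integrates to 1
    w = np.where(q < 0.5,
                 1.0 - 6.0 * q**2 + 6.0 * q**3,
                 np.where(q < 1.0, 2.0 * (1.0 - q)**3, 0.0))
    return sigma * w

def density_at(point, positions, masses, h):
    """Sum the kernel-weighted mass contributions of nearby particles."""
    r = np.linalg.norm(positions - point, axis=1)
    return np.sum(masses * cubic_spline_w(r, h))
```

Evaluating density_at at every voxel centre (against the particles within h of it) fills the volumetric grid.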
Here are some of the first renders using an open source rendering package called Drishti.
I am using the Cognac supercomputer at the moment to create 951 volumes that I will be turning into an animation. I will post more as this is completed.
That’s all for now,