Cygnus Loop – Progression

On 24 May 2022, I set out to capture one of the most intriguing targets (in my eyes): the Cygnus Loop, a large supernova remnant made up of delicate filamentary nebulae. Located some 1,500 light years from Earth, it includes the Eastern Veil Nebula (NGC 6992), Pickering’s Triangle (NGC 6979), and the Western Veil Nebula (NGC 6960). Because of its faint, intricate structure, this target allowed me to test the impact of integration time on the processing potential of the resulting image. In short, I have learned that the longer the total integration time, that is, the more data collected, the greater the potential for bringing out finer details and reducing noise in the final image.
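As a rough rule of thumb (a general shot-noise argument, not something I measured from my own data), the signal-to-noise ratio of a stack grows with the square root of the total integration time:

$$
\mathrm{SNR} \propto \sqrt{t_{\mathrm{total}}}
$$

So doubling the data does not double the quality, but every extra session keeps paying off, which is exactly what the progression below shows.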

Amidst many frustrating nights of clouds, I had the opportunity to capture data on 24 May, 25 May, and 1 June 2022. I started by capturing 50 minutes on 24 May, added 2.5 hours on 25 May, and concluded with another 2 hours on 1 June, for a total integration time of about 5.5 hours. The progression is demonstrated in the images below, and you can clearly see the benefit of adding integration time: each step reveals more of the filamentary structure. I am happy with the results, and I will definitely adopt the habit of capturing and adding data across sessions to get better results. Stay tuned!
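For the bookkeeping-minded, here is a small Python sketch of that arithmetic, assuming every frame is one of the 5-minute subs mentioned in the setup line below and that the stack is shot-noise limited (the session lengths are the ones given above):

```python
# Session lengths from the three nights, in minutes of total exposure.
sessions_min = {"24 May": 50, "25 May": 150, "1 June": 120}

total_min = sum(sessions_min.values())                              # 320 minutes in total
subs = {night: mins // 5 for night, mins in sessions_min.items()}   # 5-minute subs per night

# Under the shot-noise assumption, SNR grows with the square root of
# integration time, so the full stack is roughly sqrt(320/50) ~ 2.5x
# cleaner than the first 50-minute session on its own.
snr_gain = (total_min / sessions_min["24 May"]) ** 0.5

print(f"Subs per night: {subs}")
print(f"Total integration: {total_min} minutes ({sum(subs.values())} x 5-minute subs)")
print(f"Approximate SNR gain over night one: {snr_gain:.1f}x")
```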

All images taken with Altair Hypercam 26c, Evostar 72ED APO, Askar 0.8 reducer, Altair Quadband filter, on EXOS 2GT mount. 5-minute exposures, Gain 200, Offset 3, TEC @ 10 degrees.


4 Comments

  1. A really good wide-angle image of the Veil, especially considering its huge diameter of about 180′.

    For the final image, did you combine one processed image per night or did you restack all sub frames?

    1. Very much appreciated. It is quite a target indeed, and I was fascinated to capture the whole thing in a single frame. For the final image I restacked all the subs using Astro Pixel Processor.

  2. Totally amazing! You can really see the difference. Let me see if I got this right: you basically combined three separate viewing sessions into one composite photo. What I find amazing is how you were able to “register” the data obtained from each viewing session, in other words, how you got all three sessions perfectly aligned.
    I am behind the times, I guess; it’s built into either the software or the guiding capabilities of your scope/camera.
    Anyhow, truly magnificent! 👍

    1. You’re on point, Eric. I aligned images from 3 different sessions. The software these days is so sophisticated that it looks for the same stars in each image and aligns them together, as sketched below. Quite amazing how it’s done.
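For anyone curious what that star matching looks like in code, here is a minimal sketch using the open-source Python library astroalign, which registers one frame onto another by matching triangles of detected stars. This is an illustration of the general technique, not necessarily what Astro Pixel Processor does internally, and the filenames are placeholders.

```python
import numpy as np
import astroalign as aa
from astropy.io import fits

# Load one calibrated sub from two different sessions (placeholder filenames).
night1 = fits.getdata("veil_24may_sub01.fits").astype(np.float32)
night2 = fits.getdata("veil_01june_sub01.fits").astype(np.float32)

# astroalign detects stars in both frames, matches triangle patterns between
# them, and solves for the transform that maps night2 onto night1's pixel grid.
registered, footprint = aa.register(night2, night1)

# The fitted transform (rotation, scale, shift) is also available directly.
transform, (matched_src, matched_dst) = aa.find_transform(night2, night1)
print(f"Rotation: {np.rad2deg(transform.rotation):.2f} deg, "
      f"shift: {transform.translation}")

# 'registered' is now pixel-aligned with night1 and can be stacked with it.
```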
