In the previous entry http://www.astrocmos.com/2016/10/cmos-czy-nie-cmos/ I presented an introduction to the topic, and the flow ended with the readout process. The next difference is the analog-to-digital converter. The converter itself is pretty much the same, but the resolution differs. Most CCD astro cameras have a 16 bit ADC. In most amateur CCD cameras that is actually a bit of an overkill, because their dynamic range hardly reaches even 12 bits. CMOS sensors are another story – their ADCs have 12 or 14 bit resolution, while the sensor dynamic range is often over 12 bits (thanks to very low readout noise). 12 bit resolution does not look impressive – why are they doing this to us? Well, CMOS originated in consumer electronics cameras, and apparently the manufacturers decided it is enough. We probably need to live with it, because the amateur astro market is tiny and it will be hard to convince manufacturers to upgrade their production lines with 16 bit converters. What we get in return is incredible conversion speed. Converters in CMOS sensors can feed us many frames per second and are usually limited only by the interface (like USB). For a CCD the transfer speed is usually many seconds per frame 🙂
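A rough back-of-the-envelope check of the dynamic range claim: dynamic range in bits is just log2(full well / read noise). The full-well and read-noise figures below are made-up but typical values, not measurements of any particular camera:

```python
import math

def dynamic_range_bits(full_well_e, read_noise_e):
    """Dynamic range expressed in bits: log2(full well / read noise)."""
    return math.log2(full_well_e / read_noise_e)

# Hypothetical amateur CCD: 25500 e- full well, 8 e- read noise
print(round(dynamic_range_bits(25500, 8), 1))   # ~11.6 bits -> fits in 12 bits
# Hypothetical CMOS: 20000 e- full well, 1.6 e- read noise
print(round(dynamic_range_bits(20000, 1.6), 1)) # ~13.6 bits -> exceeds a 12 bit ADC
```

So with these assumed numbers a 16 bit CCD converter carries several empty bits, while a low-read-noise CMOS sensor can actually outrun its own 12 bit converter.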
Andromeda triplet M31, M32, M110. QHY163M with 130mm refractor on EQ6 unguided. 180x10s stack.
But is that 12 bits resolution so bad?
Not really. If we consider a single frame, we have 12 bits and that's it. We can increase the camera gain setting to reach unity gain, so we will not lose resolution. But we can also effectively increase the real resolution by stacking many frames. There are two scenarios to consider. First, when the gain setting is low (close to zero), oversampling starts to work for us. As long as there is noise in the signal (and we have plenty of it, mostly photon noise), increasing the number of samples (a sample is just a single subexposure in our case) and stacking them also increases resolution. Cool, isn’t it? The other case is when the gain is set to unity gain or higher. Then we actually have the required resolution (every electron converts to at least one converter ADU) and what we need to fight for is pixel capacity, because higher gain lowers the pixel well depth. Again, shortening the subexposures and stacking many of them is the answer.
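The oversampling effect can be sketched with a tiny simulation (the signal level, noise sigma and 12 bit range below are arbitrary assumptions, not data from any real camera). Each quantized sub reports only whole ADUs, but because the noise dithers the signal across the converter steps, the average of many subs recovers the fractional level:

```python
import random

random.seed(0)

TRUE_SIGNAL = 100.37   # assumed true level in ADU, between two converter steps
NOISE_SIGMA = 2.0      # assumed noise (photon + read) of a few ADU acts as dither

def quantize(x):
    """A 12 bit converter: round to whole ADU, clip to the 0..4095 range."""
    return max(0, min(4095, round(x)))

def stacked_estimate(n_subs):
    """Average of n_subs quantized 'subexposures' of the same noisy signal."""
    subs = [quantize(random.gauss(TRUE_SIGNAL, NOISE_SIGMA)) for _ in range(n_subs)]
    return sum(subs) / n_subs

print(stacked_estimate(1))      # a single sub: some whole number near 100
print(stacked_estimate(10000))  # a deep stack: drifts close to 100.37
```

One sub can only ever say "100" or "101"; the stack of thousands converges on a value between them, which is exactly the extra resolution the stacking buys us. (Without the noise the trick fails: every sub would quantize to the same ADU and averaging would gain nothing.)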
In both scenarios we benefit from the very low readout noise of CMOS sensors. I would not recommend short subexposures for CCD sensors, but CMOS is a different story, and if we got used to the CCD workflow we need to adjust it a little for CMOS sensors. This is not a revolution at all, just a different approach that uses the CMOS advantages to fight down its own drawbacks. We also need to get familiar with the gain and offset settings, but that is a story for another part.