If the NLE does the scaling after converting YUV to RGB, it's still 8.67 bits of information per pixel at best (no new information is added during the transformation). This is why we won't see a significant improvement in color depth. Here's a challenge: create an example that shows the '10-bit' effect is significant. I agree it's there, but at 8.67 actual bits it will be hard to see in most cases.

I feel I should also add something of a warning for people who might think you can get great 10-bit from 4K 8-bit. There are areas of quantization in your source 4K (large blocks of pixels that are forced to the same color values) that can be much larger than the pixel matrix you are subsampling from. No matter what algorithm you use, the blocks remain visible; in effect, you are smoothing just the contrast borders of those quantized blocks, and if you later push the gamma in color correction they will reappear. This is not the equivalent of re-sampling the light at 2K 10-bit 4:4:4 natively, but I commend the effort and am sure it will be useful.

Maybe I'm completely wrong, but the math makes sense to me. Let's look at a really simple example: converting a 2x2-pixel video with a 2-bit luma channel down to a single pixel. 4K is roughly four times the resolution of 1080p, so this example is the same kind of down-conversion. With 2-bit luma, each pixel can have a value of 0, 1, 2, or 3. Say our 2x2 video ends up with luma values of 2, 2, 2, and 1. Averaging those values for our single down-converted pixel gives a luma of 1.75, a value that can't be represented by our 2-bit luma. The averaged, down-converted pixel can now take values of 0, 0.25, 0.5, 0.75, 1, 1.25, and so on: 16 different luma values, a 4-bit value. By averaging the luma values, we've definitely gone from 2-bit luma (4 possible values) to 4-bit luma (16 values).
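The 2x2 averaging example above can be checked with a few lines of Python. This is only a sketch of the arithmetic; note that because the maximum average is still 3, there are actually 13 distinct quarter-step levels rather than a full 16, but the step size does match a 4-bit code.

```python
from itertools import product

# A 2x2 block of 2-bit luma samples (each value is 0, 1, 2, or 3).
block = [2, 2, 2, 1]

# Averaging the four samples gives 1.75, which 2-bit luma cannot store.
avg = sum(block) / 4
print(avg)  # 1.75

# Every possible average of four 2-bit values lands on a quarter step,
# i.e. two extra bits of precision (the granularity of a 4-bit code).
averages = sorted({sum(p) / 4 for p in product(range(4), repeat=4)})
print(averages[:6])   # [0.0, 0.25, 0.5, 0.75, 1.0, 1.25]
print(len(averages))  # 13 distinct levels (0 to 3 in steps of 0.25)
```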
I don't want to rain on the parade here (I championed post-sharpening for the 5D3 when folks didn't believe it would work), but the math doesn't actually predict much of an improvement for this process. First, we're not going to get a 10-bit 444 file; we'll get, at best, an 8.67-bit 444 file. Regardless of credentials and titles and who said what, it's not possible to get 10-bit 444 from 8-bit 420 (unless perhaps the 420 is done internally). Here again is what I wrote in the other thread (I'm a filmmaker, image-processing app developer, and game developer; I'm not a mathematician either :) ):

10 bits is 2 more bits than 8, and 2^2 = 4, so adding four 8-bit values together gives us a 10-bit result. If we add up all the values and then divide by 4, averaging the result, we'll still see the extra information in the fraction. So if we average the values together in floating point, we've achieved the 'effect'. AVCHD (H.264) is YUV 420. This is effectively what an NLE will do when rescaling, so we don't need any special extra processing for an NLE that works in 32-bit float. If we scale 4K YUV 420 to 2K YUV 444, only Y is at full resolution, and only Y will get the benefit of the four-value summing and the additional fractional bit depth. So best case, we have 10 bits of information in Y and 8 bits for U and V. Luminance is more important than chrominance, so that's not so bad.
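The arithmetic behind the 10-bit claim and the 8.67-bit figure can be sketched in plain Python (no external libraries; the 8.67 figure is just the per-channel average of 10, 8, and 8 bits, as argued above):

```python
# Summing four 8-bit samples (0..255) gives a 0..1020 range: 10 bits.
worst_case = sum([255, 255, 255, 255])
print(worst_case, worst_case.bit_length())  # 1020 10

# Averaging in floating point keeps the extra information in the fraction.
block = [200, 201, 201, 201]
print(sum(block) / 4)  # 200.75

# Only Y gains the two extra bits when scaling 4K 420 to 2K 444;
# U and V stay at 8 bits, so the per-channel average is the 8.67 figure.
effective_bits = (10 + 8 + 8) / 3
print(round(effective_bits, 2))  # 8.67
```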