I am reconstructing real SPECT/CT DICOM data. Why is the maximum value of the reconstructed image very small (less than about 50)? This is very different from the image reconstructed by the commercial workstation, whose maximum count is about 1000. Is this a unit problem? What units does this package reconstruct in, or is there some other reason? Thanks to the pytomography team for their work; this is the best open source SPECT/CT reconstruction library I have ever used.
I think this is caused by the way the data is scaled when saving the DICOM dataset.
In io/SPECT/dicom.py -> save_dcm, a scaling factor is calculated from the maximum value in the pixel array, and the array is then multiplied by this factor:
scale_factor = (2**16 - 1) / pixel_data.max()
pixel_data *= scale_factor  # maximum dynamic range
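As an aside: if the inverse of this factor is written to the saved file (for example in the RescaleSlope tag, the usual DICOM convention for this kind of scaling; I have not checked whether save_dcm does this), the original values can be recovered on load. A minimal sketch with pydicom, using a hypothetical file name:

import pydicom

# Recover pre-scaling values, assuming the inverse scale factor was
# stored as RescaleSlope (check save_dcm to confirm this assumption).
ds = pydicom.dcmread("reconstruction.dcm")  # hypothetical file name
slope = float(getattr(ds, "RescaleSlope", 1.0))
intercept = float(getattr(ds, "RescaleIntercept", 0.0))
true_values = ds.pixel_array * slope + intercept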
The problem is that the maximum is often a long way from the mean: in the example from the tutorial, the mean is 0.12 and the maximum is 149.91. A histogram of the scaled data shows that the scaling has only a minor effect; nearly all voxels still land in the lowest bin:
np.histogram(img, bins=5)
(array([16321, 41, 14, 5, 3]), array([ 0., 13107., 26214., 39321., 52428., 65535.]))
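Plugging in the numbers from the tutorial example makes the skew concrete (a quick sanity check using the values quoted above):

# With max = 149.91, the scale factor is (2**16 - 1) / 149.91, about 437.
scale_factor = (2**16 - 1) / 149.91
# A voxel near the mean (0.12) is scaled to about 52 out of 65535, which
# is why almost every voxel lands in the first histogram bin.
typical_scaled = 0.12 * scale_factor
print(round(scale_factor), round(typical_scaled))  # 437 52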
I suspect the vendors either scale the data before doing the reconstruction or clamp the output values so that the scaling factor is higher.
I tested clamping the maximum value to the upper edge of the 4th histogram bin, and the output does look “brighter”:
hist = np.histogram(pixel_data, bins=5)
clamp = hist[1][4]  # upper edge of the 4th bin
pixel_data = np.clip(pixel_data, 0, clamp)  # clamp the outlier voxels
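An equivalent clamp that does not depend on the bin count would be a percentile clip. This is just a sketch of the idea, continuing from the snippet above; the 99.9 threshold is an arbitrary choice and untested:

import numpy as np

# Percentile-based variant of the clamp above (a suggestion, untested):
# clip at the 99.9th percentile so a handful of hot voxels no longer
# dominate the scale factor.
clamp = np.percentile(pixel_data, 99.9)
pixel_data = np.clip(pixel_data, 0, clamp)
scale_factor = (2**16 - 1) / pixel_data.max()  # higher than before clipping
pixel_data *= scale_factor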
Next, I will test this on some clinical data.
Mark
Thank you very much for your reply; I'll study this part of the code again. In addition, the official tutorial (https://pytomography.readthedocs.io/en/latest/notebooks/t_dicomdata.html) mentions:
"
Note: Some vendor reconstructions (i) multiply by the total number of projections and/or (ii) normalize to counts/second when they save their SPECT reconstructions. Make sure to multiply reconstructed_object
by these values before you save if you want to do such comparison
"
Why does it need to be multiplied by the total number of projections? Is this related to the fact that the values in my reconstructed image are too small?
@json I think the reason they do this is to slightly modify what they mean by “counts”. My definition of counts would be the expected counts emitted by that voxel that would be detected per projection, whereas the vendor might define “counts” as the expected counts emitted by that voxel that would be detected summed across all projections (i.e., the total number of contributions across the whole scan). This is the issue with the units of “counts”: the interpretation is ambiguous.
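So if you want your numbers to line up with the vendor's, you would rescale before saving, along the lines of the tutorial note. A sketch only; n_projections and frame_duration_s are placeholders you would read from your own projection data:

# Sketch of the rescaling described in the tutorial note. The two values
# below are placeholders; read the real ones from your projection DICOM.
n_projections = 96        # number of acquired projection angles (example)
frame_duration_s = 15.0   # acquisition time per projection frame (example)

# (i) counts summed across all projections (the vendor-style definition)
reconstructed_object = reconstructed_object * n_projections
# (ii) optionally normalize to counts per second
reconstructed_object = reconstructed_object / frame_duration_s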