
What technique is used to determine the energies of the radionuclide standards used in energy sensor calibration?

Physics Asked on April 1, 2021

When using scintillator or germanium energy sensors, certain radionuclides emitting gamma lines of well-defined energy are used to calibrate the sensors. What technique or method is used to determine the energies of these standards?

One Answer

If you really want a historical answer, I don't know for sure. However, these detectors are highly linear, so all you really need is two calibration points for a particular detector at a particular high-voltage setting. For example, if you have a beta+ source and a NaI detector (a fairly old technology), the detector's spectrum will show a photopeak at the known energy of 511 keV, plus a Compton edge whose energy is also known. For a cleaner calibration, you could build the detector with a cavity inside and insert the source into the cavity. Then you get events in which both annihilation gammas are detected, producing a second Gaussian peak at 2 × 511 keV = 1022 keV.
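As a minimal sketch of that two-point calibration (in Python, with invented channel centroids for the 511 keV photopeak and the 1022 keV sum peak), a linear fit fixes the channel-to-energy map:

```python
import numpy as np

# Hypothetical peak centroids (ADC channels) from a Na-22 spectrum taken
# with a NaI detector: the 511 keV annihilation photopeak and the
# 1022 keV coincidence-sum peak. Channel values are invented.
channels = np.array([182.0, 364.5])
energies_keV = np.array([511.0, 1022.0])

# Two points fix the linear calibration E = gain * channel + offset.
gain, offset = np.polyfit(channels, energies_keV, deg=1)

def channel_to_energy(ch):
    """Map an ADC channel number to energy in keV."""
    return gain * ch + offset

print(f"gain = {gain:.3f} keV/ch, offset = {offset:.2f} keV")
print(f"peak at channel 250 -> {channel_to_energy(250.0):.1f} keV")
```

Once the gain and offset are fixed, every channel in the recorded spectrum gets an energy assignment.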

If you're concerned about the linearity of the detector, you can study a source whose decay scheme includes a crossover gamma whose energy is the sum of two cascade gammas, $E_1 = E_2 + E_3$. These days it's common with HPGe detectors to do a second-order polynomial fit against a set of known sources. But in the early days, when people needed to get started with known standards, I suspect the nonlinearity was negligible compared to the lousy energy resolution of NaI.
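Here is a sketch of such a second-order polynomial fit, using NumPy and the well-tabulated Eu-152 gamma lines as the known source; all channel centroids below are made up, and the final sum check uses hypothetical cascade channels:

```python
import numpy as np

# Hypothetical peak centroids (ADC channels) paired with well-known
# Eu-152 gamma-line energies (keV), as one might use for an HPGe
# calibration. The channel values are invented for illustration.
channels = np.array([243.9, 489.6, 688.7, 1557.9, 1928.2, 2816.3])
energies_keV = np.array([121.78, 244.70, 344.28, 778.90, 964.06, 1408.01])

# Second-order polynomial calibration E(ch) = a*ch^2 + b*ch + c;
# np.polyfit returns coefficients from highest power to lowest.
coeffs = np.polyfit(channels, energies_keV, deg=2)
calib = np.poly1d(coeffs)

# Residuals at the known lines show how well the fit captures any
# remaining nonlinearity.
residuals = energies_keV - calib(channels)
print("coefficients (a, b, c):", coeffs)
print("residuals (keV):", np.round(residuals, 3))

# Internal consistency check on a cascade: a crossover transition at
# E1 should equal the sum of the two cascade gammas, E1 = E2 + E3.
# The three channel values here are hypothetical.
E2, E3, E1 = calib(688.7), calib(1557.9), calib(2246.4)
print(f"E2 + E3 = {E2 + E3:.2f} keV vs E1 = {E1:.2f} keV")
```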

At the low-energy end, you may also be able to compare against crystal-diffraction measurements, or against theoretically calculable K-shell X-ray energies. I don't know.

Answered by user289749 on April 1, 2021
