Monday 11 January 2010

Basics of qNMR: Integration Rudiments (Part II)

My last post was a basic survey of different measurement strategies for peak areas. Manual methods, such as counting squares or cutting and weighing, known as ‘boundary methods’, were introduced for historical reasons. These methods were first used by engineers, cartographers, etc., and then quickly adopted by spectroscopists and chromatographers.

In the digital era, the most common peak area measurement involves calculating the running sum of all points within the peak boundaries, or applying some other quadrature method (e.g. trapezoid, Simpson, etc. [1]). Obviously, the digital resolution, i.e. the number of discrete points that define a peak, is a very important factor in minimizing the integration error. Intuitively, it’s easy to understand that the higher the number of acquired data points, the lower the integration error. It’s therefore very important to avoid under-digitization when an FID is acquired, a problem which is unfortunately more common than many chemists realize.
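As a small sketch of this idea (all values below are illustrative, not taken from any real data set), the running sum and the trapezoid rule can be compared on a synthetic Lorentzian peak at two different digital resolutions:

```python
import numpy as np

# Synthetic Lorentzian peak: FWHM 0.4 Hz, unit height (illustrative values)
w, H = 0.4, 1.0
true_area = np.pi * H * w / 2          # analytical area over (-inf, +inf)

for n_points in (256, 8192):           # coarse vs fine digitization
    x = np.linspace(-50.0, 50.0, n_points)
    y = H / (1.0 + (2.0 * x / w) ** 2)
    dx = x[1] - x[0]
    running_sum = y.sum() * dx                          # rectangle rule
    trapezoid = ((y[1:] + y[:-1]) / 2.0 * dx).sum()     # trapezoid rule
    print(n_points, running_sum / true_area, trapezoid / true_area)
```

With the coarse grid the point spacing is comparable to the line width and the measured area depends on where the samples happen to fall; with the fine grid both quadratures converge to within the small error left by truncating the Lorentzian tails at the window edges.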

As described by F. Malz and H. Jancke [2], at least five data points must appear above the half width for each resonance for a precise and reliable subsequent integration. What does this mean in practical terms? Typically, acquisition parameters are defined according to the Nyquist condition: the spectral width (SW) and the number of data points (N, total number of complex points) determine the total acquisition time AQ:

AQ = N/SW

And the digital resolution (DR) is the inverse of the acquisition time, the latter being the product of the dwell time (DW) and the number of acquired points (TD):

DR = SW/N = 1/(DW x TD) = 1/AQ

If we consider a typical 500 MHz 1H-NMR spectrum with a line width at half height of 0.4 Hz (a common manufacturer specification) and a spectral width of 10 ppm (5000 Hz), the minimum number of acquired data points required to satisfy the five-point rule is:
5 pt x 5000 Hz / 0.4 Hz = 62500 complex points.
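The arithmetic of this example can be checked in a few lines (the numbers are those of the 500 MHz example above, not universal requirements):

```python
import math

sw_hz = 5000.0      # spectral width: 10 ppm at 500 MHz
lw_hz = 0.4         # line width at half height
pts_per_lw = 5      # minimum points across the half width (Malz & Jancke)

n_min = pts_per_lw * sw_hz / lw_hz   # minimum number of complex points
aq = n_min / sw_hz                   # acquisition time AQ = N/SW
dr = 1.0 / aq                        # digital resolution in Hz/point

n_fft = 2 ** math.ceil(math.log2(n_min))   # next power of two for the FFT

print(n_min, aq, dr, n_fft)   # 62500.0, 12.5 s, 0.08 Hz/pt, 65536
```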

This number is not well suited to the FFT algorithm, which generally requires a length equal to a power of two. The usual remedy is to zero pad the FID up to the closest upper power of two, in this case 65536 (64K) complex points.

Furthermore, in order to get the most out of the acquired data points, zero filling once (adding as many zeros as acquired data points) has been shown [3] to incorporate information from the dispersive component into the absorptive component, and hence it is useful to zero fill at least once (which is exactly what Mnova does). For example, as S. Bourg and J. M. Nuzillard have shown [4], even though zero filling does not improve the spectral signal-to-noise ratio, it may increase the integral precision by a factor of up to 2^(1/2) when the time-domain noise is not correlated.
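The mechanics of zero filling can be sketched with a synthetic FID (the parameters below are made up for illustration: a single line at 100 Hz with a 0.4 Hz width in a 5000 Hz spectral window):

```python
import numpy as np

n = 4096
sw = 5000.0                               # spectral width in Hz
t = np.arange(n) / sw                     # dwell time = 1/SW
fid = np.exp(2j * np.pi * 100.0 * t - np.pi * 0.4 * t)  # decaying complex FID

spec = np.fft.fft(fid)              # no zero filling
spec_zf = np.fft.fft(fid, n=2 * n)  # zero filled once (n extra zeros appended)

# Zero filling once doubles the number of frequency-domain points,
# halving the spacing between them (SW/N)
print(len(spec), sw / len(spec))        # 4096 points, ~1.22 Hz/pt
print(len(spec_zf), sw / len(spec_zf))  # 8192 points, ~0.61 Hz/pt
```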

Regardless of the quadrature method chosen, all of them share the same systematic problem: in order to integrate one or several peaks it’s necessary to specify the integration limits. In qNMR assays, this is an evaluation parameter whose effect can be estimated using the theoretical line shape of an NMR signal. To a good approximation (assuming proper shimming), the shape of an NMR line can be expressed as a Lorentzian function:

L(x) = H / (1 + (2x/w)^2)

where w is the peak width at half height and H is its height value. When L(x) is integrated between +/- infinity, the total integrated area becomes:

A = (pi/2) x H x w

Obviously, it is unreasonable to integrate digitally from -infinity to +infinity, so an approximation must be made by choosing finite limits. This has been studied by Griffiths and Irving [5], who showed that for a maximum error of 1%, integration limits of 25 times the line width in both directions must be employed. If errors of less than 0.1% are desired, the integral width has to be +/- 76 times the peak width. For example, in a 500 MHz NMR spectrum with a peak width of 1 Hz, the integrated region should be 152 Hz (~0.30 ppm), as illustrated in the image below.


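For a Lorentzian, the fraction of the total area captured between +/- k line widths has the closed form (2/pi) x arctan(2k), which can be used to sanity-check limits like these (a sketch only; the exact percentages depend on how the limits are defined, and the simple formula gives figures of the same order as the published ones):

```python
import math

def coverage(k):
    """Fraction of a Lorentzian's total area within +/- k line widths
    of the centre: the integral of L(x) over [-k*w, k*w] divided by pi*H*w/2."""
    return (2.0 / math.pi) * math.atan(2.0 * k)

for k in (5, 25, 76):
    print(f"+/-{k} line widths -> {100 * (1 - coverage(k)):.2f}% of the area missed")
```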
But in general peaks are not so well separated; for example, when studying complex mixtures or impurities related to the main compound, wide integrals cannot be used. In general, integration by direct summation is not well suited to partially overlapping peaks.

For example, consider the simple case of peak overlap in which one peak of a double doublet falls within a triplet:


The theoretical relative integrals for the two multiplets should be 1:1. However, the area of the triplet calculated via the standard running sum method will be overestimated because of the contamination caused by one of the peaks of the double doublet, which in turn will be underestimated. This is illustrated in the figure below, where the green lines correspond to the triplet, the blue lines to the double doublet, and the red line is the actual spectrum (the sum of all individual peaks).


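This bias is easy to reproduce numerically. In the sketch below (all positions, couplings, widths and the integration window are made up for illustration), a triplet of total area 1 is integrated by direct summation over a window that also contains one line of an equal-area double doublet:

```python
import numpy as np

w = 1.0                               # line width in Hz (illustrative)
x = np.linspace(-40.0, 40.0, 200001)
dx = x[1] - x[0]

def line(x0, area):
    """Lorentzian line of a given area centred at x0."""
    h = 2.0 * area / (np.pi * w)      # height that yields the requested area
    return h / (1.0 + (2.0 * (x - x0) / w) ** 2)

# Triplet (1:2:1) centred at 0 Hz, total area 1
triplet = line(-8, 0.25) + line(0, 0.5) + line(8, 0.25)
# Double doublet (centre 22 Hz, J = 22 and 14 Hz), total area 1;
# its line at 4 Hz falls inside the triplet region
dd = line(4, 0.25) + line(18, 0.25) + line(26, 0.25) + line(40, 0.25)
spectrum = triplet + dd

# Direct summation over the triplet region picks up the intruding dd line
mask = (x >= -12) & (x <= 12)
measured = spectrum[mask].sum() * dx
print(measured)   # noticeably greater than the true triplet area of 1
```

The summed integral overshoots 1 by roughly the area of the intruding double-doublet line, and the double doublet is correspondingly undervalued.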
The question is: how to overcome this problem? The answer is, of course, Line Fitting (Deconvolution) which will be the subject of my next post.

References:

[1] Jeffrey C. Hoch and Alan S. Stern, NMR Data Processing, Wiley-Liss, New York (1996)

[2] F. Malz, H. Jancke, J. Pharm. Biomed. Anal. 38, 813-823 (2005)

[3] E. Bartholdi and R. R. Ernst, "Fourier spectroscopy and the causality principle", J. Magn. Reson. 11, 9-19 (1973)
doi:10.1016/0022-2364(73)90076-0

[4] S. Bourg, J. M. Nuzillard, "Influence of Noise on Peak Integrals Obtained by Direct Summation", J. Magn. Reson. 134, 184-188 (1998)
doi:10.1006/jmre.1998.1500

[5] Lee Griffiths and Alan M. Irving, "Assay by nuclear magnetic resonance spectroscopy: quantification limits", Analyst 123 (5), 1061–1068 (1998)

2 comments:

Anonymous said...

Regarding the requirement to integrate ca. 0.3 ppm for accurate integrals, I would like to add that this is, while theoretically surely correct, much less of an issue in practice than it sounds, provided some care is taken to cut all integrals the same. Because then the errors even out quite well.

Even cutting integrals by eye, an experienced experimenter who is aware of the principal problem should be able to reach ca. 1% precision using much less than a 0.3 ppm integration area.

The key here is really just to cut all peaks, including the standard peaks, with the same "narrowness".

Carlos Cobas said...

Thanks for your comment!
I agree that in practice the 0.3 ppm rule does not need to be strictly followed.
Cutting all integrals to the same extent will certainly help, but it can also be an issue if the line widths are very different.

The idea I wanted to show here is that there will be scenarios in which standard integration will not be an optimal method (for example, when one or several peaks overlap the region of interest to be integrated)

Cheers,
Carlos