Numerical simulation results will never give exactly the correct answer; there is always some numerical error. It is important to understand the sources of numerical error and the steps that can be taken to reduce the error to an acceptable level. Reducing the error often requires more simulation time and memory, so it is important to decide what level of error is acceptable for your application so that you can run your simulations as quickly as possible. This page provides a thorough method for convergence testing of results from an FDTD simulation, so that you can determine the possible sources of error in a simulation and quantify the level of convergence.

This topic uses a 2D Mie scattering example so that a highly accurate analytic result can be calculated using Mie theory. This will allow us to determine precisely the level of error from our FDTD simulations, but we will also consider how to estimate the level of error when a more accurate solution is not available.

It is often assumed in finite difference schemes such as FDTD that the error in a simulation result always diminishes with decreasing grid size. In reality, there are several sources of error that need to be considered:

If we are studying a mode that has evanescent fields, the PML will introduce errors if a non-negligible amount of the evanescent field comes in contact with the PML. This will, for example, reduce the quality factor of a resonance or shift the frequency of a resonance.

The PML always has some reflection which can be theoretically calculated and changes with incident angle. Any reflection from the PML can re-interfere with the source, leading to incorrect power normalization, or simply re-interfere with the true scattered fields of the structure as they would exist in an ideal, unbounded space. Increasing the number of PML layers can reduce the reflection.

When dx, dy, dz and dt are finite, the dispersion relation on the FDTD mesh is not identical to that of free space. The dispersion relation of FDTD can be calculated precisely for a given mesh size (see getnumericalpermittivity). In principle, this source of error can be reduced indefinitely by shrinking the spatial and temporal mesh toward zero. However, computers always use finite precision numbers (whether single, double or other) and there is always a limit to how far the mesh size can be reduced without introducing new sources of numerical error. Finally, many first-time users incorrectly assume that this is the major source of error in their simulations. Given the computational cost of using a small mesh, it is important to understand the actual contribution that this makes.
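For illustration, the numerical dispersion relation can be evaluated directly. The sketch below is a minimal Python example for the 1D FDTD case (real simulations involve the 2D/3D analogue of this relation); the wavelength and Courant factor are assumed example values, not settings from this simulation:

```python
import numpy as np

c = 299792458.0                  # speed of light in vacuum (m/s)
lam = 1e-6                       # assumed example wavelength: 1 um
omega = 2 * np.pi * c / lam      # angular frequency

def numerical_phase_velocity(ppw, courant=0.5):
    """Phase velocity on a 1D FDTD mesh with ppw points per wavelength.

    Solves the 1D numerical dispersion relation
    sin(omega*dt/2)/(c*dt) = sin(k*dx/2)/dx for the numerical wavenumber k.
    """
    dx = lam / ppw
    dt = courant * dx / c        # assumed Courant factor S = c*dt/dx
    k_num = (2 / dx) * np.arcsin((dx / (c * dt)) * np.sin(omega * dt / 2))
    return omega / k_num

# The numerical phase velocity is below c, and the error shrinks roughly
# quadratically as the mesh is refined.
for ppw in (10, 20, 40):
    print(ppw, numerical_phase_velocity(ppw) / c)
```

This illustrates why halving the mesh size reduces the grid-dispersion error by roughly a factor of four, and why that error can never be driven to exactly zero at a finite mesh size.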

When dx, dy, dz and dt are finite, it is not possible to resolve geometric features to arbitrary resolution. Lumerical's conformal mesh can make convergence much faster for many geometries and materials, however, there will always be a geometric error for a finite sized mesh. As discussed above, this error can be reduced by making the mesh size smaller, but eventually other numerical considerations due to the finite precision of the numbers used will limit how small dx, dy and dz can go.

In the FDTD method, it is not possible to use the dispersive refractive index as a function of wavelength (or frequency) directly. Instead, we must fit our dispersive material properties to models that can be solved efficiently in the time domain. Lumerical's multi-coefficient model makes it possible to obtain good fits even for highly dispersive materials, however, there is always error in the fit.
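As a simplified stand-in for this kind of fitting (not Lumerical's actual multi-coefficient model algorithm), the sketch below fits a basic Drude model to synthetic permittivity data with SciPy; all parameter values are illustrative, not actual gold parameters:

```python
import numpy as np
from scipy.optimize import least_squares

# Synthetic 'measured' permittivity generated from a simple Drude model;
# a real fit would target tabulated n,k data such as Johnson and Christy.
c = 299792458.0
w = 2 * np.pi * c / np.linspace(300e-9, 1000e-9, 50)   # angular frequencies

def drude(p, w):
    # p = (eps_inf, plasma freq / 1e16, collision rate / 1e14); parameters
    # are scaled so the optimizer works with O(1) quantities
    eps_inf, wp, g = p[0], p[1] * 1e16, p[2] * 1e14
    return eps_inf - wp**2 / (w**2 + 1j * g * w)

p_true = (9.0, 1.37, 1.0)          # illustrative 'true' parameters
eps_data = drude(p_true, w)

def residual(p):
    r = drude(p, w) - eps_data
    return np.concatenate([r.real, r.imag])   # stack real and imaginary parts

fit = least_squares(residual, x0=(8.0, 1.2, 0.8))
print(fit.x)   # compare against p_true
```

With real experimental data the fit residual never vanishes, which is the fit error discussed above; the multi-coefficient model reduces it by using more fitting terms than this single Drude pole.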

When dt is finite, the permittivity (the ratio D/E) is not exactly equal to the theoretical model used to fit the dispersive materials. This leads to an additional discrepancy between the material properties used in the simulation and the fitted dispersive model. Again, this discrepancy can be calculated precisely. Please see getnumericalpermittivity for more details.

Graded meshing introduces new sources of error. The graded regions can lead to small amounts of gain or loss, and the change in mesh size leads to a change in grid dispersion which results in small amounts of scattering. In addition, regions with different mesh sizes can support propagation of different frequencies of light which means that high frequency light can stay trapped in a region with a small mesh size and eventually lead to aliasing problems for downsampled Fourier transforms. Also, grading of the mesh can affect the PML performance and lead to more reflections. Despite all these small sources of error, the non-uniform mesh allows us to reduce other sources of error, namely the grid dispersion and the staircasing effects, in a highly computationally efficient way and the benefits well outweigh the costs of using it.

There are a variety of small errors associated with sources. For example, all sources will have a small reflection which gets worse further from the center frequency of the source. This problem is reduced as the mesh size decreases. In addition, some sources use a mode profile or a beam profile which is calculated only at the center frequency of the source. At other frequencies, the profile will not be exactly correct and this will lead to increased reflection for broadband simulations. These reflections are of course small and, when they cannot be ignored, can often be worked around to still perform broadband simulations. Please see Mode source - Broadband for an example of how to work around these issues.

Monitors by default will interpolate the fields to certain desired locations, leading to additional interpolation error. For advanced calculations, this feature can be disabled. Please see this link for an example.

Note: Spatial interpolation - NONE setting

Disabling the spatial interpolation is a very advanced feature. Only expert users that are very familiar with the FDTD method should consider using this feature. Most standard analysis functions (such as the transmission script function, the data visualizer, etc) assume that the spatial interpolation is enabled. They may not give the most accurate result when used to analyze such monitor data. All analysis must be done manually. See the advanced tab of frequency monitor.

Single precision numbers have approximately 7 decimal digits of precision. Finite-difference methods rely on taking spatial and temporal derivatives and performing integrals. If we oversample a wave, the differences between the fields at neighboring points become comparable to the round-off error, and we lose the ability to calculate accurate derivatives or integrals. This occurs when dx ~ lambda/1e7 for single precision and about lambda/1e16 for double precision calculations. In practical terms, this will almost never occur - for example, for 1000nm light, you would only encounter difficulties when dx ~ 1e-4nm, much smaller than any realistic FDTD simulation. FDTD uses an efficient combination of single and double precision numbers for different parts of the calculation, so the single precision limit above is a conservative estimate.
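The effect of finite precision on finite differences can be demonstrated with a short Python sketch; the wavelength and sample point are arbitrary illustrative values, not quantities from this simulation:

```python
import numpy as np

def central_diff_error(dx, dtype):
    """Relative error of a central-difference derivative of sin(2*pi*x)
    at x0, with all field values stored at the given precision."""
    x0 = 0.123                                    # arbitrary sample point (lambda = 1)
    x = np.array([x0 - dx, x0 + dx], dtype=dtype)
    phase = np.asarray(2 * np.pi * x, dtype=dtype)
    s = np.sin(phase)                             # 'field' samples at this precision
    deriv = (s[1] - s[0]) / (2 * dx)
    exact = 2 * np.pi * np.cos(2 * np.pi * x0)
    return float(abs(deriv - exact) / abs(exact))

# Well-resolved wave: both precisions give an accurate derivative.
print(central_diff_error(1e-3, np.float32), central_diff_error(1e-3, np.float64))
# Massively oversampled wave (dx ~ lambda/1e8): single precision breaks down
# because neighboring samples differ by less than the round-off error.
print(central_diff_error(1e-8, np.float32), central_diff_error(1e-8, np.float64))
```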

testing_convergence_analysis.lsf

nanowire_au_jc_theory_from_mcm_fit.txt

We will consider the extinction cross section, which is the sum of the scattering and absorption cross sections, of a gold nanowire in the wavelength range of 300nm to 1000nm. The nanowire diameter is 140nm. This is a convenient test case because we have an analytic solution. Also, the 2D nature of the problem makes convergence testing much faster.

The analytic result from Mie theory is contained in the nanowire_au_jc_theory_from_mcm_fit.txt file. The testing_convergence.fsp and testing_convergence_analysis.lsf simulation and script files can be used to generate the plots shown on this page.

One challenge with convergence testing is that you generally don't know the 'correct' answer. To understand the convergence, we want to vary certain parameters, for example, the mesh size dx, in N steps. Ideally we want to reduce dx until the results stop changing. To quantify this idea, at each step of a parameter we define the difference with the results of the previous step:

$$ \Delta \sigma(i) = \sqrt{ \frac{\int (\sigma_i -\sigma_{i-1})^2 d \lambda}{\int (\sigma_i)^2 d \lambda} }$$

where i = 1,...,N is the step for each parameter. Ideally, we want to determine the point where this quantity becomes close to 0. In reality, we may see that Δσ(i) becomes flat at some point, which means that the error is being dominated by another parameter but the current parameter still has an effect. For example, if our principal error is due to reflections from the PML, then we may see that Δσ becomes flat as we vary the distance to the PML. This is because the reflections from the PML are the principal cause of the error (which will introduce ripple into the spectrum) but the position of those ripples will constantly change as the distance to the PML varies.

To estimate the absolute error, it is useful to consider the quantity

$$ \Delta \sigma_N(i) = \sqrt{ \frac{\int (\sigma_i -\sigma_N)^2 d \lambda}{\int (\sigma_i)^2 d \lambda} }$$

which can also give us an estimate of the error at parameter value i if we assume that the result with parameter N is much closer to the exact solution than with parameter i. While this gives a good estimate of the absolute error when i << N, it obviously does not give a good estimate when i ~ N where the error will be significantly underestimated. Still, this is the quantity that will give us the best estimate of error in the absence of the exact solution.

To quantify the error (or difference) between different tests, we can use the following definition

$$ \Delta \sigma_{1 - 2}(i) = \sqrt{ \frac{\int (\sigma_1 - \sigma_2)^2 d \lambda}{\int 0.5 (\sigma_1^2 + \sigma_2^2) d \lambda} }$$

For our Mie scattering test case with a known solution, we will define the exact error as

$$ \Delta \sigma_{theory}(i) = \sqrt{ \frac{\int (\sigma(i) - \sigma_{theory})^2 d \lambda}{\int (\sigma_{theory})^2 d \lambda} } $$

which allows us to look at the convergence in absolute terms. This will also allow us to test our estimate of the absolute error without knowing the exact result, as defined above. If the estimate proves reliable, we can apply these methods to problems where the exact solution is not known.
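All of these metrics share the same relative L2 form and are easy to compute from exported sweep data. A minimal Python sketch, using a synthetic spectrum as a stand-in for real sweep results (`lam`, `theory` and `sigmas` below are illustrative placeholders, not data from this example):

```python
import numpy as np

def trapezoid(y, x):
    """Trapezoidal-rule integral of y over x."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def rel_l2_diff(sigma_ref, sigma, lam):
    """Relative L2 difference of two spectra, normalized by sigma_ref
    (the Delta-sigma metrics defined above)."""
    num = trapezoid((sigma - sigma_ref) ** 2, lam)
    den = trapezoid(sigma_ref ** 2, lam)
    return np.sqrt(num / den)

# Synthetic stand-in for sweep results: spectra converging toward 'theory'.
lam = np.linspace(300e-9, 1000e-9, 200)
theory = 1 + np.exp(-((lam - 530e-9) / 30e-9) ** 2)
sigmas = [theory * (1 + 0.1 / 2**i) for i in range(6)]

step_diff = [rel_l2_diff(sigmas[i], sigmas[i - 1], lam)             # Delta-sigma(i)
             for i in range(1, len(sigmas))]
vs_finest = [rel_l2_diff(s, sigmas[-1], lam) for s in sigmas[:-1]]  # Delta-sigma_N(i)
vs_theory = [rel_l2_diff(theory, s, lam) for s in sigmas]           # Delta-sigma_theory(i)
print(step_diff, vs_finest, vs_theory)
```

Note the choice of normalization: the successive-difference and Δσ_N metrics normalize by the current spectrum σ_i (the first argument), while the theory metric normalizes by σ_theory, matching the definitions above.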

The sources of error listed above lead to typically small effects that can be ignored completely when the default settings are used. When you want to investigate the sources of error in detail, please consider what level of error you need and what is acceptable, because it will never be possible to completely eliminate numerical error. For example, it may be unnecessary to define your geometric features to a precision of 1nm (i.e. setting dx to 1nm) if you know that your manufacturing process can only achieve a 10nm accuracy. Similarly, if you know that your experimental error is on the order of 5%, it may not be necessary to achieve simulation results with an error of less than 1%.

As another example, consider the quality of Lumerical's multi-coefficient model (MCM) for gold. The figure below shows the MCM fit to the refractive index of gold provided by Johnson and Christy. On the same figure are the refractive indices of gold provided by the CRC handbook and by Palik's handbook. It is clear that the MCM model is not a perfect fit for gold, which is highly dispersive over this wavelength range. However, the difference between the MCM fit and Johnson and Christy is smaller than the difference between experimental results for gold from different authors! The optical properties of gold have been well studied, and it is a relatively easy metal to measure (compared to metals that oxidize much more quickly), so we can expect the differences in experimental results for other metals to be even larger. Before spending a great deal of time getting a better fit, it is worth considering if your fit is within the experimental error you have on your material data.

We want to investigate our sources of error in the most efficient way possible. This means that we will leave the tests that take the longest - for example reducing the mesh size - to the end. We will begin with investigations of quantities like the PML distance, which can be done on a coarse mesh quickly. In a 2D simulation like this one, all these steps can be completed quickly. When applied to 3D simulations, we may be more limited due to computational resource constraints in how much we can vary certain parameters.

We want to reduce the PML reflectivity and focus on the error that comes from having the evanescent mode fields overlap with the PML. To do this, we set the minimum number of PML layers to 12 in the Advanced Options of the FDTD simulation region. In addition, we set the mesh accuracy slider to 2 and we set the inner mesh size over the particle to 5nm by setting dx_max and dx_min in the mesh group object to 5nm. We use the default conformal mesh setting, which means that conformal meshing will not be applied in this case because we have a metal/dielectric interface. We use the parameter sweep object sweep_PML_distance to first perform 10 steps which will vary the simulation span from 0.5 to 5 microns. In this case, the nanowire radius is only 70nm, so the distance from the structure to the PML is approximately half the simulation span.

Looking at the above figures, we can estimate that our PML distance will contribute less than approximately 1e-3 (0.1%) by about 2 microns in simulation span (a PML distance of about 1 micron). If we get to the point of needing to reduce our error further than that, we will have to come back to this setting and increase the simulation span further.

We now set the simulation span at 2 microns, and leave the mesh accuracy settings the same as before. Now we want to sweep the PML layers setting. We will sweep 5 points between 2-32 layers. The parameter sweep object sweep_PML_layers will perform this calculation. The results are shown below.

With the current settings, increasing the number of PML layers is not important, as the contribution to the error is on the order of 3e-5 (0.003%) for all points in the sweep. However, we may need to return to this error, especially later when we introduce significant mesh grading that can reduce the PML performance.

After setting the PML layers to the default value of 8, we sweep the mesh accuracy slider for a 5nm mesh over the particle.

Based on the above results, we can operate at a mesh accuracy of 4 and we estimate a contribution to the overall error of about 1e-3 (0.1%).

We set the mesh accuracy slider at 4 (this corresponds to a target mesh size of 18 points per wavelength) and now we will sweep the inner mesh region. We will sweep this mesh from dx=5nm to dx=0.5nm in 20 steps. We use a geometric sequence for these 20 steps with

$$ dx(i) = dx_{max} \left( \frac{dx_{min}}{dx_{max}} \right)^{ \left( \frac{i - 1}{N - 1} \right)} $$

so that we include more points at smaller values of dx than a linear spacing would give us. The results are shown in the figures below. We see that as the mesh size decreases, Δσ becomes smaller and smaller until about 3nm. At that point, the results continue to change on about the same scale from point to point, and the error is likely on the order of 1%. This is an indication that another parameter is affecting our results. To test if this is the PML, we can increase the minimum number of layers of PML, as done in the following section, to try and reduce the PML reflectivity. The PML performance can be degraded as the inner mesh size is reduced, and this error can be seen in the spectrum in the form of small interference ripples which prevent us from converging on an unchanging result even though the mesh size is getting smaller. We can conclude that we should increase the minimum number of PML layers as the mesh size decreases to compensate for the effect on the PML.
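As an aside, the geometric sequence above is easy to generate; a minimal Python sketch using the sweep values (dx from 5nm to 0.5nm in N = 20 steps):

```python
import numpy as np

dx_max, dx_min, N = 5.0, 0.5, 20       # mesh sweep: 5nm down to 0.5nm, 20 steps
i = np.arange(1, N + 1)
dx = dx_max * (dx_min / dx_max) ** ((i - 1) / (N - 1))   # formula above

# The same sequence via numpy's built-in helper:
dx_alt = np.geomspace(dx_max, dx_min, N)
print(dx[:3])   # successive ratios are constant, so points cluster at small dx
```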

Without using enough PML layers, our error is on the order of 1% at a 3 nm mesh.

Based on the previous steps, we can see that the PML reflectivity and the inner mesh size are currently the largest sources of error. We will sweep the inner mesh using the same geometric series as above but in this case we will sweep from dx=5nm to dx=0.1nm. In addition, we set the number of PML layers to 64 which should be large enough to reduce the influence of the PML significantly. Please note that in a 3D simulation, it would make sense to sweep the PML layers at this point to determine how many are actually required and alternate between increasing PML layers and reducing the mesh size.

We see that at dx=1nm, our absolute error is about 1%. For mesh sizes above 1nm, the ΔσN number tracks the actual error very well, as expected. Note that the absolute error by the 0.1nm mesh is reduced to about 2.5e-3 (0.25%). Finally, since Δσ is becoming essentially flat by 0.1 nm, and is on the order of 1e-4, we see that other sources of error with similar sizes of Δσ such as the distance of the PML, PML reflectivity and the mesh size outside the inner region are beginning to play an important role.

Until now, we have been comparing the FDTD results to the theoretical results calculated using the MCM fit to the gold data. If we compare to the cross section calculated from the original Johnson and Christy gold data, we see that this is by far the most significant contribution to the error. In fact, the calculated absolute error is approximately 4.2%. However, this is smaller than the error between the results calculated with the material data from 3 different sources! In the figures below, the FDTD results were calculated with a mesh accuracy of 2, a 1nm mesh and all other default settings, which can be run in a few seconds. For comparison, the error between the FDTD result and the result calculated using the MCM fit is 1.3%. If we are concerned about accuracy compared to experimental results for this type of simulation, the error in the material data is likely to be by far the dominant source of error. Nonetheless, it is useful to understand the convergence of FDTD towards the exact solution of Maxwell's macroscopic equations, and for this it is best to compare the FDTD results to the results calculated using the MCM.

A further complication of the MCM is that the actual numerical permittivity at a finite value of dt is not the same as the MCM model in the limit dt->0. The numerical permittivity for a finite dt can be calculated using the script function getnumericalpermittivity. In most cases, the difference is negligible and this is the case here. In the figure below, we see a comparison of the scattering cross section using the MCM fit and the actual numerical result with dt=1.17e-17s - the size of dt used when the inner mesh region is 5nm. The difference between these curves is 0.01% which is negligible compared to other sources of error, and this difference will become much smaller at smaller mesh sizes because dt is reduced as the spatial mesh gets smaller.
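The quoted time step is consistent with the 2D Courant stability limit; a quick sanity check in Python, assuming a dt stability factor of 0.99 (an assumption about the simulation settings, configurable on the Advanced options tab):

```python
import math

c = 299792458.0             # speed of light (m/s)
dx = dy = 5e-9              # inner mesh size: 5 nm
S = 0.99                    # assumed dt stability factor
# 2D Courant limit with dx = dy: dt <= dx / (c * sqrt(2))
dt = S * dx / (c * math.sqrt(2))
print(dt)                   # ~1.17e-17 s, matching the value quoted above
```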

The conformal mesh is not a benefit for the wavelength range of this simulation. It introduces spurious resonances into the spectrum at longer wavelengths for larger mesh sizes. Ultimately, it gives faster convergence over the entire wavelength range, but in this case only at the smallest mesh sizes. These spurious resonances can occur with the conformal mesh when considering metal/dielectric interfaces and are the reason that the default "conformal" mesh setting does not apply the algorithm to metal/dielectric interfaces. However, for many wavelength ranges involving metals the conformal mesh provides much faster convergence and should be used. To turn on the conformal mesh for all interfaces, we use the "conformal variant 1" setting. Here we see that if we consider the error over the entire spectrum, the conformal mesh only gives a better answer at mesh sizes below 0.1nm. However, if we consider the error only over the range from 500nm to 560nm, which includes the resonant peak, then the conformal mesh gives a significantly lower error at all mesh sizes considered. This means that if the resonant wavelength or linewidth were the main simulation result, then conformal meshing should be used.

In this figure, we see the spurious resonances that can appear at longer wavelengths. The resonances are likely to occur at a metal dielectric interface when the real part of the metal permittivity is negative and, in absolute value, is much larger than the dielectric permittivity. They disappear as the mesh size decreases, but can cause the convergence with conformal meshing to be slower than staircasing at metal/dielectric interfaces. For this reason, the conformal mesh is not applied to metal interfaces by default.

The figure below shows the error compared to theory for the staircase and conformal meshes as a function of mesh size. We can see that at mesh sizes of 0.1nm to 5nm, the staircase mesh actually has lower error; however, the conformal mesh is converging more quickly to the correct answer. In the zoomed-in view of the same result, we see that the conformal mesh is likely to outperform the staircase mesh only at mesh sizes below about 0.1nm, which means we should not use it for this particular measurement over the entire wavelength range of 300nm to 1000nm.

The figure below shows the error compared to theory for the staircase and conformal meshes, where the error is calculated only from 500nm to 560nm, which includes the main resonant peak. In this range, where the spurious resonances do not appear, the conformal mesh gives a consistently better answer at all mesh sizes from 0.1nm to 5nm. In the zoomed-in view of the same result, we see that the conformal mesh provides an error at a 1nm mesh that is comparable to the error with the staircase mesh at 0.1nm. Clearly, if we are considering the extinction cross section from 500nm to 560nm, for example to determine the resonant wavelength and linewidth, then the conformal mesh should be used.

The non-uniform meshing is not currently limiting the error for this type of simulation. Experimenting with different grading factors, uniform meshes and eliminating the downsampling of frequency domain monitors performing Fourier transforms does not substantially improve the error. Of course, as other sources of error are eliminated, this will eventually become important.

The analysis performed here and the TFSF source used mean that source injection error and monitor interpolation errors are not a major contributor to the error. In addition, these errors will be reduced in this example as the inner mesh size is decreased.

With care and enough computational resources, and within the limits of finite precision numbers, FDTD can give almost arbitrarily accurate answers to the macroscopic Maxwell's equations for any geometry, assuming the material permittivities are given by the multi-coefficient model. The figure below shows the extinction cross section calculated from Mie theory and FDTD, and the difference between the curves is 0.15%. This could be further improved by repeating many of the steps above.

This figure shows the FDTD spectrum and the theory using 64 layers of PML, dx=dy=0.1 nm over the particle, a mesh accuracy of 4 (minimum of 18 points per wavelength), and a simulation span of 2 microns. The difference between the curves is approximately 1.5e-3 (0.15%).