Lecture 3

📚 Solving elliptic PDEs, 1D to 2D
💻 Personal Julia install, Julia's Project environment, Julia's package manager
🚧 Exercises:

  • Damped wave equation in 1D and 2D

  • Port 1D codes to 2D


👉 get started with exercises

Solving elliptic PDEs

The goal of this lecture 3 is to familiarise yourself with (or refresh your memory of):

In the previous lecture, we established that the solution to the elliptic PDE could be obtained through integrating in time a corresponding parabolic PDE:

\[ \frac{\partial C}{\partial t} - \frac{\partial^2 C}{\partial x^2} = 0 \]

and discussed the limitations of this approach for numerical modelling, i.e., the quadratic dependence of the number of time steps on the number of grid points in spatial discretisation.

Accelerating elliptic solver convergence: intuition

In this lecture, we'll improve the convergence rate of the elliptic solver and consider the generalisation to higher dimensions.

Let's recall the stability conditions for diffusion and acoustic wave propagation:

dt = dx^2/dc/2      # diffusion
dt = dx/sqrt(1/β/ρ) # acoustic wave propagation

We can see that the acceptable time step for an acoustic problem is proportional to the grid spacing dx, and not dx^2 as for the diffusion.

The number of time steps required for the wave to propagate through the domain is only proportional to the number of grid points nx.
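We can verify this scaling with a few lines of Julia (a sketch, assuming a unit diffusion coefficient and a unit wave speed):

```julia
# number of steps needed to traverse a domain of size lx at the stability limit
lx, dc = 20.0, 1.0                        # domain size, diffusion coefficient
for nx in (100, 200, 400)
    dx      = lx/nx
    dt_diff = dx^2/dc/2                   # parabolic (diffusion) limit
    dt_wave = dx                          # hyperbolic limit, assuming unit wave speed
    nt_diff = ceil(Int, lx^2/dc/dt_diff)  # steps to diffuse over lx: grows as 2nx^2
    nt_wave = ceil(Int, lx/dt_wave)       # steps for a wave to cross lx: grows as nx
    println((nx, nt_diff, nt_wave))
end
```

Doubling the resolution quadruples the diffusive step count but only doubles the wave step count.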

Can we use that information to reduce the time required for the elliptic solver to converge?

In the solution to the wave equation, the waves do not attenuate with time: there is no steady state!

Damped wave equation

Let's add diffusive properties to the wave equation by simply combining the physics:

\[\begin{aligned} \rho\frac{\partial V_x}{\partial t} &= -\frac{\partial P}{\partial x} \\[10pt] \beta\frac{\partial P}{\partial t} + \frac{P}{\eta} &= -\frac{\partial V_x}{\partial x} \end{aligned}\]

Note the addition of the new term \(\frac{P}{\eta}\) to the left-hand side of the mass balance equation, which could be interpreted physically as accounting for the bulk viscosity of the gas.

Equivalently, we could add the time derivative to the diffusion equation:

\[\begin{aligned} \rho\frac{\partial q}{\partial t} + \frac{q}{D} &= -\frac{\partial C}{\partial x} \\[10pt] \frac{\partial C}{\partial t} &= -\frac{\partial q}{\partial x} \end{aligned}\]

In that case, the new term would be \(\rho\frac{\partial q}{\partial t}\), which could be interpreted physically as adding inertia to the momentum equation for the diffusive flux.

💡 Note
In 1D, both modifications are equivalent up to renaming the variables. The conceptual difference is that in the former case we add new terms to the vector quantity (diffusive flux \(q\)), and in the latter case we modify the equation governing the evolution of the scalar quantity (pressure \(P\)).

Let's eliminate \(V_x\) and \(q\) in both systems to get one governing equation for \(P\) and \(C\), respectively:

\[\begin{aligned} \beta\frac{\partial^2 P}{\partial t^2} + \frac{1}{\eta}\frac{\partial P}{\partial t} &= \frac{1}{\rho}\frac{\partial^2 P}{\partial x^2} \\[10pt] \rho\frac{\partial^2 C}{\partial t^2} + \frac{1}{D}\frac{\partial C}{\partial t} &= \frac{\partial^2 C}{\partial x^2} \end{aligned}\]

We refer to such equations as the damped wave equations. They combine wave propagation with diffusion, which manifests as wave attenuation, or decay. The damped wave equation is a hyperbolic PDE.
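The decay can be seen by considering a single Fourier mode \(C \propto a(t)\sin(kx)\): the damped wave equation then reduces to a damped harmonic oscillator ODE for the amplitude \(a\). A minimal sketch (the parameter values are arbitrary illustrations):

```julia
# a single Fourier mode of ρ ∂²C/∂t² + (1/D) ∂C/∂t = ∂²C/∂x² obeys the
# damped oscillator ODE  ρ a'' + (1/D) a' = -k² a
function mode_amplitude(ρ, D, k; dt=1e-3, nt=10_000)
    a, v = 1.0, 0.0                  # amplitude and its rate of change
    for _ in 1:nt
        v += dt/ρ*(-k^2*a - v/D)     # explicit update of the oscillator ODE
        a += dt*v
    end
    return a
end

abs(mode_amplitude(1.0, 1.0, 2π))    # < 1: the oscillation attenuates with time
```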

Implementing the damped wave equation

In the following, we'll use the damped wave equation for concentration \(C\) obtained by augmenting the diffusion equation with density \(\rho\).

Starting from the existing code implementing time-dependent diffusion, let's add the inertial term \(\rho\frac{\partial q}{\partial t}\).

👉 You can use the l2_diffusion_1D.jl script as a starting point.

The first step is to add the new physical parameter \(\rho\) to the # physics section:

# physics
ρ   = 20.0

Then we increase the number of time steps and reduce the frequency of plotting, and modify the initial conditions to have more interesting time evolution:

# numerics
nvis = 50
# derived numerics
nt   = nx^2 ÷ 5
# array initialisation
C    = @. 1.0 + exp(-(xc-lx/4)^2) - xc/lx; C_i = copy(C)

Then we modify the time loop to incorporate the new physics:

for it = 1:nt
    qx         .-= dt./(ρ*dc + dt).*(qx .+ dc.*diff(C)./dx)
    C[2:end-1] .-= dt.*diff(qx)./dx

👉 Your turn. Try to add the inertial term.

Hint: There are two ways of adding the inertial term into the update rule.

  • We could either take the known flux q in q/dc from the previous time step (explicit time integration), or the unknown flux from the next time step (implicit time integration).

  • Could we treat the flux implicitly without having to solve the linear system?

  • What are the benefits of the implicit time integration compared to the explicit one?
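Regarding the second question above: because the inertial term couples each flux value only to itself, the implicit equation is scalar and can be rearranged in closed form, so no linear solve is needed. A sketch with scalar stand-ins for the arrays:

```julia
# implicit discretisation of ρ ∂q/∂t + q/dc = -∂C/∂x for a single flux value:
#   ρ*(q_new - q)/dt + q_new/dc = -dCdx
ρ, dc, dt = 20.0, 1.0, 0.1
q, dCdx   = 0.3, -0.5
q_new = q - dt/(ρ*dc + dt)*(q + dc*dCdx)   # rearranged as an update rule
q_ref = (ρ/dt*q - dCdx)/(ρ/dt + 1/dc)      # direct solve of the scalar equation
@assert q_new ≈ q_ref                      # both forms agree
```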

If the implementation is correct, we should see something like this:

The waves decay; now there is a steady state! 🎉 The time it takes to converge, however, doesn't seem to improve...

Now we solve the hyperbolic PDE, and with the implicit flux term treatment, the time step should become proportional to the grid spacing dx instead of dx^2.

Looking at the damped wave equation for \(C\), and recalling the stability condition for wave propagation, we modify the time step, reduce the total number of time steps, and increase the frequency of plotting calls:

# numerics
nvis = 5
# derived numerics
dt   = dx/sqrt(1/ρ)
nt   = 5nx

Re-run the simulation and see the results:

Now, this is much better! We observe much faster convergence in far fewer time steps. However, we introduced a new parameter, \(\rho\). Does the solution depend on the value of \(\rho\)?

💡 Note
As we now consider pseudo-time to reach a steady state instead of physical time, we should replace \(t\) by \(\tau\).

Problem of finding the iteration parameters

👉 Try changing the new parameter \(\rho\), increasing and decreasing it. What happens to the solution?

We notice that depending on the value of the parameter \(\rho\), the convergence to steady state can be faster or slower. If \(\rho\) is too small, the process becomes diffusion-dominated, and we're back to the non-accelerated version. If \(\rho\) is too large, waves decay slowly.

If the parameter \(\rho\) has an optimal value, convergence to steady state can be achieved in a number of time steps proportional to the number of grid points nx, and not nx^2 as for the parabolic PDE.

For linear PDEs, it is possible to determine the optimal value of \(\rho\) analytically:

ρ    = (lx/(dc*2π))^2

How does one derive the optimal values for other problems and boundary conditions? Unfortunately, we don't have time to dive into details in this course...

The idea of accelerating the convergence by increasing the order of the PDE dates back to the work of Frankel (1950), who studied the convergence rates of different iterative methods. Frankel noted the analogy between the iteration process and transient physics; in his work, the accelerated method was called the second-order Richardson method.

👀 If interested, the Räss et al. (2022) paper is a good starting point.

Pseudo-transient method

In this course, we call any method that builds upon the analogy to the transient physics the pseudo-transient method.

Using this analogy proves useful when studying multi-physics and nonlinear processes. The pseudo-transient method isn't restricted to solving the Poisson problems, but can be applied to a wide range of problems that are modelled with PDEs.

In a pseudo-transient method, we are interested only in the steady-state distributions of the unknown field variables, such as concentration, temperature, etc.

We consider time steps as iterations in a numerical method. Therefore, we replace the time \(t\) in the equations with pseudo-time \(\tau\), and the time step index it with the iteration counter iter. When a pseudo-transient method converges, all the pseudo-time derivatives \(\partial/\partial\tau\), \(\partial^2/\partial\tau^2\), etc., vanish.

⚠️ Warning!
We should be careful when introducing new pseudo-physical terms into the governing equations. We need to make sure that when the iterations converge, i.e., when the pseudo-time derivatives vanish, the system of equations is identical to the original steady-state formulation.

For example, consider the damped acoustic problem that we introduced in the beginning:

\[\begin{aligned} \rho\frac{\partial V_x}{\partial\tau} &= -\frac{\partial P}{\partial x} \\[10pt] \beta\frac{\partial P}{\partial\tau} + \frac{P}{\eta} &= -\frac{\partial V_x}{\partial x} \end{aligned}\]

At the steady-state, the second equation reads:

\[ \frac{P}{\eta} = -\frac{\partial V_x}{\partial x} \]

The velocity divergence is proportional to the pressure. If we wanted to solve the incompressible problem (i.e. the velocity divergence = 0), and were interested in the velocity distribution, this approach would lead to incorrect results. If we only want to solve the Laplace problem \(\partial^2 P/\partial x^2 = 0\), we could consider \(V_x\) purely as a numerical variable.

In other words: only add those new terms to the governing equations that vanish when the iterations converge!

Visualising convergence

Let's modify the structure of the new elliptic solver code. We need to monitor convergence and stop iterating when the error reaches a predefined tolerance.

To define the measure of error, we introduce the residual:

\[ r_C = D\frac{\partial^2 \widehat{C}}{\partial x^2} \]

where \(\widehat{C}\) is the pseudo-transient solution.

There are many ways to define the error as a norm of the residual; the most popular ones are the \(L_2\) norm and the \(L_\infty\) norm. We will use the \(L_\infty\) norm here:

\[ \|\boldsymbol{r}\|_\infty = \max_i(|r_i|) \]
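For a discrete residual, this is what the error check further below computes; a minimal sketch (the comparison with LinearAlgebra's norm is only illustrative):

```julia
using LinearAlgebra
dc, dx = 1.0, 0.1
C   = rand(11)                       # some trial field
r_C = diff(dc.*diff(C)./dx)./dx      # residual of the steady diffusion equation
err = maximum(abs.(r_C))             # L∞ norm of the residual
@assert err ≈ norm(r_C, Inf)         # same value as LinearAlgebra's Inf-norm
```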

Add new parameters to the # numerics section of the code:

# numerics
nx      = 200
ϵtol    = 1e-8
maxiter = 20nx
ncheck  = ceil(Int,0.25nx)

Here ϵtol is the tolerance for the pseudo-transient iterations, maxiter is the maximal number of iterations, which we now use instead of the number of time steps nt, and ncheck is the frequency of evaluating the residual and its norm, which is a costly operation.

We turn the time loop into the iteration loop, add the arrays to store the evolution of the error:

# iteration loop
iter = 1; err = 2ϵtol; iter_evo = Float64[]; err_evo = Float64[]
while err >= ϵtol && iter <= maxiter
    qx         .-= ...
    C[2:end-1] .-= ...
    if iter % ncheck == 0
        err = maximum(abs.(diff(dc.*diff(C)./dx)./dx))
        push!(iter_evo,iter/nx); push!(err_evo,err)
        p1 = plot(xc,[C_i,C];xlims=(0,lx),ylims=(-0.1,2.0))
        p2 = plot(iter_evo,err_evo;xlabel="iter/nx",ylabel="err")
        display(plot(p1,p2))
    end
    iter += 1
end

Note that we save the number of iterations per grid cell, iter/nx.

If the value of the pseudo-transient parameter \(\rho\) is optimal, the number of iterations required for convergence should be proportional to nx; thus iter/nx should be approximately constant.

👉 Try to check that by changing the resolution nx.
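Putting the snippets together, a compact, self-contained version of the solver wrapped in a function (a sketch with the parameters from above; for brevity the residual is checked every iteration and maxiter is set generously):

```julia
function steady_diffusion_1D(nx; lx=20.0, dc=1.0, ϵtol=1e-8)
    dx = lx/nx
    xc = LinRange(dx/2, lx-dx/2, nx)
    ρ  = (lx/(dc*2π))^2                 # optimal pseudo-transient parameter
    dτ = dx/sqrt(1/ρ)                   # pseudo-time step from the wave CFL
    C  = @. 1.0 + exp(-(xc - lx/4)^2) - xc/lx
    qx = zeros(nx - 1)
    iter = 1; err = 2ϵtol; maxiter = 50nx
    while err >= ϵtol && iter <= maxiter
        qx         .-= dτ ./ (ρ + dτ/dc) .* (qx ./ dc .+ diff(C) ./ dx)
        C[2:end-1] .-= dτ .* diff(qx) ./ dx
        err         = maximum(abs.(diff(dc .* diff(C) ./ dx) ./ dx))
        iter       += 1
    end
    return iter, err
end

iter, err = steady_diffusion_1D(200)    # iter/nx stays roughly constant with nx
```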


Multi-physics: steady diffusion-reaction

Let's implement our first pseudo-transient multi-physics solver by adding chemical reaction:

\[ D\frac{\partial^2 C}{\partial x^2} = \frac{C - C_{eq}}{\xi} \]

As you might remember from the exercises, the characteristic time scales of diffusion and reaction can be related through the non-dimensional Damköhler number \(\mathrm{Da}=l_x^2/D/\xi\).

👉 Let's add the new physical parameters and modify the iteration loop:

# physics
C_eq    = 0.1
da      = 10.0
ξ       = lx^2/dc/da
# iteration loop
iter = 1; err = 2ϵtol; iter_evo = Float64[]; err_evo = Float64[]
while err >= ϵtol && iter <= maxiter
    C[2:end-1] .-= dτ./(1 + dτ/ξ).*((C[2:end-1] .- C_eq)./ξ .+ diff(qx)./dx)

Hint: don't forget to modify the residual!

Run the simulation and see the results:

As a final touch, let's refactor the code and extract the magic constant 2π from the definition of the numerical density \(\rho\):

re      = 2π
ρ       = (lx/(dc*re))^2

We call this new parameter re due to its association with the non-dimensional Reynolds number relating inertial and dissipative forces in the momentum balance.

Interestingly, the convergence rate of diffusion-reaction systems can be improved significantly by making re depend on the previously defined Damköhler number da:

re      = π + sqrt(π^2 + da)

👉 Verify that the number of iterations is indeed lower for higher values of the Damköhler number.
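As a quick consistency check, for \(\mathrm{Da}\rightarrow 0\) the new expression recovers the pure-diffusion value \(2\pi\):

```julia
re(da) = π + sqrt(π^2 + da)
@assert re(0.0) ≈ 2π          # no reaction: back to the steady diffusion value
@assert re(10.0) > re(0.0)    # stronger reaction ⇒ larger re
```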

Going 2D

Converting the 1D code to higher dimensions is remarkably easy thanks to the explicit time integration. Firstly, we introduce the domain extent and the number of grid points in the y-direction:

# physics
lx,ly   = 20.0,20.0
# numerics
nx,ny   = 100,100

Then, we calculate the grid spacing and grid cell center locations, and modify the time step to comply with the 2D stability criteria:

# derived numerics
dx,dy   = lx/nx,ly/ny
xc,yc   = LinRange(dx/2,lx-dx/2,nx),LinRange(dy/2,ly-dy/2,ny)
dτ      = dx/sqrt(1/ρ)/sqrt(2)

We allocate 2D arrays for concentration and fluxes:

# array initialisation
C       = @. 1.0 + exp(-(xc-lx/4)^2-(yc'-ly/4)^2) - xc/lx
qx,qy   = zeros(nx-1,ny),zeros(nx,ny-1)

and add the physics for the second dimension:

while err >= ϵtol && iter <= maxiter
    qx                 .-= dτ./(ρ + dτ/dc).*(qx./dc .+ diff(C,dims=1)./dx)
    qy                 .-= dτ./(ρ + dτ/dc).*(qy./dc .+ diff(C,dims=2)./dy)
    C[2:end-1,2:end-1] .-= dτ./(1 + dτ/ξ) .*((C[2:end-1,2:end-1] .- C_eq)./ξ .+ diff(qx[:,2:end-1],dims=1)./dx .+
                                                                                diff(qy[2:end-1,:],dims=2)./dy)
💡 Note
We have to specify the direction for taking the partial derivatives: diff(C,dims=1)./dx, diff(C,dims=2)./dy.
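A tiny illustration of how the dims keyword selects the direction of the difference:

```julia
A = [1 2; 3 4]
@assert diff(A, dims=1) == [2 2]                  # along the first (x) dimension
@assert diff(A, dims=2) == reshape([1, 1], 2, 1)  # along the second (y) dimension
@assert size(diff(A, dims=1)) == (1, 2)           # one fewer entry in that dimension
```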

The last thing to fix is the visualisation, as we now want a top-down view of the computational domain:

p1 = heatmap(xc,yc,C';xlims=(0,lx),ylims=(0,ly),clims=(0,1),aspect_ratio=1)

Let's run the simulation:


back to Content

Julia's REPL, the Package manager (Pkg.jl), and essential packages

Julia REPL (interactive terminal)


julia starts the REPL; run julia --help to see the options. Useful options:

Pretty powerful REPL:


Julia package manager


Powerful package manager:

Demo of Julia package manager

Demo of:

Take-home: make a separate Project for each of your projects/assignments!
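For reference, a minimal sketch of that per-project workflow (the folder and package names are just examples):

```julia
# from the shell: start Julia with the local project activated
#   julia --project=.
# then hit `]` in the REPL to enter the package mode:
#   (lecture3) pkg> add Plots        # records the dependency in Project.toml
#   (lecture3) pkg> instantiate      # installs everything pinned in Manifest.toml
#   (lecture3) pkg> status           # lists the packages of the active environment
```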

Essential packages for your global environment

Packages installed in your global environment are always available, thus useful for utility packages.

I have installed in my global environment:

back to Content

Exercises - lecture 3

⚠️ Warning!
Starting from this week's lecture 3, all exercises have to be handed in as monolithic Julia scripts (one code per script) and uploaded to your private (shared) GitHub repository, in a specific folder for each lecture. The git commit hash (or SHA) of the final push needs to be uploaded on Moodle (more).

Exercise 1 - Implicit transient diffusion using dual timestepping

👉 See Logistics for submission details.

The goal of this exercise is to:

In this first exercise, you will modify the diffusion-reaction example to model transient diffusion, but this time including not just the pseudo-time derivative \(\partial/\partial\tau\), but also the physical time derivative \(\partial/\partial t\).

Recall the transient diffusion equation:

\[ D\frac{\partial^2 C}{\partial x^2} = \frac{\partial C}{\partial t} \]

Let's discretise only the time derivative using the first-order Euler integration rule:

\[ \frac{\partial C}{\partial t} \approx \frac{C - C_\mathrm{old}}{\mathrm{d}t} \]

where \(\mathrm{d}t\) is the physical time step, and \(C_\mathrm{old}\) is the concentration at the previous time step. If we discretise the spatial derivatives in the usual way and update \(C\) using the explicit update rule, the maximum time step is restricted by the stability criterion and is proportional to the square of the grid spacing, \(\mathrm{d}x^2\). However, if we consider \(C\) to be on the implicit layer of the time integration scheme, we don't have that restriction anymore, and any time step is stable (stable meaning here that the solution does not diverge; accuracy requirements may still dictate small time steps). The downside is that in this case we have to solve a linear system to get the values at the next time step.

A close look at the equation with the discretised time derivative reveals that it is mathematically identical to the diffusion-reaction equation that we already learned how to solve! The concentration at the old time step \(C_\mathrm{old}\) and the physical time step \(\mathrm{d}t\) correspond to the equilibrium concentration \(C_\mathrm{eq}\) and the time scale of reaction \(\xi\), respectively.
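This correspondence can be checked term by term: substituting \(C_\mathrm{eq}\rightarrow C_\mathrm{old}\) and \(\xi\rightarrow\mathrm{d}t\) into the reaction term yields exactly the discretised time derivative:

```julia
# the reaction term (C - C_eq)/ξ with C_eq = C_old and ξ = dt becomes the
# backward Euler time derivative (C - C_old)/dt
C, C_old, dt = 1.2, 1.0, 0.5
C_eq, ξ = C_old, dt
@assert (C - C_eq)/ξ == (C - C_old)/dt
```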

At each physical time step, the implicit problem can be solved using the PT method. Time derivatives are thus present both in physical time and in pseudo-time, which is why this approach is called the dual-time method. The code structure includes two nested loops, one for the physical time and one for the pseudo-transient iterations:

anim = @animate for it = 1:nt
    C_old .= C
    iter = 1; err = 2ϵtol; iter_evo = Float64[]; err_evo = Float64[]
    while err >= ϵtol && iter <= maxiter
        ...
        iter += 1
    end
    # visualisation
end

👉 Download the l3_steady_diffusion_reaction_1D.jl script here if needed (available after the course).

Getting started

  1. Create a new folder lecture3 in your private GitHub repository.

  2. Add a README.md to that folder.

  3. Add a copy of the l3_steady_diffusion_reaction_1D.jl script we did in class to your exercise folder and rename it implicit_diffusion_1D.jl.

  4. Modify that script so that it includes the physical time loop and performs the numerical experiment as follows.

Task 1

As a first task, rename C_eq to C_old and \(\xi\) to dt. Make C_old an array and initialise it with a copy of C. Set the da number equal to da = 1000.0. Add the new parameter nt = 10 indicating the number of physical time steps. Wrap the iteration loop in an outer for-loop to make physical time steps. Move the visualisation out of the iteration loop, so that the plots are only updated once per physical time step (keeping the error checking for iterations).

Task 2

Perform the numerical experiment using the developed code. Report your results in a 2-panel gif, plotting (a) the spatial distribution of concentration C after nt=10 time steps, on top of the plot of the initial concentration distribution, and (b) the error as a function of iteration/nx. Include the gif in the README.md and provide one or two sentences of description.

💡 Note
Use the @animate macro as in the provided code snippet to realise a gif of your simulation results. Use ![fig_name](./<relative-path>/my_fig.png) to insert a figure or animation in the README.md.

back to Content

Exercise 2 - Operator-splitting for advection-diffusion

👉 See Logistics for submission details.

The goal of this exercise is to implement advection-diffusion with implicit timestepping for the diffusion. Start from the time-dependent code you developed in Exercise 1, then add an advection step after the iteration loop so that the concentration is advected only once per physical time step.

Getting started

  1. Duplicate the file implicit_diffusion_1D.jl in the folder lecture3 and name it implicit_advection_diffusion_1D.jl.

  2. Modify that script so that it includes the advection step as follows.

Task 1

Add advection to the implicit diffusion code, using an advection velocity of

vx = 1.0

and use the stability criteria for advection to specify the physical timestep:

dt = dx/abs(vx)

Note that one no longer needs to take the minimum between the time steps for diffusion and advection, since the more restrictive diffusion is resolved implicitly. Also, we do not consider any change in velocity direction mid-simulation.
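As a reminder from Lecture 2, the upwind advection step for vx > 0 could look as follows (a sketch; the variable names follow the 1D scripts):

```julia
lx, nx = 20.0, 100
dx     = lx/nx
xc     = LinRange(dx/2, lx-dx/2, nx)
C      = @. exp(-(xc - lx/4)^2)     # an example concentration profile
vx     = 1.0
dt     = dx/abs(vx)                 # advection stability limit
C[2:end] .-= dt.*vx.*diff(C)./dx    # upwind update, valid for vx > 0
```

With dt = dx/vx, this particular update shifts the profile exactly one grid cell downstream.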

Now the physical timestep dt is defined by the advection velocity, so the da number needed for calculating the optimal PT parameters has to be computed from dt:

# derived numerics
dt      = dx/abs(vx)
da      = lx^2/dc/dt
re      = ...
ρ       = ...
dτ      = ...

Report your results with a figure plotting the spatial distribution of concentration C after nt=10 time steps, on top of the plot of the initial concentration distribution. Add the figure to the README.md file.

back to Content

Exercise 3 - Advection-diffusion in 2D

👉 See Logistics for submission details.

The goal of this exercise is to

Create a code implicit_advection_diffusion_2D.jl for this exercise and add it to the lecture3 folder in your private GitHub repo. Report the results of this exercise within a new section in the README.

Getting started

Duplicate the implicit_diffusion_1D.jl code you created in Exercise 1 and name it implicit_advection_diffusion_2D.jl.

  2. Extend the 1D calculations to 2D

  3. Add advection as in Exercise 2

Modify the initial conditions to include the following parameters in the # physics section:

# physics
lx,ly   = 10.0,10.0
dc      = 1.0
vx      = 10.0
vy      = -10.0

and implement the following initial condition:

# array initialisation
C       = @. exp(-(xc-lx/4)^2 -(yc'-3ly/4)^2)
C_old   = copy(C)

Choose the time step according to the following (stability) criterion:

dt = min(dx/abs(vx),dy/abs(vy))/2

Also make sure to use the following numerical parameters (in numbers of grid points nx,ny):

# numerics
nx,ny   = 200,201
ϵtol    = 1e-8
maxiter = 10nx
ncheck  = ceil(Int,0.02nx)
nt      = 50

Note that the iterative pseudo-timestep limitation should be updated to

dτ = min(dx,dy)/sqrt(1/ρ)/sqrt(2)

for 2D configurations.

💡 Note
Make sure to update the flux array initialisation to include the flux in the y-direction qy, now using 2D arrays: qx,qy = zeros(nx-1,??), zeros(??,ny-1).

Task 1

Repeat the steps from Exercise 1 to create the implicit time-dependent diffusion solver, but in 2D. Do not include advection yet. Take care to add the information relating to the second dimension wherever needed.

Make a short animation showing the time evolution of the concentration field C during nt = 50 physical time steps. The figure should contain 2 subplots, the first displaying the heatmap of the C field and the second the evolution of the iteration count normalised by nx:

# visualisation
p1 = heatmap(xc,yc,C';xlims=(0,lx),ylims=(0,ly),clims=(0,1),aspect_ratio=1)
p2 = plot(iter_evo,err_evo;xlabel="iter/nx",ylabel="err")

Task 2

Include now the advection step in a similar way as in the 1D case from the previous exercise, i.e., adding it after the iteration loop within the time loop. Use the advection velocities and parameters listed above, taking care to implement the "upwind" strategy discussed in Lecture 2.

Make a short animation showing the time evolution of the concentration field C during nt = 50 physical time steps using the same figure layout as for Task 1.

back to Content

Exercise 4 - Optimal iteration parameters for pseudo-transient method

👉 See Logistics for submission details.

The goal of this exercise is to confirm numerically the optimality of the pseudo-transient parameters for a 1D elliptic solver (i.e., solving the steady-state diffusion).

You will perform a systematic study of the convergence rate of the pseudo-transient method solving a steady-state diffusion process, varying the numerical parameter re on a regular grid of values.

Getting started

👉 Start from the script we did in class, or download the l3_steady_diffusion_1D.jl script here if needed (available after the course).

  2. Create a new code steady_diffusion_parametric_1D.jl for this exercise and add it to the lecture3 folder in your private GitHub repo.

  3. Report the results of this exercise within a new section in the README.md.

Task 1

Start from the 1D elliptic solver we developed in class, which implements the pseudo-transient method (damping). Add to the # numerics section a new variable storing the range of factors to multiply the re parameter with. Add a new array to store the number of iterations per grid block for each value of this factor:

# numerics
fact = 0.5:0.1:1.5
conv = zeros(size(fact))

Wrap the # array initialisation and # iteration loop sections of the code in a for-loop over indices of fact:

for ifact in eachindex(fact)
    # array initialisation
    # iteration loop
    while ...

Move the definition of \(\rho\) to the beginning of the new loop. Extract the value 2π and store it in the variable re, multiplying by the corresponding value from fact:

for ifact in eachindex(fact)
    re = 2π*fact[ifact]
    ρ  = (lx/(dc*re))^2
    dτ = ...

After every elliptic solve, store the iter/nx value in conv[ifact]. Report the results as a figure plotting conv vs fact. You should get a picture like this:


back to Content