# Ways to calculate sparse Jacobians

by Jürgen Fuhrmann, 2022-10-21
```julia
begin
    using ExtendableSparse
    using SparseArrays
    using ForwardDiff
    using SparseDiffTools
    using SparsityDetection
    using Symbolics
    using VoronoiFVM
end
```
```julia
pkgversions([VoronoiFVM, Symbolics, ForwardDiff, SparseDiffTools])
```

| Package         | Version   |
|-----------------|-----------|
| VoronoiFVM      | 0.18.3    |
| Symbolics       | 4.13.0    |
| ForwardDiff     | 0.10.32   |
| SparseDiffTools | 1.27.0    |

We investigate different ways to assemble sparse Jacobians for a finite difference operator $A_h(u)$ discretizing the differential operator

```math
A(u) = -\Delta u^2 + u^2 - 1
```

in $\Omega=(0,1)$ with homogeneous Neumann boundary conditions on an equidistant grid of $n$ points. We only discuss approaches which generalize to higher space dimensions and unstructured grids; special structures such as banded or tridiagonal matrices are therefore not discussed.
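Written out on the grid $x_i=(i-1)h$ with $h=1/(n-1)$, the discrete operator implemented below takes the form

```math
(A_h(u))_i = \frac{u_i^2-u_{i-1}^2}{h} - \frac{u_{i+1}^2-u_i^2}{h} + h\,(u_i^2-1), \qquad i=2,\dots,n-1,
```

at the interior points; at the two boundary points only the one-sided flux term remains, and the reaction term is weighted by $h/2$, reflecting the half-sized dual cells of the Neumann discretization.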

This is the corresponding nonlinear finite difference operator:
```julia
function A_h!(y, u)
    n = length(u)
    h = 1 / (n - 1)
    # reaction term u^2 - 1, weighted with the dual cell volumes
    # (half-sized cells at the two boundary points)
    y[1] = (u[1]^2 - 1) * 0.5 * h
    for i = 2:n-1
        y[i] = (u[i]^2 - 1.0) * h
    end
    y[end] = (u[end]^2 - 1) * 0.5 * h

    # add the flux differences approximating -Δu^2
    for i = 1:n-1
        du = (u[i+1]^2 - u[i]^2) / h
        y[i+1] += du
        y[i] -= du
    end
end;
```

Number of discretization points for initial testing:
```julia
n_test = 5;
```

## The methods

### Straightforward Jacobian calculation

Just take the operator, calculate the Jacobian using [ForwardDiff.jl](https://github.com/JuliaDiff/ForwardDiff.jl), and use a sparse matrix to collect the derivatives. Here we use the [ExtendableSparse.jl](https://github.com/j-fu/ExtendableSparse.jl) package in order to avoid cumbersome handling of intermediate coordinate format data.

As we will see in the timing test, the complexity of this operation is $O(n^2)$: without further structural information, automatic differentiation needs to investigate the partial derivatives with respect to all unknowns.
```julia
function straightforward(n)
    X = ones(n)
    Y = ones(n)
    jac = ExtendableSparseMatrix(n, n)

    t = @elapsed begin
        ForwardDiff.jacobian!(jac, A_h!, X, Y)
        flush!(jac)  # move intermediate entries into the CSC part
    end
    jac.cscmatrix, t
end;
```
```julia
straightforward(n_test)
```
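As a quick plausibility check (an aside, not part of the original notebook; it relies on the definitions of `A_h!`, `straightforward` and `n_test` above), the sparse result should coincide with a dense ForwardDiff Jacobian and contain only the tridiagonal entries:

```julia
let n = n_test
    jac_sparse, _ = straightforward(n)
    # dense reference Jacobian of the same operator at u = ones(n)
    jac_dense = ForwardDiff.jacobian(A_h!, ones(n), ones(n))
    @assert Matrix(jac_sparse) ≈ jac_dense
    nnz(jac_sparse)  # at most 3n-2 entries for a tridiagonal pattern
end
```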

### Symbolics.jl for sparsity detection

While in this particular case the sparsity pattern of the operator is known – it is tridiagonal – we assume it is not known and use [Symbolics.jl](https://github.com/JuliaSymbolics/Symbolics.jl) to detect the sparsity pattern and convey the sparsity information to [ForwardDiff.jl](https://github.com/JuliaDiff/ForwardDiff.jl).

This approach uses Julia's multiple dispatch feature and calls `A_h!` with symbolic data, resulting in a symbolic representation of the operator which can be investigated for sparsity in $O(n)$ time. For a description of this approach see [this paper](https://openreview.net/pdf?id=rJlPdcY38B).

In addition, for calculating the Jacobian efficiently it is useful to provide a matrix coloring obtained using [SparseDiffTools.jl](https://github.com/JuliaDiff/SparseDiffTools.jl).
```julia
function symbolics(n)
    input = ones(n)
    output = similar(input)
    tdetect = @elapsed begin
        # call A_h! with symbolic data to obtain the sparsity pattern
        sparsity_pattern = Symbolics.jacobian_sparsity(A_h!, output, input)
        jac = Float64.(sparsity_pattern)
        # coloring: columns sharing no row can be differentiated together
        colors = SparseDiffTools.matrix_colors(jac)
    end
    tassemble = @elapsed begin
        SparseDiffTools.forwarddiff_color_jacobian!(jac, A_h!, input, colorvec = colors)
    end
    jac, tdetect, tassemble
end;
```
```julia
symbolics(n_test)
```
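As a small aside (not in the original notebook), the gain from coloring can be seen directly: a tridiagonal pattern can be colored with three colors, so the number of ForwardDiff sweeps stays fixed as $n$ grows instead of scaling with the number of unknowns:

```julia
let n = 1000
    # tridiagonal sparsity pattern like that of our operator
    pattern = spdiagm(-1 => ones(n - 1), 0 => ones(n), 1 => ones(n - 1))
    colors = SparseDiffTools.matrix_colors(pattern)
    maximum(colors)  # typically 3 for a tridiagonal pattern, independent of n
end
```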

### SparsityDetection.jl for sparsity detection

Before the advent of [Symbolics.jl](https://github.com/JuliaSymbolics/Symbolics.jl), [SparsityDetection.jl](https://github.com/SciML/SparsityDetection.jl) (now deprecated) provided a similar method for sparsity detection of nonlinear operators in $O(n)$ time, which we test here in order to be comprehensive.
```julia
function sparsitydetect(n)
    input = rand(n)
    output = similar(input)
    tdetect = @elapsed begin
        # trace A_h! to obtain the sparsity pattern
        sparsity_pattern = SparsityDetection.jacobian_sparsity(A_h!, output, input)
        jac = Float64.(sparsity_pattern)
        colors = SparseDiffTools.matrix_colors(jac)
    end
    u = ones(n)
    tassemble = @elapsed begin
        SparseDiffTools.forwarddiff_color_jacobian!(jac, A_h!, u, colorvec = colors)
    end
    jac, tdetect, tassemble
end;
```
```julia
sparsitydetect(n_test)
```

### Assembly from local Jacobians

Usually, for finite difference, finite volume and finite element methods, the sparsity structure is defined by the discretization grid topology, and the operator is essentially a superposition of local contributions translated in space according to the grid structure. So we can assemble the global Jacobi matrix from local contributions, very much like in classical finite element assembly. The local Jacobi matrix can be obtained without the need to detect its sparsity. In particular for larger coupled systems of PDEs, this possibility appears worth investigating.

For this purpose, we define the local contributions to the nonlinear difference operator as mutating Julia functions, ready to be generalized for systems of PDEs.
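A minimal sketch of this idea (an assumption of how it could look, with a hypothetical `assemble_local` and an `edgeflux` helper mirroring the terms of `A_h!` above, not the notebook's actual continuation): differentiate a scalar edge flux and a scalar node reaction with ForwardDiff, then scatter the resulting local Jacobians into an `ExtendableSparseMatrix`:

```julia
# Sketch: assemble the global Jacobian from local edge/node contributions.
# edgeflux is a hypothetical local function matching the flux term of A_h!.
function assemble_local(n, u = ones(n))
    h = 1 / (n - 1)
    jac = ExtendableSparseMatrix(n, n)
    # flux from node i to node i+1; its gradient is the local 1x2 Jacobian
    edgeflux(v) = (v[1]^2 - v[2]^2) / h
    for i = 1:n-1
        jg = ForwardDiff.gradient(edgeflux, [u[i], u[i+1]])
        jac[i, i]     += jg[1];  jac[i, i+1]   += jg[2]
        jac[i+1, i]   -= jg[1];  jac[i+1, i+1] -= jg[2]
    end
    # node reaction (u^2 - 1)*w*h, with half weight in the boundary cells
    for i = 1:n
        w = (i == 1 || i == n) ? 0.5 : 1.0
        jac[i, i] += ForwardDiff.derivative(v -> (v^2 - 1) * w * h, u[i])
    end
    flush!(jac)
    jac.cscmatrix
end
```

For `u = ones(n)` this should reproduce the Jacobian returned by `straightforward(n)`, while touching each unknown only through its local neighborhood, i.e. in $O(n)$ time.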