Followup to Atmospheric Scattering – Part 1: Overview

This post is the first in a series to follow up on my 2012 GPU Pro 3 article about atmospheric scattering [11]. What I showed there was a full single-scattering solution for a planetary atmosphere running in a pixel shader, dynamic and in real time, without pre-computation or simplifying assumptions. The key to this achievement was a novel and efficient way to evaluate the Chapman function [2], hence the title. In the time since then I have improved on the algorithm and extended it to include aspects of multiple scattering. The latter causes horizontal diffusion (twilight situations) and vertical diffusion (deep atmospheres), and neither can be ignored for a general atmosphere renderer in a space game, for example.

I have written a Shadertoy that reflects the current state of affairs. It’s a mini flight simulator that also features clouds and other rendering goodies. A WebGL 2 capable browser is needed to run it. Under Windows, the ANGLE/Direct3D translator may take a long time to compile it (up to a minute is nothing unusual, but it runs fast afterwards). When successfully compiled it should look like this:

A little background story

The origins of my technique go back to experiments I did privately while hacking on my old Space Glider. Originally this was a demo written for DOS/DJGPP without 3-D acceleration. It featured a mini solar system with non-photorealistic graphics, in the style of Mercenary and Starglider. At some point I dusted it off and wanted to make it more real™: larger scale, atmospheric scattering, procedural terrain, HDR, dynamic light adaptation, secondary illumination from celestial bodies, spherical harmonics, real flight dynamics, real orbital mechanics … all the things, you know, that a real™ space simulator needs. Naturally, the source didn’t get prettier in the process and the project became an unreleasable mess.

Nevertheless, I came up with a working prototype that had scattering and a procedural terrain. Since it was all calculated on the CPU it had to be super efficient. I devised an analytical approximation to the airmass integral (what later became the Chapman function) so I could reduce the numerical integration from two nested loops to one loop. I made heavy use of SIMD, first by calculating the sample points in parallel, and then in a second pass over the sample points doing the color channels in parallel. This is what the result looked like (circa 2011, unreleased):

[Image: space-glider-state-of-affairs-3x3]

I want to emphasize that the above is rendered in software with nothing but Gouraud shading and vertex colors. This is just a reminder of how nice visuals can be driven by low tech. It also ran relatively fast (40 Hz at a resolution of 720p).

Moving it to the GPU

The article that I wrote for GPU Pro 3 describes the same algorithm that was running in my prototype, just with all code moved to the pixel shader and wrapped in a ray-tracing framework (essentially “shadertoying” it). This approach is wasteful, a fact that was correctly pointed out in [12]. One can no longer benefit from triangle interpolation and cannot parallelize threads across sample points; instead the whole thing is a big loop per pixel.

Instead, atmospheric scattering should ideally be done with a kind of map-reduce operation: first calculate all sample points in parallel (the map part) and then collect the result along rays (the reduce part), similar to how my prototype worked. This could in practice be done with a compute shader, but I have not yet explored this idea further. The shadertoy that was linked at the beginning of this post also does the big-loop-in-the-pixel-shader approach, for lack of other practical alternatives.

Aerial perspective — Photo taken from Wikipedia, CC-BY-SA Joaquim Alves Gaspar

Aerial perspective and spectral sampling

The visual impact of atmospheric scattering is called aerial perspective. It is not the same as fixed-function fog, but if we want to use that as an analogy, we would have to have a fog coordinate for each color channel. To illustrate, I have taken the photo from the top of the linked Wikipedia article and separated its color channels (see the insets above). It should be evident that the distant mountains are blue not because of some blue “fog color” (that would be white, in the limit of infinite distance), but rather because each channel has a different multiplier for the fog distance, the blue channel having the highest.

Formally, aerial perspective is a transformation of some original color L_0 by a multiplicative operation, the transmittance T, and an additive operation, the in-scatter I (see also [1] for an introduction of these terms):

    \[L = T L_0 + I,\]

where L, L_0 and I are color vectors. In order to be universally valid and independent of the choice of color basis, the transmittance operator would have to be a tensor, as in

    \[L_\nu = T^\mu_\nu L_{0 \mu} + I_\nu,\]

where \mu, \nu are spectral indices. If (and only if) T is diagonal would it be appropriate to use the more familiar component-wise form

    \[L_\nu = T_\nu L_{0 \nu} + I_\nu.\]
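
In shader terms, the diagonal case is nothing more than a per-channel multiply-add. A minimal GLSL sketch (the function name is illustrative, not taken from the shadertoy):

    // Aerial perspective in the diagonal (component-wise) case:
    // every channel has its own transmittance and in-scatter.
    vec3 applyAerialPerspective( vec3 L0, vec3 T, vec3 I )
    {
        return T * L0 + I;   // exactly the component-wise formula above
    }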

This opens up the question of color space. T is trivially diagonal if the basis is simply the full spectrum (in infinite resolution). Otherwise the color basis must be carefully chosen to have the properties of a chromatic adaptation space. Following this clue I took the dominant wavelengths of the LMS cone responses as a starting point and settled on 615 nm for the red wavelength, 535 nm for the green wavelength and 445 nm for the blue wavelength. When these wavelengths are used as sample points together with a proper transform, the resulting colors are very accurate. I used the new CIE 2012 CMFs to calculate the transform from spectral samples to sRGB colors, as follows:

    \[M_{\text{615,535,445}\rightarrow\text{sRGB}} = \left( \begin{array}{rrr} 1.6218 & -0.4493 & 0.0325 \\ -0.0374 & 1.0598 & -0.0742 \\ -0.0283 & -0.1119 & 1.0491 \end{array} \right) .\]

Side note: The matrix \scriptstyle M from above looks red-heavy, because it contains an adaptation from white point E to D65. This was done so that the color components in this basis can be used as if they were spectral point samples. For example: the color of the sunlight spectrum, normalized to unit luminance, would be \scriptstyle \left( \begin{smallmatrix}0.9420 & 1.0269 & 1.0241\end{smallmatrix} \right)^T in this basis, which corresponds very well to a point sampling of the smoothed sunlight spectrum at the wavelengths 615 nm, 535 nm and 445 nm. The transformation by \scriptstyle M then yields a (linear) sRGB color of \scriptstyle \left( \begin{smallmatrix}1.0997 & 0.9771 & 0.9329\end{smallmatrix} \right)^T, which has a correlated color temperature of 5778 K.
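
For illustration, here is how the matrix and the sunlight example above could be written down in shader code. This is only a sketch with made-up identifier names (and remember that GLSL matrix constructors are column-major):

    // 615/535/445 nm spectral samples -> linear sRGB, from the matrix above.
    // GLSL mat3 constructors take the values column by column.
    const mat3 M_615_535_445_TO_SRGB = mat3(
         1.6218, -0.0374, -0.0283,    // column 0: contribution of the 615 nm sample
        -0.4493,  1.0598, -0.1119,    // column 1: contribution of the 535 nm sample
         0.0325, -0.0742,  1.0491 );  // column 2: contribution of the 445 nm sample

    // Sunlight, point-sampled at 615/535/445 nm and normalized to unit luminance,
    // converted to linear sRGB: yields approximately ( 1.0997, 0.9771, 0.9329 ).
    vec3 sunlightToSRGB()
    {
        const vec3 sunSpectralSamples = vec3( 0.9420, 1.0269, 1.0241 );
        return M_615_535_445_TO_SRGB * sunSpectralSamples;
    }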

Geometry of the aerial perspective problem

The next problem is to find the numerical values for T and I. No solutions in closed form exist in the general case, so some sort of numerical integration needs to be performed.

  • The transmittance T is found as the antilogarithm of the optical depth \tau, which in turn is found as the path integral of the extinction coefficient \beta along lines of sight. The extinction coefficient is density dependent, so it varies as a function of the path coordinate u.
  • The in-scatter I is found similarly as the path integral of \beta weighted with the source function J(u), attenuated by the cumulated optical depth \tau(u) along the path. The source function in turn – when only single scattering of sunlight is considered (also disregarding thermal emission) – equals the incoming solar flux F_\text{sun} times a phase function f(\theta) times the single scattering albedo \omega_0 of the atmospheric material, again attenuated by the optical depth \tau_\text{sun} along the path towards the sun.

Note how the product \beta \, \mathrm{d}u is always dimensionless.

    \begin{align*} T &=e^{-\tau}, & \tau &= \int \beta(u) \, \mathrm{d}u^{\,}, & I &= \int J(u) e^{-\tau(u)} \beta(u) \, \mathrm{d}u, \\ & & & & J &= \omega_0 f(\theta) F_\text{sun} e^{-\tau_\text{sun}}. \end{align*}

It is evident that finding \tau along (semi-infinite, treating the sun as infinitely distant) rays appears as a sub-problem of evaluating J. A real-time application must be able to do this in O(1) time. But again there is, in general, no simple solution for \tau, since air density decreases exponentially with altitude, and the altitude above a spherical surface varies non-linearly along straight paths (and we have not even considered bending of rays at the horizon).
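
To make the structure concrete, here is a brute-force ray march over the two integrals in GLSL. This is a sketch under assumptions, not code from the shadertoy: extinction() and opticalDepthToSun() are hypothetical placeholders, and the latter is exactly the sub-problem that the Chapman function answers in O(1).

    // Hypothetical helpers (declarations only): per-channel extinction coefficient
    // beta at a point, and optical depth from that point towards the sun.
    vec3 extinction( vec3 p );
    vec3 opticalDepthToSun( vec3 p );

    // Brute-force evaluation of transmittance T and in-scatter I along a view ray.
    // phaseTheta is the phase function value f(theta) for the sun-view angle,
    // omega0 the single scattering albedo, sunFlux the incoming solar flux.
    void integrateScattering( vec3 origin, vec3 dir, float pathLength, int numSteps,
                              vec3 sunFlux, float phaseTheta, float omega0,
                              out vec3 T, out vec3 I )
    {
        float du  = pathLength / float( numSteps );
        vec3  tau = vec3( 0.0 );                     // cumulative optical depth to the eye
        I = vec3( 0.0 );
        for( int i = 0; i < numSteps; ++i )
        {
            vec3 p    = origin + dir * ( ( float( i ) + 0.5 ) * du );
            vec3 beta = extinction( p );
            tau      += beta * du;
            vec3 J    = omega0 * phaseTheta * sunFlux * exp( -opticalDepthToSun( p ) );
            I        += J * exp( -tau ) * beta * du; // source, attenuated back to the eye
        }
        T = exp( -tau );                             // transmittance of the whole path
    }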

The Chapman function

As is so often the case, problems in computer graphics have already been treated in the physics literature. In this case, the optical depth along rays in a spherically symmetric, exponentially decreasing atmosphere was considered in the 1930s by Sydney Chapman [2]. The eponymous Chapman function

    \[\operatorname{Ch}(x,\chi)\]

is defined as the relative optical depth along a semi-infinite ray having zenith angle \chi (the Greek letter chi), as seen by an observer at altitude x (where x is the radial distance to the sphere center expressed in multiples of the scale height). It is normalized to the zenith direction, so \operatorname{Ch}(x,0)=1 for all x. The following expression, distilled from a larger one in [3], can be taken as a passable approximation (see Evgenii Golubev’s article over at Zero Radiance where he points out its flaws, and reference [13] for a more recent treatment):

    \begin{align*} \operatorname{Ch}&(x,\chi) = \frac{1}{2} \Bigg[ \cos(\chi) + \operatorname{erfcx} \left( \sqrt{ \frac{x \cos^2\chi}{2} } \right) \left( \frac{1}{x} + 2 - \cos^2\chi \right) \sqrt{\frac{\pi x}{2}} \Bigg] . \end{align*}

Note how this expression contains \operatorname{erfcx}, the scaled complementary error function, which is a non-standard special function that one would need to implement oneself, so we’re back to square one. This is clearly not real-time friendly and needs to be simplified a lot.

An efficient approximation

My approximation was motivated by the fact that in the limit of a flat earth, \lim_{x\rightarrow \infty}\operatorname{Ch}(x,\chi) must approach the secant function. Therefore I looked for a low-order rational function of \cos \chi, such that it gives unity in the zenith direction and the special value c=\operatorname{Ch}(x,90^\circ) in the horizontal direction. The expression for c is already much simpler since \cos\chi vanishes, and I aggressively simplified it even more. The extension to zenith angles \chi>90^\circ was then done by some trigonometric reasoning. The final result, as originally published, is

    \begin{align*} \operatorname{Ch}(x,\chi) &\approx \begin{dcases} { c \over ( c - 1 ) \cos \chi + 1 } , & \text{if } \chi \leq 90^\circ \\ { c \over ( c - 1 ) \cos \chi - 1 } + 2 \, c \, e^{ x - x \sin \chi } \sqrt{ \sin \chi } , & \text{if } \chi > 90^\circ \end{dcases} \intertext{with} c &\approx \sqrt{ \pi x \over 2 } \qquad \text{with less than 5\% error for } x > 10. \end{align*}
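
To see where the value of c comes from, set \cos\chi = 0 in the exact expression above and use \operatorname{erfcx}(0) = 1:

    \[c = \operatorname{Ch}(x,90^\circ) = \frac{1}{2} \left( \frac{1}{x} + 2 \right) \sqrt{\frac{\pi x}{2}} \approx \sqrt{\frac{\pi x}{2}} \quad \text{for large } x.\]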

This rational approximation is the main result; a comparison between the exact and the approximate versions of \operatorname{Ch}(x,\chi) is shown below. Note that the expensive case for \chi > 90^\circ only needs to be considered when looking downwards and the view is not already obscured by the terrain, which applies only to a small region below the horizon.
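
For completeness, a direct transcription of the approximation into shader code could look like the sketch below; this is not the overflow-free implementation promised for the next post (the exponential term can overflow in single precision for large x):

    // Direct transcription of the published approximation. x is the observer's
    // radial distance in scale heights, cosChi the cosine of the zenith angle.
    float chapmanApprox( float x, float cosChi )
    {
        float c = sqrt( 1.5707963 * x );            // c = sqrt( pi * x / 2 )
        if( cosChi >= 0.0 )                         // chi <= 90 degrees
            return c / ( ( c - 1.0 ) * cosChi + 1.0 );
        float sinChi = sqrt( max( 1.0 - cosChi * cosChi, 0.0 ) );
        return c / ( ( c - 1.0 ) * cosChi - 1.0 )   // chi > 90 degrees
             + 2.0 * c * exp( x - x * sinChi ) * sqrt( sinChi );
    }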

Epilog: A literature review

At this point I want to stress how every real-time algorithm for atmospheric scattering must have an efficient way to evaluate \operatorname{Ch}(x,\chi), even when this function is not explicitly identified as such. Here is a small literature survey to illustrate that point:

Author(s)                   | Type     | Assumptions                             | \operatorname{Ch}(x,\chi) appears disguised in
Nishita et al. ’93 [4]      | table    | Parallel sunrays                        | Section 4.3.4
Preetham et al. ’99 [5]     | analytic | Earthbound observer                     | Appendix A.1
Nielsen ’03 [6]             | analytic | Hardcoded use case                      | Section 5.4.1
O’Neil ’04 [7]              | table    |                                         | Sourcecode
O’Neil ’05 [8]              | analytic | Constant \operatorname{Ch}_{\parallel}  | Section 4.2
Schafhitzel et al. ’07 [9]  | table    |                                         | Section 4.1
Bruneton, Neyret ’08 [10]   | table    |                                         | Section 4 as \mathbb{T}(x,v)
Schüler ’12 [11]            | analytic |                                         | Listing 1
Yusov ’14 [12]              | analytic |                                         | Section 2.4.2 as \operatorname{T}(A\rightarrow B)

Outlook

This was the first post in a series. I originally didn’t plan to do a series, but I decided to split when the word count went over 4000, especially with the source listings. The next post will explain an overflow-free implementation of the Chapman function in shader code, show how transmittances can be calculated along arbitrary rays, and then walk through the procedure of integrating the atmospheric in-scatter along a viewing ray.


References

[1] Naty Hoffman and Arcot J. Preetham (2002): “Rendering Outdoor Light Scattering in Real Time”
http://developer.amd.com/…

[2] Sydney Chapman (1931): “The absorption and dissociative or ionizing effect of monochromatic radiation in an atmosphere on a rotating earth.” Proceedings of the Physical Society 43:1, 26.
http://stacks.iop.org/…

[3] Miroslav Kocifaj (1996): “Optical Air Mass and Refraction in a Rayleigh Atmosphere.” Contributions of the Astronomical Observatory Skalnate Pleso 26:1, 23–30.
http://adsabs.harvard.edu/…

[4] Tomoyuki Nishita, Takao Sirai, Katsumi Tadamura and Eihachiro Nakamae (1993): “Display of the earth taking into account atmospheric scattering” Proceedings of the 20th annual conference on Computer graphics and interactive techniques, SIGGRAPH ’93, pp. 175–182.
http://doi.acm.org/…

[5] Arcot J. Preetham, Peter Shirley and Brian Smits (1999): “A practical analytic model for daylight” Proceedings of the 26th annual conference on Computer graphics and interactive techniques, SIGGRAPH ’99, pp. 91–100.
http://www2.cs.duke.edu/…

[6] Ralf Stockholm Nielsen (2003): “Real Time Rendering of Atmospheric Scattering Effects for Flight Simulators” Master’s thesis, Informatics and Mathematical Modelling, Technical University of Denmark.
http://www2.imm.dtu.dk/…

[7] Sean O’Neil (2004): “Real-Time Atmospheric Scattering”
http://archive.gamedev.net/…

[8] Sean O’Neil (2005): “Accurate Atmospheric Scattering” GPU Gems 2, Pearson Addison Wesley Prof, pp. 253–268.
http://developer.nvidia.com/…

[9] Tobias Schafhitzel, Martin Falk and Thomas Ertl (2007): “Real-Time Rendering of Planets with Atmospheres” Journal of WSCG 15, pp. 91–98.
http://citeseerx.ist.psu.edu/…

[10] Éric Bruneton and Fabrice Neyret (2008): “Precomputed Atmospheric Scattering” Proceedings of the 19th Eurographics Symposium on Rendering, Comput. Graph. Forum 27:4, pp. 1079–1086.
http://www-ljk.imag.fr/…

[11] Christian Schüler (2012): “An Approximation to the Chapman Grazing-Incidence Function for Atmospheric Scattering” GPU Pro 3, A K Peters, pp. 105–118.
http://books.google.de/…

[12] Egor Yusov (2014): “High Performance Outdoor Light Scattering using Epipolar Sampling” GPU Pro 5, A K Peters, pp. 101–126.
http://books.google.de/…

[13] Dmytro Vasylyev (2021): “Accurate analytic approximation for the Chapman grazing incidence function” Earth, Planets and Space 73:112.
http://doi.org/10.1186/s40623-021-01435-y

11 thoughts on “Followup to Atmospheric Scattering – Part 1: Overview”

  1. I would be really interested in hearing more about the color adaptation part. These are just a few questions that pop to mind:
    * How is the matrix derived exactly? I guess the matrix is the result of multiplying the spectrum-to-sRGB CMF matrix and the color adaptation matrix. For the latter, did you use a Bradford or Von Kries transform to adapt from illuminant E to illuminant D65?
    * What kind of improvements did you notice in practice by shifting color space?
    * Did you notice better results by using the CIE 2012 instead of the old 1931 CMFs?
    * In this workflow, the basis is essentially a spectrum intensity at the three different wavelengths. So, if I understand correctly, colors are not expressed in terms of “coordinates” within a gamut. Does this make it harder for artists to express colors in this way?

    • Hello Sun,
      I did a few experiments in order to settle on exactly the wavelengths given. These experiments are as of yet unpublished (but your enquiry is just another hint for me that this needs to change :D). It turns out that there is only a very narrow range of wavelengths that can be chosen to make the transmittance operator approximately diagonal.

      Once we have established that the transmittance operator is diagonal, there is no need for a Bradford transform or the like; we can just adapt the illuminant by component-wise multiplication, as if it were a transmittance through a filter (that was the whole point of diagonalizing the operator!).

      So the method to arrive at the matrix is (sketched in code below):
      — use the sRGB CMFs sampled at 615, 535 and 445 nm and stack them into a matrix ‘M*’, so that M* is the transform from the sampled color space to sRGB
      — get the components of illuminant E in sRGB
      — multiply illuminant E with the inverse of M*, giving ‘E*’
      — scale the columns of M* component-wise by E*, giving M
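
      A minimal GLSL sketch of these four steps, with the sampled CMF columns and illuminant E passed in as inputs (all names are illustrative):

      // cmf615/535/445: linear sRGB response to monochromatic light at each wavelength.
      // illumE_sRGB:    illuminant E expressed in linear sRGB.
      mat3 buildSpectralToSRGB( vec3 cmf615, vec3 cmf535, vec3 cmf445, vec3 illumE_sRGB )
      {
          mat3 Mstar = mat3( cmf615, cmf535, cmf445 );  // sampled basis -> sRGB
          vec3 Estar = inverse( Mstar ) * illumE_sRGB;  // illuminant E in the sampled basis
          // scale the columns of M* component-wise by E*, giving M
          return mat3( Mstar[0] * Estar.x, Mstar[1] * Estar.y, Mstar[2] * Estar.z );
      }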

      The color space defined by M is just like any other color space, having a gamut, a white point and its primaries.

      One example where the custom color space really shows is looking at turquoise shallow beach water at a distance (through the atmosphere). The yellow sand is transmitted through the water, giving a (probably out-of-gamut) turquoise. Then the turquoise is transmitted through the atmosphere, which de-saturates it and brings it into the sRGB gamut. But if we didn’t have the possibility to express the out-of-gamut color in the intermediate step, the end result would be much less lively.

    • @christian

      Thank you, really interesting example you brought up.

      One more thing I’m interested in is how this affects the data that are input to the simulation. Since sRGB is not used anymore, and calculations happen in M, both luminance values (e.g. the sun) as well as atmosphere parameters (e.g. Rayleigh scattering, ozone absorption) should be expressed in a different space, is that correct?
      If so, how would you derive this data to maintain correctness?

    • Yes, and they are!
      For example, in the code of the linked shadertoy you can find these definitions, among others:


      const vec4 COL_AIRGLOW = 1.0154e-6 * vec4( .8670, 1.0899, .4332, 15. );
      const vec4 COL_SUNLIGHT = vec4( 0.9420, 1.0269, 1.0242, .3 );
      const vec3 COL_P43PHOSPHOR = vec3( 0.5335, 1.2621, 0.1874 );

      etc…

      So the 615/535/445 color space is just like any other old color space where you can convolve a spectrum to get the three color values (or analogously, convert the spectrum to sRGB and use the inverse matrix).

      The ability to do spectral point-sampling with reasonable accuracy is just a bonus feature of this color space.

      (Oh, and in the examples above, whenever there is a 4th component, it is used for 1250 nm near-infrared, so you can ignore it for the sake of this argument.)
