Blender is an incredibly versatile 3D tool, and that versatility extends into areas that may not be expected, with applications that have the potential to be very useful and are simply waiting to be explored.

Recently, I have explored building certain mathematical and physics computational models in Blender. The goal is ultimately to raise awareness of Blender as a potential option for this kind of work, and to show how, through its geometry node and procedural material systems, it can simulate the Rayleigh Criterion while parameters are actively changed with very intuitive visual indicators.

One of the biggest hurdles in the standard approach to 3D visualized simulations is, in fact, the 3D itself. OpenGL and the multitude of language-specific wrappers that exist still present a considerable challenge to configure, demanding a significant investment of time, resources, and specialization before anything is in operational order. In essence, approaching this from a typical programmatic paradigm would result in most of the effort being spent on trivial 3D and shader work.

Blender adopts a node-based system that abstracts away the configuration of procedural elements. This is significantly easier than writing GLSL or other shader languages in conjunction with OpenGL. The node system not only allows for manipulation of textures, but also mathematical manipulation of different forms of noise, colors, and vectors, as well as simulation of fluids and gases through CPU baking.

The recent release of Blender 3.0 has made this much more feasible. Previously, modifier stacks, drivers, and shape key manipulation were the standard for procedural manipulation of base geometry; the new version added geometry nodes, which allow geometry to be manipulated in a far more expansive and streamlined manner. Being able to drive geometry from pre-determined calculations (i.e. formulas), to create functional recursive algorithms by exploiting geometric boundary conditions, and to easily create and manipulate primitives alongside the base geometry gives Blender significant potential as a much more flexible and agile alternative to the standard approach for these types of workloads.

Additionally, the ease of manipulating variables, including incident and target positions, wavelength, and target temperature, in this and similar computational contexts provides a level of feedback and responsiveness that allows much more granular control and robust analysis of both minor and major changes within the system.

Node Groups and other organizational paradigms make these system parameters much easier to manipulate, and allow schemes to be abstracted while remaining modifiable. This ability to create abstraction without having to build a dedicated GUI, simply by linking inputs and outputs in a Node Group, presents a very powerful means of creating visualization tools with relatively little time overhead for the graphical aspect of a given computational workload.

Additionally, this can prove useful for coincident physics simulations, or when multiple variables influence a single simulation. For example, Blender can simulate rigid bodies responding both to other rigid bodies in the simulation and to force fields such as turbulence, force, and drag. Lighting contributions from simulations (including this one) can be displayed in a physically accurate manner on the target object, as well as on adjacent objects.

The Rayleigh Criterion is an approximation used in physics settings for simulating fairly ideal, best-case optics and beam shape for lasers. Achieving the Rayleigh Criterion in practice is very difficult but possible, especially in lasers without space constraints. A Gaussian beam, for example, is theoretically better, but is even harder for practical laser optics to achieve.

The equation for the Rayleigh Criterion without using small angle approximations is:

\[\sin{\theta} = 1.22 \cdot \frac{\lambda}{d} \]

Where 1.22 represents the first zero/minimum in the Bessel function, λ represents the wavelength of the beam, and d is the diameter of the lens.

Plugging into the Rayleigh Criterion yields an angle. Without exhausting too much technical detail, this angle gives the minimum angular spread of, in this case, a beam. It is defined as the minimum angle at which two points through a given lens or aperture are distinguishable from each other.
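As a quick numerical sketch of this calculation (the wavelength and lens diameter here are assumed example values, not taken from the scene):

```python
import math

# Rayleigh criterion: sin(theta) = 1.22 * (wavelength / diameter)
wavelength = 532e-9  # beam wavelength in meters (assumed: a green laser)
diameter = 5e-3      # lens diameter in meters (assumed)

sin_theta = 1.22 * wavelength / diameter
theta = math.asin(sin_theta)  # minimum angular spread, in radians
```

For such a small ratio, θ is nearly identical to sin θ, which is why small-angle approximations are common here; the full form is kept for generality.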

Now that the Rayleigh Criterion has been defined as the first minimum of the Bessel function multiplied by the ratio between the wavelength and the diameter of the lens, the surface area of the beam at a given distance can be defined as well. First, the radius of the beam must be computed.

To get the distance between the incident and the target, the first step is to subtract the incident position vector from the target position vector. The vectors are in 3D space (x, y, z), and the magnitude of the resulting vector is the total distance between the two points.

\[\vec{p_x} = \vec{p_1} - \vec{p_0} \]
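This subtraction and the resulting distance can be sketched outside Blender as follows (the positions are assumed for illustration):

```python
import math

p0 = (1.0, 2.0, 0.0)  # incident position vector (assumed)
p1 = (4.0, 6.0, 0.0)  # target position vector (assumed)

# componentwise subtraction: p_x = p1 - p0
p_x = tuple(t - i for i, t in zip(p0, p1))

# the magnitude of p_x is the straight-line distance between the two points
distance = math.sqrt(sum(c * c for c in p_x))
```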

The radius of the beam can be obtained via manipulation of the resulting value from the previous calculation.

\[\arcsin{\sin{\theta}} = \arcsin{1.22 \cdot \frac{\lambda}{d}} \]

Using inverse sine on both sides yields the value of θ.

To visualize how to get the beam radius, think of a right triangle in which the beam radius is the side opposite θ.

The distance away is represented by the adjacent side. To get the value of the opposite side, the computation can be described as:

\[\tan{\theta} = \frac{r}{l} \]

This can be re-formulated to:

\[r = l \cdot \tan{\theta} \]

Where r is the radius and l is the length (or distance) from the incident point of the beam. The tangent of θ comes from the definition of tangent (opposite over adjacent) in the first equation, which is then solved for r to obtain the radius.
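Putting the angle and the distance together, the radius computation can be sketched as follows (the wavelength, lens diameter, and distance are assumed example values):

```python
import math

# angle from the Rayleigh criterion (assumed inputs)
wavelength = 532e-9  # meters
diameter = 5e-3      # meters
theta = math.asin(1.22 * wavelength / diameter)

l = 100.0                # distance from the incident point, in meters (assumed)
r = l * math.tan(theta)  # beam radius at distance l
```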

Additionally, the absolute value (magnitude) has to be used as an input, since distance is a scalar quantity while position in 3D space is represented as a vector.

This introduces a slight complication. Because the computation is multi-dimensional, the distance along each axis would contribute a different radius. However, treating the incident position vector as a reference frame shows that the beam radius only changes when the maximum component's value changes: when a position component that is not the maximum changes, the beam only has to rotate rather than travel any additional distance.

To enforce this, the elementwise maximum of the vector is computed:
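That reduction can be sketched as follows (the displacement components are assumed for illustration):

```python
# take the largest absolute component of the displacement vector;
# per the reasoning above, only this component changes the beam's travel distance
p_x = (3.0, -7.0, 2.0)  # assumed displacement components
beam_distance = max(abs(c) for c in p_x)
```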

The beam surface area can be computed simply with the circle area equation after the radius is computed:

\[A = \pi r^2 \]

Another benefit of having real-time addressable computations is the extrapolation of useful data beyond the visualization itself. An example is the Stefan-Boltzmann Law, which in this instance can be used to calculate the minimum wattage needed at a given distance to reach a given temperature. For example, cotton has an ignition temperature of 680 Kelvin, which can be used as an input to the equation.

The equation itself is expressed as:

\[P = A \cdot \epsilon \sigma T^4 \]

Where P is power in watts, A is the surface area of the beam in this context, ε is the emissivity (or the ability of a material to emit thermal radiation) of a given material, σ is the Stefan-Boltzmann constant, and T is the target temperature in Kelvin.
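As a numerical sketch of this step (the emissivity and beam radius here are assumed illustration values, not measurements from the scene):

```python
import math

sigma = 5.670374419e-8  # Stefan-Boltzmann constant, W / (m^2 K^4)
T = 680.0               # target temperature in Kelvin (cotton ignition, from the text)
eps = 0.77              # emissivity (assumed value for cotton)
r = 1.3e-2              # beam radius in meters (assumed)
A = math.pi * r ** 2    # beam surface area, m^2

P = A * eps * sigma * T ** 4  # minimum power in watts
```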

*Note: Blender does not properly display float values below .001 in node previews, so the above image shows 0.*

For the purposes of visual clarity, the incident and the target position vectors are represented with cubes in the simulation.

The beam path itself can be represented as a curve line segment, where the starting point is the incident point and the endpoint is the elementwise multiplication of the normalized target position vector by the total distance computed earlier.

\[\vec{p_{end}} = \hat{p_0} \lvert \vec{p_x} \rvert \]
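Outside of nodes, that normalize-and-scale step can be sketched as follows (the vector and length values are assumed for illustration):

```python
import math

def scale_to_length(v, length):
    # normalize v, then scale it to the requested length
    mag = math.sqrt(sum(c * c for c in v))
    return tuple(c / mag * length for c in v)

# assumed values: a position vector and a previously computed total distance
end_point = scale_to_length((3.0, 4.0, 0.0), 10.0)
```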

To apply the appropriate thickness to the laser beam according to the radius, the Set Curve Radius node is used on the curve line together with the Curve Parameter, so that the thickness increases linearly as the beam approaches the target.

To display the correct wavelength color and power for the laser, and to keep the material synchronized with any changes in the geometry node system, the wavelength provided in Geometry Nodes for the prior calculations is output as an attribute.

The material for the laser geometry is straightforward: it uses an emission shader.

The wavelength goes directly into the color input, while the power and beam area attributes are divided to obtain the optical power density of the laser beam. One point to note is that Blender uses the general SI units for irradiance (watts per square meter). Optical power density is normally quoted in watts per square centimeter, which is more practical since most beam surface areas are very small, but to remain physically accurate with respect to lighting, the SI units are kept here.
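The division the material performs can be sketched as follows (the power and area values are assumed for illustration):

```python
P = 15.0            # beam power in watts (assumed)
A = 5.3e-4          # beam cross-sectional area in square meters (assumed)
irradiance = P / A  # optical power density in W/m^2, matching Blender's SI units
```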

This paper presents the premise and application of utilizing Blender for computational workloads, and demonstrates the workflow for procedurally visualizing a simulation in a 3D environment, both geometrically and materially, along with the underlying mathematical implementation. There is a legitimate, not unfounded concern about the lack of adoption of this technique in implementations such as this one. However, the relatively little experience required compared to the graphical APIs that would otherwise have to be employed, the growing maturity of Blender both as an industry-standard tool in various 3DCG workloads and in feature set, and the flexibility to incorporate more complex use cases all demonstrate the capability of this software to be used in this field.