- [map run and analysis](maprun_analysis.md) page demonstrating running on maps, including some plotting examples to illustrate the outputs.
- [portfolio run and analysis](portfoliorun_analysis.md) page demonstrating running on a portfolio of prospective locations, including some plotting examples to illustrate the outputs.
- [customised stochastic simulation](customised_stochastic_simulations.md) page demonstrating how to develop your own stochastic frameworks around the core pyThermoGIS doublet simulation functionality.
- [parallelization](parallelization.md) page describing how to parallelize simulations and determine the optimal chunk size for your hardware.
!!! info "Plotting, calculations and result analysis"
pyThermoGIS is designed to enable users to run geothermal doublet simulations.
When running `calculate_doublet_performance` or `calculate_doublet_performance_stochastic` (described in more detail in [deterministic doublet](deterministic_doublet.md) and [stochastic doublet](stochastic_doublet.md)), each combination of input reservoir properties is an independent simulation.
This makes it a good target for parallelization: using more hardware resources to run simulations simultaneously decreases the execution time.
Traditionally, parallelizing code in Python has been tricky; built-in modules such as [multiprocessing](https://docs.python.org/3/library/multiprocessing.html) were developed to handle this task, but a lot of custom code usually still had to be written to get the right setup for a specific problem.
pyThermoGIS, however, uses [xarray](https://docs.xarray.dev/en/latest/index.html), which under the hood uses [dask](https://www.dask.org/) to run parallel operations. For more details on how xarray utilizes dask for easy parallelization, we direct the reader to [Parallel Computing with Dask](https://docs.xarray.dev/en/latest/user-guide/dask.html).
This framework is already implemented in pyThermoGIS: the user simply defines the `chunk_size` parameter when calling either `calculate_doublet_performance` or `calculate_doublet_performance_stochastic`, and the doublet simulations will run in parallel.
See below for an explanation of what `chunk_size` is and how to determine its optimal value.
## What is chunk size and how to determine it?
dask parallelization works by applying an operation (in this case `simulate_doublet`) across 'chunks' of input data, i.e. groups of simulations that are each dispatched as a single job and run in parallel.
Let's say we wish to compute 1000 doublet simulations. The smallest possible `chunk_size` is 1, meaning every simulation is sent as an independent job to a processor, while the largest `chunk_size` is 1000, meaning a single job is sent to one processor and all simulations run in series.
The first extreme is inefficient because chunking carries a computational cost for organising the input and output of every job; the second is inefficient because every simulation runs in series. The optimal `chunk_size` is therefore likely to lie between these two values.
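To make the chunking concrete, here is a minimal, self-contained sketch (plain Python, independent of pyThermoGIS and dask) of how 1000 simulation inputs would be split into chunks of 100, each chunk becoming one parallel job:

```python
def make_chunks(items, chunk_size):
    """Split a list of simulation inputs into chunks of at most chunk_size."""
    return [items[i:i + chunk_size] for i in range(0, len(items), chunk_size)]

simulations = list(range(1000))          # stand-in for 1000 simulation inputs
chunks = make_chunks(simulations, 100)   # each chunk is dispatched as one job

print(len(chunks))     # → 10 jobs
print(len(chunks[0]))  # → 100 simulations per job
```

With `chunk_size=1` this produces 1000 tiny jobs (maximum scheduling overhead), and with `chunk_size=1000` a single job (no parallelism), matching the two extremes described above.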
The following figure shows how different chunk sizes affect the overall compute time. The most efficient chunk size (for the hardware this example was run on) is 100 simulations per chunk.
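The U-shaped trade-off behind the figure can be reproduced with a simple back-of-the-envelope cost model: each chunk pays a fixed scheduling overhead, and the chunks are divided over a pool of workers. The overhead, per-simulation time, and worker count below are made-up illustrative numbers, not measurements from pyThermoGIS:

```python
import math

def predicted_time(n_simulations, chunk_size, n_workers=8,
                   overhead_per_chunk=0.05, time_per_simulation=0.001):
    """Estimate wall-clock time: every chunk pays a fixed scheduling
    overhead, and chunks are spread evenly over the available workers."""
    n_chunks = math.ceil(n_simulations / chunk_size)
    chunks_per_worker = math.ceil(n_chunks / n_workers)
    return chunks_per_worker * (overhead_per_chunk
                                + chunk_size * time_per_simulation)

for chunk_size in (1, 10, 100, 1000):
    print(chunk_size, round(predicted_time(1000, chunk_size), 3))
# chunk_size=100 gives the lowest predicted time of the four:
# tiny chunks drown in overhead, one huge chunk removes all parallelism
```

The exact optimum depends on the real per-chunk overhead and per-simulation cost on your hardware, which is why the docs recommend benchmarking a few chunk sizes rather than relying on a formula.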
```python
print(
    f"parallel simulation, chunk size: {sample_chunk}, "
    f"took {np.mean(time_attempt):.1f} seconds to run {n_simulations} simulations, "
    f"{n_simulations / np.mean(time_attempt):.1f} samples per second"
)
```