# Parallel Fractal Renderer
C/C++ Python CUDA OpenMP MPI PyQt5
## The Problem
Fractal rendering is embarrassingly parallel — each pixel can be computed independently. This makes it a perfect benchmark for comparing parallelization strategies. The question: how do CPU threading, GPU computing, and distributed computing compare when applied to the same compute-intensive task?
## The Approach
This project implements Mandelbrot and Julia set renderers using three distinct parallelization paradigms, providing a direct comparison of their performance characteristics on the same mathematical problem.
### OpenMP (CPU Multi-threading)
Parallelizes the pixel computation loop across CPU cores using shared-memory threading. Minimal code changes from the sequential version — pragmas handle the work distribution.
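In C, that "minimal code change" is essentially one pragma on the pixel loop. A hedged sketch, with illustrative function and parameter names rather than this project's exact code:

```c
/* Standard escape-time count for one pixel. */
static int escape_count(double cr, double ci, int max_iter) {
    double zr = 0.0, zi = 0.0;
    int n = 0;
    while (zr * zr + zi * zi <= 4.0 && n < max_iter) {
        double tmp = zr * zr - zi * zi + cr;
        zi = 2.0 * zr * zi + ci;
        zr = tmp;
        n++;
    }
    return n;
}

/* Render the region [x0,x1] x [y0,y1] into out[w*h] (w, h >= 2).
 * collapse(2) splits both loops across threads, and schedule(dynamic)
 * balances the uneven per-pixel cost; compiled without -fopenmp the
 * pragma is ignored and the loop runs sequentially with the same result. */
void render_omp(int *out, int w, int h,
                double x0, double x1, double y0, double y1, int max_iter) {
    #pragma omp parallel for collapse(2) schedule(dynamic)
    for (int py = 0; py < h; py++) {
        for (int px = 0; px < w; px++) {
            double cr = x0 + (x1 - x0) * px / (w - 1);
            double ci = y0 + (y1 - y0) * py / (h - 1);
            out[py * w + px] = escape_count(cr, ci, max_iter);
        }
    }
}
```

Dynamic scheduling matters here: pixels inside the set run the full `max_iter` loop while escaped pixels finish early, so a static split can leave some threads idle.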
### CUDA (GPU Computing)
Offloads the computation to the GPU, where thousands of CUDA cores process pixels concurrently. Each thread computes a single pixel's escape iteration count, maximizing throughput for this compute-bound task.
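A sketch of what such a kernel looks like, assuming a 2D launch configuration that maps one thread to one pixel (kernel and helper names are illustrative, not this project's exact code):

```cuda
/* Standard escape-time count, compiled for the device. */
__device__ int escape_count(double cr, double ci, int max_iter) {
    double zr = 0.0, zi = 0.0;
    int n = 0;
    while (zr * zr + zi * zi <= 4.0 && n < max_iter) {
        double tmp = zr * zr - zi * zi + cr;
        zi = 2.0 * zr * zi + ci;
        zr = tmp;
        n++;
    }
    return n;
}

/* One thread per pixel: global thread indices map directly to (px, py). */
__global__ void mandelbrot_kernel(int *out, int w, int h,
                                  double x0, double x1,
                                  double y0, double y1, int max_iter) {
    int px = blockIdx.x * blockDim.x + threadIdx.x;
    int py = blockIdx.y * blockDim.y + threadIdx.y;
    if (px >= w || py >= h) return;  /* guard partial blocks at the edges */
    double cr = x0 + (x1 - x0) * px / (w - 1);
    double ci = y0 + (y1 - y0) * py / (h - 1);
    out[py * w + px] = escape_count(cr, ci, max_iter);
}

/* Host-side launch sketch: 16x16 threads per block, enough blocks to
 * cover the whole image (d_out is a device buffer of w*h ints):
 *
 *   dim3 block(16, 16);
 *   dim3 grid((w + 15) / 16, (h + 15) / 16);
 *   mandelbrot_kernel<<<grid, block>>>(d_out, w, h, x0, x1, y0, y1, max_iter);
 */
```

Threads in a warp that diverge (some pixels escape early, some run to `max_iter`) serialize, which is one reason measured GPU speedup varies across regions of the set.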
### MPI (Distributed Computing)
Distributes the rendering workload across multiple processes (potentially on different machines). Each process renders a horizontal strip of the fractal, and results are gathered back to the root process.
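A hedged sketch of that strip decomposition, assuming an as-even-as-possible row split and a variable-sized gather via `MPI_Gatherv` (all names and the fixed image size are illustrative):

```c
#include <mpi.h>
#include <stdlib.h>

/* Standard escape-time count for one pixel. */
static int escape_count(double cr, double ci, int max_iter) {
    double zr = 0.0, zi = 0.0;
    int n = 0;
    while (zr * zr + zi * zi <= 4.0 && n < max_iter) {
        double tmp = zr * zr - zi * zi + cr;
        zi = 2.0 * zr * zi + ci;
        zr = tmp;
        n++;
    }
    return n;
}

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    const int w = 800, h = 600, max_iter = 256;
    const double x0 = -2.0, x1 = 1.0, y0 = -1.5, y1 = 1.5;

    /* Split rows as evenly as possible; the first h % size ranks get one extra. */
    int base = h / size, rem = h % size;
    int rows = base + (rank < rem ? 1 : 0);
    int row0 = rank * base + (rank < rem ? rank : rem);

    /* Each rank renders only its own horizontal strip. */
    int *strip = malloc((size_t)rows * w * sizeof *strip);
    for (int py = 0; py < rows; py++)
        for (int px = 0; px < w; px++) {
            double cr = x0 + (x1 - x0) * px / (w - 1);
            double ci = y0 + (y1 - y0) * (row0 + py) / (h - 1);
            strip[py * w + px] = escape_count(cr, ci, max_iter);
        }

    /* Rank 0 gathers the variable-sized strips into the full image. */
    int *counts = NULL, *displs = NULL, *image = NULL;
    if (rank == 0) {
        counts = malloc(size * sizeof *counts);
        displs = malloc(size * sizeof *displs);
        image  = malloc((size_t)w * h * sizeof *image);
        for (int r = 0, off = 0; r < size; r++) {
            counts[r] = (base + (r < rem ? 1 : 0)) * w;
            displs[r] = off;
            off += counts[r];
        }
    }
    MPI_Gatherv(strip, rows * w, MPI_INT,
                image, counts, displs, MPI_INT, 0, MPI_COMM_WORLD);

    free(strip); free(counts); free(displs); free(image);
    MPI_Finalize();
    return 0;
}
```

Unlike the OpenMP version, the cost of moving each strip across the network is part of the measured time, which is why MPI tends to pay off only once the per-pixel work dominates communication.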
## Key Features
- Three paradigms — OpenMP, CUDA, MPI — on the same problem
- Mandelbrot and Julia sets with configurable parameters
- Performance benchmarking — wall-clock time, speedup, efficiency metrics
- PyQt5 visualization for interactive fractal exploration
- Scalability analysis across different problem sizes and core counts
## Performance Comparison
| Paradigm | Best For | Scalability |
|---|---|---|
| OpenMP | Single multi-core machine | Limited by core count |
| CUDA | GPU-equipped systems | Massive parallelism (thousands of cores) |
| MPI | Multi-machine clusters | Scales across network |
## Architecture
```mermaid
graph TD
    A[Fractal Parameters] --> B{Paradigm}
    B --> C[OpenMP]
    B --> D[CUDA]
    B --> E[MPI]
    C --> F[CPU Threads]
    D --> G[GPU Kernels]
    E --> H[Distributed Processes]
    F --> I[Rendered Image]
    G --> I
    H --> I
    I --> J[PyQt5 Display]
```
## Tech Stack
| Component | Technology |
|---|---|
| CPU Parallelism | OpenMP |
| GPU Computing | CUDA |
| Distributed | MPI |
| Core Language | C/C++ |
| Visualization | Python, PyQt5 |