Traditional image distorters use the HTML5 <canvas> API to rasterize the image, read
the pixel data, calculate coordinate offsets, and redraw the pixels frame by frame. This engine
instead delegates actual rendering to the browser's native hardware-accelerated CSS compositor. The catch is that CSS
transform functions like skew() or rotate() are strictly affine: they
permit parallelograms but never arbitrary quadrilaterals. Genuine perspective
foreshortening inherently requires a non-affine (projective) transformation. The engine computes a 4x4 homogeneous
matrix representing 3D space and leans on the perspective divide: each transformed point is divided by its derived homogeneous coordinate
($W$), projecting the 3D coordinates mathematically
back into 2D screen space. The finished matrix is supplied to the matrix3d() CSS function.
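As a sketch of the underlying math (the function name, corner ordering, and solver below are illustrative; the engine's actual internals may differ), here is the standard square-to-quad homography construction embedded into the column-major string that matrix3d() expects:

```ts
type Point = { x: number; y: number };

// Square-to-quad homography (Heckbert's formulation): maps the unit square
// (0,0),(1,0),(1,1),(0,1) onto target corners p0..p3 given in that order
// (top-left, top-right, bottom-right, bottom-left).
function matrix3dForQuad(w: number, h: number, p: [Point, Point, Point, Point]): string {
  const [p0, p1, p2, p3] = p;
  const sx = p0.x - p1.x + p2.x - p3.x;
  const sy = p0.y - p1.y + p2.y - p3.y;

  let a: number, b: number, d: number, e: number, g: number, hh: number;
  if (sx === 0 && sy === 0) {
    // The quad is a parallelogram: the projective terms vanish (affine case).
    a = p1.x - p0.x; b = p2.x - p1.x; g = 0;
    d = p1.y - p0.y; e = p2.y - p1.y; hh = 0;
  } else {
    const dx1 = p1.x - p2.x, dx2 = p3.x - p2.x;
    const dy1 = p1.y - p2.y, dy2 = p3.y - p2.y;
    const det = dx1 * dy2 - dx2 * dy1;
    g = (sx * dy2 - sy * dx2) / det;  // these two coefficients feed the W row,
    hh = (dx1 * sy - dy1 * sx) / det; // i.e. the perspective divide
    a = p1.x - p0.x + g * p1.x;
    b = p3.x - p0.x + hh * p3.x;
    d = p1.y - p0.y + g * p1.y;
    e = p3.y - p0.y + hh * p3.y;
  }
  const c = p0.x, f = p0.y;

  // Pre-scale from the element's pixel space (w x h) to the unit square, then
  // list the 4x4 matrix in the column-major order matrix3d() expects.
  const m = [
    a / w, d / w, 0, g / w,   // column 1
    b / h, e / h, 0, hh / h,  // column 2
    0,     0,     1, 0,       // column 3 (z passes through untouched)
    c,     f,     0, 1,       // column 4 (translation and the W constant)
  ];
  return `matrix3d(${m.join(",")})`;
}

// Usage: pin a 200x100 element's corners to an arbitrary quadrilateral.
// el.style.transformOrigin = "0 0"; // the math above assumes a top-left origin
// el.style.transform = matrix3dForQuad(200, 100, [
//   { x: 10, y: 20 }, { x: 230, y: 5 }, { x: 240, y: 140 }, { x: 0, y: 120 },
// ]);
```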
The application is entirely zero-dependency and client-side by design. It requires no backend Node orchestration, no build steps (Webpack/Vite), and no imported NPM libraries. It exists as a transparent, standalone HTML sandbox to ensure it remains universally readable, instantly executable in any standard browser, and completely unburdened by external package decay.
Yes. The 4x4 homogeneous matrix math used to calculate perspective distortion is
fundamental linear algebra, not a modern browser syntax quirk. As long as CSS
specifications continue to support matrix3d(), the engine remains effectively
immune to deprecation.
Processing files locally keeps DOM read/write performance maximal and removes any bandwidth dependency. Relying on cloud imports would introduce network latency during blob generation and compromise the engine's strict zero-dependency philosophy by requiring persistent API integrations.
It does not bypass them. YouTube and similar protected media platforms
deploy X-Frame-Options restrictions, encrypted playback streams (DRM), and proprietary
client-side blob handling that explicitly prevent third-party applications from scraping raw
audio/video bitstreams into custom <video> elements. You must point the tool at a direct `.mp4` or
`.webm` file link, or simply upload local media.
When you select a local file, the browser invokes URL.createObjectURL(). This mints
a volatile blob: URL, a local reference to the File object you picked; nothing is read over
the network or uploaded anywhere. These pseudo-URLs are entirely ephemeral: they are scoped to the
document that created them, so closing the browser tab or reloading the workspace automatically
revokes the reference and lets the browser reclaim the associated memory.
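A minimal sketch of that pattern (the element IDs and handler wiring here are illustrative, not the engine's actual markup):

```ts
// Illustrative wiring; the real engine's element names will differ.
const input = document.querySelector<HTMLInputElement>("#file-picker")!;
const video = document.querySelector<HTMLVideoElement>("#master-video")!;

let currentUrl: string | null = null;

input.addEventListener("change", () => {
  const file = input.files?.[0];
  if (!file) return;

  // Release the previous blob: URL so its backing resource can be freed
  // immediately instead of waiting for the tab to close.
  if (currentUrl) URL.revokeObjectURL(currentUrl);

  // A blob: URL referencing the local File object; nothing is uploaded.
  currentUrl = URL.createObjectURL(file);
  video.src = currentUrl;
});
```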
No. The terms "Master" and "Proxy" refer specifically to DOM elements residing within
your own local browser tab. The engine instantiates a single, hidden "Master" <video>
element that carries the audio, then clones muted "Proxy" instances of it into every tile.
A render loop strictly snaps the timeline of every Proxy node to the
timestamp of the Master. Zero visual or audio data ever leaves your hardware.
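A minimal sketch of the clone step (the function and element names are illustrative assumptions):

```ts
// The hidden Master carries the audio; each grid tile gets a muted clone.
function buildProxies(master: HTMLVideoElement, tiles: HTMLElement[]): HTMLVideoElement[] {
  return tiles.map((tile) => {
    const proxy = master.cloneNode() as HTMLVideoElement; // copies src/attributes, not playback state
    proxy.muted = true;   // only the Master is audible
    void proxy.play();    // proxies render video frames only
    tile.appendChild(proxy);
    return proxy;
  });
}
```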
Every newly segmented grid tile mandates an independent matrix inversion and localized CSS paint computation per frame, so the per-frame cost grows with the square of the grid's side length. Because both the math and the painting occur natively in the browser's DOM compositor, excessive grid densities (e.g., above 20x20, i.e., 400+ tiles) saturate the hardware rendering threads and exceed standard GPU raster capacity.
Active video synchronization forces the browser engine to command the timeline of over a hundred Proxy tiles frame by frame. While this prevents audio/visual tearing across grid boundaries, the constant timeline-seeking is an intense CPU load. Disabling Sync ends this strict timeline enforcement, immediately cutting CPU overhead on lower-tier hardware at the acceptable cost of minor temporal drift between tiles.
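A sketch of that trade-off (the flag name and drift tolerance are illustrative assumptions, and `buildProxies` refers to the hypothetical helper above):

```ts
let syncEnabled = true;       // toggled by the Sync control
const DRIFT_TOLERANCE = 0.05; // seconds; re-seek only past this threshold

function tick(master: HTMLVideoElement, proxies: HTMLVideoElement[]): void {
  if (syncEnabled) {
    // Strict enforcement: check (and possibly re-seek) every proxy, every frame.
    for (const proxy of proxies) {
      if (Math.abs(proxy.currentTime - master.currentTime) > DRIFT_TOLERANCE) {
        proxy.currentTime = master.currentTime; // the seeks are the expensive part
      }
    }
  }
  // With sync disabled, proxies free-run and drift slightly apart, trading
  // minor temporal offset between tiles for a large CPU saving.
  requestAnimationFrame(() => tick(master, proxies));
}
```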
No. The application's state is volatile by design: no state-management layer
exists to persist your matrix3d() readouts beyond the lifecycle of the active
browser tab.
Because the engine never draws to an HTML5 <canvas>, there is no flat, rasterized pixel buffer to serialize into a `.png` or `.webm` byte stream. The visual output is an illusion created by dozens of aggressively warped DOM elements stacked on top of one another, so recording it requires screen-capture software.
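One hedged way to do that capture without leaving the browser (this is the standard Screen Capture API, not a feature of the engine; the function name and duration are illustrative):

```ts
// Records the user-selected tab/window for a few seconds, then downloads
// a .webm. This is ordinary browser screen capture, not an engine export.
async function recordScreen(durationMs = 5000): Promise<void> {
  const stream = await navigator.mediaDevices.getDisplayMedia({ video: true });
  const recorder = new MediaRecorder(stream, { mimeType: "video/webm" });
  const chunks: Blob[] = [];

  recorder.ondataavailable = (e) => chunks.push(e.data);
  recorder.onstop = () => {
    stream.getTracks().forEach((t) => t.stop());
    const url = URL.createObjectURL(new Blob(chunks, { type: "video/webm" }));
    const a = Object.assign(document.createElement("a"), {
      href: url,
      download: "capture.webm",
    });
    a.click();
    URL.revokeObjectURL(url);
  };

  recorder.start();
  setTimeout(() => recorder.stop(), durationMs);
}
```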
No. The sandbox executes completely offline. It possesses no analytics integrations, no remote server ping mechanisms, and zero external tracking scripts.
No, it preserves them. This is the functional advantage of a Pure-DOM engine. When you warp an interface on a typical <canvas>, you are merely stretching a flattened, rasterized "photograph" of the UI. Because this Sandbox instead transforms native HTML elements with spatial CSS calculations, the browser's underlying hit-detection and accessibility hierarchy remain wholly intact.
Interactive interfaces, CSS hover states, embedded iframes, and login forms pushed through extreme 3D distortion remain mechanically functional and accessible to screen readers exactly as if they were untransformed.
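A tiny self-contained sketch of that claim (the matrix values are arbitrary): even under an extreme matrix3d(), the browser inverse-transforms pointer coordinates during hit-testing, so the button still receives its click.

```ts
// A button inside a heavily warped container remains clickable because
// hit-testing follows the CSS transform, not the original layout box.
const tile = document.createElement("div");
tile.style.transformOrigin = "0 0";
tile.style.transform =
  "matrix3d(0.9,0.2,0,0.0005, -0.3,1.1,0,0.0002, 0,0,1,0, 40,20,0,1)";

const button = document.createElement("button");
button.textContent = "Still clickable";
button.addEventListener("click", () => console.log("hit-testing intact"));

tile.appendChild(button);
document.body.appendChild(tile);
```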