More examples of OSC-Links between TD and SC
Example: triggering sounds from video data
This example demonstrates going in the other direction: generating sound that depends on image data.
The TD network analyzes temporal changes (movement) in the video stream and triggers a sound when the amount of change exceeds a certain threshold.
The approach to comparing frames for motion detection follows several steps. First, both frames (current and previous) are converted to grayscale; a blur can also be applied to reduce noise from small irrelevant changes in lighting or camera sensor noise, which makes the motion detection more robust. Then the difference between the pixel values of the current and previous frame is calculated: areas with motion show up as non-zero values (grey), while static areas stay close to zero (black). Optionally, a threshold is applied to the difference image to create a binary mask in which significant changes are marked as white (1) and insignificant changes as black (0). This helps eliminate minor variations that are not actual motion.
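A minimal sketch of these steps, written with OpenCV and NumPy (both are assumptions made here for illustration; the TD network itself builds the same chain out of TOPs):

import cv2
import numpy as np

def motion_score(prev_frame, curr_frame, threshold=25):
    """Return the fraction of pixels that changed between two frames."""
    # 1. Convert both frames to grayscale.
    prev_gray = cv2.cvtColor(prev_frame, cv2.COLOR_BGR2GRAY)
    curr_gray = cv2.cvtColor(curr_frame, cv2.COLOR_BGR2GRAY)

    # 2. Blur to suppress sensor noise and small lighting fluctuations.
    prev_gray = cv2.GaussianBlur(prev_gray, (5, 5), 0)
    curr_gray = cv2.GaussianBlur(curr_gray, (5, 5), 0)

    # 3. Per-pixel absolute difference: moving areas become non-zero (grey),
    #    static areas stay close to zero (black).
    diff = cv2.absdiff(prev_gray, curr_gray)

    # 4. Optional thresholding: binary mask, white (255) marks significant change.
    _, mask = cv2.threshold(diff, threshold, 255, cv2.THRESH_BINARY)

    # Summarize motion as the ratio of changed pixels to all pixels.
    return np.count_nonzero(mask) / mask.size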
Limitations: This basic approach has some drawbacks. Changes in lighting can be misinterpreted as motion. In live situations it is also sensitive to camera movement: shaking the camera can trigger false detections. Slow-moving objects might not be detected if their frame-to-frame difference stays below the threshold.
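Once a motion score is available, a simple threshold comparison and an OSC message complete the trigger chain described above. The following sketch assumes the motion_score() helper from the previous example and the python-osc package; the address "/motion/trigger" and the 0.05 threshold are illustrative choices, not values from the original setup. SuperCollider's language listens on port 57120 by default, where an OSCdef can receive the message and start a synth.

import cv2
from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient("127.0.0.1", 57120)  # sclang's default OSC port

cap = cv2.VideoCapture(0)        # live camera input
ok, prev_frame = cap.read()

while ok:
    ok, curr_frame = cap.read()
    if not ok:
        break
    score = motion_score(prev_frame, curr_frame)
    if score > 0.05:             # enough pixels changed: fire a sound in SC
        client.send_message("/motion/trigger", score)
    prev_frame = curr_frame

cap.release()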

Creating sounds from image data