JESSIE ZHAI

Digital artist

A documentation page of rambling and experimenting.

@jessiehaseyes
jessiejiranzhai@gmail.com
11 FINAL
ICM WEEK10





For my final project’s creative assignment, I’ve created an installation, “Smile, You’re Not on Camera,” a playful take on rejecting mass surveillance.

SMILE, YOU’RE NOT ON CAMERA

surveillance camera as memory archive


IDEATION

When I worked at a photo studio in New York, surveillance cameras were used to monitor loading docks, manage freight elevators, and prevent theft. But they also captured special moments between my coworkers and friends. Watching these moments on playback became an unexpected source of joy.

Growing up in China, where there are over 700 million surveillance cameras across the country, I’ve always been both fascinated by and uneasy about being watched, being reduced to just moving pixels on a screen.




Visually, I want to connect p5 to an external camera disguised as a security camera, and use ml5 to map out the individuals being captured and make them glow. When two bodies come into contact, differently colored auras generate around them. When they hug, animated elements burst out from the glowing cluster.

I would like to install an external camera on the ceiling pointing down at the viewers from behind, while they watch the footage displayed on a CRT TV screen.





To me, the project is a rejection of mass surveillance and an opportunity to see ourselves through a detached, third-person perspective: a poetic visualization of the magic in human connection.


PROCESS

I first started experimenting with the ml5 body segmentation model, which masks the detected body out from the canvas. It gave me a fairly easy starting point for experimenting with the visuals I’m looking for.



For the glow effect, I first experimented with bloom and glow shaders. I found a reference for a bloom shader and tried to incorporate it into my sketch. The reference runs three different passes, blurring the image and averaging the colors across off-screen buffers made with the createGraphics() function.



However, when I tried to incorporate it into my sketch, I ran into a couple of problems. Since the shader layers the canvas and averages the RGB colors, it doesn’t work well with the body segmentation model, which separates foreground and background by masking the detected bodies white and the background black.

Some interesting glitches happened while I was adjusting the sketches. This one feeds the segmentation mask back into itself, which created an infinity glitch inside the detected body.



Luckily, I got some advice from Jack B. Du, who helped me figure out methods other than editing the shader settings to create the effect I’m looking for. By adding a blur filter and the ADD blend mode, I was able to blur the edges of the mask and boost the whiteness to create a glowing effect. This method works especially well because I’m using white; it might not work as well for other colors.
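The blur-plus-ADD trick can be sketched outside p5 as a plain pixel operation. A minimal 1-D grayscale version with hypothetical helper names; in the actual sketch this is just filter(BLUR) followed by blendMode(ADD) while redrawing the mask:

```javascript
// Simple box blur over a 1-D grayscale row (0–255 values).
function boxBlur(row, radius) {
  return row.map((_, i) => {
    let sum = 0, count = 0;
    for (let j = i - radius; j <= i + radius; j++) {
      if (j >= 0 && j < row.length) { sum += row[j]; count++; }
    }
    return sum / count;
  });
}

// Additive blend, clamped to 255: the core stays full white while the
// blurred copy spreads soft light past the hard mask edge.
function addBlend(a, b) {
  return a.map((v, i) => Math.min(255, v + b[i]));
}

// A hard-edged white mask (0 = background, 255 = detected body).
const mask = [0, 0, 0, 255, 255, 255, 0, 0, 0];
const glow = addBlend(mask, boxBlur(mask, 1));
console.log(glow); // edges now carry a soft halo instead of a hard cut
```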






P5 sketch with body segmentation, external camera connection, and filters for glow effects:




Tests with the sketch above, connected to a CRT TV:


Side Challenges

With the general aesthetics set up how I wanted, a couple more ideas came to mind for the installation of the project. I met up with Shawn, who gave me some examples and ideas about art and experiences in the realm of surveillance. He also gave me two actual security cameras to test out, so I could get real camera movements and an on-screen date and time readout.


I was very excited to test them out; however, it turned out to be a week-long struggle just trying to connect to one. I got some help from Marlon, Gabe, and Shawn on connecting it as a webcam. Although the camera is detected, it does not show any visuals. When it is connected directly to the CRT TV via an RCA cable, some video information comes through, but it only shows up as distorted static. We figured it might need some other converter in order to work as a webcam.

However, Shawn showed me a way to connect the security camera to p5 based on a previous sketch of his for class. This way, the footage shows up as an image stream instead of a video. The remaining problem is that the p5.js web editor runs over https rather than http, with firewalls and protections that keep the sketch from functioning properly. Shawn mentioned I could also serve the sketch over the local network instead of the web editor to get around the browser’s protections, which I might try later.

In other words, a week passed with no progress other than the aftermath all the testing left on my table.




Interaction


The next challenge was figuring out what type of interaction I want to trigger when people come into contact. I met up with Ellen to discuss a couple of problems I noticed with the original sketch; for example, the original ml5 model cannot detect poses unless they are within a certain range, which is far closer than I wanted. It is also very unstable, constantly capturing white objects in the room instead of human figures. We tested some other models, for example using the BodyPix model with only a few keypoints on the body to simulate a human form.



I also did some research on how to achieve a similar effect with TouchDesigner’s MediaPipe. It can create a very similar look using the image segmentation model together with bloom, RGB key, and over nodes. In Torin’s example, several models are already connected to the MediaPipe node; however, the pose tracking can only detect one person at a time. Perhaps I could switch to a different model later.



Back in p5, with John Henry’s help I was able to combine the BodyPix and Selfie Segmentation models. By using two separate callbacks, gotPoses and gotResults, the two models run at the same time.
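The two-model setup can be sketched roughly like this, with the callbacks stubbed using plain objects; in the real sketch they receive results from BodyPix and Selfie Segmentation respectively:

```javascript
// Each model writes into its own piece of state via its own callback,
// so neither blocks or overwrites the other.
let poses = [];
let segmentationMask = null;

function gotPoses(results) {
  poses = results; // keypoint data from BodyPix
}

function gotResults(result) {
  segmentationMask = result.mask; // mask image from Selfie Segmentation
}

// Simulated callbacks firing independently, as they would per frame:
gotPoses([{ keypoints: [] }]);
gotResults({ mask: "maskImage" });
console.log(poses.length, segmentationMask);
```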

Here I tried an effect where, when the wrists are put together, a rainbow gradient is generated at the contact point.



I didn’t love how the gradient looked on top of the grayscale image, so I thought of the circle target sketch we did at the beginning of the semester and decided to turn it into an animation that appears at the contact point.

I’m using the same color array that I’ve been using for the whole semester:


To set up the target first, I created a for loop that draws five circles with increments, so that no matter how the animation changes, there are always five circles.
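A minimal sketch of that loop; the baseSize and spacing values here are placeholders, not the ones from my sketch:

```javascript
// Five concentric ring radii, each one spacing step larger than the last.
function targetRadii(baseSize, spacing) {
  const radii = [];
  for (let i = 0; i < 5; i++) {
    radii.push(baseSize + i * spacing);
  }
  return radii;
}

console.log(targetRadii(20, 15)); // always exactly five rings
```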


Since I wanted the interaction to only happen when two people put their hands together, I first needed to check whether there is more than one person in frame with poses.length; then, for each pose, I identified the right and left wrists using the keypoints in the BodyPix model.

I used the confidence threshold to filter the keypoints, then checked whether the hands are within a range of 200. Since there are two pairs of hands, I set up four pairs of checks (left 1 + left 2, left 2 + right 1, right 2 + left 1, right 1 + right 2).
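The four cross-person checks can be sketched as one helper. The pose shape is simplified and the confidence filtering is left out; the 200-pixel threshold matches the text:

```javascript
// Euclidean distance between two keypoints.
function dist(a, b) {
  return Math.hypot(a.x - b.x, a.y - b.y);
}

// Test the four cross-person wrist pairs against the threshold;
// any one pair being close enough counts as contact.
function wristsTouching(pose1, pose2, threshold = 200) {
  const pairs = [
    [pose1.leftWrist, pose2.leftWrist],
    [pose1.leftWrist, pose2.rightWrist],
    [pose1.rightWrist, pose2.leftWrist],
    [pose1.rightWrist, pose2.rightWrist],
  ];
  return pairs.some(([a, b]) => dist(a, b) < threshold);
}

const p1 = { leftWrist: { x: 100, y: 100 }, rightWrist: { x: 300, y: 100 } };
const p2 = { leftWrist: { x: 320, y: 110 }, rightWrist: { x: 600, y: 100 } };
console.log(wristsTouching(p1, p2)); // right 1 and left 2 are close
```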

Then I created a wrist contact function to define what happens once the wrists touch: it sets the isHandContact boolean to true and pushes the contact into the hand contacts array.
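A rough sketch of that bookkeeping, with isHandContact and handContacts named as in the text; storing the midpoint of the two wrists is my assumption:

```javascript
// Shared state the rest of the sketch reads each frame.
let isHandContact = false;
const handContacts = [];

// Called once a wrist pair passes the distance check.
function wristContact(a, b) {
  isHandContact = true;
  // Record the midpoint so the target animation knows where to draw.
  handContacts.push({ x: (a.x + b.x) / 2, y: (a.y + b.y) / 2 });
}

wristContact({ x: 100, y: 200 }, { x: 120, y: 220 });
console.log(isHandContact, handContacts[0]);
```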


The main interaction happens when hands are actually in contact. Besides displaying the target circles, I also wanted the circles to have a pulse effect, so I used a sine wave to create the movement. I wanted the circles to grow and shrink while maintaining somewhat even spacing between each other, so I set a range for the max and min spacing and used lerp to set the value for the pulse. The sizes of the circles start from the base size I set in the for loop in the setup function, and increase depending on the index as well as the spacing range.
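The pulse math can be sketched as plain numbers; the minSpacing and maxSpacing values here are placeholders:

```javascript
// Linear interpolation, same as p5's lerp().
function lerp(a, b, t) {
  return a + (b - a) * t;
}

// A sine wave mapped from [-1, 1] to [0, 1] drives the ring spacing,
// so all five rings breathe together while staying evenly spaced.
function pulsedSizes(baseSize, t, minSpacing = 10, maxSpacing = 30) {
  const amt = (Math.sin(t) + 1) / 2;
  const spacing = lerp(minSpacing, maxSpacing, amt);
  const sizes = [];
  for (let i = 0; i < 5; i++) sizes.push(baseSize + i * spacing);
  return sizes;
}

console.log(pulsedSizes(20, Math.PI / 2)); // sin = 1, so spacing is at its max
```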



Touchups


To add to the security footage aesthetic, I wanted a timestamp with the date and time, so I created a print time function that calls the get date and time functions to build a string of data, and printed it at the bottom left of the screen.
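A minimal sketch of that string building with a plain Date; p5’s month(), day(), hour(), minute(), and second() functions wrap the same values:

```javascript
// Zero-pad to two digits so "9:5:3" reads as "09:05:03".
function pad(n) {
  return String(n).padStart(2, "0");
}

// Build a security-footage style "MM/DD/YYYY HH:MM:SS" string.
function timestamp(d = new Date()) {
  const date = `${pad(d.getMonth() + 1)}/${pad(d.getDate())}/${d.getFullYear()}`;
  const time = `${pad(d.getHours())}:${pad(d.getMinutes())}:${pad(d.getSeconds())}`;
  return `${date} ${time}`;
}

console.log(timestamp(new Date(2024, 10, 25, 9, 5, 3))); // "11/25/2024 09:05:03"
```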




I also wanted to add to the sense of surveillance, so I decided to draw squares around people’s heads. Since the BodyPix model only detects face keypoints but not the full head, I set the square at an equal distance from the left and right ears and used that as the radius of the red square. At first I only set this once, so when two people appeared on screen, the sketch started measuring the midpoint between different people’s left and right ears, causing the red square to appear in the middle; I fixed it by setting the square for each pose with a for loop at the beginning.
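The per-pose square can be sketched as a small helper; the keypoint shape is simplified, and in the real sketch the ear positions come from each BodyPix pose:

```javascript
// Center the square between the ears and size it from the ear-to-ear
// distance: half that distance acts as the square's "radius".
function headBox(leftEar, rightEar) {
  const cx = (leftEar.x + rightEar.x) / 2;
  const cy = (leftEar.y + rightEar.y) / 2;
  const r = Math.hypot(rightEar.x - leftEar.x, rightEar.y - leftEar.y) / 2;
  return { x: cx - r, y: cy - r, size: 2 * r };
}

// One box per pose, so two people never share a box floating between them.
const poses = [
  { leftEar: { x: 90, y: 50 }, rightEar: { x: 110, y: 50 } },
  { leftEar: { x: 390, y: 60 }, rightEar: { x: 410, y: 60 } },
];
for (const pose of poses) {
  console.log(headBox(pose.leftEar, pose.rightEar));
}
```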






Final





REFERENCES

Body Segmentation – ml5
Ellen’s external camera sketch

Shawn’s security camera sketch

Claude AI


SOLVED PROBLEMS

 
  • Combining two machine learning models
  • Connecting to an external camera (thank you Ellen!)
  • Using blend mode to add effects to the segmentation mask (thanks Jack!)
  • Adding interactions to keypoints and contact points


THINGS TO IMPROVE

  • I would love to actually use the security camera instead of a webcam
  • While talking to Shawn, I was inspired by his sketch and had the idea to perhaps use ML5 poses to change camera angle and sizes. When I have time, that would be the next thing to explore