Wednesday, October 31, 2018

WEEK #7

This week I connected the gas sensor to TouchDesigner. My animation is finally responding to breathing!
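
Since the raw sensor signal is quite jittery, the value needs smoothing before it drives the animation. Here is a minimal sketch of that step in plain Python (the smoothing factor and sample values are assumptions; in my TouchDesigner network a Filter/Lag CHOP plays this role):

```python
# Sketch: smooth the raw, jittery gas-sensor signal before it drives
# the animation, so the rose doesn't flicker with every reading.
# ALPHA is an assumed value, not a calibrated one.

ALPHA = 0.1  # 0..1; smaller = smoother but slower response

def smooth(readings, alpha=ALPHA):
    """Exponential moving average over a stream of sensor readings."""
    avg = None
    for r in readings:
        avg = r if avg is None else alpha * r + (1 - alpha) * avg
        yield avg

raw = [410, 800, 790, 820, 450, 430]  # made-up CO2 readings
print([round(v) for v in smooth(raw)])
```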

I want to set up a new animation and change this rose. As David recommended, I tried Maya's brush tool, but unfortunately I wasn't happy with the result. Next week, a beta version of my installation will be ready.


Sunday, October 28, 2018

WEEK #6

This week I began remodeling my plant in C4D, because Rhino is not a good option for exporting the model to use it for motion effects. I wanted to finish modeling my drawings in 3D but didn't get there.

Actually, I mainly worked (and got stuck) all week on my Voronoi surfboard for the Physical Prototyping Process class (and I didn't manage to get what I wanted).

Since time is passing very fast, this week I put together a timetable that I will have to stick to in order to get everything done before the deadline:


WEEK #7 - 11/05
“3D environment” MODELED + Arduino / CO2 sensor / TouchDesigner CONNECTED
WEEK #8 - 11/12
“3D environment” beta version READY + “3D environment” in TouchDesigner IMPLEMENTED
WEEK #9 - 11/19
“Depth Display” Kinect 3D camera tracking system IMPROVED (test different versions in Unity)
WEEK #10 - 11/26
New motion effects ADDED + CO2 mask ASSEMBLED
WEEK #11 - 12/03
System V1 WORKING (find a title) (+ motion tracking in the streets or AR app of 3D model)
WEEK #12 - 12/10
Video / PDF presentation + Blog finishing (+ Website page) DONE
WEEK #13 - 12/17
… (fixing issues)
WEEK #14 - 12/24
Presentation

Sunday, October 21, 2018

WEEK #5

This week I had to pursue my idea and develop the details. As we discussed, my goal for next class was to think about and draw some plants in order to add complexity and personality to the 3D moving rose.

Here are some drawings:


Then I tried to model a plant in 3D. These days I'm learning how to use Rhino for the Prototyping Process class, so I tried to model it with that software. I was WRONG!
- First, it's not very straightforward to create organic surfaces, and
- Second, if you want to use your object in a 3D animation software like C4D to make it look more organic, the conversion from NURBS to polygons messes everything up!

I will never do that again; I'll start my modeling over from scratch in C4D.

Here are some shots of what I began to create with Rhino:






After that I worked on some improvements to my Depth Display TouchDesigner project. The display works great with long shapes, but when I added my 3D "breathing" rose alone in the composition, the perspective illusion wasn't strong. I was disappointed, but I could have predicted it. I hoped I could recover the illusion by positioning many roses throughout the range of the 3D camera, but in fact I don't think it looks as good as the long shapes.

This raised a big question for me, and I started reviewing my entire project idea.

Finally I came up with the idea of modeling many different plants to create a natural 3D environment. First, this environment will be suited to this kind of display, and second, I can reuse it across many different mediums. I have the idea of making a motion-tracked video of this environment in the city, and I can also create an AR application. And personally, it's a good occasion to build a stock of personal visuals that I can use for VJing or any other videos.

The AR application shouldn't be a problem because I've already made something similar. When I thought about it, I suddenly realized that Unity may actually also be the easiest way to create the installation itself. I could build the depth display there and connect the Arduino to Unity, I guess. And obviously, Unity would be much more comfortable for working with 3D than TouchDesigner, since it's a game engine.
I have to think about it.

Here are some screenshots of my TouchDesigner Depth Display tests:



Next week I will receive the gas sensor for the Arduino, so I'll be able to see whether my breathing idea works well or not.

Sunday, October 14, 2018

WEEK #4

Media Environment Studio Project 1


  • Theme: Humans' impact on their environment / the human ego




  • Description:


A 3D model of a flower is displayed on a screen. The viewer breathes into a mask while watching this screen (around 50 to 100 cm in front of their eyes). When they breathe, the flower dies and rots; when they hold their breath, the flower comes back to life.
When the flower is alive, a relaxing sound plays, but the more the flower dies, the more the sound is distorted and transformed by a phase-shifting feedback.




  • Technical information:


Inside the mask there is a gas sensor (MQ135) measuring the CO2 that the viewer exhales. The sensor is connected by a cable to an Arduino hidden under the screen. The Arduino sends this data to TouchDesigner. When the CO2 level goes up, TouchDesigner makes the 3D model of the flower inflate; when it drops back to the normal level, TouchDesigner makes the flower grow again.
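
As a rough sketch of the computer side of this pipeline, assuming the Arduino simply prints one CO2 reading per line over USB serial (the port name, baud rate, and calibration range below are placeholders, not my actual setup):

```python
# Sketch: read CO2 values from the Arduino over USB serial
# and turn them into a 0..1 "decay" amount for the flower.
# Assumes the Arduino prints one integer reading per line.
import serial  # pyserial

PORT = "COM3"                  # placeholder: the Arduino's serial port
BAUD = 9600                    # must match Serial.begin() in the sketch
CO2_MIN, CO2_MAX = 400, 1000   # placeholder calibration range

def remap(value, lo, hi):
    """Map value from [lo, hi] to [0, 1], clamped."""
    t = (value - lo) / (hi - lo)
    return max(0.0, min(1.0, t))

with serial.Serial(PORT, BAUD, timeout=1) as ser:
    while True:
        line = ser.readline().decode(errors="ignore").strip()
        if not line:
            continue
        co2 = float(line)
        decay = remap(co2, CO2_MIN, CO2_MAX)
        print(f"CO2={co2:.0f}  decay={decay:.2f}")  # drive the flower with this
```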

A Kinect tracks the viewer's head/upper body. If the viewer moves their head to the right, the left, up, or down, the 3D model rotates in the opposite direction, giving the viewer a feeling of depth on the flat screen.
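
A small sketch of the idea behind this counter-rotation (the range and gain are made-up values, not calibrated ones): the head offset reported by the Kinect is normalized, clamped, inverted, and scaled into a scene rotation.

```python
# Sketch of the parallax trick: rotate the scene opposite
# to the head movement so the flat screen feels like a window.
# HEAD_RANGE_M and MAX_ANGLE_DEG are placeholder assumptions.

HEAD_RANGE_M = 0.40   # assumed +/- head travel in meters around center
MAX_ANGLE_DEG = 15.0  # assumed maximum scene rotation

def head_to_rotation(head_x_m, head_y_m):
    """Convert head offset from screen center (meters) to scene rotation (degrees)."""
    nx = max(-1.0, min(1.0, head_x_m / HEAD_RANGE_M))
    ny = max(-1.0, min(1.0, head_y_m / HEAD_RANGE_M))
    # Negative sign: the scene rotates opposite to the head.
    return -nx * MAX_ANGLE_DEG, -ny * MAX_ANGLE_DEG

print(head_to_rotation(0.2, -0.1))  # head right and slightly down
```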



In my opinion, using a mask can paradoxically put more distance between the human world and the nature the viewer tries to connect with during this experience. It feels like forcing the viewer into contact with a "nature" that, in the future, some people may not even be able to recognize as a fake flower, because I think some people living in cities will someday have difficulty finding any living being other than other humans.





Kinect TouchDesigner test:






Sunday, October 7, 2018

WEEK #3

I really want to work on an interaction between humans (society) and nature. I like the idea of using a flower as a metaphor for Nature; it's easy to understand.

But before committing to any project idea, my main goal is to learn new tools and understand the possibilities I have.



1/ C4D: Melting Rose

I first wanted to learn some physics simulation in 3D software. I wanted to make a flower rot, but that looked a bit complicated, so as a first stage I decided to make it melt/collapse from the bottom of the stem.
Then I also had to learn how to bake the animation in order to export it to other software.




2/ TouchDesigner: Interaction

Then I wanted to find a way to drive the animation's playback with a data input, i.e. a sensor. As I don't have many sensors around me at the moment, the easiest option was the mouse's X position. I used TouchDesigner to make this idea work.

I first brought in the image sequence rendered from my 3D animation to understand how things work. I initially tried to connect the mouse's X position to the animation's speed parameter, but that didn't work: the video just kept looping slower, faster, or backward. It took me a while to realize I had to connect the mouse position to the frame index instead. I also had to map the range of the X position to my sequence's frame count (91), and I wanted the center of the screen to be the beginning of the animation.
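
Here is that mapping as a small plain-Python sketch (my actual network does this with CHOPs; I'm assuming the mouse X arrives normalized to 0..1 and that the distance from the screen center scrubs through the 91 frames symmetrically):

```python
# Sketch: map a normalized mouse X position (0..1) to a frame index (0..90),
# with the center of the screen as the start of the animation.
# Assumes moving away from center in either direction scrubs forward.

FRAME_COUNT = 91  # frames in the rendered image sequence

def mouse_x_to_frame(x):
    """x in [0, 1]; returns an integer frame index in [0, FRAME_COUNT - 1]."""
    distance = abs(x - 0.5) * 2.0          # 0 at center, 1 at either edge
    distance = max(0.0, min(1.0, distance))
    return int(round(distance * (FRAME_COUNT - 1)))

print(mouse_x_to_frame(0.5))   # 0  -> animation start at screen center
print(mouse_x_to_frame(1.0))   # 90 -> last frame at the edge
```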

Once I understood that, I was ready to export a 3D animation file from C4D. TouchDesigner prefers Alembic files for 3D animations, so that's what I gave it. Then, same process: I connected the mouse's X position to the frame index. This time I also had to learn how to render a 3D object live in TouchDesigner.






Finally, I wanted to control my Arduino project from the Prototyping Process class through TouchDesigner instead of Processing. I looked for information on the internet but didn't find enough, even though I know it's possible. I still tried something, but I didn't succeed: I managed to get TouchDesigner to receive data from the Arduino, but not the other way around. My plan was to map the mouse's X position to the angular rotation of a servo motor. I will postpone this idea.
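
For reference, here is a generic two-way serial sketch in Python using pyserial, outside TouchDesigner; the port name, baud rate, and the one-byte angle protocol are assumptions of mine, not what my patch actually did:

```python
# Sketch: two-way serial with the Arduino.
# Reads sensor lines from the board and sends back a servo angle (0..180)
# as a single byte. The protocol is an assumption, not my actual TD setup.
import serial  # pyserial

PORT, BAUD = "COM3", 9600  # placeholders; must match the Arduino sketch

def x_to_angle(mouse_x):
    """Map normalized mouse X (0..1) to a servo angle (0..180)."""
    return int(max(0.0, min(1.0, mouse_x)) * 180)

with serial.Serial(PORT, BAUD, timeout=1) as ser:
    ser.write(bytes([x_to_angle(0.25)]))       # send: servo to 45 degrees
    incoming = ser.readline().decode(errors="ignore").strip()
    print("from Arduino:", incoming)           # receive: one sensor line
```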


The functions I used to try to get/send information to the Arduino



Next week I would like to track the viewer's head position and make it drive TouchDesigner's 3D camera in order to simulate a depth screen. Maybe I can try something simple with the webcam, or I can try the Kinect. I would also love to learn how to get data out of the Kinect.