Maximilian Heitsch


Designers are becoming engineers, and makers a machine. How will AI slide into the design process, and what will a future automated design practice look like?

AI will be in the toolset of every future designer, perhaps as a personalized algorithm, perhaps as a way to maximize creative output by automating trivial processes and generating visual material. What will be the point of purchasing stock photos if a machine can create a perfect image composition from its own creative intelligence?

We have entered a time in which automated creation is possible. But what does a future automated design process look like? Rather than designers producing a single end result, perhaps the process itself, and the rule sets by which it is automated, will become far more important.

Designers are becoming engineers, and makers a machine.

 

As an exercise in training and learning, designers' faces were filmed and spliced into 10,000 still frames. Once the images were cropped and augmented, they were fed as training material1 to a deep convolutional generative adversarial network (DCGAN). A DCGAN is a network architecture in which two algorithms compete with each other: one constantly tries to create "counterfeits" of the learned material, while the other aims to expose those copies as fakes. Every time one of the machines succeeds in deceiving the other or in uncovering the truth, its counterpart becomes a little more "intelligent". Through this exchange, both algorithms gradually get better at what they do, training each other.
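
In code, this exchange comes down to two loss functions pulling against each other. The following is only a rough sketch, written in a TensorFlow 2 style (the project's own code is not published); the `generator` and `discriminator` it expects stand in for the actual DCGAN models, outlined after the next paragraph.

```python
# Minimal sketch of the adversarial game described above (assumed TF2-style code,
# not the project's actual implementation).
import tensorflow as tf

cross_entropy = tf.keras.losses.BinaryCrossentropy(from_logits=True)
gen_optimizer = tf.keras.optimizers.Adam(2e-4, beta_1=0.5)
disc_optimizer = tf.keras.optimizers.Adam(2e-4, beta_1=0.5)

@tf.function
def train_step(generator, discriminator, real_images, noise_dim=100):
    noise = tf.random.normal([tf.shape(real_images)[0], noise_dim])
    with tf.GradientTape() as gen_tape, tf.GradientTape() as disc_tape:
        fake_images = generator(noise, training=True)

        real_logits = discriminator(real_images, training=True)
        fake_logits = discriminator(fake_images, training=True)

        # The discriminator tries to label real frames 1 and "counterfeits" 0 ...
        disc_loss = (cross_entropy(tf.ones_like(real_logits), real_logits) +
                     cross_entropy(tf.zeros_like(fake_logits), fake_logits))
        # ... while the generator is rewarded whenever its counterfeits
        # are mistaken for real frames.
        gen_loss = cross_entropy(tf.ones_like(fake_logits), fake_logits)

    gen_grads = gen_tape.gradient(gen_loss, generator.trainable_variables)
    disc_grads = disc_tape.gradient(disc_loss, discriminator.trainable_variables)
    gen_optimizer.apply_gradients(zip(gen_grads, generator.trainable_variables))
    disc_optimizer.apply_gradients(zip(disc_grads, discriminator.trainable_variables))
    return gen_loss, disc_loss
```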

In addition, the convolutional layers of artificial neurons enhance the learning process by emulating the natural interaction of neural cells. Each layer acts as the start point, operator and end point of the information passing through it. This technique helps speed up training on larger datasets, such as images.
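
One possible layout of those convolutional layers, again only as an illustration: the generator upsamples a latent noise vector to 480×480 pixels through strided transposed convolutions, and the discriminator mirrors it on the way back down to a single "real or counterfeit" judgement. Filter counts and depths are assumptions, not the project's actual settings.

```python
# Assumed DCGAN-style layer stacks for the 480x480 resolution mentioned below.
import tensorflow as tf
from tensorflow.keras import layers

def make_generator(noise_dim=100):
    # Upsample a latent vector step by step: 15 -> 30 -> 60 -> 120 -> 240 -> 480.
    model = tf.keras.Sequential([
        layers.Dense(15 * 15 * 512, use_bias=False, input_shape=(noise_dim,)),
        layers.BatchNormalization(),
        layers.ReLU(),
        layers.Reshape((15, 15, 512)),
    ])
    for filters in (256, 128, 64, 32):
        model.add(layers.Conv2DTranspose(filters, 5, strides=2, padding="same", use_bias=False))
        model.add(layers.BatchNormalization())
        model.add(layers.ReLU())
    # Final layer maps to 3 colour channels in [-1, 1].
    model.add(layers.Conv2DTranspose(3, 5, strides=2, padding="same", activation="tanh"))
    return model

def make_discriminator():
    # Mirror of the generator: strided convolutions downsample 480 -> 15,
    # then a single logit says "real" or "counterfeit".
    model = tf.keras.Sequential([layers.Input(shape=(480, 480, 3))])
    for filters in (32, 64, 128, 256, 512):
        model.add(layers.Conv2D(filters, 5, strides=2, padding="same"))
        model.add(layers.LeakyReLU(0.2))
        model.add(layers.Dropout(0.3))
    model.add(layers.Flatten())
    model.add(layers.Dense(1))  # raw logit, matching from_logits=True above
    return model
```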

The DCGAN was implemented in TensorFlow and written in Python. The AI was trained for around 150 hours, always at night, at 480×480 input and output resolution, in batches of 64. The videos were generated by shifting the noise value fed into the DCGAN for each image. After a few hundred PNGs had been generated, they were concatenated using the ffmpeg module.
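
The frame-generation step might look roughly like this, under the same assumptions. The noise-shifting scheme shown here (a small random step per frame) and the ffmpeg settings are illustrative, not the project's exact pipeline; the concatenation uses the standard ffmpeg command line.

```python
# Sketch: shift the latent noise a little for every frame so the faces morph,
# write each output as a PNG, then join the stills into a video with ffmpeg.
import subprocess
import tensorflow as tf

def render_frames(generator, n_frames=300, noise_dim=100, step=0.05):
    z = tf.random.normal([1, noise_dim])
    for i in range(n_frames):
        # Shift the noise value slightly between frames.
        z += step * tf.random.normal([1, noise_dim])
        image = generator(z, training=False)[0]           # values in [-1, 1]
        image = tf.cast((image + 1.0) * 127.5, tf.uint8)  # rescale to [0, 255]
        tf.io.write_file(f"frame_{i:04d}.png", tf.io.encode_png(image))

    # Concatenate the stills into a video.
    subprocess.run([
        "ffmpeg", "-y", "-framerate", "25", "-i", "frame_%04d.png",
        "-pix_fmt", "yuv420p", "out.mp4",
    ], check=True)
```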

And so a new creator is born, one that can generate endless creations based on its own cognitive capabilities. In these particular images, the AI has created a symbiosis of the designers' human features, producing a trans-individual original. Designers not as individuals, but as a rudimentary collective__

 

1

Testing material for the AI

 

 

Biography

Maximilian Heitsch is a Munich-based creative working in the fields of art, technology and graphic design. In 2012 he co-founded the design and technology studio Moby Digg. In his work he focuses on the interplay of shape, color and emerging technologies such as AI and AR.