When you hear the phrase "working with data", what do you think about?
Crunching numbers, optimising costs, recommending the next song on Spotify, finding a correlation between a customer's gender and what size iPhone they are likely to buy?
Well, you're right! But is that all we can use data for - to produce other data?
Until recently I thought so. But then I listened to a Towards Data Science podcast episode titled "Machine Learning as a creative tool". That prompted me to explore how Machine Learning and Deep Learning are being used to create art.
The creations I have found astounded me. At Le Wagon, we often have to break the myth that coding is about math and logic. Coding is, in fact, a creative process. But the applications of Data Science for art that I have found take the creativity of coding to a whole other level. I want to take you on the same journey and I encourage you to explore the original articles linked in this post.
Let's start with the place many of us start our creative journeys - kindergarten.
It all started with data scientists at Google (it usually does, doesn't it?). They made the game Quick, Draw!, which uses a neural network to recognise doodles. But this article is not about image recognition - it's about using Data Science to create art!
So, three data scientists and engineers decided to do just that. They used the data collected by Google's Quick, Draw! and created sketch-rnn - a recurrent neural network model that continues your doodle based on what you've already drawn.
The model can do even more than help you (or your kid) finish a doodle. It can also do things like interpolation - transforming one doodle into another by filling in the intermediary steps - and variational auto-encoding, which tries to replicate the style of your doodle and create its own version. You can even task the neural network with mixing things together - for example, by doodling a cat and asking the neural network to replicate it using a dataset of crab images. Here come the crats!
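The interpolation trick is easier to picture with a little code. Here is a minimal sketch in Python (not the actual sketch-rnn API - the latent vectors below are random stand-ins for what the model's encoder would produce): you blend two latent codes step by step, and each in-between vector would be decoded into one frame of the morph.

```python
import numpy as np

# Hypothetical latent codes for two doodles. In a real model these come
# from the encoder; here we just use random stand-ins.
rng = np.random.default_rng(0)
z_cat = rng.normal(size=128)   # latent code for a "cat" doodle
z_crab = rng.normal(size=128)  # latent code for a "crab" doodle

def interpolate(z_a, z_b, steps=5):
    """Linearly blend two latent codes to get the in-between doodles."""
    return [(1 - t) * z_a + t * z_b for t in np.linspace(0.0, 1.0, steps)]

frames = interpolate(z_cat, z_crab)
# Each intermediate vector would be fed to the decoder to draw one frame
# of the cat-to-crab morph. The endpoints are the original doodles.
print(len(frames))  # 5
```

The decoder call is left out on purpose - the point is only that "interpolation" means walking a straight line between two points in the model's latent space.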
More than a doodle - a zentangle
If you are hearing the word "zentangle" for the first time, you are not alone - it was the first time I'd heard it too. But it's an art form we have all seen, and might even be guilty of ourselves, especially during those boring phone calls.
But Machine Learning enthusiast Kalai Ramea has been a fan of them for a while, and decided to apply her data science knowledge to create a "zentangle machine".
She used a technique called neural style transfer. It generates an output (usually an image) from a combination of features of multiple inputs - in this case, an image with a style pattern and an image of a silhouette that needs styling - while trying to preserve the key features of both initial images. The results are very cool! While this creates intricate art, Kalai also predicts that the same technique can be used for any kind of visual design generation - from logos to t-shirts.
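Under the hood, style transfer optimises a single number: a weighted sum of a "content loss" (does the output still look like the silhouette?) and a "style loss" (do its feature correlations, captured by a Gram matrix, match the pattern image?). A toy single-layer sketch in NumPy, with random arrays standing in for the CNN feature maps a real implementation would use:

```python
import numpy as np

def gram_matrix(features):
    """Style is captured by feature correlations: the Gram matrix."""
    c, h, w = features.shape
    f = features.reshape(c, h * w)
    return f @ f.T / (h * w)

def style_transfer_loss(gen, content, style, alpha=1.0, beta=100.0):
    """Weighted sum of content loss and style loss (toy one-layer version)."""
    content_loss = np.mean((gen - content) ** 2)
    style_loss = np.mean((gram_matrix(gen) - gram_matrix(style)) ** 2)
    return alpha * content_loss + beta * style_loss

rng = np.random.default_rng(1)
content_feat = rng.normal(size=(8, 16, 16))  # stand-in for CNN feature maps
style_feat = rng.normal(size=(8, 16, 16))

generated = content_feat.copy()  # start the output from the content image
print(style_transfer_loss(generated, content_feat, style_feat))
```

A real style-transfer run then nudges the generated image, step by step, in whatever direction shrinks this loss - that's the "minimising" the paragraph above describes.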
AI competes with painters at the auction
Not impressed yet? Okay, let's look at something more impressive - $400,000 more.
Have you heard of the famous Edmond de Belamy? Or the Barons of Belamy? What about the Count of Belamy? No??? That's because these are not real people. But they have pretty real portraits painted of them. And while these look like 18th-century artworks, they aren't. They were painted in 2018 by a generative adversarial network, built by the genius French art collective Obvious.
Generative adversarial networks (GANs) have been a massive leap in Deep Learning in recent years. A GAN is a system of two networks - the generator and the discriminator. Simply put, the two networks play a continuous game: the discriminator judges whether the content (in this case, a painting) made by the generator is "real" or "fake", and the generator keeps producing content based on the discriminator's feedback. This way the generator is trained with every round, until it is able to trick the "gatekeeper"! The bottom right portrait made this way, Edmond de Belamy, was picked up by Christie's - one of the major auction houses worldwide. The auction house expected it to sell for under $10,000. It went for $432,500. And you thought it was the creative jobs that were safe from robots, ha. The same art collective, Obvious, now has a new project - using GANs to generate 18th-century Japanese-style art!
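The tug-of-war between the two networks comes down to two opposing loss functions. A deliberately tiny sketch (a one-parameter "discriminator" scoring 1-D "paintings" - nothing like a real image GAN, but the losses have the same shape):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def discriminator(x, w=1.0, b=0.0):
    """Toy one-parameter critic: scores a 1-D 'painting' x in (0, 1)."""
    return sigmoid(w * x + b)

def discriminator_loss(real, fake, w=1.0, b=0.0):
    """The discriminator wants D(real) near 1 and D(fake) near 0."""
    return -np.log(discriminator(real, w, b)) - np.log(1 - discriminator(fake, w, b))

def generator_loss(fake, w=1.0, b=0.0):
    """The generator wants the opposite: D(fake) near 1, i.e. to fool D."""
    return -np.log(discriminator(fake, w, b))

# A convincing fake (one the discriminator scores highly) costs the
# generator less than an obvious one.
print(generator_loss(2.0) < generator_loss(-2.0))  # True
```

Training alternates between the two: update the discriminator to lower its loss, then the generator to lower its own - each round of the game the paragraph above describes is one such pair of updates.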
⬛️ AI tries its hand at modern art
The artworks so far have all had some sort of logic or "rules" around them - a defined style, a human shape and face. But what happens when we task a machine to be more... expressionist?
William Anderson did exactly that and trained a machine learning model to create Bauhaus-style art. William used a technique called Markov chains. This algorithm is all around us - from predictive texting on your phone, to autocomplete in emails, to social media bots.
The way Markov chains work is that they try to "guess" the next object based on the previous sequence of objects. The secret sauce is the "order" of the chain. For example, if a phone's text predictor uses a first-order Markov chain, it will only look at the previous word to predict the next one. If it uses a third-order Markov chain, it will look at the previous three words. Now back to art!
What if we feed the machine learning model pieces of art - pixel by pixel - instead of words? The model starts looking at sequences of colours and guessing what to draw next! I'll admit, something feels off compared to the real Bauhaus paintings above. But the interesting thing here is that relatively little data is needed to train this model, and with Markov chains we're literally asking a machine to follow its gut feeling and make art!
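A Markov chain over colours fits in a few lines. This is my own minimal sketch, not William's code: it counts which colour follows each length-`order` context in a tiny "painting", then paints new pixels by sampling from those counts.

```python
import random
from collections import defaultdict

def build_chain(sequence, order=1):
    """Count which item follows each length-`order` context."""
    chain = defaultdict(list)
    for i in range(len(sequence) - order):
        context = tuple(sequence[i:i + order])
        chain[context].append(sequence[i + order])
    return chain

def generate(chain, seed, length, order=1):
    """Extend the seed by repeatedly sampling a follower for the last context."""
    out = list(seed)
    for _ in range(length):
        context = tuple(out[-order:])
        out.append(random.choice(chain[context]))
    return out

# A tiny "painting" as a sequence of colours, pixel by pixel.
pixels = ["red", "red", "blue", "red", "red", "blue", "red", "red", "blue"]
chain = build_chain(pixels, order=2)

print(generate(chain, seed=["red", "red"], length=6, order=2))
# ['red', 'red', 'blue', 'red', 'red', 'blue', 'red', 'red']
```

With this toy training sequence every context has exactly one possible follower, so the output is deterministic; feed it a real painting's pixels and the sampling becomes the "gut feeling" described above.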
Be sure to check out William's article for more examples - and even an AI-made T-shirt!
Computers are useless. They can only give answers.
Pablo Picasso, 1968
Picasso could not predict what modern-day computers would be capable of. In the same way, we can't predict what will be possible a few decades from today. But I like where this is going.