One of my favorite science fiction tropes is a future in which robots have become so sophisticated that they serve as intelligent, almost human, servants. Boston Dynamics currently produces the most sophisticated robots of this kind, machines capable of moving freely and interacting with people in various ways. But even science fiction has difficulty picturing a world where robots can think for themselves. That world is closer than you might imagine.
In the not-too-distant future, machines and robots will not only improve but also start to show signs of creativity. Before long, they may even be able to produce simple creative work faster than people can. Yet even though I think robots can mimic human creativity, I don't believe this is the same as genuine creativity. Sceptical? Let's first look at the technological developments that will drive these breakthroughs, and then turn to my predictions for the kinds of work that creative people will soon lose to robots:
1) Making mental models: Much robotics research has aimed to increase machines' independence, for example by enabling them to move autonomously through a new environment or to recognize faces and commands. The major advances of the near future, however, will come from research into making machines mimic human thought. This work has become far more efficient in recent years thanks to big data, deep-learning algorithms, and the capacity to distribute processing across thousands of computers in the cloud. For instance, Skype can now translate a real-time video chat between two people speaking different languages. The European Union has already committed €1 billion over ten years to research that will likely include experiments replicating the thinking processes of the human brain. Before that, IBM developed Watson, a new class of knowledge supercomputer that won the game show "Jeopardy." Jeopardy questions are frequently vague and rely on cryptic implications, so Watson had to analyze queries in a more human-like way to reply, and it did so extremely well. This is in contrast to earlier supercomputers, which were simply used to search data faster.
Watson has only grown more formidable and impressive over time. Two examples show it already displaying creative traits: coming up with concepts that no human encoded into it, that no human had previously conceived of, and that humans have judged to be excellent. (1a) Chef Watson develops new dishes. Through a partnership between IBM and the food publication Bon Appetit, Watson was given access to the magazine's extensive recipe database. It learned from successful recipes which elements work well together, then investigated the data in more detail, examining the molecular structure of individual ingredients, their flavor profiles, and how they respond to cooking. (1b) Writing a TED talk of its own. IBM has launched the Watson AI XPrize, which will reward the team that designs an AI able to craft its own compelling TED talk.
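The ingredient-pairing idea behind Chef Watson can be sketched in a few lines. This is only a toy under an assumed heuristic (ingredients that share more flavor compounds pair well); the compound sets below are illustrative, not real food chemistry, and bear no relation to Watson's actual models:

```python
# Toy sketch of compound-based flavor pairing in the spirit of Chef Watson.
# The compound sets are made up for illustration; the heuristic assumed
# here is that ingredients sharing more flavor compounds pair well.
flavors = {
    "strawberry": {"furaneol", "esters", "linalool"},
    "tomato":     {"furaneol", "esters", "hexanal"},
    "chocolate":  {"pyrazines", "vanillin", "esters"},
}

def pairing_score(a, b):
    # Count the flavor compounds two ingredients have in common.
    return len(flavors[a] & flavors[b])

def best_partner(ingredient):
    # Pick the other ingredient with the highest compound overlap.
    others = [i for i in flavors if i != ingredient]
    return max(others, key=lambda o: pairing_score(ingredient, o))

print(best_partner("strawberry"))  # tomato: shares two compounds here
```

A real system would, as described above, layer molecular data, flavor profiles, and cooking behavior on top of this kind of overlap score.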
2) Machine learning: Many scientists are investigating how to let robots gradually build up knowledge of their surroundings, making them more autonomous and reducing the need for humans to pre-program all the information they require. To learn how to move, these machines study their own bodies and acquire new skills much as young children do; some can even model what is going on in the minds of the people they interact with. Fascinating as that is, the major innovations will come from letting machines that can learn concepts learn from the data on the internet. In 2012, Google built a neural network running on 16,000 processors and fed it random thumbnail images from YouTube. With no prior knowledge, it constructed a sense of resemblance across numerous photos and identified the most common object. It was a cat, in case you couldn't guess. Thanks, YouTube! Given more processing power and time, such machines will soon look at objects and grasp the descriptions humans have attached to them and the meaning people give them. Around 2015, Google took this a step further by letting these systems view images in a "dreamlike" manner, in order to understand how the network was actually "seeing" the images fed to it. The system was allowed to amplify portions of an image that it thought resembled other images it already knew: eye-shaped patterns on a butterfly's wings are reshaped to look more like eyes, and a cloud that faintly resembles a dog's head is made to look more like one. With this technology, computers and neural networks are improving at interpretation, a skill they had long been lousy at. An advanced version, given some initial data and production requirements, could soon create works that have never existed before.
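The core mechanism behind that "dreamlike" amplification is gradient ascent on the input image itself: nudge the pixels toward whatever a feature detector responds to. A minimal sketch, using a toy linear filter in place of a real network layer (the pattern and sizes are assumptions for illustration only):

```python
import numpy as np

# Minimal sketch of dream-style amplification: push an image, by gradient
# ascent, toward whatever a feature detector responds to. The "detector"
# here is a toy linear filter, not a trained network layer.
rng = np.random.default_rng(0)
pattern = rng.normal(size=(8, 8))   # stand-in for a learned feature

def activation(img):
    # How strongly the detector responds to the image.
    return float(np.sum(img * pattern))

def amplify(img, steps=50, lr=0.1):
    # For this linear activation, the gradient with respect to the
    # image is simply the pattern itself, so each step adds lr * pattern.
    img = img.copy()
    for _ in range(steps):
        img += lr * pattern
    return img

img = rng.normal(size=(8, 8))
print(activation(amplify(img)) > activation(img))  # True: response grows
```

In the real systems, the same ascent runs against deep-layer activations of a trained network, which is why eye-like or dog-like structures emerge rather than a fixed pattern.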
3) Big data, quick experimentation, and forecasts: One of the major developments in analytics in recent years is "big data," which already predicts everything from your Google autocomplete searches to the toaster Amazon thinks you should buy to the movies you should watch on a Tuesday night. Fed enough data, a system can identify underlying tendencies more accurately than a person ever could, and it can forecast what might succeed in the future. It can already guess what kind of songs you'll like. Another of my favorite experiments is the metaphorical search engine Yossarian Lives, which lets you search for an idea rather than a specific piece of information and returns what its database of internet searches suggests are related metaphorical concepts. Almost like a virtual brainstorming session. For Pandora's Music Genome Project, music experts score songs on hundreds of attributes: how the lyrics function, elements of the basic melody, genre, style, tempo, and overall impact. When building the personal track lists it airs as radio stations, Pandora also runs hundreds of trials with its millions of users, receiving real-time feedback on how well each recommendation worked from the way the user engages with the suggested music. It uses this information to learn how individuals respond to and enjoy various components of music in different contexts, and from that it assembles lists of new music a listener is likely to love. What about the next stage of big data? Computers can already understand spoken language, word meanings, and voice. If big data examined the lyrics of every song produced in the last 100 years against their popularity, it could probably identify the underlying patterns and forecast new lyrics, and it could immediately test them on humans to see how they did.
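The attribute-scoring approach described above reduces to a simple idea: represent each song as a vector of trait scores and recommend the unheard track closest to one the listener enjoyed. A hedged sketch with made-up songs and traits (the vectors and names are assumptions, not Pandora's actual data):

```python
import numpy as np

# Hypothetical attribute vectors in the spirit of the Music Genome
# Project: each song scored on three invented traits
# (tempo, lyric density, energy).
songs = {
    "song_a": np.array([0.9, 0.2, 0.7]),
    "song_b": np.array([0.8, 0.3, 0.6]),
    "song_c": np.array([0.1, 0.9, 0.2]),
}

def cosine(u, v):
    # Similarity of two attribute profiles, ignoring overall magnitude.
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def recommend(liked, catalog):
    # Suggest the unheard track whose profile is closest to a liked one.
    return max((s for s in catalog if s != liked),
               key=lambda s: cosine(catalog[liked], catalog[s]))

print(recommend("song_a", songs))  # song_b: nearest attribute profile
```

Pandora's real pipeline then closes the loop: user engagement with each suggestion feeds back into which attributes matter for that listener.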
Imagine a program that could take a concept, come up with metaphors for it, use big data to forecast which songs built on it might be popular, and then generate 100 slightly different variations of the same idea. It could create music by "singing" the words over a synthesized track with a computer voice, publish each version on YouTube or a streaming service, then modify the content and style based on user feedback and popularity, repeat the experiment, gather more feedback, and iterate until it had a song that listeners enjoyed. At that point it would release the track to its iTunes account without a human ever contributing a note. Big data can also be used to examine historical connections between different types of media and online discussion, and how those connections affected the success of later media. Could it have foreseen the popularity of vampire-based literature earlier? Could it have predicted the emergence of a musical genre that, like grunge in the 1990s, reflected the attitudes of a particular demographic? How far in advance could a hit be predicted? All of this, if not more, will someday be possible with big data.
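The generate-test-iterate loop imagined above can be outlined in code. Everything here is a placeholder: `make_variants` and `audience_score` are hypothetical stand-ins for synthesizing versions of a song and measuring real listener engagement online.

```python
import random

random.seed(1)

# Sketch of the generate-test-iterate loop described above. The variant
# generator and scoring function are placeholders: a real system would
# synthesize audio and measure listener engagement for each version.
def make_variants(base, n=100):
    return [f"{base} / variant {i}" for i in range(n)]

def audience_score(variant):
    return random.random()  # stand-in for real engagement feedback

def iterate_until_hit(concept, rounds=5):
    best, best_score = concept, 0.0
    for _ in range(rounds):
        for v in make_variants(best):
            score = audience_score(v)
            if score > best_score:      # keep the best-received variant
                best, best_score = v, score
    return best, best_score

song, score = iterate_until_hit("love song draft")
print(0.0 < score <= 1.0)  # True: some variant won the audience over
```

Each round mutates the current winner, mirroring the release-measure-revise cycle in the text; only the scoring source separates this toy from the real thing.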
So what follows? Machines will soon take over parts of the creative process, but I don't think they will ever fully replace a person's creativity. The reason is the clear distinction between creativity (developing fresh, worthwhile ideas) and craft (realizing those ideas). Machines will eventually surpass humans in craft, and in many cases they already have (manufacturing, image processing); nonetheless, they can only answer "What?" and "How?", not "Why?" Unless a machine progresses beyond taking inputs as data and starts using data as experiences, all of its information, no matter how many millions of sources it analyzes, remains second-hand from humans. With that caveat, here are my forecasts for the creative vocations that will be at least partly replaced by robots in the next ten years:
(A) Advertising: Before a full campaign launch, programs will create and test hundreds or thousands of designs, slogans, and other assets online, then tweak the campaign and iterate on user feedback until they find the perfect message.
(B) Music: The first song composed, sung, and recorded entirely digitally will be released. It will probably feature highly clichéd lyrics about "Love" and "Beauty" and liberal use of the word "Baby." The second album, though, will show far more complexity and variety. The live concerts will have plenty of lighting effects but no personality.
(C) Architecture & Design: Given the required functionality of a building or product, a program will generate numerous designs that all meet the fundamental requirements.
(D) Writing, Screenwriting, & TV: By identifying underlying trends in public opinion, software will forecast which books, movies, and TV shows will be popular one, two, and three years out. It will then compare this against prior works to identify the story arcs a book, film, or TV program should follow to maximize its chances of success.
Georgios Ardavanis – 15/04/2023