Time has always posed challenges for those who work in the media industry. Such a complex field has a great number of interrelated components – from social change to revolutionary advances in technology.
We suggest taking a look at social processes in a much broader way by studying journalism, media management, directing, editing, the stages of the creative process, and the production cycle of creating a media product – simply put, everything that can be called journalism, media, and communications.
What do you need to know to make friends with neural networks? How are the achievements of science reflected in art, and what might become the trend of the coming year? We talked about this and much more with Stanislav Milovidov, Senior Lecturer at the Faculty of Creative Industries of the Media Institute.
At the intersection of teaching and research
Many of the training courses I teach are in one way or another related to either transmedia storytelling or artificial intelligence. In addition to these, there are also general theoretical courses like Media History and Theory, because in order to explore artificial intelligence and transmedia storytelling in relation to media, a theoretical framework is needed. Almost all of my courses have not been taught before, and half of them were invented this year. For example, "Fundamentals of Prompt Engineering." I call them "venture courses": previously such courses simply did not exist and, accordingly, there is no clear methodology for teaching them. In this case, it is an iterative, constantly evolving learning process. We are, in fact, like a laboratory: first we have a few seminars, and then we have the task of, for example, making a comic book or a 3D model. It's especially good to train on surrealistic images, because then you get an understanding of how objects line up in space and interact with each other. This is not a classical approach to education, but a laboratory and partly exploratory approach.
As for the creation of fundamentally new courses, it makes sense to think in the direction of experimental formats. I love going to contemporary art exhibitions, because media artists are bold and talented in their work, using various technologies and media effects. They often focus on things that may later become practice, or may of course just fade into history. It is possible to explore what they do, trying to assess the potential of turning a particular approach into a media product: how and for what purposes it can be used, and how the format of storytelling changes. You need to come up with your own way of communicating with your audience, and it is great to have students involved.
Exhibition of the experimental project "ArtMedia&Science Laboratory"
The main objective of this exhibition is to make visible the research being conducted by various faculties of the National Research University Higher School of Economics. Usually the results of scientists' work become known to the outside world when they speak at a conference or publish an article in a scientific journal. But before being published, any scientific article undergoes peer review, and the journal itself may come out only a few times a year. Thus it takes at least a year, if not two, from researchers obtaining results to those results becoming visible. In addition, to make the material accessible to the general reader, a journalist must be found who can describe the scientists' achievements in clear, popular-science language. This takes time and creates a distance between what researchers are doing today and the moment everyone learns about it.
Art&Science tries to overcome this distance by translating the scientific process into an artistic form and drawing attention to research that is happening right now. For Art&Science, the outcome of the research is not what matters most: the artist is quite happy if there are five competing hypotheses. He doesn't have to wait to see which one wins; he narrates the struggle of these hypotheses through artistic means. In this way, an attempt is made to make these investigations visible here and now. For example, the exhibition featured a piece related to a technique for assessing the activity of the LDL receptor in human monocytes. The main goal of the research is to create personalized treatments for elevated blood cholesterol and related diseases. Human DNA is made up of four types of nucleotides, and based on these inputs, artificial intelligence came up with a four-note music system. It could be played, and the melody varied as the nucleotides changed and certain mutations occurred. It turned out to be a real attraction.
"You won't be able to turn off your computer because it will beg you not to."
I am skeptical of artificial intelligence as the kind of direct threat we used to read about in 20th-century science fiction. At today's level of technology, a machine uprising is more of a myth than a new reality. It is curious, by the way, that American science fiction contains a great many post-apocalyptic stories related to technology, while among the science fiction writers read in the Soviet Union (Lem, the Strugatsky brothers, Bulychev) everything looked less pessimistic. In many ways this is perhaps due to Soviet ideology, which dictated a positive attitude toward technology that would, in turn, lead people to a bright future.
There are, of course, indirect threats, such as the disappearance of certain professions, which many people are now talking about. But, as I see it, this is the kind of threat we experience every half-century. A good example is the profession of the typist, who sat and typed on a typewriter. Now we do it ourselves. This raises the question: when the profession disappeared, did all those workers starve? No. They retrained. Many of them, for example, became proofreaders and still worked with text, while some found a use for their knowledge in other fields. You can think of examples from even earlier in history, when big computers ran on punched cards. Back then it took a special person to punch the cards and then load the data into the computer. Those people also dissolved into the professional environment and retooled.
I think the professions that many say may disappear mostly involve so-called hard skills. At the non/fiction book fairs there is constant discussion of how artificial intelligence is entering the book publishing industry. Many experts share roughly the same view: an artist's work is needed when a person wants a book cover made by a specific artist who will embody his or her own vision. If we are talking about mass production, for example coloring books for children, the artist is no longer needed. This is technological work into which artificial intelligence fits very well. I have a very positive attitude toward artificial intelligence, as toward any technology that simplifies life and performs routine tasks for us, leaving us more time for something more interesting. This is the basic function of any technology, and intelligent ones are no exception.
The question of existential threats usually concerns so-called artificial general intelligence, i.e. some kind of artificial consciousness - a situation where you won't be able to turn off your computer because it will beg you not to. However, this is a distant, so far fantastic story, because first we would have to not just understand what human consciousness is, but build a mathematical model of it. And so far we do not even have a good theoretical understanding of human consciousness: there are, of course, competing hypotheses on the subject, but none of them has prevailed.
From the basics of programming to creating your own exhibition
At the end of last year, I drew my colleagues' and students' attention to the fact that modern neural networks, especially ones like ChatGPT, have become quite good at programming. This is an important and very underrated tool, because now anyone can use this technology to make a simple script - a program that performs some task. There is, however, the problem that this approach is not yet well industrialized. Experts have already noted that these amateur programs created with ChatGPT do not fit into the ecosystems of the big IT companies, as the process involves many technical difficulties and nuances. But in general, the situation is similar to what happened with photography in the late noughties. Today everyone has a camera in their smartphone and we take a huge number of photos, but we don't become photographers. When we need a good photo shoot, we hire a professional in the right field: food, wedding or advertising photography. It's the same with programming: to solve large-scale problems we will call in professional programmers from big companies, but each of us now has tools for everyday, simple programming.
In three months, I mastered programming with ChatGPT and made a small artwork in Python, which is now on display in the Krasnokholmskaya Gallery. It is a biometric sensor that reads a fingerprint. Using computer vision, the program turns the papillary line pattern into a maze. I had problems linking the sensor to the program, and several of the technical solutions used there were proposed by ChatGPT. Technically it is a very simple piece, but conceptually it poses a question to everyone about their relationship to biometric data collection. Even though it is a rewritable file and I am not collecting any personal data, it still causes internal discomfort for many people.
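The core idea of the piece - turning ridge patterns into maze walls - can be sketched without any sensor at all. This is a minimal pure-Python illustration, not the gallery work's actual code: the synthetic sine-ring "fingerprint" and the 0.5 threshold are my own assumptions.

```python
import math

def ridges_to_maze(gray, threshold=0.5):
    """Turn a grayscale ridge pattern (values in [0, 1]) into a binary
    maze grid: 1 = wall (dark ridge), 0 = corridor (light valley)."""
    return [[1 if v < threshold else 0 for v in row] for row in gray]

def render(maze):
    """ASCII rendering: '#' for walls, ' ' for corridors."""
    return "\n".join("".join("#" if c else " " for c in row) for row in maze)

# Synthetic stand-in for a fingerprint: concentric sine "ridges"
# oscillating around the image centre.
h, w = 8, 16
gray = [[0.5 + 0.5 * math.sin(2.0 * math.hypot(y - h / 2, x - w / 2))
         for x in range(w)] for y in range(h)]

maze = ridges_to_maze(gray)
print(render(maze))
```

A real version would replace the synthetic array with the sensor image and likely clean it up (smoothing, adaptive thresholding) before carving corridors.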
I have noticed a certain psychological barrier here: when you show humanities students program code, a look of terror appears in their eyes. It reminds me of a person who is afraid of dogs standing in front of a Rottweiler - the quintessence of his fear. To make the person stop being afraid of the Rottweiler, you need a special approach: slowly bring him closer, let the dog sniff his hand, have him try to stroke it, and he will be convinced that the Rottweiler is not scary at all. It is the same with program code, although neural networks make human interaction with code much more comfortable. If we don't overcome this barrier, we miss a very serious opportunity that helps the students themselves, especially those working on projects involving computer games and gamified practices. In addition, data journalism students will be able to write a program that analyzes data. The key is to come up with the mechanics of that analysis.
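To give a sense of scale for such student work, a data-analysis "mechanic" can be only a few lines long. A hedged sketch with invented sample data (the district names and figures are made up for illustration):

```python
import csv
import io
import statistics

# Hypothetical dataset of the kind a data-journalism student might examine.
raw = """district,spending
Central,120
North,95
South,210
East,88
"""

rows = list(csv.DictReader(io.StringIO(raw)))
spending = [float(r["spending"]) for r in rows]

# The "mechanics of the analysis": a question turned into a computation -
# here, which districts spend far above the average?
mean = statistics.mean(spending)
outliers = [r["district"] for r in rows if float(r["spending"]) > 1.5 * mean]
print(f"mean={mean}, outliers={outliers}")
```

The point is that once the question is formulated, the code itself is short enough for a non-programmer to draft with a neural network's help.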
"Working with a neural network is magic."
What we call artificial intelligence is actually machine learning technology: large language models or large multimodal models. These terms from computer science reflect the fact that such a machine obeys exclusively machine laws - all the laws under which any other computer technology operates. One of them is object-oriented programming. When you create generative images or programs with ChatGPT, you realize very quickly that the machine operates with objects. For example, a seemingly simple visual touch turns out to add extra objects, which in turn interact with the virtual space. This small decorative element can make the work a third more complicated.
One of the problems is that all the programs are different: there are graphics, text and music programs; some work with video or 3D models. Each has unique functionality. When we move from object-oriented design to specific neural networks in the course and start working with them, it turns out that some approaches may no longer be relevant six months later (a new version of the algorithm has been released), and we have to learn new functions. On the one hand, this is normal; on the other, it can sometimes be demotivating. In addition, neural networks are a kind of "black box", uninterpretable algorithms. One often hears the phrase "neural networks hallucinate". That is why many people like to say that working with a neural network is like magic, some kind of conjuring trick, because there is no guarantee that things will go exactly as originally intended.
We had two established cultural codes that modern neural networks have shattered. Thanks to 20th-century science fiction, we were used to robots not lying and robots not making mistakes. Modern neural networks have proven otherwise: first, they can "lie", and second, they can be wrong. (We are talking here mostly about large language models applied to creative work.) Besides, we had always been told that a machine can type on a keyboard or even drive a car, but only a human can compose songs or draw pictures. Then suddenly a machine was able to do things that had previously been the domain of humans, and this had a profound effect on our worldview. Of course, some people still disagree, but that is a matter of debate.
Unlike a human, who keeps the context of the past conversation in mind during a dialog, a neural network has a hard time doing this. In composing an answer, it essentially generates the statistically most likely solution to the problem. A person usually visualizes the result and then starts moving toward it. With a neural network, we also imagine the result, but along the way the network may produce an error, having strayed from the intended path while generating the expected output. Much is designed to handle failures, yet we feel that if we asked for something, it should work the first time. To make friends with artificial intelligence, we need to start simple: minimize the number of metaphors and, on the contrary, make the problem statement as rational and explicit as possible.
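The "statistically most likely" generation described above can be illustrated with a toy bigram model. The word counts below are invented for the example; real language models work over learned token probabilities, not hand-written tables.

```python
# Toy bigram "language model": for each word, counts of observed next words.
bigram_counts = {
    "the": {"cat": 3, "dog": 1},
    "cat": {"sat": 2, "ran": 1},
    "sat": {"down": 4},
}

def most_likely_next(word):
    """Greedy decoding: pick the statistically most frequent continuation."""
    options = bigram_counts.get(word)
    if not options:
        return None  # no known continuation - generation stops here
    return max(options, key=options.get)

def generate(start, max_words=5):
    """Extend `start` one word at a time, always taking the likeliest step."""
    out = [start]
    while len(out) < max_words:
        nxt = most_likely_next(out[-1])
        if nxt is None:
            break
        out.append(nxt)
    return " ".join(out)

print(generate("the"))  # -> "the cat sat down"
```

Each step is locally the most probable choice, which is exactly why the chain can drift from the result the user imagined: nothing in the mechanism checks the overall goal.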
The new life of video games
When the Internet came along, a multimedia world formed around us and many different media formats began to exist within digital media. For example, on YouTube we are able to not only watch videos but also read texts. Video games emerged and stories began to be created that were initially structured and developed in such a way that the user moves from one platform to another, assembling the story like a jigsaw puzzle. Now almost all content is transmedia: we have TV series being filmed, then spin-offs being produced, and we no longer think of a movie coming out without a website being created for it - and transmedia marketing strategies are evolving along with that.
It's likely that in a couple of years we'll be dealing with a completely different level of video game production. There are objective trends suggesting that the picture will become indistinguishable from a movie image and the world will be filled with AI-driven characters. Everyone likes multiplayer games because people want to play with people, while many characters are still non-playable, presented to us either through scripted lines or cut-scenes. If we connect something like ChatGPT to these characters, we get a character with whom we can, for example, sit around a campfire in a fantasy world and talk about in-game topics. At one time players were impressed by the appearance of the open world. Here we are dealing with an open social world, where characters will remember us, and the player will be able to quarrel with them and then (if desired) make up. Experiments are already underway, and developers have demonstrated how this works in several mods for popular video games. However, this is a big problem for the game industry, because there is no way to run a large language model on every user's computer; for now, only data centers and supercomputers have such capacity. With neural networks we have reached a point where data center capacity is already close to its ceiling. In addition, a communication channel will need to be developed to stream photorealistic images and dialog to the player's device - and that means data transfer on the level of 6G networks at a minimum.
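A minimal sketch of how such a conversational character with memory might be wired up. The `ask_model` parameter stands in for a real language-model call; no actual game engine or API is implied, and the stub reply is invented.

```python
from dataclasses import dataclass, field

@dataclass
class Npc:
    name: str
    persona: str
    memory: list = field(default_factory=list)  # remembered exchanges

    def talk(self, player_line, ask_model):
        # Build a prompt from the persona plus the remembered conversation,
        # so the character can "recall" earlier quarrels and reconciliations.
        prompt = (self.persona + "\n" + "\n".join(self.memory)
                  + "\nPlayer: " + player_line)
        reply = ask_model(prompt)
        self.memory.append("Player: " + player_line)
        self.memory.append(self.name + ": " + reply)
        return reply

# Stub standing in for the language-model call.
def campfire_model(prompt):
    return "Aye, the campfire is warm tonight."

npc = Npc("Innkeeper", "You are a gruff but kindly fantasy innkeeper.")
print(npc.talk("Mind if I sit by the fire?", campfire_model))
```

The design choice worth noting is that the "open social world" lives entirely in the growing `memory` list fed back into each prompt - which is also why context length and compute cost become the bottleneck the interview describes.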
"We're going to see a world where we stop distinguishing deepfakes from reality"
For the first time in our lives we are encountering - not in a philosophical sense, but in a very real one - a system that has agency, and non-human agency at that. In the past, when a programmer created a program, he understood what the result of a given input would be. Here, however, we cannot predict exactly how the network learns and where distortions may occur; we can detect and correct them, often only after the fact. The neural network in an autopilot decides whether to brake or go, to turn or not. It assesses the traffic situation and makes a decision, but a human is responsible for that decision. At the same time, he cannot predict its actions with absolute accuracy, so, in effect, he is responsible only for having allowed the neural-network-controlled car onto the route at his own risk.
This is a big problem: the neural network makes a decision, but is not responsible for it. What to do about it is not very clear. There is, of course, the option to ban everything, but those who ban the use of neural networks will immediately lose out in the competition. And if there is no regulation at all, it will also be a mess. That is why certain rules are being introduced now, especially in European countries: the use of transparent data sets (so that it is clear what data the neural network learned from), the need to keep detailed documentation so that in case of unforeseen situations, everything can be promptly investigated and corrected.
Many people are now talking about the Sora neural network, which creates highly realistic videos from a text query. Its launch is likely to come in 2025, allowing for every possible precaution. It is likely that it is in 2025 that we will see a world where we stop distinguishing deepfakes from reality, and more serious legal regulation will start to emerge on that ground. As for possible trends, it seems that this year will be the year of 3D, because quite a few algorithms for generative 3D models have now entered the market. In the past this was not very popular, because to print a model on a 3D printer you first had to create it, for example in Blender - not the easiest task. Now that 3D neural networks have emerged that build models on demand (all that remains is to bring them up to the right quality), this could spur sales of 3D printers as well. It is a rich and fruitful idea. 3D printing technology is already available in various production facilities and theater workshops - at the Bolshoi, for example - and its potential for everyday life is showing itself right now. It seems to me that by the end of the year we will see 3D printing take off - and maybe it will even become a new trend.
Ksenia Zhakova, 2nd-year student in the Journalism educational program.
Translation by Juan Pablo Flor