
What are the creative limits of today's AI systems, and can we humans surpass them? And where does the Greek prophet Tiresias come in? A guest post by Joël Doat.

 „Die Zukunft wird […] zum modellierten Raum der Gegenwart.“
(“The future becomes the modelled space of the present.”)

Barbara Eder, Sehen wie Teiresias

With this sentence, Barbara Eder takes up a central theme of the debate around algorithms and data in artificial intelligence in her recently published essay “Sehen wie Teiresias” (“Seeing like Tiresias”). Can machines predict the future, or are they just creating probabilistic fictions?

Dystopian headlines predict the demise of creativity as modern artificial intelligence takes over increasingly complex tasks. But a closer look at the sources Eder uses not only reveals the limits of artificial intelligence, it also points to opportunities that the individual human being can still seize. Because, in the end, the machine is still an image of ourselves: understanding and transcending our own limits has always opened up new forms of creativity. These are the opportunities that the machine does not yet know.

At the outset, Eder reviews the essay “Tiresias, Or Our Knowledge Of Future Events” by Alfred Schütz, the founder of phenomenological sociology. The next section discusses this essay in more detail; after that, we review the “classic” paradigms of contemporary machine learning: supervised, unsupervised, and reinforcement learning.


The Past is in the Future

But what if we started from the assumption that such a system is able to depict every possible creative product, including those that do not yet exist? This puts us in Schütz's position of reinterpreting the ancient Greek myth of Tiresias, a man who saw Athena naked and was struck blind. As compensation for his misfortune, he was given the gift of experiencing the future. As a blind seer, however, he is condemned to perceive the future without his own present.

Schütz uses this mythical figure to explore the ways in which human knowledge of the future is determined by past experiences. Within this analogy, his purely phenomenological account of everyday experience highlights the paradoxes we encounter when we assume our predictions of the future are correct.

The most important concept in Schütz's essay is “anticipated hindsight”, a phenomenon he derives from the story by comparing our perception of the world with whatever Tiresias might have seen. Since we cannot see the future, any forecast or prediction is nothing but hindsight transposed into the frame of the future. Schütz also describes it as the impossibility of determining the categorical membership of an event before we have experienced it.

“Once materialized, the state of affairs brought forth by our actions will necessarily have quite other aspects than those projected. In this case foresight is not distinguished from hindsight by the dimension of time in which we place the event.”

Alfred Schütz, Tiresias, Or Our Knowledge Of Future Events

Our predictions about the future exist only in correspondence with what we already know from the present and the past. More concretely, what we imagine of the future always depends on our understanding of the information we have already processed. This limits our expectations of the future to a mere projection of past experiences.

As an implication for the original question, an AI trained only on past information (or on current information, in the case of recurrent machine learning) merely projects this knowledge base into anything that could potentially be created with it.


Consequently, any creation will lack novelty as long as it remains limited to this projected space. And it is precisely this boundary that defines the conceptual space in which we can regain novelty. Because, as Karl Popper famously said,

“We may become the makers of our fate when we have ceased to pose as its prophets.”

Karl Popper

So, where are these formal limits?

In the supervised and unsupervised methods of machine learning, a clearly defined probability space is given in the mathematical sense. We build such models from already existing information (for example, a database) and from a definition of the variables by which that information is differentiated. We then train the model with some old-fashioned statistics to obtain what Meredith Broussard called “statistics on steroids”.

But since we are simply doing mathematics here, every possible statement this model can make about the given information is already built into the system. Its possible outputs are therefore enclosed by static bounds on what the model is able to say; what varies is only the likelihood of a given output for a given input.
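As a minimal sketch of this point, consider the toy example below, written in plain Python with invented data (it is not taken from Eder, Schütz, or any particular library): once the model is fit, the set of statements it can make is fixed, and only the probabilities attached to them change with the input.

```python
# A minimal sketch: once a supervised model is fit on a fixed dataset, every
# statement it can ever make is already enumerable. Only the probabilities
# attached to those statements vary with the input. Data and labels are
# invented for illustration.
from collections import Counter, defaultdict

# "Already existing information": a tiny, fixed training database.
training_data = [
    ("sunny", "go_outside"), ("sunny", "go_outside"),
    ("rainy", "stay_inside"), ("rainy", "go_outside"),
]

# "Training" = old-fashioned statistics: count conditional frequencies.
counts = defaultdict(Counter)
for feature, label in training_data:
    counts[feature][label] += 1

def predict(feature):
    """Return P(label | feature) estimated purely from past counts."""
    seen = counts.get(feature)
    if seen is None:
        # An unseen situation can only be mapped back onto known labels.
        seen = Counter(label for _, label in training_data)
    total = sum(seen.values())
    return {label: n / total for label, n in seen.items()}

# The space of possible statements is fixed at training time:
print(sorted({label for _, label in training_data}))  # ['go_outside', 'stay_inside']
print(predict("sunny"))   # likelihoods differ per input,
print(predict("snowy"))   # but never leave the trained label set
```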

So, that leaves us with two formal limits:

  1. Information and variables define a static space of possible statements.
  2. The information the model was trained on lies in the past and cannot react to new information.

Both limits demonstrate once more the dilemma we had with Tiresias. A static model working on pre-existing information produces nothing but anticipated hindsight, based on past knowledge and experience. The resulting creative expression of such an AI remains a projection of the past, unable to produce genuine innovation. Breaking out of this space therefore means breaking out of both the data situation and the processing logic.

With reinforcement learning, we can weaken the second limitation. This method reacts to the present situation by updating some of its variables according to new information. More concretely, such a model performs an action, perceives the outcome, and adapts its behaviour for the next similar situation. The probabilities in the model no longer remain static, and its outputs gain flexibility.
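A rough, hypothetical illustration of this adaptation is tabular Q-learning, sketched below with an invented environment: the values in the table are updated with every newly perceived outcome, but the set of states and actions the model can ever talk about is laid down before learning begins.

```python
# A minimal, hypothetical sketch of tabular reinforcement learning (Q-learning).
# The values in the table adapt to new outcomes, but the table itself, i.e. the
# set of states and actions the model can ever mention, is fixed in advance.
# States, actions and rewards are invented for illustration.
import random

states = ["low_demand", "high_demand"]      # fixed before learning starts
actions = ["produce_less", "produce_more"]  # fixed before learning starts
q_table = {(s, a): 0.0 for s in states for a in actions}

alpha, gamma, epsilon = 0.1, 0.9, 0.2  # learning rate, discount, exploration

def fake_environment(state, action):
    """Stand-in for the world: returns (reward, next_state)."""
    reward = 1.0 if (state == "high_demand") == (action == "produce_more") else -1.0
    return reward, random.choice(states)

state = random.choice(states)
for _ in range(1000):
    # Choose an action: mostly the best known one, sometimes explore.
    if random.random() < epsilon:
        action = random.choice(actions)
    else:
        action = max(actions, key=lambda a: q_table[(state, a)])
    reward, next_state = fake_environment(state, action)
    # Update from the newly perceived outcome (the "present" information).
    best_next = max(q_table[(next_state, a)] for a in actions)
    q_table[(state, action)] += alpha * (reward + gamma * best_next - q_table[(state, action)])
    state = next_state

print(q_table)  # the values have adapted, but the keys never change
```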

Here again, however, the structure of the model does not change, and so the space of possible statements continues to follow a predetermined pattern. Even though the determination of future outputs remains “anticipated hindsight”, one might ask whether this is enough to create unprecedented creative expression. Perhaps, with increasing complexity, such models will at some point be able to create innovation continuously, given enough “future present moments”.

To investigate this idea further, the next section explores some ideas of Elena Esposito, who argues that our present predictions of the “future present” are merely fiction. All we can achieve is the idea of a “present future”.

The Fiction of Probabilities

In her essay collection “The fiction of probable realities”, Elena Esposito sketches the parallels between the emergence of the first novels on the one hand and of probability theory on the other. Both happened in the 17th century and, according to her, that is no coincidence: it was the century of “possible worlds”.

Similarly to our previous discussion, she starts from the openness of reality: a human agent can decide on their own and bring about unprecedented events, so the space of events is not closed by a pre-chosen theory or model. Statistics, however, requires closing this space by defining variables over finite, past information. And even when a statistical model comes close enough to predicting current human behaviour, humans become aware of it and start to adapt. In other words, in every decision we can factor in the uncertainties, the calculated probabilities, and the statistical knowledge on which other people base their decisions. This starts a recursive process of including another person's intentions in our own.

Probability theory is then not only a tool for observation but also a tool for practical reason. Because of this exploding complexity, it becomes impossible to grasp the truth or the reality of a future present; prediction remains mere fiction and, at best, a tool to observe the observer. Esposito summarizes it beautifully:

“Reality is unlikely, and that is the problem.”

Elena Esposito, The fiction of probable realities

Where does that leave us? Predicting a future present in this way is nothing but fiction. Nonetheless, as a tool for practical reasoning, it gives us the opportunity to investigate “present futures”: an objective basis for discussing what we are trying to predict and how we are doing it. Even though this does not have to match the future, it certainly helps us to plan for it.

To assume that machine learning models can predict every possible future creative expression is to assume that we have nothing left to learn from our experience: every potential aspect is already covered by the AI's processing patterns. Thanks to statistics, we would not only have reduced our uncertainties about the future but also robbed the future of its new information.

But this will only give us products that have been pressed through the shape of the model, much like a casting mould used for mass production. Creative expression reduced to a probabilistic space will sooner or later become prescriptive rather than merely predictive. Opening up the space of possible events, then, also reopens the possibility of new and authentic creative expression.

A Chance for Human Creativity?

Since the arrival of new forms of artificial intelligence such as ChatGPT or DALL-E, many people in creative professions have feared for their future. Between self-writing essays and photorealistic images, there seems to be no stopping the corresponding professional tasks from being handed over to the machine. With lower costs and an almost eliminated gap between idea and product, the skills of actual writers and artists appear increasingly irrelevant. The question for many is therefore whether creativity will die out in the face of modern artificial intelligence. Given the perspective discussed above, however, exits from this predicament open up through possible exits from its probability spaces.

Every formal system is based on specific presuppositions, which predetermine the processing logic for any incoming information. Even if new events generate new information, that information must be processed in the same way and can therefore only move within the space of statements the system is able to make. In the case of machine learning, this space is shaped by statistical models that define the patterns statements can follow. Even though the starting point of training a model can be quasi-randomised, the course of the analysis always depends on these patterns. These predetermined patterns then form a mould for the products of every such system.
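A small, hypothetical sketch may make this mould tangible: a toy bigram text generator whose outputs vary with the random seed yet never leave the word patterns of its tiny training corpus. The corpus and seeds below are invented for illustration.

```python
# A toy bigram text generator: different random seeds give different outputs,
# but every generated transition already existed in the training text.
import random
from collections import defaultdict

corpus = "the seer saw the future and the future saw the seer".split()

# Build the bigram table: which word may follow which.
follows = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    follows[a].append(b)

def generate(seed, length=8):
    rng = random.Random(seed)          # quasi-randomised starting point
    word = rng.choice(corpus)
    out = [word]
    for _ in range(length - 1):
        word = rng.choice(follows.get(word, corpus))
        out.append(word)
    return " ".join(out)

for seed in (1, 2, 3):
    print(generate(seed))
# Every line differs, yet each word pair it contains comes from the corpus:
# the predetermined patterns form the mould for everything the system produces.
```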

After all, isn't this pattern, as a materialisation of the underlying principles, exactly what has always enabled mass production with homogeneous results? Understanding these presuppositions requires some grasp of mathematics and formal logic, but they offer a glimpse of what the machine in question is not capable of. Our interpretation of data can exceed the predetermined interpretation of machines, since we are the ones implementing the rules of interpretation. The creative process can therefore surpass the machine's capabilities when it questions technical realities and finds a novel interpretation of data. In other words, to break the rules, we must understand them. Perhaps the path of the future artist will lead through engineering?

Joël Doat

Joël Doat currently works as a freelancer in the fields of software quality, technical communication, and teaching. He has a background in mathematics and is currently studying philosophy. His interest lies in conceptualizing software development and our relationship with the digital from a mathematical-philosophical perspective.
