The Danish start-up Jumpstory promises a Netflix for images and curates its content with artificial intelligence. Co-founder Jonathan Løw talks to me about the use of AI, authenticity, and about a fundamental decision against AI-generated content.
In 2018, Jonathan Løw and Anders Thiim founded Jumpstory, a kind of Netflix for images. The idea: access to millions of images and videos for a monthly fee of $29.
Over their 20 years of experience in digital marketing and communication, both had repeatedly worked with classic stock photos and were dissatisfied with the quality, licensing models, and prices. So they designed their own platform.
In addition to simple pricing and licensing models, Jumpstory aims to stand out through the quality of its images and videos: the company promises authentic imagery that clearly sets itself apart from stock photo clichés – and that is guaranteed not to have been generated by artificial intelligence.
“We are not against the use of AI, after all, we use it ourselves.”
THE DECODER: Before this interview, we talked about authenticity. Can you tell our readers what that means for Jumpstory?
Jonathan Løw: At JumpStory we have a promise to our customers, and we even have a dedicated section on our website, where we talk about this. It’s our “Authenticity Promise”.
We promise our users 100% authentic images and no AI-generated content. You won't find any AI-generated material, no deepfakes, no clichés. Only 100% real images of real people in real-life situations.
Our photos are not taken by professionals, but by amateurs. They're like flies on the wall – capturing life as it happens, unlike many professional stock photographers who tend to stage everything.
THE DECODER: Jumpstory uses AI to recognize authentic images. Can you explain how that works?
Jonathan Løw: At JumpStory we're not against the use of AI, because we use it ourselves. However, we don't think it should be used to generate fake people or realities. This can be okay in computer games, but not as part of our everyday communication and lives, because we end up in a world where we don't know what is real and what is not. And who wants to live in such a world?
Companies like OpenAI, Midjourney, etc. all use artificial intelligence, and they create synthetic media and artificial content. This is fascinating on the surface, but it poses a real risk too. Not only are the legal aspects blurry, to say the least, but I also see it as a fundamental threat to one of the core principles of humanity: trust.
Without trust, we don’t have anything. No media. No democracies. No relationships. Everything depends on trust. If you start playing with trust and reality, you’re on a very dangerous path.
However, I don’t think that tech giants like Facebook, Google, and others really care about this. They just see excellent business models arising, so they contribute to the hype of AI.
At JumpStory we work with ‘Authentic Intelligence’ instead. We use AI to identify which images are original, authentic and of course legal to use. We’re not perfect at this yet, but we’re working damn hard to achieve it because we think it’s the right thing to do.
If you go into the details of how it works, it's like most other machine learning: you teach the machine what to look for and focus on. In our case, we use datasets of authentic content that we manually select and rate. Then we set the machine free and let it learn what to look out for and how to prioritize, because at the end of the day we want the machine to present our customers with the most authentic content possible.
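The workflow Løw sketches – hand-rate a dataset, then let a model learn to score and rank content by authenticity – is ordinary supervised learning. A toy version could look like the following; the features, labels, and weights are purely illustrative assumptions, not Jumpstory's actual pipeline:

```python
import math

# Hypothetical hand-crafted features per image, e.g.:
# [oversaturation, staging score, watermark likelihood], each in [0, 1].
# Labels come from manual rating: 1 = authentic, 0 = staged/cliché.
TRAINING_DATA = [
    ([0.4, 0.1, 0.0], 1),  # muted colors, little staging -> authentic
    ([0.9, 0.8, 0.1], 0),  # oversaturated, heavily staged -> cliché
    ([0.5, 0.2, 0.0], 1),
    ([0.8, 0.9, 0.2], 0),
    ([0.3, 0.3, 0.1], 1),
    ([0.7, 0.7, 0.0], 0),
]

def sigmoid(z: float) -> float:
    return 1.0 / (1.0 + math.exp(-z))

def train(data, lr=0.5, epochs=2000):
    """Plain logistic regression fitted with stochastic gradient descent."""
    w = [0.0] * len(data[0][0])
    b = 0.0
    for _ in range(epochs):
        for x, y in data:
            p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
            err = p - y
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

def authenticity_score(x, w, b):
    """Estimated probability that an image with features x is authentic."""
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)

w, b = train(TRAINING_DATA)

# Rank candidate images so the most authentic-looking surface first.
candidates = {"img_a": [0.35, 0.15, 0.0], "img_b": [0.85, 0.80, 0.1]}
ranked = sorted(candidates,
                key=lambda k: authenticity_score(candidates[k], w, b),
                reverse=True)
```

A production system would of course learn features from pixels rather than hand-coding them, but the loop is the same: manually labeled examples in, a ranking score out.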
“Just because you can do something, doesn’t mean that you should.”
Jumpstory offers several AI tools that let users edit images or, for example, upload text that is analyzed by an AI model, which then suggests matching images from the Jumpstory library.
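Text-to-image matching of this kind is typically a similarity search between the uploaded text and per-image metadata or embeddings. A minimal bag-of-words sketch shows the idea; the tags and filenames below are invented for illustration, and Jumpstory's actual model is not public:

```python
import math
from collections import Counter

# Hypothetical image library: each image carries descriptive tags.
LIBRARY = {
    "office_candid.jpg": "team meeting office laptop candid real people",
    "beach_sunset.jpg": "beach sunset ocean vacation travel",
    "workshop_hands.jpg": "hands tools workshop craft close-up",
}

def vectorize(text: str) -> Counter:
    """Turn text into a bag-of-words term-frequency vector."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse term-frequency vectors."""
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def match_images(query: str, library: dict, top_k: int = 2) -> list:
    """Return up to top_k image names whose tags overlap the query text."""
    qv = vectorize(query)
    scored = sorted(
        ((cosine(qv, vectorize(tags)), name) for name, tags in library.items()),
        reverse=True,
    )
    return [name for score, name in scored[:top_k] if score > 0]

results = match_images("blog post about remote team meetings", LIBRARY)
```

Real systems replace the word counts with learned text and image embeddings, but the retrieval step – score every image against the query, return the best matches – is the same.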
In the interview, Løw tells me that the company experimented early on with GANs to generate images, but ultimately decided against using AI for image generation.
THE DECODER: If an AI system can recognize authenticity, don’t you think AI might one day be able to reproduce it?
Jonathan Løw: In the future, AI will be able to produce almost anything, but just because you can do something doesn't mean that you should.
Sometimes I feel like the tech world, which I in many ways love and have been working in for more than 20 years, is obsessed with technology rather than with ethics, morals, and more philosophical debates about the purpose of technology and AI.
Right now, everyone is talking about diversity and bringing hyped media agendas into the tech space, but this is much more fundamental than current trends in society. This is about what we want computers to do, and what we don't want them to do.
THE DECODER: Can you explain why you decided against using AI-generated images?
Jonathan Løw: I don’t want AI to reproduce authenticity and totally blur the lines between reality and fake. Not because it could threaten part of my business, but because I think that it’s ethically and morally wrong and a threat to the trust that I talked about before.
We see again and again that technology is often ahead of both the law and our ethics, and I find this fundamentally worrying. It’s inevitable given the billions and billions of dollars invested in tech rather than in philosophy and ethics for instance, but even though it’s inevitable, we should still question and challenge it.
“The legal issues are potentially massive.”
While Jumpstory strongly opposes AI-generated content, OpenAI's DALL-E 2 and Midjourney are already in wide use. However, Løw says the use of such systems poses potential legal risks: he sees images from DALL-E 2 and similar systems as edited reproductions of training datasets that may not be cleared for commercial use.
THE DECODER: You told me that you see some legal risks, or at least a greater risk to users, compared to, say, Jumpstory. Why is that?
Jonathan Løw: As I just mentioned, it's often the case with new technologies that they are created before we agree on legal boundaries for them. This is very much the case with Midjourney, DALL-E 2, Google Imagen, and so on.
The legal issues are potentially massive. […] Systems like DALL-E 2 were trained on images scraped from countless public websites, and there is no direct legal precedent in the U.S. that upholds using publicly available data as fair use. So the legal issues apply both to the generated images AND to the datasets used to train the models.
There are big problems with the rights to the imagery and the people, places, and objects within the imagery that models like DALL-E 2 are trained on.
THE DECODER: Assuming these risks can be mitigated, where do you personally see applications for generative AI systems like DALL-E 2?
Jonathan Løw: Many people have asked me if I think DALL-E 2 will shut down the stock photo industry. My answer is no. However, it will certainly challenge some parts of the industry – the illustration side, for example.
I also think that generative AI can be a really cool tool for designers out there – both at the idea level and for actually creating designs. I don't think many things in life are black and white, so I don't see this killing all creative careers, but it will probably challenge graphic designers to reinvent how they work and which part of the value chain they focus on.
When it comes to the stock photo industry specifically, some stock image platforms may begin to use these new technologies to expand their service offerings as well as their stock image repositories. At least that's what I see happening with Shutterstock and iStock. But if we talk about JumpStory, we don't want to go down that path, as I mentioned before.
As a company, you shouldn't only think about where you can make quick, new money, but also about what you believe in and what is right. Call us old school, but at JumpStory we really, really love the real world. We want to contribute to a world where people continue to trust one another as much as possible – and deepfakes and AI-generated visuals are a serious threat to that.