There are fundamental flaws in the AI dilemma

Mark Vletter
9 May 2023


Social media has had numerous negative effects on society: addiction, misinformation, mental health issues, polarization, and a heated debate over censorship versus free speech. Tristan Harris is one of the people highlighting these issues, and he was featured in the Emmy-winning Netflix documentary The Social Dilemma.

Last week he and Aza Raskin gave a great talk on the AI dilemma, which I highly recommend. They highlight many of the problems we might see with AI, but watching the presentation I felt friction. I think Tristan and Aza make some fundamental mistakes in their thinking. In this article, I want to explain some of the fundamental differences I see between AI and social media, and why the future might be radically different from what we all anticipate.

Understanding incentives and business models

To understand why AI in its current stage is fundamentally different from social media, you have to look at the primary incentives and business models of AI and social media companies.

Ads & the indirect business model

The primary goal of companies in the current system is simple: they want to create shareholder value. The business model of both social media and search is advertising. A user gets to use a search product for free but sees advertisements in return. This means Google’s real customers are the companies that pay for the advertising. This is called an indirect business model.

Wrong incentives for search ads caused data collection

For search companies, this business model has a big downside. They made the assumption that if a search engine knows more about its users, it can tailor its ads to those users. This incentivized data collection and targeted ads.

Wrong incentives for social media led to data collection, fake news, polarization, and addictive platforms

For social media companies, the indirect business model has created an even bigger negative incentive. The time people spend on your platform becomes the main driver, because more time means more ads. The quality of the content – true or false, good or bad – becomes irrelevant, as long as people stay on the platform and engage with the content. This resulted in more data collection, fake news, polarization, and addictive platforms. Again, the advertiser is the real customer; access to the social media users is the customer value the platform creates for the advertiser.

For social media platforms to be successful they need:

  • More users
  • More time spent on the social media platforms
  • More data on the end-users
  • Content people engage with

The business model used and customer value created by AI companies

With this in mind, it’s worth looking at the business model of AI companies and the customer value they create.

AI and the direct business model

Let’s start with a tool like Midjourney. Their goal is to generate the best images for me as a user. If the quality is high, I’ll pay for a subscription. They use a freemium model, in which I can use the basic service for free, and if I want to use it seriously, I have to pay a monthly fee. This is called a direct business model: the users pay directly for the service being used.

ChatGPT does the same thing. I can use GPT-3.5 for free, but when the service is busy, paying customers get priority. If I want to use the better GPT-4 model, I also have to pay.

So if I get value out of the service they offer, I pay to maximize that value – again a direct business model.

Now this is not the complete story, and to zoom in a little further, we have to look at what AI companies need to be successful.

What AI companies need to build a good product

For AI models to become better they need a few things, which include:

  • Datasets: a collection of structured and organized data used for training, validating, and testing AI models.
  • Feedback on the output: the AI models need feedback from users on the quality of the output to become better. This is called reinforcement learning from human feedback (RLHF).

Right now, users give input to the model – which adds to the dataset – and they give feedback that improves the quality of the output. Both make the product better. There is a direct link between customer value, the things AI companies need to be successful, and the business model. As long as my data is secure and private and I remain the owner of my data, I’m even willing to give the AI company more of my data, because there is a direct benefit for me as a user.
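To make the feedback loop above concrete, here is a minimal sketch of how an AI product could turn user interactions into training material. All names here are hypothetical illustrations, not any real vendor’s API: each prompt grows the dataset, and each thumbs-up or thumbs-down becomes a preference record that later RLHF-style fine-tuning could consume.

```python
# Hypothetical sketch: user interactions doubling as training data.
# Each record captures the prompt (new dataset material), the model's
# response, and the user's rating (feedback for RLHF-style tuning).
from dataclasses import dataclass, field


@dataclass
class PreferenceRecord:
    prompt: str    # user input - also grows the training dataset
    response: str  # model output shown to the user
    rating: int    # +1 for thumbs-up, -1 for thumbs-down


@dataclass
class FeedbackLog:
    records: list = field(default_factory=list)

    def add(self, prompt: str, response: str, rating: int) -> None:
        self.records.append(PreferenceRecord(prompt, response, rating))

    def preferred(self) -> list:
        # Positively rated interactions become fine-tuning candidates.
        return [r for r in self.records if r.rating > 0]


log = FeedbackLog()
log.add("Draw a cat", "<image-v1>", -1)
log.add("Draw a cat", "<image-v2>", +1)
print(len(log.preferred()))  # 1
```

The point of the sketch is the alignment the article describes: the same action that makes the product more useful for the paying user (rating outputs) is the action that makes the model better.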

Alignment between customer value and business value

This direct alignment between customer value, business value, and the business model is key to building healthy businesses. This is also where the current AI business model and the indirect ads business model differ.

Companies like Microsoft (Bing) and Google are experimenting with advertisements in AI tools. That is logical from a business perspective: Google’s primary business model is ads, and 80% of the company’s revenue comes from ads. But it would be better for society if the ads model died and the indirect business model disappeared with it. Businesses will benefit most from AI tools, and OpenAI – the company behind ChatGPT – knows this. They are launching a business offering for ChatGPT in the near future. And, as an entrepreneur, I don’t mind paying for such a service at all.

Microsoft is also integrating ChatGPT into their business offerings, so the business customer will pay directly – again reinforcing the direct business model.

Expertise in social media – or any other area – does not translate that well to AI

If you zoom out a little further, the speakers do something that a lot of experts do, myself included: if you are extremely knowledgeable about one phenomenon, you tend to ascribe yourself predictive powers about how another phenomenon will develop. The speakers seem to assume that we need to learn from how social media developed and impacted society so that we can intervene earlier when things are likely to go wrong with AI.

But no matter how “awake” we are now, we can’t predict what will happen with generative AI. The scenarios they sketch make sense, but even given that AI companies use a direct business model, it is very likely things will go in a completely different direction than we predict right now.

We all know that “past results do not guarantee the future”, but we often ignore that warning completely the moment after it is uttered. Experts might not be any better at predicting the future than laymen.

So the main message from the presentation that does ring true is: this change seems to bring a major paradigm shift. We need to keep a very close eye on AI developments and react in time if something happens. But let’s not pretend we have a clue about what that “something” is 😉

Thanks, Mark Meinema, for the new insights you gave me and for helping me write this article.
