A systemic analysis of GenAI's future
What's next for GenAI?
By Peter Horvath, Strategy & Service Design Lead at Whitespace
Date: 22 September 2024
There have been many predictions about Generative Artificial Intelligence (GenAI), but most of them take the form of text-based argument. We at Whitespace believe that visualizing things makes them not just easier to understand, but also easier to discuss. So we used causal loop diagramming (a systemic design technique) to add a new layer to the GenAI discussion.
We were looking to answer the question: what’s next for GenAI?
Causal loop diagrams are best consumed in bite-sized chunks – allowing logical steps and consequences to be examined individually. We break down our story accordingly:
A gradually closing loop?
As Generative AI training data increases, so does GenAI performance. This drives GenAI usage, which in turn drives the need for further AI development. Advances in AI development do not directly lead to better training data, so this loop can close in multiple ways – and that is what the next steps examine.
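For readers who like to tinker, a loop like this can be written down as a signed graph: each arrow carries a polarity, and the product of the polarities around a closed loop tells you whether the loop is reinforcing or balancing. The minimal Python sketch below illustrates that idea only; the variable names and the +1/-1 labels are assumptions read off the prose above, not a formal model.

```python
# Minimal causal-loop sketch: edges are (cause, effect, polarity).
# +1 means "moves in the same direction", -1 means "moves in the opposite direction".
from math import prod

edges = [
    ("training data",      "GenAI performance", +1),
    ("GenAI performance",  "GenAI usage",       +1),
    ("GenAI usage",        "AI development",    +1),
    # Note: no edge from "AI development" back to "training data" yet.
    # That missing link is exactly what the sub-stories below close.
]

def loop_polarity(loop_edges):
    """Product of edge polarities: +1 -> reinforcing, -1 -> balancing."""
    return "reinforcing" if prod(sign for _, _, sign in loop_edges) > 0 else "balancing"

# If the loop closed with a positive link (development yields more/better data),
# the whole loop would reinforce itself:
print(loop_polarity(edges + [("AI development", "training data", +1)]))  # reinforcing

# If it closed with a negative link (development degrades the data pool),
# the loop would balance, i.e. dampen itself:
print(loop_polarity(edges + [("AI development", "training data", -1)]))  # balancing
```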

The AI death spiral
Increased GenAI usage results in more AI-generated (unoriginal, lower-quality) content. This, along with AI development's appetite for new training data, increases the amount of AI-generated content in AI training sets, creating a death spiral of decreasing quality. (See: The Curse of Recursion: Training on Generated Data Makes Models Forget.)
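To make the mechanism concrete, here is a deliberately crude toy simulation (ours for illustration, not taken from the cited paper): each training generation mixes human content with AI-generated content, the AI share grows as usage grows, and AI output quality trails the model that produced it. Every parameter below is invented purely to show the shape of the spiral.

```python
# Toy model of recursive training: quality drifts down as the AI share of the
# training mix grows. All numbers are illustrative assumptions, not measurements.

def simulate_recursive_training(generations=6,
                                ai_share=0.5,       # initial fraction of AI-made content in training
                                share_growth=0.08,  # how much that fraction grows per generation
                                degradation=0.9):   # AI output quality relative to the model that made it
    human_quality = 1.0
    model_quality = 1.0
    history = []
    for gen in range(1, generations + 1):
        ai_quality = model_quality * degradation
        # Training-set quality is a weighted mix of human and AI-generated content.
        training_quality = (1 - ai_share) * human_quality + ai_share * ai_quality
        model_quality = training_quality            # the next model inherits its data's quality
        ai_share = min(0.95, ai_share + share_growth)  # more usage -> more AI content next round
        history.append((gen, round(model_quality, 3)))
    return history

for gen, quality in simulate_recursive_training():
    print(f"generation {gen}: relative model quality {quality}")
# generation 1: relative model quality 0.95
# ...
# generation 6: relative model quality 0.753
```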

The rise of marketing content
We believe that increased GenAI usage drives marketers’ and influencers’ appetite to be included in GenAI training sets, e.g., to earn referrals and references, or to push their view of the world. AI development's appetite for data means this type of content gets increasingly included in training sets. This content is not unbiased, so it drives down AI training data quality. In addition, AI can be tuned for engagement, which means AI-generated content has a good chance of drowning out human-generated content both in views and in training sets – if those are weighted by human reactions!

Proprietary AI
AI development's appetite for training data could be satisfied with a company's high-quality proprietary content, driving data quality up. However, this results in closed, proprietary GenAI (see, e.g., The importance of proprietary data sets in the context of AI).

Show me the money
The fact that the original training data of many models was sourced unethically, or at least questionably, drives content owners to demand ongoing compensation (see the actions of artists here, of newspaper publishers here, or of the Authors Guild here, led by George R.R. Martin and John Grisham). This compensation may be “quid pro quo” - e.g., Google crawling your site gets you web traffic in return - but what will AI companies crawling your data get you? The fact that AI development requires ever more training data means content providers have a strong bargaining position. This can end in two ways:
1. So long and thanks for all the fish
If no solution is found, content owners will start blocking AI companies from accessing their content (hosting companies are already making this easy). As a next step, they will request the removal of their content from the training set, decreasing its quality. (See: Unlearning in LLMs.)

2. Cleaning up the act
Business models are found that ensure payment to content owners while maintaining or even increasing data quality and reducing the ethical concerns. (Remember that YouTube hosted a great deal of illegally uploaded content when Google bought it in 2006, and OpenAI has already started signing deals, e.g., with News Corp.)
If GenAI remains profitable even while paying for content usage, it can become a product in its own right; if it is not profitable on its own, it becomes an add-on to existing products.

Conclusion
All of the above sub-stories can happen at once, and the impact of some will be stronger than that of others. As an individual, you may not have much control over any of them, but this kind of dissection of the GenAI narrative can help you have more mature discussions in your company or organization about your GenAI strategy.
We are well aware that the above map is incomplete – for example, we deliberately left out loops involving the pricing of (currently free-to-the-public) GenAI, as we believe they do not influence the loops we wanted to examine. In causal mapping, you should never try to map the entire world.
We are happy to discuss how we use AI and/or systemic design in our projects. Please visit our contact page and drop us a line.