Artificial intelligence is hastening the demise of the traditional internet, while the new internet meant to replace it is struggling to be born.

Artificial intelligence is hastening the demise of the traditional internet

The signs have accumulated rapidly over the past few months. Major players like Google and Twitter are making moves that disrupt the traditional web experience. Companies are replacing human-written content with AI-generated output, raising concerns about the quality and reliability of information. Layoffs are spreading through online media, while job postings for AI editors expect an unreasonably high volume of articles per week. Platforms like Etsy and LinkedIn are inundated with AI-generated spam, and misinformation circulates among chatbots in a self-perpetuating loop. Popular sites such as Reddit, Stack Overflow, and the Internet Archive are contending with data scrapers and struggling to adapt to the changes AI has brought.

The current state of affairs reflects the ongoing demise of the old web and the arduous birth of the new. The internet has been evolving and transforming for years, reshaped by the rise of apps and algorithms, but AI stands out as the catalyst of this latest wave of change. AI systems, particularly the now-prevalent generative models, can effortlessly scale to produce vast amounts of text, images, even music and video. Yet the quality of these outputs is often subpar, and their production depends on data from the previous web era, which they imperfectly recreate. Companies scrape information from the open web and generate machine-made content that competes with the established platforms and creators it was drawn from. Websites and users are grappling with these transformations, trying to work out how to adapt, if adaptation is possible at all.

Prominent platforms like Reddit, Wikipedia, Stack Overflow, and Google are feeling the strain of AI's infiltration. Reddit moderators are staging blackouts in response to increased charges for API access and the threat posed by AI data scraping. Wikipedia is debating whether AI language models should be used to write articles, recognizing the advantages in speed and scope but wary of fabricated facts and misleading information. Stack Overflow's moderators have banned AI-generated content because of its inaccuracies and the difficulty of telling correct answers from plausible-sounding ones, yet the site's management plans to leverage AI tools and charge firms that scrape its data, leading to conflicts over standards and enforcement.

The impact of AI on the web becomes particularly significant when considering Google's influence. As the dominant search engine, Google plays a central role in distributing attention and revenue across the internet. The rise of AI-powered alternatives such as Bing AI and ChatGPT, however, has prompted Google to experiment with replacing its traditional search results with AI-generated summaries. If implemented, this shift could have seismic effects on the web. Google's new system often reproduces website content verbatim without proper source attribution, potentially harming revenue-strapped sites and depleting Google's own pool of human-generated content. The consequences are hard to predict: the change could damage sites across the web, from product reviews to recipe blogs, news outlets, and wikis. Websites may respond by locking down access and charging fees, triggering a significant reordering of the web's economy. Ultimately, Google could undermine or irreversibly transform the very ecosystem that fostered its own success, putting its own existence at risk.

Allowing AI to assume a dominant role in information dissemination could degrade the overall quality of the web. AI-generated content, however impressive in its recombination of text, often lacks accuracy and real-world grounding, and the misinformation it produces is hard to detect and correct, making it a subtle and insidious problem. Human-generated content is not immune to misinformation either, but losing the platforms that currently foster human expertise would hinder our ability to rectify collective errors. The impact of AI on the web is multifaceted, and these examples are only the most visible signs of the change underway.