
To understand the risks posed by AI, follow the money

Time and again, leading scientists, technologists, and philosophers have made spectacularly bad predictions about the direction of innovation. Even Einstein was not immune, claiming, “There is not the slightest indication that nuclear energy will ever be obtainable,” just ten years before Enrico Fermi completed construction of the first fission reactor in Chicago. Shortly thereafter, the consensus shifted to fears of an imminent nuclear holocaust.

Similarly, some of today’s experts warn that an artificial general intelligence (AGI) doomsday is imminent. Others retort that large language models (LLMs) have already reached the peak of their powers.

It’s hard to argue with David Collingridge’s influential thesis that attempting to predict the risks posed by new technologies is a fool’s errand. Given that our leading scientists and technologists are so often mistaken about technological evolution, what chance do our policymakers have of effectively regulating the emerging technological risks from artificial intelligence (AI)?

We ought to heed Collingridge’s warning that technology evolves in uncertain ways. However, there is one class of AI risk that is generally knowable in advance. These are risks stemming from misalignment between a company’s economic incentives to profit from its proprietary AI model in a particular way and society’s interests in how the AI model should be monetised and deployed.

Photograph of Albert Einstein in his office at Princeton University, New Jersey, taken by Roman Vishniac in 1942.
The Magnes Collection of Jewish Art and Life/Flickr, CC BY-NC-SA

The surest way to overlook such misalignment is to focus exclusively on technical questions about AI model capabilities, divorced from the socio-economic environment in which these models will operate and be designed for profit.

Focusing on the economic risks from AI is not simply about preventing “monopoly”, “self-preferencing”, or “Big Tech dominance”. It’s about ensuring that the economic environment facilitating innovation is not incentivising hard-to-predict technological risks as companies “move fast and break things” in a race for profit or market dominance.

It’s also about ensuring that value from AI is widely shared by preventing premature consolidation. We will see more innovation if emerging AI tools are accessible to everyone, such that a dispersed ecosystem of new firms, start-ups, and AI tools can arise.

OpenAI is already becoming a dominant player with US$2 billion (£1.6 billion) in annual sales and millions of users. Its GPT store and developer tools need to return value to those who create it in order to ensure ecosystems of innovation remain viable and dispersed.

By carefully interrogating the system of economic incentives underlying innovations and how technologies are monetised in practice, we can generate a better understanding of the risks, both economic and technological, nurtured by a market’s structure. Market structure is not simply the number of firms, but the cost structure and economic incentives in the market that follow from the institutions, adjacent government regulations, and available financing.

Degrading quality for higher profit

It is instructive to consider how the algorithmic technologies that underpinned the aggregator platforms of old (think Amazon, Google and Facebook among others), initially deployed to benefit users, were eventually reprogrammed to increase profits for the platform.

The problems fostered by social media, search, and recommendation algorithms were never an engineering issue, but one of financial incentives (for profit growth) not aligning with the safe, effective, and equitable deployment of algorithms. As the saying goes: history doesn’t necessarily repeat itself, but it does rhyme.

To understand how platforms allocate value to themselves and what we can do about it, we investigated the role of algorithms, and the unique informational set-up of digital markets, in extracting so-called economic rents from users and producers on platforms. In economic theory, rents are “super-normal profits” (profits that are above what would be achievable in a competitive market) and reflect control over some scarce resource.

Importantly, rents are a pure return to ownership or a degree of monopoly power, rather than a return earned from producing something in a competitive market (such as many producers making and selling cars). For digital platforms, extracting digital rents usually entails degrading the quality of information shown to the user, on the basis of them “owning” access to a mass of customers.
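For readers unfamiliar with the term, the definition of rent above reduces to a simple subtraction. The following toy sketch uses entirely hypothetical numbers (they are ours, not figures from the research):

```python
# Toy illustration of economic rent as "super-normal profit".
# All numbers are hypothetical and for exposition only.

def economic_rent(actual_profit: float, competitive_profit: float) -> float:
    """Profit earned above the competitive-market benchmark."""
    return max(0.0, actual_profit - competitive_profit)

# A firm in a competitive market (e.g. one of many car makers)
# earns roughly the normal competitive return: no rent.
print(economic_rent(actual_profit=5.0, competitive_profit=5.0))   # 0.0

# A platform controlling access to a mass of customers can earn
# well above that benchmark: the excess is the rent.
print(economic_rent(actual_profit=30.0, competitive_profit=5.0))  # 25.0
```

The point of the `max` is only that rent is defined as the excess over the benchmark; a firm earning less than the competitive return is simply unprofitable, not a negative rent-extractor.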

For example, Amazon’s millions of users depend on its product search algorithms to show them the best products available on the market, since they are unable to inspect each product individually. These algorithms save everyone time and money: by helping users navigate through thousands of products to find the ones with the highest quality and the lowest price, and by expanding the market reach of suppliers through Amazon’s delivery infrastructure and immense customer network.

These platforms made markets more efficient and delivered enormous value both to users and to product suppliers. But over time, a misalignment between the initial promise of providing user value and the need to expand profit margins as growth slows has driven bad platform behaviour. Amazon’s advertising business is a case in point.

Amazon’s advertising

In our research on Amazon, we found that users still tend to click on the product results at the top of the page, even when they are no longer the best results but instead paid advertising placements. Amazon abuses the habituated trust that users have come to place in its algorithms, and instead allocates user attention and clicks to inferior-quality, sponsored information from which it profits immensely.

We found that, on average, the most-clicked sponsored products (advertisements) were 17% more expensive and 33% lower ranked according to Amazon’s own quality, price, and popularity-optimising algorithms. And because product suppliers must now pay for the product ranking that they previously earned through product quality and popularity, their profits go down as Amazon’s go up, and prices rise as some of the cost is passed on to customers.
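The shape of that comparison can be sketched in a few lines. This is a minimal, hypothetical reconstruction on synthetic listings — the field names and numbers are ours, not data from the study:

```python
# Hypothetical sketch: compare average price and organic rank of
# sponsored listings against organic ones. Synthetic data only.

listings = [
    # (sponsored?, price, organic_rank) -- rank 1 is the best result
    (True,  35.0, 40),
    (True,  29.0, 25),
    (False, 27.0,  3),
    (False, 25.0,  1),
    (False, 26.0,  8),
]

def average(values):
    return sum(values) / len(values)

sponsored_price = average([p for s, p, _ in listings if s])
organic_price = average([p for s, p, _ in listings if not s])
price_premium = (sponsored_price / organic_price - 1) * 100

sponsored_rank = average([r for s, _, r in listings if s])
organic_rank = average([r for s, _, r in listings if not s])

print(f"Sponsored items cost {price_premium:.0f}% more on average")
print(f"Average organic rank: sponsored {sponsored_rank}, organic {organic_rank}")
```

On this synthetic data the sponsored items come out roughly 23% more expensive and far lower ranked; the study’s 17% and 33% figures come from click-weighted comparisons on real Amazon data, which this sketch does not attempt to reproduce.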

Amazon is one of the most striking examples of a company pivoting away from its original “virtuous” mission (“to be the most customer-centric company on Earth”) towards an extractive business model. But it is far from alone.

Google, Meta, and virtually all other major online aggregators have, over time, come to prioritise their economic interests over their original promise to their users and to their ecosystems of content and product suppliers or application developers. Science fiction author and activist Cory Doctorow calls this the “enshittification” of Big Tech platforms.

But not all rents are bad. According to the economist Joseph Schumpeter, rents received by a firm from innovating can be beneficial for society. Big Tech’s platforms got ahead through highly innovative, superior algorithmic breakthroughs. The current market leaders in AI are doing the same.

So while Schumpeterian rents are real and justified, over time, and under external financial pressure, market leaders began to use their algorithmic market power to capture a greater share of the value created by the ecosystem of advertisers, suppliers and users in order to keep profits growing.

User preferences were downgraded in algorithmic importance in favour of more profitable content. For social media platforms, this was addictive content designed to increase time spent on the platform, at any cost to user health. Meanwhile, the ultimate suppliers of value to the platform – the content creators, website owners and merchants – have had to hand over more of their returns to the platform owner. In the process, profits and profit margins have become concentrated in a few platforms’ hands, making innovation by outside companies harder.

A platform compelling its ecosystem of firms to pay ever-higher fees (in return for nothing of commensurate value on either side of the platform) cannot be justified. It is a red flag that the platform has a degree of market power that it is exploiting to extract unearned rents. Amazon’s most recent quarterly disclosures (Q4, 2023) show year-on-year growth in online sales of 9%, but growth in fees of 20% (third-party seller services) and 27% (advertising sales).

What is important to remember in the context of risk and innovation is that this rent-extracting deployment of algorithmic technologies by Big Tech is not an unknowable risk, as identified by Collingridge. It is a predictable economic risk. The pursuit of profit via the exploitation of scarce resources under one’s control is a story as old as commerce itself.

Technological safeguards on algorithms, as well as more detailed disclosure about how platforms monetise their algorithms, could have prevented such behaviour from taking place. Algorithms have become market gatekeepers and value allocators, and are now becoming producers and arbiters of knowledge.

Risks posed by the next generation of AI

The limits we place on algorithms and AI models will be instrumental in directing economic activity and human attention towards productive ends. But how much greater are the risks for the next generation of AI systems? They will shape not just what information is shown to us, but how we think and express ourselves. Centralising the power of AI in the hands of a few profit-driven entities that are likely to face future economic incentives for bad behaviour is surely a bad idea.

Thankfully, society is not helpless in shaping the economic risks that invariably arise after each new innovation. Risks stemming from the economic environment in which innovation occurs are not immutable. Market structure is shaped by regulators and by a platform’s algorithmic institutions (especially its algorithms that make market-like allocations). Together, these factors influence how strong the network effects and economies of scale and scope are in a market, including the rewards for market dominance.

Technological mandates such as interoperability, which refers to the ability of different digital systems to work together seamlessly, or “side-loading”, the practice of installing apps from sources other than a platform’s official store, have shaped the fluidity of user mobility within and between markets, and in turn the ability of any dominant entity to durably exploit its users and ecosystem. The open internet protocols helped keep the web open rather than closed. Open source software enabled it to escape from under the thumb of the PC era’s dominant monopoly. What role might interoperability and open source play in keeping the AI industry a more competitive and inclusive market?

Disclosure is another powerful market-shaping tool. Disclosure requirements can compel technology companies to provide transparent information and explanations about their products and monetisation strategies. Mandatory disclosure of ad load and other operating metrics might have helped to prevent Facebook, for example, from exploiting its users’ privacy in order to maximise ad dollars from harvesting each user’s data.

But a lack of data portability, and an inability to independently audit Facebook’s algorithms, meant that Facebook continued to profit from its surveillance system for longer than it should have. Today, OpenAI and other leading AI model providers refuse to disclose their training data sets, while questions arise about copyright infringement and who should have the right to profit from AI-aided creative works. Disclosures and open technological standards are key steps towards ensuring that the benefits from these emerging AI platforms are shared as widely as possible.

Market structure, and its impact on “who gets what and why”, evolves as the technological basis for how firms are allowed to compete in a market evolves. So perhaps it’s time to turn our regulatory gaze away from attempting to predict the specific risks that might arise as specific technologies develop. After all, even Einstein couldn’t do that.

Instead, we should try to recalibrate the economic incentives underpinning today’s innovations, away from risky uses of AI technology and towards open, accountable AI algorithms that support and disperse value equitably. The sooner we acknowledge that technological risks are frequently an outgrowth of misaligned economic incentives, the more quickly we can work to avoid repeating the mistakes of the past.

We are not against Amazon offering advertising services to firms on its third-party marketplace. An appropriate amount of advertising space can indeed help lesser-known businesses or products, with competitive offerings, to gain traction in a fair manner. But when advertising almost entirely displaces top-ranked organic product results, advertising becomes a rent-extraction device for the platform.


An Amazon spokesperson said:

We disagree with a number of the conclusions of this research, which misrepresents and overstates the limited data it uses. It ignores that sales from independent sellers, which are growing faster than Amazon’s own, contribute to revenue from services, and that many of our advertising services do not appear in the store.

Amazon obsesses over making customers’ lives easier, and a big part of that is making sure customers can quickly and conveniently find and discover the products they want in our store. Advertisements have been an integral part of retail for many decades, and anytime we include them they are clearly marked as “Sponsored”. We provide a mix of organic and sponsored search results based on factors including relevance, popularity with customers, availability, price, and speed of delivery, along with helpful search filters to refine their results. We have also invested billions in the tools and services for sellers to help them grow, and additional services such as advertising and logistics are entirely optional.
