
“Move fast and break things”: Trump's $500 billion AI project poses major risks

As one of his first moves as the 47th President of the United States, Donald Trump announced a new $500 billion project called Stargate to accelerate the development of artificial intelligence (AI) in the United States.

The project is a partnership between three major technology companies – OpenAI, SoftBank and Oracle. Trump called it “by far the largest AI infrastructure project in history” and said it would help keep “the future of technology” in the US.

However, tech billionaire Elon Musk had a different opinion, claiming without evidence on his platform X that the project's backers “don't actually have the money.” X, which is not part of Stargate, is also working on developing AI, and Musk is a rival of OpenAI CEO Sam Altman.

In addition to the Stargate announcement, Trump also revoked an executive order signed by his predecessor Joe Biden that aimed to address and regulate AI risks.

Taken together, these two steps embody a mentality common in technology development that is best summed up by the phrase: “move fast and break things.”

What is Stargate?

The US is already the world leader in AI development.

The Stargate project will significantly expand this lead over other nations.

A network of data centers is being built across the United States. These centers will house the massive computer servers needed to run AI programs such as ChatGPT. The servers run 24/7 and require significant amounts of electricity and water to operate.

According to a statement from OpenAI, construction of new data centers as part of Stargate is already underway in the US state of Texas:

[W]e are evaluating potential locations across the country for additional campuses as we finalize definitive agreements.

US President Donald Trump speaks at the White House alongside SoftBank CEO Masayoshi Son, Oracle Chief Technology Officer Larry Ellison and OpenAI CEO Sam Altman. Photo: Julia Demaree Nikhinson

An imperfect – but promising – order

Trump's increased investment in AI development is encouraging. It could help advance the many potential benefits of AI. For example, AI can improve the prognosis of cancer patients by quickly analyzing medical data and detecting early signs of disease.

But Trump's simultaneous revocation of Biden's executive order on the “safe, secure and trustworthy development and use of AI” is deeply concerning. It could mean that any potential benefits of Stargate are quickly outweighed by its potential to exacerbate existing harms of AI technologies.

Yes, Biden's order lacked important technical details. But it was a promising start toward developing safer and more responsible AI systems.

A key issue it sought to address was tech companies' collection of personal data for AI training without obtaining prior consent.

AI systems collect data from across the internet. Even if data is freely available online for human use, that does not mean AI systems should use it for training. Moreover, once a photo or piece of text has been fed into an AI model, it cannot be removed. There have been numerous cases of artists suing AI art generators for unauthorized use of their work.

Another issue Biden's order was intended to address was the risk of harm – particularly to people from minority communities.

Most AI tools aim to increase accuracy for the majority of users. Without the right design, they can make extremely dangerous decisions for some.

For example, in 2015, an image recognition algorithm developed by Google automatically labeled images of Black people as “gorillas.” The same problem, later found in AI systems from other companies such as Yahoo and Apple, remains unresolved a decade later, because these systems are often inscrutable even to their creators.

Because of this opacity, it is crucial to design AI systems correctly from the start. Problems can become deeply embedded in the AI system itself, worsen over time, and be nearly impossible to fix.

As AI tools are increasingly used to make important decisions, such as screening résumés, minorities are even more disproportionately affected. For example, AI-powered facial recognition software is more likely to misidentify Black people and other people of color, which has led to false arrests and incarceration.

Faster and more powerful AI systems

Trump's two AI announcements in the first days of his second term as US president show that his main focus on AI – and that of the world's largest technology companies – is on developing ever faster and more powerful AI systems.

If we compare an AI system to a car, it is like developing the fastest car possible while ignoring essential safety features such as seat belts or airbags in order to keep it lighter and therefore faster.

For both cars and AI, this approach could mean very dangerous machines ending up in the hands of billions of people around the world.
