The RAND Corporation, a think tank with deep ties to tech billionaires' funding networks, particularly through Open Philanthropy, played an important role in drafting President Joe Biden's executive order on AI.
According to Politico, the order, heavily influenced by effective altruism, a philosophy that advocates a data-driven approach to philanthropy, introduced comprehensive AI reporting requirements.
RAND's involvement has raised eyebrows because of its significant funding from groups like Open Philanthropy, which has ties to technology leaders such as Dustin Moskovitz. According to RAND spokesman Jeffrey Hiday, RAND exists to “conduct research and analysis on critical topics of the day, and then share that research, analysis and expertise with policymakers.”
That included extensive consultation on the recent executive order, including drafting the final documents.
Earlier this year, the RAND Corporation received over $15 million in discretionary grants from Open Philanthropy earmarked for AI and biosecurity projects.
Open Philanthropy, known for its effective-altruist approach, has both personal and financial ties to AI companies such as Anthropic and OpenAI.
In addition, RAND leaders are embedded at the highest levels of those AI companies' corporate structures, in what remains a relatively compact industry, at least in the United States.
Critics argue that the think tank's embrace of effective altruism could distort its research agenda and overshadow immediate AI concerns such as racial bias or copyright infringement.
The momentum at RAND also reflects a broader trend: effective altruism is increasingly shaping AI policy, or at least its narrative. The movement, backed by controversial figures such as Sam Bankman-Fried, advocates addressing long-term existential risks, including those posed by advanced AI, such as the development of bioweapons.
However, this focus has been criticized for potentially serving the interests of top technology companies by diverting attention from AI's existing harms. Essentially, effective altruism risks deferring immediate, practical action in favor of more hypothetical, long-term plans.
OpenAI's Internal Battle: Altruism vs. Commercialization
OpenAI, originally a non-profit organization, is now wrestling with the tension between those altruistic goals and the realities of commerce and the profit motive, especially after investments like Microsoft's $1 billion and its recent $86 billion valuation.
It was relatively easy for OpenAI to maintain this philosophy when it sat largely alone at the top of the generative AI industry.
With increasing competition, particularly from Google's Gemini Ultra, which directly challenges GPT-4's supremacy, it is far harder to exercise restraint and caution while holding on to the coveted position at the forefront of AI models.
Tensions at OpenAI came to a head under CEO Sam Altman, whose approach to running the company embodied the conflict between Silicon Valley techno-capitalism and the growing narrative around the risks of AI. There was speculation that Altman did not take safety seriously at the company, but this remained unconfirmed.
Despite board members' concerns about his commitment to safety and transparency, Altman's reinstatement as CEO marked a pivotal moment in the company's history and raised questions about the influence of effective altruism and the power of the board.
The question now is whether effective altruism and longtermist ideals can coexist with the rapid commercial and technological advances in the AI sector.
Regulation could protect AI incumbents
At the beginning of the year, a leaked memo by Google engineer Luke Sernau suggested that the open-source AI community poses a direct challenge to the dominance of all the leading AI developers.
The memo read: “We've done a lot of looking over our shoulders at OpenAI. Who will cross the next milestone? What will the next move be? But the uncomfortable truth is, we aren't positioned to win this arms race, and neither is OpenAI. While we've been squabbling, a third faction has been quietly eating our lunch. I'm talking, of course, about open source.”
Open-source models such as Meta's LLaMA and Mistral's Mixtral are quickly closing the gap between grassroots innovation and Big Tech.
While the push for AI regulation by companies like Google and OpenAI is portrayed as a step toward responsible AI development, it could also undermine the open-source AI community, which offers a decentralized alternative to centralized models.
Open-source models are, of course, also cheaper, and they allow companies, research institutions, students, and other users to build a degree of personal responsibility and sovereignty into their solutions.
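To make that sovereignty concrete, here is a minimal sketch of running an open-weight model entirely on hardware you control, with no calls to a vendor API. It assumes the Hugging Face transformers library; the Mistral checkpoint named below is one illustrative choice, and any open-weight model id would work.

```python
# Minimal sketch: local inference with an open-weight model via Hugging Face
# transformers. No external API is involved; the weights run on your own
# hardware. The model id is an assumption for illustration; swap in any
# open-weight checkpoint you are licensed to use.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="mistralai/Mistral-7B-Instruct-v0.2",
    device_map="auto",  # place weights on available GPU(s), else CPU
)

result = generator("Explain open-source AI in one sentence.", max_new_tokens=60)
print(result[0]["generated_text"])
```

Because the entire stack runs locally, an organization keeps control over its data, its costs, and the model's availability, which is precisely the decentralized alternative the open-source community offers.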
Are commercial AI developers driven by genuine concerns about the safe and ethical development of AI, or are their calls for regulation partly strategic maneuvers to maintain market dominance and control over AI innovation?
The intersection of altruism, politics, and commerce in the AI industry is extraordinarily complex. As AI advances, reconciling these competing interests will continue to cause friction.