Welcome to this week's roundup of human-made AI news.
This week, AI has eroded our trust, although we can't seem to get enough of it.
Actors, programmers and fighter pilots could lose their jobs to AI.
And scientists are pinky-promising not to use AI to make bad proteins.
Let's dive in.
Trust, but verify
Societal trust in AI continues to decline, even as adoption of generative AI tools increases rapidly. Why are we so willing to adopt a technology despite fears of how it will shape our future? What is behind the distrust, and can it be remedied?
Sam's exploration of the dissonance between growing mistrust and growing user numbers of generative AI helps us take an honest look at our conflicted relationship with AI.
One of the reasons for AI skepticism is the alarmist views of some sectors of the industry. A report commissioned by the US government states that AI poses an "extinction-level threat" to our species.
The report recommends banning open source models, even as open AI advocates dismiss it as unscientific scaremongering.
The news was a little light in the AI fakery department this week. Kate Middleton, Princess of Wales, was hit by a huge fake image controversy over her overenthusiastic editing of a photo of her and the kids.
The media's outrage over a doctored photo of a celebrity is a little hypocritical, but perhaps it's a good thing that society is becoming more sensitive to what's real and what's not. Progress?
I'm no expert, but I seriously think the #KateMiddleton photo is a fake. pic.twitter.com/CfMJM9XZfW
— Zain Rajpoot (@ZAIN_MZQ) March 11, 2024
Striking back at AI jobs
The gaming industry has been quick to adopt AI, but actors and voice actors are not comfortable with the state of things. SAG-AFTRA now says the likelihood of a strike in video game negotiations is "50-50 or higher."
Playing a flight simulator game could soon be the closest fighter pilots get to the real thing, as the prospect of AI replacing them becomes a reality. The Pentagon plans to build the first of 1,000 AI-controlled mini ghost fighter jets in the next few months.
Swarms of autonomous fighter planes armed with missiles and piloted by an AI prone to hallucinations. What could possibly go wrong?
Emad Mostaque, CEO of Stability AI, raised eyebrows when he said there may be no need for human programmers within a few years. His bold claim is looking increasingly likely to come true.
This week, Cognition AI announced Devin, an autonomous AI software developer that can complete entire coding projects from a text prompt. Devin can even set up and fine-tune other AI models autonomously.
I don't know enough about #devin but I just find it funny that the first AI software engineer is looking for new software engineers #Softwaredevelopment pic.twitter.com/zKsI2FVA51
— Marcel (@MarcelNdrecaj) March 12, 2024
Perhaps Mostaque's claim needs qualifying. Soon there will be no need for people who can write code, but tools like Devin will allow anyone to become a programmer.
If you're an unemployed actor, fighter pilot, or programmer looking for a job in AI, here are some of the best universities to study AI in 2024.
Safety first
AI tools like DeepMind's AlphaFold have accelerated the design of new proteins. How do we make sure these tools aren't used to create proteins for malicious purposes?
Researchers have drawn up a set of voluntary safety rules for AI protein design and DNA synthesis, and some big names have signed on.
One of the commitments is to only use DNA synthesis labs that screen proteins for danger before producing them. Does this mean some labs don't do that? Which labs do you think the bad guys are likely to use?
A team of researchers has developed a benchmark to measure how likely an LLM is to help a bad actor build a bomb or bioweapon. Their new technique helps an AI model unlearn dangerous knowledge while retaining the good. Almost.
Well-aligned models will politely decline your request for help building a bomb. But if you use ASCII art to spell out the naughty words alongside a clever prompting technique, you can easily get around these guardrails.
Heart-shaped AI
Using AI, researchers studied how genetics influence the morphology of a person's heart. Creating 3D maps of the heart and linking them to genetics will be a great help to cardiologists.
Mayo Clinic researchers are developing "hypothesis-driven AI" for oncology. The new approach goes beyond simply analyzing big data by generating hypotheses that can then be validated against domain knowledge.
This could prove invaluable for testing medical hypotheses and for predicting and explaining how patients will respond to cancer treatments.
In other news…
OpenAI + Figure
Conversations with humans, on end-to-end neural networks:
→ OpenAI provides visual reasoning and language understanding
→ Figure's neural networks deliver fast, low-level, dexterous robot actions
(thread below) pic.twitter.com/trOV2xBoax
— Brett Adcock (@adcock_brett) March 13, 2024
And that's a wrap.
Do you trust AI more or less as it becomes a bigger and bigger part of your everyday life? A more skeptical approach is probably the safer bet, but the doomsayers are starting to get a little tiring.
What do you think of AI-controlled fighter jets replacing human pilots? I look forward to seeing the maneuverability of these machines, but the idea of an AI glitch coupled with missiles is troubling.
I'm a little disappointed that the only AI fakery we got this week was the royal "Jerseygate," but I suppose we should count that as progress. I'm sure normal service will resume next week as the election heats up.
Let us know which AI developments caught your eye this week, and please send us links to any exciting AI news or research we may have missed.