DAI#57 – Tricky AI, exam challenge, and conspiracy cures

Welcome to this week’s roundup of AI news made by humans, for humans.

This week, OpenAI told us that it’s pretty sure o1 is kind of safe.

Microsoft gave Copilot an enormous boost.

And a chatbot can cure your belief in conspiracy theories.

Let’s dig in.

It’s pretty safe

We were caught up in the thrill of OpenAI’s release of its o1 models last week until we read the fine print. The model’s system card offers interesting insight into the safety testing OpenAI did, and the outcomes may raise some eyebrows.

It seems that o1 is smarter but also more deceptive, with a “medium” risk level according to OpenAI’s rating system.

Despite o1 being very sneaky during testing, OpenAI and its red teamers say they’re fairly sure it’s safe enough to release. Not so safe if you’re a programmer looking for a job.

If OpenAI’s o1 can pass OpenAI’s research engineer hiring interview for coding — 90% to 100% rate…

……then why would they continue to hire actual human engineers for this position?

Every company is about to ask this question. pic.twitter.com/NIIn80AW6f

Copilot upgrades

Microsoft unleashed Copilot “Wave 2”, which will give your productivity and content production a further AI boost. If you were on the fence about Copilot’s usefulness, these new features may be the clincher.

The Pages feature and the new Excel integrations are really cool. The way Copilot accesses your data does raise some privacy questions, though.

More strawberries

If all the recent talk about OpenAI’s Strawberry project gave you a craving for the berry, then you’re in luck.

Researchers have developed an AI system that promises to transform how we grow strawberries and other agricultural products.

This open-source application could have a big impact on food waste, harvest yields, and even the price you pay for fresh fruit and veg at the store.

Too easy

AI models are getting so smart that our benchmarks for measuring them are nearly obsolete. Scale AI and CAIS launched a project called Humanity’s Last Exam to fix this.

They want you to submit tough questions that you think could stump leading AI models. If an AI can answer PhD-level questions, then we’ll get a sense of how close we are to achieving expert-level AI systems.

If you think you have a good one, you could win a share of $500,000. It’ll have to be really tough, though.


Curing conspiracies

I love a good conspiracy theory, but some of the things people believe are just crazy. Have you tried convincing a flat-earther with simple facts and reasoning? It doesn’t work. But what if we let an AI chatbot have a go?

Researchers built a chatbot using GPT-4 Turbo, and they had impressive results in changing people’s minds about the conspiracy theories they believed in.

It does raise some awkward questions about how persuasive AI models are and who decides what ‘truth’ is.

Just because you’re paranoid doesn’t mean they’re not after you.

Stay cool

Is having your body cryogenically frozen part of your backup plan? If so, you’ll be happy to hear AI is making this crazy idea slightly more plausible.

A company called Select AI used AI to speed up the discovery of cryoprotectant compounds. These compounds stop organic matter from forming crystals during the freezing process.

For now, the application is for better transport and storage of blood or temperature-sensitive medicines. But if AI helps them find a really good cryoprotectant, cryogenic preservation of humans could go from a moneymaking racket to a plausible option.

AI is contributing to the medical field in other ways that may make you a little nervous. New research shows that a surprising number of doctors are turning to ChatGPT for help diagnosing patients. Is that a good thing?

If you’re excited about what’s happening in medicine and considering a career as a doctor, you may want to rethink that, according to this professor.

This is the ultimate warning for those considering careers as physicians: AI is becoming so advanced that the demand for human doctors will significantly decrease, especially in roles involving standard diagnostics and routine treatments, which will be increasingly replaced by AI.… pic.twitter.com/VJqE6rvkG0

In other news…

Here are some other clickworthy AI stories we enjoyed this week:

Gen-3 Alpha Video to Video is now available on web for all paid plans. Video to Video represents a new control mechanism for precise movement, expressiveness and intent within generations. To use Video to Video, simply upload your input video, prompt in any aesthetic direction… pic.twitter.com/ZjRwVPyqem

And that’s a wrap.

It’s not surprising that AI models like o1 present more risk as they get smarter, but the sneakiness during testing was weird. Do you think OpenAI will stick to its self-imposed safety level restrictions?

The Humanity’s Last Exam project was an eye-opener. Humans are struggling to find questions tough enough to stump AI. What happens after that?

If you believe in conspiracy theories, do you think an AI chatbot could change your mind? Amazon Echo is always listening, the government uses big tech to spy on us, and Mark Zuckerberg is a robot. Prove me wrong.

Let us know what you think, follow us on X, and send us links to cool AI stuff we may have missed.
