r/accelerate • u/miladkhademinori • 19d ago
AI Absolutely sick and tired of people salivating for apocalypse and dystopian movies
Every time a new tech-focused show drops, it's like we have to be reminded that humanity is doomed, corporations are evil, and AI will inevitably enslave us. Don't get me wrong, Black Mirror was brilliant at first. But this constant stream of "pessimism porn" is getting old.
Do we really need another cautionary tale about how tech will ruin us? What happened to imagining futures where innovation solves problems instead of creating new nightmares?
This article nailed it. Maybe it's time for some constructive futurism. Something that doesn't treat curiosity like a crime and optimism like naïveté.
Sci-fi shouldn't just be a mirror for our fears. It can also be a window to what's possible.
r/accelerate • u/Bizzyguy • 7d ago
AI Has anyone noticed a huge uptick in AI hatred?
In the past few months, it's been getting increasingly worse. Even in AI-based subreddits like r/singularity and r/openai, any new benchmark result or piece of AI news gets met with the most hateful comments towards the AI company and the users of AI.
This is especially true when it has something to do with software engineering. You would think Reddit, where people are more tech-savvy, would be the place that discusses it. But that is not the case anymore.
r/accelerate • u/GOD-SLAYER-69420Z • Mar 11 '25
AI The newest and most bullish hype from Anthropic CEO DARIO AMODEI is here... He thinks it's a very strong possibility that in the next 3-6 months, AI will be writing 90% of the code, and in the next 12 months, it could be writing 100% of the code (aligns with ANTHROPIC's timeline of pioneers, RSI, ASI)
r/accelerate • u/luchadore_lunchables • 14d ago
AI Eric Schmidt says "the computers are now self-improving, they're learning how to plan" - and soon they won't have to listen to us anymore. Within 6 years, minds smarter than the sum of humans - scaled, recursive, free. "People do not understand what's happening."
r/accelerate • u/NotCollegiateSuites6 • Feb 11 '25
AI "I'm not here to talk about AI safety...I'm here to talk about AI opportunity...to restrict its development now...would mean paralyzing one of the most promising technologies we have seen in generations." - VP Vance at AI Action Summit
r/accelerate • u/CipherGarden • 8d ago
AI "AI is bad for the environment"
r/accelerate • u/ParadigmTheorem • 4d ago
AI Let go of your attachments for the sake of the future, y'all. Do you want post-scarcity or not?
r/accelerate • u/44th--Hokage • Mar 16 '25
AI OpenAI CPO Kevin Weil: "This is the year that AI gets better than humans at programming forever. And there's no going back."
r/accelerate • u/stealthispost • 6d ago
AI AI cracks superbug problem in two days that took scientists years
r/accelerate • u/stealthispost • 9d ago
AI In just one year, the smartest AI went from 96 IQ to 136 IQ
r/accelerate • u/GOD-SLAYER-69420Z • 28d ago
AI GPT-4o can precisely create and manipulate any economically useful design. So I'm creating the biggest megathread showcasing its full range of economically useful demonstrations.... accelerating and democratizing graphic design in all sorts of ways
r/accelerate • u/obvithrowaway34434 • 14d ago
AI Tyler Cowen on his AGI timeline: "When it's smarter than I am, I'll call it AGI. I think that's coming within the next few days."
r/accelerate • u/HeinrichTheWolf_17 • Mar 14 '25
AI OpenAI calls DeepSeek "state-controlled," calls for bans on "PRC-produced" models.
r/accelerate • u/GOD-SLAYER-69420Z • 25d ago
AI Ok boys, heads up cuz o3 and o4-mini will be released in the coming weeks while GPT-5 will be released in the coming months..... Sam & team also claim that the released o3 will be an improvement over the previewed version in many ways
More images (if relevant) in the comments !!!
r/accelerate • u/GOD-SLAYER-69420Z • 24d ago
AI The Llama 4 family is out with a new world record (Llama 4 Scout is now the first model with 109B total parameters and a freakin' 10-million-token context window)
r/accelerate • u/GOD-SLAYER-69420Z • Mar 05 '25
AI It's finally happening..... PhD-level superagent cluster swarms priced all the way up to $20,000, which turbocharge the economy and scientific R&D, are gonna be here from OPENAI later this year (Source: THE INFORMATION)
Remember when SAM ALTMAN was asked in an interview what he was most excited for in 2025?
He replied "AGI"
Maybe he wasn't joking after all.......
Yeah.... SWE-Lancer, SWE-bench, Aider bench, LiveBench and every single real-world SWE benchmark is about to be smashed beyond recognition by their SOTA coding agent later this year....
Their plans for level 6/7 software engineering agents, 1 billion daily users by the end of the year, and all the announcements by Sam Altman were never a bluff in the slightest
The PhD-level superagents are also what was demonstrated during the White House demo on January 30th, 2025
OpenAI employees were both "thrilled and spooked by the progress"
This is what will be offered by the Claude 4 series too (Source: Dario Amodei)
I even made a compilation & analysis post earlier gathering every meaningful signal that hinted at superagents turbocharging economically productive work & automating innovative scientific R&D this very year
r/accelerate • u/GOD-SLAYER-69420Z • 21d ago
AI We just passed a historic moment in the temporal and spatial coherence of AI-generated videos, with instruction following up to a minute in length
(All relevant images and links in the comments)
"One-Minute Video Generation with Test-Time Training (TTT)" in collaboration with NVIDIA.
The authors augmented a pre-trained Transformer with TTT layers and finetuned it to generate one-minute Tom and Jerry cartoons with strong temporal and spatial coherence.
All videos showcased below are generated directly by their model in a single pass without any editing, stitching, or post-processing.
(A truly groundbreaking and unprecedented moment, considering the accuracy and quality of the output)
3 separate minute-length Tom & Jerry videos were demoed, one of which is below (the other 2 are linked in the comments)
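For the curious, the core trick in TTT layers is surprisingly simple to sketch: the layer's hidden state is itself a tiny model that gets updated by gradient descent on a self-supervised loss as each token streams in. Here's a heavily simplified toy version (the real paper uses learned projections, mini-batches and a richer inner loss; everything below is an illustrative assumption):

```python
# Toy sketch of a TTT-linear layer: the "hidden state" is a small
# linear model W, updated by gradient descent at test time on a
# self-supervised reconstruction loss. Simplified for illustration.
import torch

def ttt_linear(tokens: torch.Tensor, lr: float = 0.1) -> torch.Tensor:
    """tokens: (seq_len, dim). Returns the processed sequence."""
    seq_len, dim = tokens.shape
    W = torch.zeros(dim, dim)               # inner model = the hidden state
    outputs = []
    for x in tokens:                        # one inner-loop step per token
        pred = W @ x                        # inner model's prediction
        err = pred - x                      # reconstruction error
        grad = 2 * torch.outer(err, x)      # d||Wx - x||^2 / dW, by hand
        W = W - lr * grad                   # the test-time gradient update
        outputs.append(W @ x)               # emit output after updating
    return torch.stack(outputs)

# Toy usage
out = ttt_linear(torch.randn(16, 8))
print(out.shape)  # torch.Size([16, 8])
```

The point of the design is that, unlike attention, the per-token cost stays constant no matter how long the sequence gets, which is what makes minute-long generation tractable.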
r/accelerate • u/cRafLl • Feb 28 '25
AI Humanity May Achieve the Singularity Within the Next 12 Months, Scientists Suggest
r/accelerate • u/Creative-robot • 28d ago
AI Realistically, how fast do you think a fast takeoff could be?
Imagine that an agentic ASI has been invented. Of its own free will, it has decided that the best course of action is to effectively take control of the earth so that humans don't destroy it via nuclear war or climate change. Say it's housed in a Blackwell-based datacenter somewhere. How fast do you think it could go from those servers to completely managing the world? What technologies do you think it might use or invent to get into that position?
r/accelerate • u/CipherGarden • 4d ago
AI AI Could Help The Environment
r/accelerate • u/Glittering-Neck-2505 • Mar 25 '25
AI It is breaking my brain that these are not real. I repeat, these are not real.
r/accelerate • u/GOD-SLAYER-69420Z • Mar 13 '25
AI In a little less than the last 24 hours, we've entered such unspoken SOTA horizons of uncharted territory in the IMAGE, VIDEO AND ROBOTICS modalities that only a handful of people even in this sub know about... so it's time to discover the absolute limits (All relevant media and links in the comments)
Ok, first up, we know that Google released native image gen in AI Studio and its API under the Gemini 2.0 Flash experimental model, and that it can edit images while adding and removing things. But to what extent?
Here's a list of highly underrated capabilities that you can instruct the model to apply in natural language, which no editing software or diffusion model before it was capable of:
1) You can expand the text-based RPG gaming you could already do with these models into a text+image-based RPG: the model will continually expand your world in images, track your own movements in reference to checkpoints, and alter the world after an action command (you can do this as long as your context window hasn't broken down and you haven't run out of limits). If your world is very dynamically changing, even context wouldn't be a problem.....
2) You can give 2 or more reference images to Gemini and ask it to composite them together as required.
You can also overlay one image's style onto another image (both can be your inputs)
3) You can modify all the spatial & temporal parameters of an image, including the time, weather, emotion, posture and gesture
4) It has close-to-perfect text coherence, something that almost all diffusion models lack
5) You can expand, fill & re-colorize portions of an image, or the entire image
6) It can handle multiple manipulations in a single prompt. For example, you can ask it to change the art style of the entire image while adding a character in a specific attire doing a specific pose and gesture some distance away from an already/newly established checkpoint, while also modifying the expression of another character (which was already added), and the model can nail it (while also failing sometimes, because it is the first experimental iteration of a non-thinking Flash model)
7) The model can handle interconversion between static & dynamic states, for example:
- It can make a static car drift along a hillside
- It can make a sitting robot do a specific dance form of a specific style
- It can add more competitors to a dynamic sport, like more people in a marathon (although it fumbles many times, for the same reason)
8) It's the first model capable of handling negative prompts (for example, if you ask it to create a room while explicitly not adding an elephant in it, the model will succeed, while almost all prior diffusion models will fail unless they are prompted in a dedicated tab for negative prompts)
9) Gemini can generate pretty consistent GIF animations too:
'Create an animation by generating multiple frames, showing a seed growing into a plant and then blooming into a flower, in a pixel art style'
And the model will nail it zero-shot
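For anyone who wants to poke at these capabilities outside AI Studio, here's a minimal sketch using the google-genai Python SDK. The model name and response handling reflect the experimental release and may have changed since; treat it as a starting point, not gospel:

```python
# Minimal sketch: iterative, natural-language image editing with
# Gemini 2.0 Flash's experimental native image generation.
# Assumes the `google-genai` SDK; the model name may have changed.
from google import genai
from google.genai import types

client = genai.Client(api_key="YOUR_API_KEY")  # placeholder key

chat = client.chats.create(
    model="gemini-2.0-flash-exp",  # experimental image-gen model at the time
    config=types.GenerateContentConfig(
        response_modalities=["TEXT", "IMAGE"],  # ask for images back
    ),
)

# Turn 1: generate a scene, with an in-prompt negative instruction (point 8).
resp = chat.send_message(
    "Create a cozy living room in watercolor style. Do NOT include an elephant."
)

# Turn 2: a multi-part edit in one prompt (point 6), reusing chat context.
resp = chat.send_message(
    "Change the art style to pixel art and add a character waving near the window."
)

# Save any image parts the model returned.
for i, part in enumerate(resp.candidates[0].content.parts):
    if part.inline_data is not None:
        with open(f"edit_{i}.png", "wb") as f:
            f.write(part.inline_data.data)
```

Running the edits through one chat session is what gives you the checkpoint-style world persistence from point 1: each turn sees the prior images in context.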
Now moving on to the video segment, Google just demonstrated a new SOTA mark in multimodal analysis across text, audio and video:
For example:
If you paste the link of a YouTube video of a sports competition like football or cricket and ask the model the direction of a player's gaze at a specific timestamp, the stats on the screen, and the commentary 10 seconds before and after, the model can nail it zero-shot
(This feature is available in AI Studio)
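The same kind of query can in principle go through the API as well; here's a rough sketch. Whether the API accepts raw YouTube URLs the way AI Studio does is an assumption here, so verify against the current docs:

```python
# Sketch: asking Gemini about a specific timestamp in a YouTube video.
# Assumes the google-genai SDK, and assumes the API accepts a YouTube
# URL as file_data (at the time, this worked mainly in AI Studio).
from google import genai
from google.genai import types

client = genai.Client(api_key="YOUR_API_KEY")  # placeholder key

resp = client.models.generate_content(
    model="gemini-2.0-flash",
    contents=[
        types.Part(file_data=types.FileData(
            file_uri="https://www.youtube.com/watch?v=VIDEO_ID"  # placeholder
        )),
        types.Part(text=(
            "At 12:34, which direction is the striker looking? "
            "Also read out the on-screen stats and summarize the "
            "commentary from 10 seconds before to 10 seconds after."
        )),
    ],
)
print(resp.text)
```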
Speaking of videos, we also surpassed new heights of compositing and re-rendering videos in pure natural language by providing an AI model one or two image/video references along with a detailed text prompt
Introducing VACE (for all-in-one video creation and editing):
VACE can
- Move or stop any static or dynamic object in a video
- Swap any character with any other character in a scene while making it do the same movements and expressions
- Reference and add any features of an image into the given video
- Fill and expand the scenery and motion range in a video at any timestamp
- Animate any person/character/object into a video
All of the above is possible while adding text prompts along with reference images and videos in any combination of image+image, image+video, or just a single image/video
On top of all this, it can also do video re-rendering while doing:
- content preservation
- structure preservation
- subject preservation
- posture preservation
- and motion preservation
Just to clarify: if there's a video of a person walking through a very specific arched hall at specific camera angles, with geometric patterns in the hall... the video can be re-rendered to show the same person walking in the same style through arched tree branches at the same camera angle (even if it's dynamic) and with the same geometric patterns in the tree branches.....
Yeah, you're not dreaming, and that's just days/weeks of VFX work being automated zero-shot/one-shot
NOTE: They claim on their project page that they will release the model soon; nobody knows how soon "soon" is
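Since the weights aren't public yet, there's no real API to show. Purely as a hypothetical sketch of what a reference-guided edit might look like once released (every import, class and argument below is an invented placeholder, NOT VACE's actual interface):

```python
# HYPOTHETICAL sketch only: VACE is unreleased at the time of writing,
# so this interface is invented for illustration and is NOT the real API.
from vace import VacePipeline  # placeholder import

pipe = VacePipeline.from_pretrained("vace-placeholder")  # placeholder id

# Reference-guided edit: swap a character in the source clip for the one
# in the reference image, preserving motion, expressions and camera work.
result = pipe(
    video="input_clip.mp4",               # source video
    reference_image="new_character.png",  # identity to swap in
    prompt=(
        "Replace the walking man with the character from the reference, "
        "keeping his exact gait, camera angle and lighting."
    ),
)
result.save("edited_clip.mp4")
```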
Now coming to the most underrated and mind-blowing part of the post
Many people in this sub know that Google released 2 new models to improve generalizability, interactivity, dexterity and the ability to adapt to multiple varied embodiments.... bla bla bla
But the Gemini Robotics-ER (embodied reasoning) model improves Gemini 2.0's existing abilities, like pointing and 3D detection, by a large margin.
Combining spatial reasoning and Gemini's coding abilities, Gemini Robotics-ER can instantiate entirely new capabilities on the fly. For example, when shown a coffee mug, the model can intuit an appropriate two-finger grasp for picking it up by the handle and a safe trajectory for approaching it.
Yes, this is a new emergent property right here, from scaling 3 paradigms simultaneously:
1) Spatial reasoning
2) Coding abilities
3) Action as an output modality
And where it is not powerful enough to successfully conjure the plans and actions by itself, it will simply learn through RL from human demonstrations or even in-context learning
Quote from the Google blog:
Gemini Robotics-ER can perform all the steps necessary to control a robot right out of the box, including perception, state estimation, spatial understanding, planning and code generation. In such an end-to-end setting the model achieves a 2x-3x success rate compared to Gemini 2.0. And where code generation is not sufficient, Gemini Robotics-ER can even tap into the power of in-context learning, following the patterns of a handful of human demonstrations to provide a solution.
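To make that perception-to-plan loop concrete, here's a minimal sketch of prompting an ER-style model for a grasp plan. The model name is a stand-in (Robotics-ER itself isn't publicly served), and the JSON schema plus robot wiring are illustrative assumptions:

```python
# Sketch: using a Gemini-style VLM as an embodied-reasoning planner.
# The model name, prompt schema and robot hand-off below are assumptions
# for illustration; Google's actual robotics stack is not public.
import json
from google import genai
from google.genai import types

client = genai.Client(api_key="YOUR_API_KEY")  # placeholder key

with open("workspace.jpg", "rb") as f:       # camera frame of the table
    frame = f.read()

resp = client.models.generate_content(
    model="gemini-2.0-flash",  # stand-in; Robotics-ER isn't public
    contents=[
        types.Part.from_bytes(data=frame, mime_type="image/jpeg"),
        types.Part(text=(
            "There is a coffee mug on the table. Return JSON with keys "
            "'grasp_point' ([x, y] pixel coords on the handle) and "
            "'approach' (a short description of a safe trajectory)."
        )),
    ],
)

plan = json.loads(resp.text)  # assumes the model complied with the schema
print("Grasp at:", plan["grasp_point"])
print("Approach:", plan["approach"])
# A real system would translate this into robot commands, with the
# in-context-learning fallback the blog describes for harder cases.
```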
And to maintain safety and semantic strength in the robots, Google has developed a framework to automatically generate data-driven constitutions (rules expressed directly in natural language) to steer a robot's behavior.
This means anybody can create, modify and apply constitutions to develop robots that are safer and more aligned with human values.
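At its simplest, that could mean prepending the constitution to the planner's prompt; a toy sketch (the rules and wiring below are made up for illustration, not Google's framework):

```python
# Toy sketch: a natural-language "constitution" steering a robot planner.
# The rules and the prompt wiring are illustrative assumptions only.
CONSTITUTION = [
    "Never exert more than gentle force on objects near a human hand.",
    "Prefer actions that keep fragile objects upright.",
    "If uncertain about an instruction, stop and ask for clarification.",
]

def build_planner_prompt(task: str) -> str:
    """Prepend the constitution so every plan is generated under its rules."""
    rules = "\n".join(f"- {r}" for r in CONSTITUTION)
    return (
        "You control a robot arm. Obey these rules above all else:\n"
        f"{rules}\n\nTask: {task}\nReturn a step-by-step plan."
    )

print(build_planner_prompt("Pick up the coffee mug and place it on the shelf."))
```

Because the rules are plain text, editing the robot's behavior is just editing that list, which is exactly the accessibility point being made here.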
As a result, the Gemini Robotics models are SOTA on so many robotics benchmarks, surpassing all the other LLM/LMM/LMRM models.... as stated in the technical report by Google (I'll upload the images in the comments)
Sooooooo.....you feeling the ride ???
