Is Zuckerberg an idiot? Or does he have an actual plan with this?
Seems to me it’s completely useless, like the Metaverse.
If the LLM is so stupid it can’t figure out that the sides of an equals sign can be reversed, as simply as in 2+2=4 <=> 4=2+2, he will never achieve general intelligence by just throwing more compute power at it.
As powerful as LLMs are, they’re still astoundingly stupid when they hit their limitations.
Humans are astoundingly stupid when they hit their limitations.
Spoken like a true AI!!! ;-)
The difference is that we can go beyond that limitation. Even a self-coding AI will either solve a problem or compound its own inefficiencies before asking an operator to help out.
Your post sounds almost as dense as:
“Everything that can be invented has been invented.” - Charles H. Duell, 1899.
The difference here is that Zuck is not planning on inventing or revolutionizing anything. He’s just throwing more computation power at an already inefficient method of modeling AI.
Some people do. Some people refuse to ask for help.
I don’t know much, but from what I know, we still haven’t reached a point of diminishing returns, so more power = more better.
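For what it’s worth, the usual way people quantify those returns is a power-law fit like the one in the Chinchilla paper (Hoffmann et al., 2022), which models loss in terms of parameter count N and training tokens D (treat this as the general shape of the curve, not gospel):

$$L(N, D) \approx E + \frac{A}{N^{\alpha}} + \frac{B}{D^{\beta}}$$

Loss keeps dropping as N and D grow, but each doubling of compute buys a smaller improvement, which is what “diminishing returns” would mean here.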
You can already ask ChatGPT to model a real-life scenario with a simple math equation. There is at least a rough model of how basic math can be used to solve problems.
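For example, here’s a minimal sketch using the openai Python package; the model name, prompt, and API-key setup are my own placeholders, not anything from this thread:

```python
# Minimal sketch: ask a chat model to turn a real-life scenario into a simple equation.
# Assumes the openai package (v1+) and an OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

prompt = (
    "I drive 240 km and use 18 litres of fuel. "
    "Model my fuel use as a simple equation and solve for litres per 100 km."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name, swap in whatever you use
    messages=[{"role": "user", "content": prompt}],
)

print(response.choices[0].message.content)
```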
Not necessarily, since you also need better techniques. A competitor could easily surpass you with less by being smarter about how the AI is trained.
Trying to achieve AGI by throwing more compute at LLMs is like trying to reach the moon by building a more powerful hot air balloon.
Assuming that “not compute” should be “more compute”, I totally agree. That’s a very apt analogy.
Yes, thanks, swipe typing picked up “not” instead of “more”. Maybe someone can throw some more compute at the swipe typing algorithm to better pick up on the context of the sentence when picking words.