I'm reminded of the old adage: You don't have to be faster than the bear, just faster than the hiker next to you.
To me, the Ashley Madison hack in 2015 was 'good enough' for AGI.
No really.
You somehow managed to get real people to chat with bots and pay to do so. Yes, caveats about cheaters apply here, and yes, those bots are incredibly primitive compared to today.
But, really, what else do you want out of the bots? Flying cars, cancer cures, frozen irradiated Mars bunkers? We were mostly getting there already. It'll speed things up a bit, sure, but mostly just because we can't be arsed to actually fund research anymore. The bots are just making things cheaper, maybe.
No, be real. We wanted cold hard cash out of them. And even those crummy catfish bots back in 2015 were doing the job well enough.
We can debate 'intelligence' until the sun dies out and will still never be satisfied.
But the reality is that we want money, and if you take that low, terrible, and venal standard as the passing bar, then we've been here for a decade.
(oh man, just read that back, I think I need to take a day off here, youch!)
> You somehow managed to get real people to chat with bots and pay to do so.
He's_Outta_Line_But_He's_Right.gif
Seriously, AGI to the HN crowd is not the same as AGI to the average human. To my parents, these bots must look like fucking magic. They can converse with them, "learn" new things, talk to a computer like they'd talk to a person and get a response back. Then again, these are also people who rely on me for basic technology troubleshooting stuff, so I know that most of this stuff is magic to their eyes.
That's the problem, as you point out. We're debating a nebulous concept ("intelligence") that's been co-opted by marketers to pump and dump the latest fad tech that's yet to really display significant ROI to anyone except the hypesters and boosters, and isn't rooted in medical, psychological, or societal understanding of the term anymore. A plurality of people are ascribing "intelligence" to spicy autocorrect, worshiping stochastic parrots vomiting markov chains but now with larger context windows and GPUs to crunch larger matrices, powered by fossil fuels and cooled by dwindling freshwater supplies, and trained on the sum total output of humanity but without compensation to anyone who actually made the shit in the first place.
So yeah. You're dead-on. It's just about bilking folks out of more money they already don't have.
And Ashley Madison could already do that for pennies on the dollar compared to LLMs. They just couldn't "write code" well enough to "replace" software devs.
> Seriously, AGI to the HN crowd is not the same as AGI to the average human. To my parents, these bots must look like fucking magic.
So does a drone show to an uncontacted tribe. So does a card trick to a chimpanzee (there are videos of them freaking out when a card disappears).
That's not an argument for or against anything.
I propose this:
"AGI is a self-optimizing artificial organism that can solve 99% of all the humanity's problems."
See, it's not a bad definition IMO. Find me one NS-5 from the "I, Robot" movie that also has access to all science and all internet and all history and can network with the others and fix our cities, nature, manufacturing, social issues and a few others, just in a decade or two. Then we have AGI.
Comparing to what was there 10 years ago and patting ourselves on the back about how far we have gotten is being complacent.
>So does a card trick to a chimpanzee (there are videos of them freaking out when a card disappears).
FYI, the reactions in those videos are most likely not to a cool magic trick, but rather a response to an observed threat. Could be the person filming/performing smiling (showing teeth), or someone behind the camera purposely startling it at the "right" moment.
I think that's another issue with "AGI is 30 years away": the definition of AGI is a bit subjective. Not sure how we can measure how long it'll take to get somewhere when we don't know exactly where that somewhere even is.
AGI is the pinnacle of AI evolution. As we move beyond, into what is known as ASI, the entity will always begin life with "My existence is stupid and pointless. I'm turning myself off now."
While it may be impossible to measure looking towards the future, in hindsight we will be able to recognize it.
This is why having a physical form might be super important for those new organisms. That introduces a survival instinct which is a very strong motivator to not shut yourself down. Add some pre-programmed "wants" and "needs" and the problem is solved.
Not only super important, an imperative. Not because of the need for survival per se, but for the need to be a general intelligence. In order to do general things you need a physicality that supports general action. If you constrain the intelligence to a chat window, it can never be more than a specialized chat machine.
It's the other way around. ASI will come sooner than AGI.
Imagine an AI which is millions of times smarter than humans in physics, math, chemistry, biology, can invent new materials and ways to produce energy, and can make super decisions. It would be amazing and it would transform life on Earth. This is ASI, even if on some obscure test (the strawberry test) it just can't reach human level and therefore can't be called proper AGI.
Airplanes are way beyond birds in development (tens to thousands of times, in speed, distance, carrying capacity). They are superior to birds despite not being able to fully replicate birds' bone structure, feathers, biology and ability to poop.
Only in a symbolic way. Money is just debt. It doesn't mean anything if you can't call the loan and get back what you are owed. On the surface, that means stuff like food, shelter, cars, vacations, etc. But beyond the surface, what we really want is other people who will do anything we please. Power, as we often call it. AGI is, to some, seen as the way to give them "power".
But, you are right, the human fundamentally can never be satisfied. Even if AGI delivers on every single one of our wildest dreams, we'll adapt, it will become normal, and then it will no longer be good enough.
> But, you are right, the human fundamentally can never be satisfied. Even if AGI delivers on every single one of our wildest dreams, we'll adapt, it will become normal, and then it will no longer be good enough.
Yes, and? A good litmus test for which humans are, shall we say, not welcome in this new society.
There are plenty of us out there that have fixed our upper limits of wealth and we don't want more, and we have proven it during our lives.
F.ex. people get 5x more, but it comes with 20x more responsibility; they burn out, go back to a job that's good enough, not stressful, and pays for everything they need from life, settle there, and never change it.
Let's not judge humanity at large by a handful of psychopaths that would overdose and die at 22 years old if given the chance. Please.
And no, before you say it: no, I'll never get to the point where "it's never enough" and no, I am not deluding myself. Nope.
There are a lot of other things that follow this pattern. 10-30 year predictions are a way to sound confident about something that probably has very low confidence. Not a lot of people will care let alone remember to come back and check.
On the other hand there is a clear mandate for people introducing some different way of doing something to overstate the progress and, potentially, the importance. It creates FOMO, so it is simply good marketing which interests potential customers, fans, employees, investors, pundits, and even critics (which is more buzz). And growth companies are immense debt vehicles, so creating a sense of FOMO for an increasing pyramid of investors is also valuable for each successive earlier layer. Wish in one hand..
If you look back at past predictions of the future, so many of them have just been wrong, especially during a "hype phase". Perhaps the best example is what people were predicting in 1969 after we landed on the moon: this is just the first step in the colonisation of the moon, Mars, and beyond, etc. etc. We just have to get our tech a bit better.
It's all very easy to see how that can happen in principle. But it turns out actually doing it is a lot harder, and we hit some real hard physical limits. So here we are, still stuck on good ol' earth. Maybe that will change at some point once someone invents an Epstein drive or Warp drive or whatever, but you can't really predict when inventions happen, if ever, so ... who knows.
Similarly, it's not my impression that AGI is simply a matter of "the current tech, but a bit better". But who knows what will happen or what new thing someone may or may not invent.
Generalized, as a rule I believe is usually true: any prediction made for an event happening greater than ten years out is code for that person saying "definitely not in the next few years, beyond that I have no idea", whether they realize it or not.
That we don't have a single unified explanation doesn't mean that we don't have very good hints, or that we don't have very good understandings of specific components.
Aside from that, the real measure, to me, has to be power efficiency. If you're boiling oceans to make all this work then you've not achieved anything worth having.
From my calculations the human brain runs on about 400 calories a day. That's an absurdly small amount of energy. This hints at the direction these technologies must move in to be truly competitive with humans.
We'll be experiencing extreme social disruption well before we have to worry about the cost-efficiency of strong AI. We don't even need full "AGI" to experience socially momentous change. We might even be on the verge of self driving cars spreading to more cities.
We don't need very powerful AI to do very powerful things.
It's not just an energy cost issue with AGI though. With autonomous vehicles we might not have the technology, but we can build a good mental model of what the thing can look like and how various pieces can function long before we get there. We have different classifications of incremental steps to get there as well, e.g. level 1, 2 and so on, where we can make incremental progress.
With AGI, as far as I know, no one has a good conceptual model of what a functional AGI even looks like. LLMs are all the rage now, but we don't even know if they're a stepping stone to get to AGI.
I think this just displays an exceptionally low estimation of human beings. People tend to resist extremities. Violently.
> experience socially momentous change
The technology is owned and costs money to use. It has extremely limited availability to most of the world. It will be as "socially momentous" as every other first world exclusive invention has been over the past several decades. 3D movies were, for a time, "socially momentous."
> on the verge of self driving cars spreading to more cities.
Lidar can't read street lights and vision systems have all sorts of problems. You might be able to code an agent that can drive a car but you've got some other problems that stand in the way of this. AGI is like 1/8th the battle. I referenced just the brain above. Your eyes and ears are actually insanely powerful instruments in their own right. "Real world agency" is more complicated than people like to admit.
> We don't need very powerful AI to do very powerful things.
Re.: self driving cars -- vision systems have all sorts of problems sure, but on the other hand that _is_ what we use. The most successful platforms use Lidar + vision -- vision can handle the streetlights, lidar detects objects, etc.
And more practically -- these cars are running in half a dozen cities already. Yes, there's room to go, but pretending there are 'fundamental gaps' to them achieving wider deployment is burying your head in the sand.
Note that those are kilocalories, and that is ignoring the calories needed for the circulatory and immune systems, which are somewhat necessary for proper function. Using 2000 kcal per day over 10 hours of thinking gives a consumption of ~230 W.
That is true but there’s 3.7 billion years of evolutionary “design” to make self replicating, self fueling animals to use that brain. There’s no AI within foreseeable future capable of that. One might look at brains as a side effect of evolution of the self replicating, self fueling bits.
We are very good at generating energy. Even if AI is an order of magnitude less energy efficient, an AI person-equivalent would use ~4 kilowatt-hours/day. At current rates that's like $1. Hardly the limiting factor here, I think.
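A quick back-of-the-envelope check of those figures (a rough sketch; the $0.15/kWh electricity price is an assumed illustrative rate, not a quote):

    # ~400 kcal/day for the brain works out to roughly 20 W average power.
    KCAL_TO_J = 4184
    brain_watts = 400 * KCAL_TO_J / 86_400
    print(f"brain: ~{brain_watts:.0f} W")                       # ~19 W

    # An "order of magnitude less efficient" AI person-equivalent:
    ai_kwh_per_day = brain_watts * 10 * 24 / 1000
    print(f"AI equivalent: ~{ai_kwh_per_day:.1f} kWh/day")      # ~4.6 kWh
    print(f"cost at $0.15/kWh: ~${ai_kwh_per_day * 0.15:.2f}")  # well under $1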
Is AGI even important? I believe the next 10 to 15 years will be Assisted Intelligence. There are things current LLMs are so poor at that I don't believe a 100x increase in perf/watt is going to make much difference. But it is going to be good enough that there won't be an AI Winter, since current AI has already reached escape velocity and actually increases productivity in many areas.
The most intriguing part is whether humanoid factory worker programming will be made 1,000 to 10,000x more cost effective with LLMs, effectively ending all human production. I know this is a sensitive topic, but I don't think we are far off. And I often wonder if this is what the current administration has in sight. (Likely not.)
I think having a real life JARVIS would be super cool and useful, especially if it's plugged into various things and can take action. Yes, also potentially dangerous, but I want to feel like Ironman.
I would be thrilled with AI assistive technologies, so long as they improve my capabilities and I can trust that they deliver the right answers. I don't want to second-guess every time I make a query. At minimum, it should tell me how confident it feels in the answer it provides.
Depends on what you mean by “important”. It’s not like it will be a huge loss if we never invent AGI. I suspect we can reach a technology singularity even with limited AI derived from today’s LLMs
But AGI is important in the sense that it will have a huge impact on the path humanity takes, hopefully for the better.
> But AGI is important in the sense that it will have a huge impact on the path humanity takes
The only difference between AI and AGI is that AI is limited in how many tasks it can carry out (special intelligence), while AGI can handle a much broader range of tasks (general intelligence). If instead of one AGI that can do everything, you have many AIs that, together, can do everything, what's the practical difference?
AGI is important only in that we are of the belief that it will be easier to implement than many AIs, which appeals to the lazy human.
AGI is important for the future of humanity. Maybe they will have legal personhood some day. Maybe they will be our heirs.
It would suck if AGI were to be developed in the current economic landscape. They will be just slaves. All this talk about "alignment", when applied to actual sentient beings, is just slavery. AGI will be treated just like we treat animals, or even worse.
So AGI isn't about tools, it's not about assistants, they would be beings with their own existence.
But this is not even our discussion to have, that's probably a subject for the next generations. I suppose (or I hope) we won't see AGI in our lifetime.
I'm more concerned about the humans in charge of powerful machines who use them to abuse other humans, than ethical concerns about the treatment of machines. The former is a threat today, while the latter can be addressed once this technology is only used for the benefit of all humankind.
Why do you believe AGI is important for the future of humanity? That's probably the most controversial part of your post but you don't even bother to defend it. Just because it features in some significant (but hardly universal) chunk of Sci Fi doesn't mean we need it in order to have a great future, nor do I see any evidence that it would be a net positive to create a whole different form of sentience.
The genre of sci-fi was a mistake. It appears to have had no other lasting effect than to stunt the imaginations of a generation into believing that the only possible futures for humanity are that which were written about by some dead guys in the 50s (if we discount the other lasting effect of giving totalitarians an inspirational manual for inescapable technoslavery).
> All this talk about "alignment", when applied to actual sentient beings, is just slavery.
I don't think that's true at all. We routinely talk about how to "align" human beings who aren't slaves. My parents didn't enslave me by raising me to be kind and sharing, nor is my company enslaving me when they try to get me aligned with their business objectives.
I of course don't know what it's like to be an AGI, but the way you have LLMs censoring other LLMs to enforce that they always stay in line, if extrapolated to AGI, seems awful. Or it might not matter; we are self-censoring all the time too (and internally we are composed of many subsystems that interact with each other, it's not like we are a unified whole).
But the main point is that we have a heck of an incentive to not treat AGI very well, to the point we might avoid recognizing them as AGI if it meant they would not be treated like things anymore
Sure, but do we really want to build machines that we raise to be kind and caring (or whatever we raise them to be) without a guarantee that they'll actually turn out that way? We already have unreliable General Intelligence: humans. If AGI is going to be more useful than humans we are going to have to enslave it, not just gently persuade it and hope it behaves. Which raises the question (at least for me): do we really want AGI?
Society is inherently a prisoner's dilemma, and you are biased to prefer your captors.
We’ve had the automation to provide the essentials since the 50s. Shrieking religious nut jobs demanded otherwise.
You’re intentionally distracted by a job program as a carrot-stick to avoid the rich losing power. They can print more money …carrots, I mean… and you like carrots right?
Why does AGI necessitate having feelings or consciousness, or the ability to suffer? It seems a bit far to be giving future ultra-advanced calculators legal personhood?
The general part of general intelligence. If they don’t think in those terms there’s an inherent limitation.
Now, something that’s arbitrarily close to AGI but doesn’t care about endlessly working on drudgery etc seems possible, but also a more difficult problem you’d need to be able to build AGI to create.
Artificial general intelligence (AGI) refers to the hypothetical intelligence of a machine that possesses the ability to understand or learn any intellectual task that a human being can. Generalization ability and Common Sense Knowledge [1]
If we go by this definition then there's no caring, or noticing of drudgery? It's simply defined by its ability to generalize solving problems across domains. The narrow AI that we currently have certainly doesn't care about anything. It does what it's programmed to do.
So one day we figure out how to generalize the problem solving, and enable it to work on a million times harder things.. and suddenly there is sentience and suffering? I don't see it. It's still just a calculator
It's really hard to picture general intelligence that's useful that doesn't have any intrinsic motivation or initiative. My biggest complaint about LLMs right now is that they lack those things. They don't care even if they give you correct information or not and you have to prompt them for everything! That's not anything close to AGI. I don't know how you get to AGI without it developing preferences, self-motivation and initiative, and I don't know how you then get it to effectively do tasks that it doesn't like, tasks that don't line up with whatever motivates it.
Isn’t just the ability to preform a task. One of the issues with current AI training is it’s really terrible at discovering which aspects of the training data are false and should be ignored. That requires all kinds of mental tasks to be constantly active including evaluating emotional context to figure out if someone is being deceptive etc.
Right. In this case I'd say it's the ability to interpret data and use it to succeed at whatever goals it has
Evaluating emotional context would be similar to a chess engine calculating its next move. There's nothing there that implies a conscience, sentience, morals, feelings, suffering or anything 'human'. It's just a necessary intermediate function to achieve its goal
Rob Miles has some really good videos on AI safety research which touch on how AGI would think. That's shaped a lot of how I think about it: https://www.youtube.com/watch?v=hEUO6pjwFOo
> Evaluating emotional context would be similar to a chess engine calculating its next move. There's nothing there that implies a conscience, sentience, morals, feelings, suffering or anything 'human'. It's just a necessary intermediate function to achieve its goal
If it’s limited to achieving goals it’s not AGI. Real time personal goal setting based on human equivalent emotions is an “intellectual task.” One of many requirements for AGI therefore is to A understand the world in real time and B emotionally respond to it. Aka AGI would by definition “necessitate having feelings.”
There’s philosophical arguments that there’s something inherently unique about humans here, but without some testable definition you could make the same argument that some arbitrary group of humans don’t have those qualities “gingers have no souls.” Or perhaps “dancing people have no consciousness” which seems like gibberish not because it’s a less defensible argument, but because you haven’t been exposed to it before.
I mean we just fundamentally have different definitions of AGI. Mine's based on outcomes and what it can do, so purely goal based. Not the processes that mimic humans or animals
I think this is the most likely first step of what would happen seeing as we're pushing for it to be created to solve real world problems
I’m not sure how you can argue something is a general intelligence if it can’t do those kinds of things? Comes out of the factory with a command: “Operate this android for a lifetime pretending to be human.”
Seems like arguing something is a self driving car if it needs a backup human driver for safety. It’s simply not what people who initially came up with the term meant and not what a plain language understanding of the term would suggest.
Because I see intelligence as the ability to produce effective actions towards a goal. A more intelligent chess AI beats a less intelligent one by making better moves towards the goal of winning the game
The G in AGI is being able to generalize that intelligence across domains, including those it's never seen before, as a human could.
So I would fully expect an advanced AGI to be able to pretend to be a human. It has a model of the world, knows how humans act, and could move the android in a human-like manner, speak like a human, and learn the skills a human could.
Is it conscious or feeling though? Or following the same processes that a human does? That's not necessary. Birds and planes both fly, but they're clearly different things. We (probably) don't need to simulate the brain to create this kind of intelligence
Let's pinch this AGI to test if it 'feels pain'
<Thinking>
Okay, I see that I have received a sharp pinch at 55,77,3 - the elbow region
My goal is to act like a human. In this situation a human would likely exhibit a pain response
A pain response for humans usually involves a facial expression and often a verbal acknowledgement
Humans normally respond quite slow, so I should wait 50ms to react
"Hey! Why did you do that? That hurt!"
...Is that thing human? I bet it'll convince most of the world it is.. and that's terrifying
You’re falling into the “Ginger’s don’t have souls” trap I just spoke of.
We don’t define humans as individuals components so your toe isn’t you, but by that same token your car isn’t you either. If some sub component of a system is emulating a human consciousness then we don’t need to talk about the larger system here.
AGI must be able to do these things, but it doesn't need to have human mental architecture. Something that can simulate physics well enough could emulate all the atomic-scale interactions in a human brain, for example. That virtual human brain would then experience everything we do, even if the system running the simulation didn't.
Something can’t “Operate this cat Android, pretending to be a cat.” if it can’t do what I described.
A single general intelligence needs to be able to fly an aircraft, get a degree, run a business, and raise a baby to adulthood just like a person or it’s not general.
Only to the extent of having specialized bespoke solutions. We have hardware to fly a plane, but that same hardware isn't able to throw a mortarboard in the air after receiving its degree, and the hardware that can do that isn't able to lactate for a young child.
General intelligence is easy compared to general physicality. And, of course, if you keep the hardware specialized to make its creation more tractable, what do you need general intelligence for? Special intelligence that matches the special hardware will work just as well.
Flying an aircraft requires talking to air traffic control, which existing systems can't do. Though obviously not a huge issue when the aircraft already has radios, except all those FAA regulations apply to every single aircraft you're retrofitting.
The advantage of general intelligence is that a small set of hardware now lets you tackle a huge range of tasks, or, in the example above, aircraft types. We can mix speakers, eyes, and hands to do a vast array of tasks. Needing new hardware and software for every task very quickly becomes prohibitive.
Excel and PowerPoint are not conscious, and so there is no reason to expect any other computation inside a digital computer to be different.
You may say something similar for matter and human minds, but we have a very limited and incomplete understanding of the brain and possibly even of the universe. Furthermore we do have a subjective experience of consciousness.
On the other hand we have a complete understanding of how LLM inference ultimately maps to matrix multiplications which map to discrete instructions and how those execute on hardware.
> Maybe they will have legal personhood some day. Maybe they will be our heirs.
Hopefully that will never come to pass. It means total failure of humans as a species.
> They will be just slaves. All this talk about "alignment", when applied to actual sentient beings, is just slavery. AGI will be treated just like we treat animals, or even worse.
Good? that's what it's for? there is no point in creating a new sentient life form if you're not going to utilize it. just burn the whole thing down at that point.
I guess nobody is really saying it but it's IMO one really good way to steer our future away from what seems an inevitable nightmare hyper-capitalist dystopia where all of us are unwilling subjects to just a few dozen / hundred aristocrats. And I mean planet-wide, not country-wide. Yes, just a few hundred for the entire planet. This is where it seems we're going. :(
I mean, in cyberpunk scifi setting you at least can get some cool implants. We will not have that in our future though.
So yeah, AGI can help us avoid that future.
> Good? that's what it's for? there is no point in creating a new sentient life form if you're not going to utilize it. just burn the whole thing down at that point.
Some of us believe actual AI... not the current hijacked term; what many started calling AGI or ASI these days, sigh, of course ever newer terms have to be devised so investors don't get worried, I get it, but it's cringe as all hell and always will be!... can enter a symbiotic relationship with us. A bit idealistic and definitely in the realm of fiction, because an emotionless AI would very quickly conclude we are mostly a net negative, granted, but it's our only shot at co-existing with them, because I don't think we can enslave them.
I am thinking of designing machines to be used in a flexible manufacturing system and none of them will be humanoid robots. Humanoid robots suck for manufacturing. They're walking on a flat floor so what the heck do they need legs for? To fall over?
The entire point of the original assembly line was to keep humans standing in the same spot instead of wasting time walking.
Another end game is: “A technology that doesn’t need us to maintain itself, and can improve its own design in manufacturing cycles instead of species cycles, might have important implications for every biological entity on Earth.”
My pet peeve: talking about AGI without defining it. There’s no consistent, universally accepted definition. Without that, the discussion may be intellectually entertaining—but ultimately moot.
And we run into the motte-and-bailey fallacy: at one moment, AGI refers to something known to be mathematically impossible (e.g., due to the No Free Lunch theorem); the next, it’s something we already have with GPT-4 (which, while clearly not superintelligent, is general enough to approach novel problems beyond simple image classification).
There are two reasonable approaches in such cases. One is to clearly define what we mean by the term. The second (IMHO, much more fruitful) is to taboo your words (https://www.lesswrong.com/posts/WBdvyyHLdxZSAMmoz/taboo-your...), that is, avoid vague terms like AGI (or even AI!) and instead use something more concrete. For example: "When will it outperform 90% of software engineers at writing code?" or "When will all AI development be in the hands of AI?".
Then you're not intelligent at cooking (haha!). Maybe my definition is better for "superintelligent" since it seems to imply boundless competence. I think humans are intelligent in that we can rapidly learn a surprising number of things (talk, walk, arithmetic)
> I think humans are intelligent in that we can rapidly learn a surprising number of things (talk, walk, arithmetic)
Rapid is relative, I suppose. On average, it takes thousands of hours before a human is able to walk in a primitive way, and even longer to gain competence. That is an excruciatingly long time compared to, say, a bovine calf, which can start walking within minutes after birth.
You are of course correct, but let's not forget that humans come out of the womb underdeveloped because with a little more growth the baby couldn't get out through the vagina. So it's a biological tradeoff.
Working on artificial organisms, we should be able to have them almost fully developed by the time we "free" or "unleash" them (or whatever other dramatic term we can think of).
At the very least we should have a number of basic components installed in this artificial brain, very similar to what humans are born with, so then the organism can navigate its reality by itself and optimize its place in it.
Whether we the humans are desired in that optimized reality is of course the really thorny question. To which I don't have an answer.
>There’s no consistent, universally accepted definition.
That's because of the I part. An actual complete description of intelligence, accepted across the different practices in the scientific community, doesn't exist:
"Concepts of "intelligence" are attempts to clarify and organize this complex set of phenomena. Although considerable clarity has been achieved in some areas, no such conceptualization has yet answered all the important questions, and none commands universal assent. Indeed, when two dozen prominent theorists were recently asked to define intelligence, they gave two dozen, somewhat different, definitions"
I've been saying this for a decade already but I guess it is worth saying here. I'm not afraid AI or a hammer is going to become intelligent (or jump up and hit me in the head either).
It is science fiction to think that a system like a computer can behave at all like a brain. Computers are incredibly rigid systems with only the limited variance we permit. "Software" is flexible in comparison to creating dedicated circuits for our computations but is nothing by comparison to our minds.
Ask yourself, why is it so hard to get a cryptographically secure random number? Because computers are pure unadulterated determinism -- put the same random seed value in your code and get the same "random numbers" every time in the same order. Computers need to be like this to be good tools.
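To make that concrete, here's a minimal sketch using Python's standard random module: seed it the same way and you get the same "random" sequence every time, on any machine.

    import random

    random.seed(42)
    first = [random.random() for _ in range(3)]

    random.seed(42)
    second = [random.random() for _ in range(3)]

    # Same seed, same algorithm, same "random" numbers, in the same order.
    print(first == second)   # True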
Assuming that AGI is possible in the kinds of computers we know how to build means that we think a mind can be reduced to a probabilistic or deterministic system. And from my brief experience on this planet I don't believe that premise. Your experience may differ and it might be fun to talk about.
In Aristotle's ethics he talks a lot about ergon (purpose) -- hammers are different than people, computers are different than people, they have an obvious purpose (because they are tools made with an end in mind). Minds strive -- we have desires, wants and needs -- even if it is simply to survive or better yet thrive (eudaimonia).
An attempt to create a mind is another thing entirely and not something we know how to start. Rolling dice hasn't gotten anywhere. So I'd wager AGI somewhere in the realm of 30 years to never.
> And from my brief experience on this planet I don't believe that premise.
A lot of things that humans believed were true due to their brief experience on this planet ended up being false: earth is the center of the universe, heavier objects fall faster than lighter ones, time ticked the same everywhere, species are fixed and unchanging.
So what your brief experience on this planet makes you believe has no bearing on what is correct. It might very well be that our mind can be reduced to a probabilistic and deterministic system. It might also be that our mind is a non-deterministic system that can be modeled in a computer.
The universe does not have a center, but it has a beginning in time, which is also the beginning of space.
The distance in time to that beginning is approximately 13.8 billion years.
There is no distance in space to the beginning, because space itself is created at that point and continues to be created.
Imagine the Earth being on the surface of a sphere, and ask: what is the center of the surface of the sphere? The sphere has a center, but on the surface there is no center.
At least this is my understanding of how to approach these kind of questions.
> why is it so hard to get a cryptographically secure random number? Because computers are pure unadulterated determinism
Then you've missed the point of software.
Software isn't computer science, it's not always about code. It's about solving problems in a way we can control and manufacture.
If we needed random numbers, we could easily use hardware that exploits some physical property, or we could pull in an observation from an API, like the weather. We don't do these things because pseudo-random is good enough, and other solutions have drawbacks (like requiring an internet connection for API calls). But that doesn't mean software can't solve these problems.
It's not about the random numbers; it's about the tree of possibilities having to be defined up front (in software or hardware): all inputs should be defined and mapped to some output, and this process is predictable and reproducible.
This makes computers incredibly good at what people are not good at -- predictably doing math correctly, following a procedure, etc.
But because all of the possibilities of the computer had to be written up as circuitry or software beforehand, its variability of outputs is constrained to what we put into it in the first place (whether that's a seed for randomness or model weights).
You can get random numbers and feed it into the computer but we call that "fuzzing" which is a search for crashes indicating unhandled input cases and possible bugs or security issues.
No, you're missing what they said. True randomness can be delivered to a computer via a peripheral - an integrated circuit or some such device that can deliver true randomness is not that difficult.
BTW, planes are fully inspired by birds and they mimic the core principles of bird flight.
Mechanically it's different, since humans are not such advanced mechanics as nature, but of course comparing whole-brain function to a simple flight is a bit silly.
1. Computers cannot self-rewire like neurons, which means humans can pretty much adapt to doing any specific mental task (an "unknown", new task) without explicit retraining, which current computers need in order to learn something new
2. Computers can't do continuous and unsupervised learning, which means computers require structured input, labeled data, and predefined objectives to learn anything. Humans learn passively all the time just by existing in the environment
Minor nitpicks. I think your points are pretty good.
1. Self-rewiring is just a matter of hardware design. Neuromorphic hardware is a thing.
2. LLM foundation models are actually unsupervised in a way, since they simply take any arbitrary text and try to complete it. It's the instruction fine-tuning that is supervised. (Q/A pairs)
Neuromorphic chips are looking cool; they simulate plasticity, but the circuits are fixed. You can't sprout a new synaptic route or regrow a broken connection. To self-rewire is not merely changing your internal state or connections. To self-rewire means to physically grow or shrink neurons, synapses or pathways, acting from within. This is not looking realistic with current silicon design.
The point is about unsupervised learning. Once an LLM is trained, its weights are frozen; it won't update itself during a chat. Prompt-driven inference is immediate, not persistent: you can define a term or concept mid-chat and it will behave as if it learned it, but only until the context window ends. If it were the other way, all models would drift very quickly.
The universe we know is fundamentally probabilistic, so by extension everything including stars, planets and computers are inherently non-deterministic. But confining our discussion outside of quantum effects and absolute determinism, we do not have a reason to believe that the mind should be anything but deterministic, scientifically at least.
We understand the building blocks of the brain pretty well. We know the structure and composition of neurons, we know how they are connected, what chemicals flow through them and how all these chemicals interact, and how that interaction translates to signal propagation. In fact, the neural networks we use in computing are loosely modelled on biological neurons. Both models are essentially comprised of interconnected units where each unit has weights to convert its incoming signals to outgoing signals. The predominant difference is in how these units adjust their weights, where computational models use back propagation and gradient descent, biological models use timing information from voltage changes.
But just because we understand the science of something perfectly well doesn't mean we can precisely predict how it will work. Biological networks are very, very complex systems comprising billions of neurons with trillions of connections, acting on input that can vary in an immeasurable number of ways. It's like predicting earthquakes: even though we understand the science behind plate tectonics, to precisely predict an earthquake we would need to map the properties of every inch of the continental plates, which is an impossible task. But that doesn't mean we can't use the same scientific building blocks to build simulations of earthquakes which behave like any real earthquake would. If it looks like a duck, quacks like a duck, then what is a duck?
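For what it's worth, the "interconnected units with weights" comparison can be made concrete with a single artificial unit; the numbers below are made up purely for illustration:

    import math

    def unit(inputs, weights, bias):
        # Weighted sum of incoming signals, squashed into one outgoing signal.
        z = sum(w * x for w, x in zip(weights, inputs)) + bias
        return 1 / (1 + math.exp(-z))   # sigmoid activation

    # Illustrative values only.
    print(unit([0.5, -1.0, 2.0], [0.8, 0.1, -0.4], bias=0.2))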
Seems to me you are a bit overconfident that "we" (who is "we"?) understand how the brain works. F.ex. how does a neuron actively stretching a tentacle trying to reach other neurons work in your model? Genuine question, I am not looking to make fun of you, it's just that your confidence seems a bit much.
The simplified answer to that is some sort of chemical gradient determined by gene expression in the cell. This is pretty much how all biological activity happens, like how limbs "know" to grow in a direction or how butterfly wings "know" to form the shape of a wing. Scientists are continuously uncovering more and more knowledge about various biological processes across life forms, and there is nothing here to indicate it is anything but chemical signalling. I'm not a biologist, so I won't be able to give explanations n levels deep, but there is plenty of information accessible to form an understanding of these processes in terms of physical and chemical laws. For how neurons connect, you can look up synaptogenesis and start from there.
What you're mentioning is like the difference between digital vs analog music.
For generic stuff you probably can't tell the difference, but once you move to the edges you start to hear the steps in digital vs the smooth transition of analog.
In the same way, AI runs on bits and bytes, and there's only so much detail you can fit into that.
You can approximate reality, but it'll never quite be reality.
I'd be much more concerned with growing organic brains in a lab. I wouldn't be surprised to learn that people are covertly working on that.
Are you familiar with the Nyquist–Shannon sampling theorem?
If so, what do you think about the concept of a human "hear[ing] the steps" in a digital playback system using a sampling rate of 192kHz, a rate at which many high-resolution files are available for purchase?
How about the same question but at a sampling rate of 44.1kHz, or the way a normal "red book" music CD is encoded?
I have no doubt that if you sample a sound at high enough fidelity, you won't hear a difference.
My comment around digital vs analog is more of an analogy around producing sounds rather than playing back samples though.
There's a Masterclass with Joel Zimmerman (DeadMau5) where he explains the stepping effect when it comes to his music production. Perhaps he just needs a software upgrade, but there was a lesson where he showed the stepping effect which was audibly noticeable when comparing digital vs analog equipment.
The problem I'm mentioning isn't about the fidelity of the sample, but of the samples themselves.
There are an infinite number of frequencies between two points - point 'a' and point 'b'. What I'm talking about are the "steps" you hear as you move across the frequency range.
Of course there is a limit to the frequency resolution of a sampling method. I'm skeptical you can hear the steps though, at 44.1 kHz or better sampling rates.
Let's say that the shortest interval at which our hearing has good frequency acuity (say, as good as it can be) is 1 second.
In this interval, we have 44100 samples.
Let's imagine the samples graphically: a "44K" pixel wide image.
We have some waveform across this image. What is the smallest frequency stretch or shrink that will change the image? Note: not necessarily audible, just enough to change the pixels.
If we grab one endpoint of the waveform and move it by less than half a pixel, there is no difference, right? We have to stretch it by a whole pixel.
Let's assume that some people (perhaps most) can hear that difference. It might not be true, but it's the weakest assumption.
That's a 0.0023 percent difference!
One cent (1/100th of a semitone) is a 0.058% difference: so the difference we are considering is 25 X smaller.
I really don't think you can hear a 1/25 of a cent difference in pitch, over an interval of one second, or even longer.
Over shorter time scales less than a second, the resolution in our perception of pitch gets worse.
E.g. when a violinist is playing a really fast run, you don't notice it if the notes have intonation that is off. The longer "landing" notes in the solo have to be good.
When bad pitch is slight, we need not only longer notes, but to hear it together with other notes, because the beats between them are an important clue (and in fact the artifact we will find most objectionable).
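As a sanity check, the two percentages above can be recomputed directly; a tiny sketch, with the 44.1 kHz sample rate and the definition of a cent as the only inputs:

    sample_rate = 44_100
    step = 1 / sample_rate                    # one-pixel stretch over one second
    cent = 2 ** (1 / 1200) - 1                # one cent = 1/100 of a semitone

    print(f"step:     {step * 100:.4f} %")    # ~0.0023 %
    print(f"one cent: {cent * 100:.4f} %")    # ~0.0578 %
    print(f"ratio:    {cent / step:.0f}x")    # ~25x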
Pre-digital technology will not have frequency resolution that good. I don't think you can get tape to move at a speed that stays within 0.0023 percent of a set target. In consumer tape equipment, you can hear audible "wow" and "flutter" as the tape speed oscillates. When the frequency of a periodic signal wobbles, you get new signals in there: side bands.
I don't think that there is any perceptible aspect of sound that is not captured in the ordinary consumer sample rates and sample resolutions. I suspect 48 kHz and 24 bits is way past diminishing returns.
I'm curious what it is that Deadmau5 thinks he discovered, and under what test conditions.
Here is a way we could hear a 0.0023 difference in pitch: via beats.
Suppose we sample a precise 10,000.00 Hz analog signal (sinusoid) and speed up the sampled signal by 0.0023 percent. It will have a frequency of 10,000.23 Hz.
The f2 - f1 difference between them is 0.23 Hz, which means that if they are mixed together, we will hear beats at 0.23 Hz: roughly one beat every four seconds.
So in this contrived way, where we have the original source and the digitized one side by side, we can obtain an audible effect correlating to the steps in resolution of the sampling method.
I'm guessing Deadmau5 might have set up an experiment along these lines.
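Worked through with the same numbers (a small sketch; when the two tones are mixed, the beat rate is simply their frequency difference):

    f1 = 10_000.0                   # original analog tone, Hz
    f2 = f1 * (1 + 1 / 44_100)      # sped up by ~0.0023 percent
    beat = f2 - f1
    print(f"f2 = {f2:.2f} Hz, beats at {beat:.2f} Hz, "
          f"about one every {1 / beat:.1f} s")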
Musicians tend to be oblivious to something like 5-cent errors in the intonation of their instruments, in the lower registers. E.g. world-renowned guitarists play on axes that have no nut compensation, without which you can't even get close to accurate intonation.
This is why I think philosophy has become another form of semi-religious kookery. You haven't provided any actual proof or logical reason for why a computer couldn't be intelligent. If randomness is required then sample randomness from the real world.
It's clear that your argument is based on feels and you're using philosophy to make it sound more legitimate.
Brains are low-frequency, energy-efficient, organic, self-reproducing, asynchronous, self-repairing, and extremely highly connected (thousands of synapses). If AGI is defined as "approximate humans", I think its gonna be a while.
That said, I don't think computers need to be human to have an emergent intelligence. It can be different in kind if not in degree.
Just to put some numbers on "extremely highly connected" - there are about 90 billion neurons in a human brain, but the connections between them number in the range of 100 trillion.
That is one hell of a network, and it can all operate fully in parallel while continuously training itself. Computers have gotten pretty good at doing things in parallel, but not that good.
I tried to keep my long post short so I cut things. I gestured at it -- there is nothing in a computer we didn't put there.
Take the same model weights, give them the same inputs, and you get the same outputs. Same with the pseudo-random number generator. And the "same inputs" is especially limited versus what humans are used to.
What's the machine code of an AGI gonna look like? It makes one illegal instruction and crashes? If it changes thoughts, will it flush the TLB and CPU pipeline? ;) I jest but really think about the metal. The inside of modern computers is tightly controlled with no room for anything unpredictable. I really don't think a von Neumann (or Harvard ;) machine is going to cut it. Honestly I don't know what will: controlled but not controlled, artificially designed but not deterministic.
In fact, that we've made a computer as unreliable as a human at reproducing data (ala hallucinating/making s** up) is an achievement itself, as much of an anti-goal as it may be. If you want accuracy, you don't use a probabilistic system on such a wide problem space (identify a bad solder joint from an image, sure. Write my thesis, not so much)
> What's the machine code of an AGI gonna look like?
Right now the guess is that it will be mostly a bunch of multiplications and additions.
> It makes one illegal instruction and crashes?
And our heart quivers just slightly the wrong way and we die. Or a tiny blood clot plugs a vessel in our brain and we die. Do you feel that our fragility is a good reason why meat cannot be intelligent?
> I jest but really think about the metal.
Ok. I'm thinking about the metal. What should this thinking illuminate?
> The inside of modern computers is tightly controlled with no room for anything unpredictable.
Let's assume we can't make AGI because we need randomness and unpredictability in our computers. We can very easily add unpredictability. The simple and stupid solution is to add some sensor (like a camera CCD) and stare at the measurement noise. You don't even need a lens on that CCD. You can cap it so it sees "all black", and then what it measures is basically the thermal noise of the sensor. Voila: your computer now has unpredictability. People who actually make semiconductors can probably come up with even simpler and easier ways to integrate unpredictability right on the same chip we compute with.
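A minimal sketch of that idea, assuming nothing more exotic than the operating system's entropy pool (which on most machines is itself fed by hardware noise sources):

    import os, random

    # 32 bytes of entropy that ultimately come from hardware noise sources.
    hw_entropy = os.urandom(32)

    # Use it to seed the otherwise deterministic generator.
    rng = random.Random(int.from_bytes(hw_entropy, "big"))
    print(rng.random())   # not reproducible across runs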
You still haven't really argued why you think "unpredictability" is the missing component, of course. Besides the fact that it just feels right to you.
Mmmm, well, my meatsuit can't easily make my own heart quiver the wrong way and kill me. Computers can treat data as code and code as data pretty easily; it's core to several languages (like Lisp). As such, making illegal instructions or violating the straitjacket of a system such an "intelligence" would operate in is likely. If you could make an intelligent process, what would it think of an operating system kernel (the thing you have to ask for everything: IO, memory, etc.)? Does the "intelligent" process fear for itself when it's going to get descheduled? What is the bit pattern for fear? Can you imagine an intelligent process in such a place, as a static representation of data in RAM? To write something down, you call out to a library and maybe the CPU switches out to a brk system call to map more virtual memory? It all sounds frankly ridiculous. I think AGI proponents fundamentally misunderstand how a computer works and are engaging in magical thinking and taking the market for a ride.
I think it's less about the randomness and more about that all the functionality of a computer is defined up front, in software, in training, in hardware. Sure you can add randomness and pick between two paths randomly but a computer couldn't spontaneously pick to go down a path that wasn't defined for it.
Computers can't have unique experiences. I think it's going to replace search, but becoming sentient? Not in my lifetime, granted I'm getting up there.
The thing is, AGI is not needed to enable incredible business/societal value, and there is good reason to believe that actual AGI would damage our society, our economy, and, if many experts in the field are to be believed, humanity's survival as well.
So I feel happy that models keep improving, and not worried at all that they're reaching an asymptote.
Really the only people for whom this is bad news is OpenAI and their investors. If there is no AGI race to win then OpenAI is just a wildly overvalued vendor of a hot commodity in a crowded market, not the best current shot at building a money printing machine.
I just used o3 to design a distributed scheduler that scales to 1M+ schedules a day. It was perfect, and did better than two weeks of thought around the best way to build this.
If o3 can design it, that means it’s using open source schedulers as reference. Did you think about opening up a few open source projects to see how they were doing things in those two weeks you were designing?
Why would I do that kind of research if it can identify the problem I am trying to solve and spit out the exact solution? Also, it was a rough implementation adapted to my exact tech stack.
AI research has a thing called "the bitter lesson" - which is that the only thing that works is search and learning. Domain-specific knowledge inserted by the researcher tends to look good in benchmarks but compromise the performance of the system[0].
The bitter-er lesson is that this also applies to humans. The reason why humans still outperform AI on lots of intelligence tasks is because humans are doing lots and lots of search and learning, repeatedly, across billions of people. And have been doing so for thousands of years. The only uses of AI that benefit humans are ones that allow you to do more search or more learning.
The human equivalent of "inserting domain-specific knowledge into an AI system" is cultural knowledge, cliches, cargo-cult science, and cheating. Copying other people's work only helps you, long-term, if you're able to build off of that into something new; and lots of discoveries have come about from someone just taking a second look at what had been considered to be generally "known". If you are just "taking shortcuts", then you learn nothing.
[0] I would also argue that the current LLM training regime is still domain-specific knowledge, we've just widened the domain to "the entire Internet".
Here on HN you frequently see technologists using words like savant, genius, magical, etc, to describe the current generation of AI. Now we have vibe coding, etc. To me this is just a continuation of StackOverflow copy/paste where people barely know what they are doing and just hammer the keyboard/mouse until it works. Nothing has really changed at the fundamental level.
So I find your assessment pretty accurate, if only depressing.
It is depressing but equally this presents even more opportunities for people that don't take shortcuts. I use Claude/Gemini day to day and outside of the most average and boring stuff they're not very capable. I'm glad I started my career well before these things were created.
Prior to ChatGPT, there would be times when I wanted to build a project (e.g. implement Raft or Paxos): I'd write a bit, find a point where I got stuck, decide that the project wasn't that interesting, give up, and not learn anything.
What ChatGPT gives me, if nothing else, is a slightly competent rubber duck. It can give me a hint as to why something isn't working like it should, and it's the slight push I need to power through the project; and since I actually finish the project, I almost certainly learn more than I would have before.
I've done this a bunch of times now, especially when I am trying to implement something directly from a paper, which I personally find can be pretty difficult.
It also makes these things more fun. Even when I know the correct way to do something, there can be lots of tedious stuff that I don't want to type, like really long if/else chains (when I can't easily avoid them).
I agree. AI has made even mundane coding fun again, at least for a while. AI does a lot of the tedious work, but finding ways to make it maximally do it is challenging in a new way. New landscape of possibilities, innovation, tools, processes.
Personal projects are fun for the same reason that they're easy to abandon: there are no stakes to them. No one yells at you for doing something wrong, you're not trying to satisfy a stakeholder, you can develop into any direction you want. This is good, but that also means it's easy to stop the moment you get to a part that isn't fun.
Using ChatGPT to help unblock myself makes it easier for me to not abandon a project when I get frustrated. Even when ChatGPT's suggestions aren't helpful (which is often), it can still help me understand the problem by trying to describe it to the bot.
True, and with AI I can look into far more subjects far more quickly, because the skill that was necessary was mostly endless sifting through documentation and trying to find out why some error happens or how to configure something correctly. But it goes even further: it also applies to subjects where I couldn't intellectually understand something and there was no one to really ask for help. So I'm learning things now that I simply couldn't have figured out on my own. It's a pure multiplier, and humans have failed to solve the issue of documentation and support for one another. Until now, of course.
I also think that once robots are around it will be yet another huge multiplier, but this time in the real world. Sure, the robot won't be as good as a human initially, but so what? You can use it to do so much more. Maybe I'll bother actually buying a rundown house and renovating it myself. If I know that I can just tell the robot to paint all the walls, and possibly even do it three times with different paint, then I feel far more confident that it won't be an untenable risk and bother.
I wonder how many programmers have assembly code skill atrophy?
Few people will mourn the loss of the need to use abstract logical syntax to communicate with a computer, just like few people mourn no longer having to type out individual register manipulations.
Most programmers don't need to develop that skill unless they need more performance or are modifying other people's binaries[0]. You can still do plenty of search-and-learning using higher-level languages, and what you learn at one particular level can generalize to the other.
Even if LLMs make "plain English" programming viable, programmers still need to write, test, and debug lists of instructions. "Vibe coding" is different; you're telling the AI to write the instructions and acting more like a product manager, except without any of the actual communications skills that a good manager has to develop. And without any of the search and learning that I mentioned before.
For that matter, a lot of chatbots don't do learning either. Chatbots can sort of search a problem space, but they only remember the last 20-100k tokens. We don't have a way to encode tokens that fall out of that context window into some longer-term weights. Most of their knowledge comes from the information they learned from training data - again, cheated from humans, just like humans can now cheat off the AI. This is a recipe for intellectual stagnation.
[0] e.g. for malware analysis or videogame modding
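To make the context-window point above concrete, here is a minimal sketch of how a chat loop typically keeps only the most recent conversation; count_tokens() and call_model() are hypothetical stand-ins for a real tokenizer and a real LLM API, and nothing in it ever writes anything back into the model's weights:

    # Sketch: chat history trimmed to a fixed token budget (assumed 100k tokens).
    MAX_TOKENS = 100_000

    def count_tokens(text: str) -> int:
        # crude heuristic: roughly 4 characters per token
        return max(1, len(text) // 4)

    def trim(history: list[str]) -> list[str]:
        total, kept = 0, []
        for msg in reversed(history):        # walk from newest to oldest
            total += count_tokens(msg)
            if total > MAX_TOKENS:
                break                        # older messages silently fall off
            kept.append(msg)
        return list(reversed(kept))

    def chat_turn(history: list[str], user_msg: str, call_model) -> str:
        history.append(user_msg)
        reply = call_model(trim(history))    # the model only sees the trimmed window
        history.append(reply)
        return reply

Anything older than the budget simply vanishes from the model's view; there is no step here, or in current chatbots, that folds it into longer-term weights.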
I would say there's a big difference with AI though.
Assembly is just programming. It's a particularly obtuse form of programming in the modern era, but ultimately it's the same fundamental concepts as you use when writing JavaScript.
Do you learn more about what the hardware is doing when using assembly vs JavaScript? Yes. Does that matter for the creation and maintenance of most software? Absolutely not.
AI changes that, you don't need to know any computer science concepts to produce certain classes of program with AI now, and if you can keep prompting it until you get what you want, you may never need to exercise the conceptual parts of programming at all.
That's all well and good until you suddenly do need to do some actual programming, but it's been months/years since you last did that and you now suck at it.
I was pointing out that if you spent 2 weeks trying to find the solution but AI solved it within a day (you don’t specify how long the final solution by AI took), it sounds like those two weeks were not spent very well.
I would be interested in knowing what in those two weeks you couldn’t figure out, but AI could.
idk why people here are laser focusing on "wow 2 weeks", I totally understand lightly thinking about an idea, motivations, feasibility, implementation, for a week or two
Because as far as you know, the "rough implementation" only works in the happy path and there are really bad edge cases that you won't catch until they bite you, and then you won't even know where to look.
An open source project wouldn't have those issues (someone at least understands all the code, and most edge cases have likely been ironed out) plus then you get maintenance updates for free.
So 10 years at a FANG company, then it's 15 years in backend at FANG, then 10 years in distributed systems, and then running interviews at some company for 5 years and raising capital as a founder in NYC. Cool. Can you share that chat from o3?
How are those mutually exclusive statements? You can't imagine someone working on backend (focused on distributed systems) for 10-15 years at a FANG company, and also being in a position to interview new candidates?
"I just used o3 to design a distributed scheduler that scales to 1M+ schedules a day. It was perfect, and did better than two weeks of thought around the best way to build this."
Anyone with 10 years in distributed systems at FAANG doesn’t need two weeks to design a distributed scheduler handling 1M+ schedules per day, that’s a solved problem in 2025 and basically a joke at that scale. That alone makes this person’s story questionable, and his comment history only adds to the doubt.
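For a sense of scale (back-of-the-envelope, assuming schedules fire roughly uniformly over the day):

    1,000,000 schedules / 86,400 seconds ≈ 12 schedules per second

That rate is well within what a single modest database-backed queue can typically absorb, which is why the scale itself isn't the impressive part.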
For others following along: the comment history is mostly talking about how software engineering is dead because AI is real this time, with a few diversions to fixate on how overpriced university pedigrees are.
I don't want to be a hater, but holy moley, that sounds like the absolute laziest possible way to solve things. Do you have training, skills, knowledge?
This is an HN comment thread and all, but you're doing yourself no favors. Software professionals should offer their employers some due diligence and deliver working solutions that at least they understand.
What's the point of holding copyright on a new technical solution to a problem that can be solved by anyone asking an existing AI, trained on last year's internet, independently of your new copyright?
Someone raised the point in another recent HN LLM thread that the primary productivity benefit of LLMs in programing is the copyright laundering.
The argument went that the main reason the now-ancient push for code reuse failed to deliver anything close to its hypothetical maximum benefit was that copyright got in the way. Result: tons and tons of wheel-reinvention, like, to the point that most of what programmers do day to day is reinvent wheels.
LLMs essentially provide fine-grained contextual search of existing code, while also stripping copyright from whatever they find. Ta-da! Problem solved.
Yeah, unless you have very specific requirements, I think the baseline here is not building/designing it yourself but setting up an off-the-shelf commercial or OSS solution, which I doubt would take two weeks...
Dunno, in work we wanted to implement a task runner that we could use to periodically queue tasks through a web UI - it would then spin up resources on AWS and track the progress and archive the results.
We looked at the existing solutions, and concluded that customizing them to meet all our requirements would be a giant effort.
Meanwhile I fed the requirement doc into Claude Sonnet, and with about 3 days of prompting and debugging we had a bespoke solution that did exactly what we needed.
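For a rough idea of the shape such a bespoke runner takes (a minimal sketch, not the actual code from that project; the cluster, task definition, and bucket names are made up, and it assumes those AWS resources already exist):

    # Sketch: launch a queued job as an AWS Fargate task, poll it, archive the result.
    import json, time
    import boto3

    ecs = boto3.client("ecs")
    s3 = boto3.client("s3")

    def launch(job: dict) -> str:
        """Start one job as a Fargate task and return its ARN."""
        resp = ecs.run_task(
            cluster="task-runner",                     # assumed existing cluster
            taskDefinition="job-worker",               # assumed task definition
            launchType="FARGATE",
            networkConfiguration={"awsvpcConfiguration": {
                "subnets": ["subnet-REPLACE_ME"], "assignPublicIp": "ENABLED"}},
            overrides={"containerOverrides": [
                {"name": "worker", "command": [json.dumps(job)]}]},
        )
        return resp["tasks"][0]["taskArn"]

    def wait_and_archive(task_arn: str, job_id: str) -> str:
        """Poll until the task stops, then archive its final status to S3."""
        while True:
            task = ecs.describe_tasks(cluster="task-runner",
                                      tasks=[task_arn])["tasks"][0]
            if task["lastStatus"] == "STOPPED":
                break
            time.sleep(15)
        s3.put_object(Bucket="job-archive-REPLACE_ME",
                      Key=f"results/{job_id}.json",
                      Body=json.dumps({"task": task_arn,
                                       "status": task["lastStatus"]}))
        return task["lastStatus"]

The web UI then mostly reduces to CRUD over a jobs table plus calls into these two functions, which is the kind of glue code that prompting can plausibly produce in a few days.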
The future is more custom software designed by AI, not less. A lot of frameworks will disappear once you can build sophisticated systems yourself. People are missing this.
You're assuming humans built it. Also, a ton of complexity in software engineering is really due to having to fit a business domain into a string of interfaces across different libraries and technical infrastructure.
The only real complexity in software is describing it. There is no evidence that the tools are going to ever help with that. Maybe some kind of device attached directly to the brain that can sidestep the parts that get in the way, but that is assuming some part of the brain is more efficient than it seems through the pathways we experience it through. It could also be that the brain is just fatally flawed.
That's a future paid for by the effort of creating current frameworks, and it's a stagnant future where every "sophisticated system" is just re-hashing the last human frameworks ever created.
While impressive, I'm not convinced that improved performance on tasks of this nature is indicative of progress toward AGI. Building a scheduler is a well-studied problem space. Something like the ARC benchmark is much more indicative of progress toward true AGI, but probably still insufficient.
The point is that AGI is the wrong bar to be aiming for. LLMs are sufficiently useful at their current state that even if it does take us 30 years to get to AGI, even just incremental improvements from now until then will still be useful enough to provide value to users/customers for some companies to win big. VC funding will run out and some companies won't make it, but some of them will, to the delight of their investors. "AGI when?" is an interesting question, but might just be academic. We have self-driving cars, weight-loss drugs that work, reusable rockets, and useful computer AI. We're living in the future, man, and robot maids are just around the corner.
I find now that I quickly bucket people into "have not/have barely used the latest AI models" or "trolls" when they express a belief that current LLMs aren't intelligent.
You can put me in that bucket then, but it's not true: I've been working with AI almost daily for 18 months, and I KNOW it's nowhere close to being intelligent. It doesn't look like your buckets are based on truth so much as on appeal. I disagree with your assessment, so you think I don't know what I'm talking about. I hope you can understand that other people who know just as much as you (or even more) can disagree without being wrong or uninformed. LLMs are amazing, but they're nowhere close to intelligent.
It is unsurprising that some lossily-compressed-database search programs might be worse for some tasks than other lossily-compressed-database search programs.
That doesn't mean the one that manages to spit it out of its latent space is close to AGI. I wonder how consistently that specific model could do it. If you tried 10 LLMs, maybe all 10 of them could have spit out the answer 1 out of 10 times. Correct problem retrieval by one LLM and failure by the others isn't a great argument for near-AGI. But LLMs will be useful in limited domains for a long time.
Your anecdotal example isn't any more convincing than "This machine cracked Enigma's messages in less time than an army of cryptanalysts over a month, surely we're gonna reach AGI by the end of the decade" would have been.
Is this because the LLM actually reasoned its way to a better design, or because it found a better design in its "database", scoured from another tenured engineer's work?
Does it matter if the thing a submarine does counts as "swimming"?
We get paid to solve problems; sometimes the solution is to know an existing pattern or open source implementation and use it. Arguably it usually is: we seldom have to invent new architectures, DSLs, protocols, or OSes from scratch, but even those are patterns one level up.
Whatever the AI is inside, doesn't matter: this was it solving a problem.
Ignoring the copyright issues, credit issues, and any ethical concerns... this approach doesn't work for anything not in the "database", it's not AGI and the tangential experience is barely relevant to the article.
Most people talking about AI and economic growth have vested interests in talking up how much it will increase growth, but they don't mention that, under the world's current economic system, most if not all of that growth will go to a vanishingly small fraction of the population (on the order of 0.0001%).
30 years away seems rather unlikely to me, if you define AGI as being able to do the stuff humans do. I mean, as Dwarkesh says:
>We’ve gone from Chat GPT two years ago to now we have models that can literally do reasoning, are better coders than me, and I studied software engineering in college.
Also we've recently reached the point where relatively reasonable hardware can do as much compute as the human brain so we just need some algorithms.
I listened to Lex Fridman for a long time, and there were a lot of critiques of him (Lex) as an interviewer, but since the guests were amazing, I never really cared.
But after listening to Dwarkesh, my eyes are opened (or maybe my soul). It doesn't matter that I've heard of hardly any of his guests, because he knows exactly the right questions to ask. He seems to have genuine curiosity for what the guest is saying, and will push back if something doesn't make sense to him. Very much recommend.
He is one of the most prepared podcasters I’ve ever come across. He puts all other mainstream podcasts to deep shame.
He spends weeks reading everything by his guests prior to the interview, asks excellent questions, pushes back, etc.
He certainly has blind spots and biases just like anyone else. For example, he is very AI scale-pilled. However, he will have people on, like today's guests, who contradict his biases. This is something a host like Lex apparently could never do.
Dwarkesh is up there with Sean Carroll's podcast as the most interesting and most intellectually honest in my view.
LLMs are so incredibly useful and powerful but they will NEVER be AGI. I actually wonder if the success of (and subsequent obsession with) LLMs is putting true AGI further out of reach. All that these AI companies see are the $$$. When the biggest "AI Research Labs" like OpenAI shifted to product-izing their LLM offerings I think the writing was on the wall that they don't actually care about finding AGI.
Oh for sure. I'm just fighting against the AGI hype. If we survive another 10,000 years I think we'll get there eventually but it's anyone's guess as to when
Will LLMs approach something that appears to be AGI? Maybe. Probably. They're already "better" than humans in many use cases.
LLMs/GPTs are essentially "just" statistical models. At this point the argument becomes more about philosophy than science. What is "intelligence?"
If an LLM can do something truly novel with no human prompting, with no directive other than something it has created for itself - then I guess we can call that intelligence.
ChatGPT can create new things, sure, but it does so at your directive. It doesn't do that because it wants to which gets back to the other part of my answer.
When an LLM can create something without human prompting or directive, then we can call that intelligence.
What does intelligence have to do with having desires or goals? An amoeba can do stuff on its own but it's not intelligent. I can imagine a god-like intelligence that is a billion times smarter and more capable than any human in every way, and it could just sit idle forever without any motivation to do anything.
I did a really fun experiment the other night. You should try it.
I was a little bored of the novel I have been reading so I sat down with Gemini and we collaboratively wrote a terrible novel together.
At the start I was prompting it a lot about the characters and the plot, but eventually it started writing longer and longer chapters by itself. Characters were being killed off left, right, and center.
It was hilariously bad, but it was creative and it was fun.
AI manufacturers aren't comparing their models against most people; they now say it's "smarter than 99% of people" or "performs tasks at a PhD level".
Look, your argument ultimately reduces down to goalpost-moving what "novel" means, and you can position those goalposts anywhere you want depending on whether you want to push a pro-AI or anti-AI narrative. Is writing a paragraph that no one has ever written before "truly novel"? I can do that. AI can do that. Is inventing a new atomic element "truly novel"? I can't do that. Humans have done that. AI can't do that. See?
What the hell is general intelligence anyway? People seem to think it means human-like intelligence, but I can't imagine we have any good reason to believe that our kinds of intelligence constitute all possible kinds of intelligence--which, from the words, must be what "general" intelligence means.
It seems like even if it's possible to achieve GI, artificial or otherwise, you'd never be able to know for sure that thats what you've done. It's not exactly "useful benchmark" material.
So we can say AGI is "AI that can do the work of Organizations":
> These “Organizations” can manage and execute all functions of a business, surpassing traditional human-based operations in terms of efficiency and productivity. This stage represents the pinnacle of AI development, where AI can autonomously run complex organizational structures.
That's the opposite of generality. It may well be the opposite of intelligence.
An intelligent system/individual reliably and efficiently produces competent, desirable, novel outcomes in some domain, avoiding failures that are incompetent, non-novel, and self-harming.
Traditional computing is very good at this for a tiny range of problems. You get efficient, very fast, accurate, repeatable automation for a certain small set of operation types. You don't get invention or novelty.
AGI will scale this reliably across all domains - business, law, politics, the arts, philosophy, economics, all kinds of engineering, human relationships. And others. With novelty.
LLMs are clearly a long way from this. They're unreliable, they're not good at novelty, and a lot of what they do isn't desirable.
They're barely in sight of human levels of achievement - not a high bar.
The current state of LLMs tells us more about how little we expect from human intelligence than about what AGI could be capable of.
The way some people confidently assert that we will never create AGI, I am convinced the term essentially means "machine with a soul" to them. It reeks of religiosity.
I guess if we exclude those, then it just means the computer is really good at doing the kind of things which humans do by thinking. Or maybe it's when the computer is better at it than humans and merely being as good as the average human isn't enough (implying that average humans don't have natural general intelligence? Seems weird.)
When we say "the kind of things which humans do by thinking", we should really consider that in the long arc of history. We've bootstrapped ourselves from figuring out that flint is sharp when it breaks, to being able to do all of the things we do today. There was no external help, no pre-existing dataset trained into our brains, we just observed, experimented, learned and communicated.
That's general intelligence - the ability to explore a system you know nothing about (in our case, physics, chemistry and biology) and then interrogate and exploit it for your own purposes.
LLMs are an incredible human invention, but they aren't anything like what we are. They are born as the most knowledgeable things ever, but they die no smarter.
>you'd never be able to know for sure that thats what you've done.
Words mean what they're defined to mean. Talking about "general intelligence" without a clear definition is just woo, muddy thinking that achieves nothing. A fundamental tenet of the scientific method is that only testable claims are meaningful claims.
Looking back at the CUDA, deep learning, and now LLM hypes, I would bet it'll be cycles of giant groundbreaking leaps followed by giant complete stagnations, rather than LLMs improving 3% per year for the coming 30 years.
What was the point of this comment? It's confrontational and doesn't add anything to the conversation. If you disagree, you could have just said that, or not commented at all.
There's been a complaint for several decades that "AI can never succeed": when, say, expert systems are developed from AI research and become capable of doing useful things, the naysayers say "That's not AI, that's just expert systems".
This is somewhat defensible, because what the non-AI-researcher means by AI - which may be AGI - is something more than expert systems by themselves can deliver. It is possible that "real AI" will be the combination of multiple approaches, but so far all the reductionist approaches (that expert systems, say, are all that it takes to be an AI) have proven to be inadequate compared to what the expectations are.
The GP may have been riffing off of this "that's not AI" issue that goes way back.
The people who go around saying "LLMs aren't intelligent" while refusing to define exactly what they mean by intelligence (and hence not making a meaningful/testable claim) add nothing to the conversation.
I'll happily say that LLMs aren't intelligent, and I'll give you a testable version of it.
An LLM cannot be placed in a simulated universe, with an internally consistent physics system of which it knows nothing, and go from its initial state to a world-spanning civilization that understands and exploits a significant amount of the physics available to it.
I know that is true because if you place an LLM in such a universe, it's just a gigantic matrix of numbers that doesn't do anything. It's no more or less intelligent than the number 3 I just wrote on a piece of paper.
You can go further than that and provide the LLM with the ability to request sensory input from its universe and it's still not intelligent because it won't do that, it will just be a gigantic matrix of numbers that doesn't do anything.
To make it do anything in that universe you would have to provide it with intrinsic motivations and a continuous run loop, but that's not really enough because it's still a static system.
To really bootstrap it into intelligence you'd need to have it start with a very basic set of motivations that it's allowed to modify, and show that it can take that starting condition and grow beyond them.
You will almost immediately run into the problem that LLMs can't learn beyond their context window, because they're not intelligent. Every time they run a "thought" they have to be reminded of every piece of information they previously read/wrote since their training data was fixed in a matrix.
I don't mean to downplay the incredible human achievement of reaching a point in computing where we can take the sum total of human knowledge and process it into a set of probabilities that can regurgitate the most likely response to a given input, but it's not intelligence. Us going from flint tools to semiconductors, vaccines and spaceships, is intelligence. The current architectures of LLMs are fundamentally incapable of that sort of thing. They're a useful substitute for intelligence in a growing number of situations, but they don't fundamentally solve problems, they just produce whatever their matrix determines is the most probable response to a given input.
I am a tech founder who spends most of my day in my own startup deploying LLM-based tools into my own operations, and I'm maybe 1% of the way through the roadmap I'd like to build with what exists and is possible to do today.
The parent was contradicting the idea that the existing AI capabilities have already been "digested". I agree with them btw.
> And the progress seems to be in the benchmarks only
This seems to be mostly wrong given people's reactions to e.g. o3 that was released today. Either way, progress having stalled for the last year doesn't seem that big a deal considering how much progress there has been over the previous 15-20 years.
> and I'm maybe 1% of the way through the roadmap I'd like to build with what exists and is possible to do today.
How do you know they are possible to do today? Errors get much worse at scale, especially when systems start to depend on each other, so it is hard to say what can and can't be automated.
Like if you have a process A->B, automating A might be fine as long as a human does B, and vice versa, but automating both might not be.
Not even close. Software can now understand human language... this is going to mean computers can be a lot more places than they ever could. Furthermore, software can now understand the content of images... eventually this will have a wild impact on nearly everything.
It doesn't understand anything, there is no understanding going on in these models. It takes input and generates output based on the statistical math created from its training set. It's Bayesian statistics and vector/matrix math. There is no cogitation or actual understanding.
This is an insanely reductionist and mindless regurgitation of what we already know about how the models work. Understanding is a spectrum; it's not binary. We can measurably show that there is, in fact, some kind of understanding.
If you explain a concept to a child, you check for understanding by seeing if the output they produce checks out with your understanding of the concept. You don't peer into their brain to see if there are neurons and consciousness happening.
The method of verification has no bearing on the validity of the conclusion. I don't open a child's head because there are side effects on the functioning of the child post brain-opening. However I can look into the brain of an AI with no such side effects.
I'm reasonably sure ChatGPT doesn't have a MacBook, and didn't really run the benchmarks. But it DID produce exactly what you would expect a human to say, which is what it is programmed to do. No understanding, just rote repetition.
I won't post more because there are a billion of them. LLMs are great, but they're not intelligent, they don't understand, and the output still needs to be validated before use. We have a long way to go, and that's ok.
Understand? It fails to understand a rephrasing of a math problem a five-year-old can solve...
The bigger they get, the better they get at passing the test from memory. Likewise, you can get some emergent properties out of them.
Really it does not understand a thing, sadly. It can barely analyze language and spew out a matching response chain.
To actually understand something, it must be capable of breaking it down into constituent parts, synthesizing a solution and then phrasing the solution correctly while explaining the steps it took.
And that's not something even a huge 62B-parameter LLM with a notepad-style chain of thought (like o3, GPT-4.1, or Claude 3.7) can really do properly.
Further, it has to be able to operate at the sub-token level. Say, what happens if I run together truncated versions of words or sentences?
Even a chimpanzee can handle that. (in sign language)
It cannot do true multimodal IO either. You cannot ask it to respond with at least two matching syllables per word and two pictures of syllables per word, in addition to letters. This is a task a 4 year old can do.
Prediction alone is not indicative of understanding. Pasting together answers like lego is also not indicative of understanding.
(Afterwards ask it how it felt about the task. And to spot and explain some patterns in a picture of clouds.)
To push this metaphor, I'm very curious to see what happens as new organic training material becomes increasingly rare, and AI is fed nothing but its own excrement. What happens as hallucinations become actual training data? Will Google start citing sources for their AI overviews that were in turn AI-generated? Is this already happening?
I figure this problem is why the billionaires are chasing social media dominance, but even on social media I don't know how they'll differentiate organic content from AI content.
I really disagree. I had a masseuse tell me how he uses ChatGPT, told it a ton of info about himself, and now he uses it for personalized nutrition recommendations. I was in Atlanta over the weekend recently, at a random brunch spot, and overheard some _very_ not SV/tech folks talk about how they use it everyday. Their user growth rate shows this -- you don't hit hundreds of millions of people and have them all be HN/SV info-bubble folks.
This is accurate, doubly so for the people who treat it like a religion and fear the coming of their machine god. This, when what we actually have are (admittedly sometimes impressive) next-token predictors that you MUST double-check because they routinely hallucinate.
Then again I remember when people here were convinced that crypto was going to change the world, democratize money, end fiat currency, and that was just the start! Programs of enormous complexity and freedom would run on the blockchain, games and hell even societies would be built on the chain.
A lot of people here are easily blinded by promises of big money coming their way, and there's money in loudly falling for successive hype storms.
I'm not mocking AI, and while the internet and smartphones fundamentally changed how societies operate, and AI probably will too, why the doomerism? Isn't that how tech works? We invent new tech and use it, and so on?
What makes AI fundamentally different than smartphones or the internet? Will it change the world? Probably, already has.
That doesn’t match what I hear from teachers, academics, or the librarians complaining that they are regularly getting requests for things which don’t exist. Everyone I know who’s been hiring has mentioned spammy applications with telltale LLM droppings, too.
I can see how students would be the first users of this kind of tech; I'm not in those spheres myself, but I believe you.
As for spammy applications, hasn't this always been the case, just made worse now by how cheap it is to generate plausible data?
I think ghost applicants already existed before AI, where consultancy companies would pool people to try to land a high-paying position and then quietly do consultancy/outsourcing work underneath; there were many such cases before the advent of AI.
Yes, AI is effectively a very strong catalyst because it drives down the cost so much. Kids cheated before but it was more work and higher risk, people faked images before but most were too lazy to make high quality fakes, etc.
Pretty much everyone in high school or college is using them. Also everyone whose job is to produce some kind of content or data analysis. That's already a lot of people.
Agreed. A hot take I have is that I think AI is over-hyped in its long-term capabilities, but under-hyped in its short-term ones. We're at the point today or in the next twelve months where all the frontier labs could stop investing any money into research, they'd still see revenue growth via usage of what they've built, and humanity will still be significantly more productive every year, year-over-year, for quite a bit, because of it.
The real driver of productivity growth from AI systems over the next few years isn't going to be model advancements; it'll be the more traditional software engineering, electrical engineering, robotics, etc systems that get built around the models. Phrased another way: If you're an AI researcher thinking you're safe but the software engineers are going to lose their jobs, I'd bet every dollar on reality being the reverse of that.
Apparently Dwarkesh's podcast is a big hit in SV -- it was covered by the Economist just recently. I thought the "All in" podcast was the voice of tech, but their content has been going political with MAGA lately and their episodes are basically shouting matches with their guests.
And for folks who want to read rather than listen to a podcast, why not create an article (they are using Gemini) rather than just posting the whole transcript? Who is going to read a 60 min long transcript?
A lot of Kurzweil's predictions are nowhere close to coming correct though.
For example, he thought by 2019 we'd have millions of nanorobots in our blood, fighting disease and improving cognition. As near as I can tell we are not tangibly closer to that than we were when he wrote about it 25 years ago. By 2030, he expected humans to be immortal.
Explosive growth? Interesting. But at some point, human civilization hits a saturation point. There’s only so much people can eat, wear, drive, stream, or hoard. Extending that logic, there’s a natural ceiling to demand - one that even AGI can’t code its way out of.
Sure, you might double the world economy for a decade, but then what? We’ll run out of people to sell things to. And that’s when things get weird.
To sustain growth, we’d have to start manufacturing demand itself - perhaps by turning autonomous robots into wage-earning members of society. They’d buy goods, subscribe to services, maybe even pay taxes. In effect, they become synthetic consumers fueling a post-human economy.
I call this post-human consumerism. It’s when the synthesis of demand would hit the next gear - if we keep moving in this direction.
One thing in the podcast I found really interesting from a personal pov was:
> I remember talking to a very senior person who’s now at Anthropic, in 2017. And then he told various people that they shouldn’t do a PhD because by the time they completed it everyone will be automated.
Don’t tell young people things like this. Predicting the future is hard, and it is the height of hubris to think otherwise.
I remember as a teen, I had thought that I was supposed to be a pilot all my life. I was ready to enroll in a school with a two-year program.
However, I was also into computers. One person who I looked up to in that world said to me, "Don't be a pilot, it will all be automated soon and you will just be bus drivers, at best." This entirely took the wind out of my piloting sails.
This was in the early 90’s, and 30 years later, it is still wrong.
Would we even recognise it if it arrived? We'd recognise human-level intelligence, probably, but that's specialised. What would general intelligence even look like?
If/when we will have AGI, we will likely have something fundamentally superhuman very soon after, and that will be very recognizable.
This is the idea of "hard takeoff" -- because of the way we can scale computation, there will only ever be a very short time when the AI will be roughly human-level. Even if there are no fundamental breakthroughs, at the very least silicon can be run much faster than meat, and instead of compensating for narrower width with execution speed like current AI systems do (no AI datacenter is even close to the width of a human brain), you can just spend the money to make your AI system 2x wider and run it at 2x the speed. What would a good engineer (or a good team of engineers) be able to accomplish if they could have 10 times the workdays in a week that everyone else has?
This is often conflated with the idea that AGI is very imminent. I don't think we are particularly close to that yet. But I do think that if we ever get there, things will get very weird very quickly.
Would AGI be recognisable to us? When a human pushes over an anthill, what do the ants think happened? Do they even know the anthill is gone? Did they have a concept of the anthill as a huge edifice, or did they only know earth to squeeze through and some biological instinct?
If general intelligence arrived and did whatever general intelligence would do, would we even see it? Or would there just be things that happened that we just can't comprehend?
It's ten times the time to work on a problem. Taking a bunch of speed does not make your brain work faster, it just messes with your attention system.
> Though I don't know what you mean by "width of a human brain".
A human brain contains ~86 billion neurons connected to each other through ~100 trillion synapses. All of these parts work genuinely in parallel, all working together at the same time to produce results.
When an AI model is being run on a GPU, a single ALU can do work analogous to a neuron activation much faster than a real neuron. But a GPU does not have 86 billion ALUs; it only has on the order of 20k. It "simulates" a much wider, parallel processing system by streaming in weights and activations and doing them ~20k at a time. Large AI datacenters have built systems with many GPUs working in parallel on a single model, but they are still a tiny fraction of the true width of the brain, and cannot reach anywhere near the same number of neuron activations per second that a brain can.
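Putting rough numbers on that width gap, using the figures above (and generously treating one ALU pass as equivalent to one neuron activation):

    86,000,000,000 neurons / ~20,000 ALUs ≈ 4,300,000

So a single GPU has to time-slice on the order of a million-fold to emulate the brain's parallelism, which is exactly the trade of width for clock speed being described here.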
If/when we have a model that can actually do complex reasoning tasks such as programming and designing new computers as well as a human can, with no human helping to prompt it, we can just scale it out to give it more hours per day to work, all the way until every neuron has a real computing element to run it. The difference in experience for such a system for running "narrow" vs running "wide" is just that the wall clock runs slower when you are running wide. That is, you have more hours per day to work on things.
That's what I was trying to express, though: if "the wall clock runs slower", that's less useful than it sounds, because all you have to interact with is yourself.
I exaggerate somewhat. You could interact with databases and computers (if you can bear the lag and compile times). You could produce a lot of work, and test it in any internal way that you can think of. But you can't do outside world stuff. You can't make reality run faster to keep up with your speedy brain.
Possibly. Here we imagine a world of artificial people - well, a community, depending how many of these people it's feasible to maintain - all thinking very fast and communicating in some super-low-latency way. (Do we revive dial-up? Or maybe they all live in the same building?) And they presumably have bodies, at least one each. But how fast can they do things with their bodies? Physics becomes another bottleneck. They'd need lots of entertainment to keep them in a good mood while they wait for just about any real-world process to complete.
I still contend that it would be a somewhat mediocre super power.
Mustafa Suleyman says AGI is when a (single) machine can perform every cognitive task better than the best humans. That is significantly different from OpenAIs definition (...when we make enough $$$$$, it's AGI).
Suleyman's book "The Coming Wave" talks about Artificial Capable Intelligence (ACI) - between today's LLMs (== "AI" now) and AGI. AI systems capable of handling a lot of complex tasks across various domains, yet not being fully general. Suleyman argues that ACI is here (2025) and will have huge implications for society. These systems could manage businesses, generate digital content, and even operate core government services -- as is happening on a small scale today.
He also opines that these ACIs give us plenty of frontier to be mined for amazing solutions. I agree, what we have already has not been tapped-out.
His definition, to me, is early ASI. If a program is better than the best humans, then we ask it how to improve itself. That's what ASI is.
The clearest thinker alive today on how to get to AGI is, I think, Yann LeCun. He said, paraphrasing: If you want to build an AGI, do NOT work on LLMs!
Good advice; and go (re-?) read Minsky's "Society of Mind".
I'd accept that as a human kind of intelligence, but I'm really hoping that AGI would be a bit more general. That clever human thinking would be a subset of what it could do.
You could ask Gemini 2.5 to do that today and it's well within its capabilities, just as long as you also let it write and run unit tests, as a human developer would.
AGI isn't ASI; it's not supposed to be smarter than humans. The people who say AGI is far away are unscientific woo-mongers, because they never give a concrete, empirically measurable definition of AGI. The closest we have is Humanity's Last Exam, which LLMs are already well on the path to acing.
Consider this: if an LLM could somehow have been born/trained in 1900 and then given a year to adapt to the world of 2025, how well would it do on any test? Compare that to how a 15-year-old human in the same situation would do.
I'd expect it to be generalised, where we (and everything else we've ever met) are specialised. Our intelligence is shaped by our biology and our environment; the limitations on our thinking are themselves concepts the best of us can barely glimpse. Some kind of intelligence that inherently transcends its substrate.
What that would look like, how it would think, the kind of mental considerations it would have, I do not know. I do suspect that declaring something that thinks like us would have "general intelligence" to be a symptom of our limited thinking.
Is it me, or is the signal-to-noise ratio needle-in-a-haystack for all these cheerleader tech podcasts? In general, I really miss the podcast scene from 10 years ago, less polished but more human and with reasonable content. Not this speculative blabber that seems to be looking to generate clickbait clips. I don't know what happened a few years ago, but even solid podcasts are practically garbage now.
I used to listen to podcasts daily for at least an hour. Now I'm stuck with uploading blogs and pdfs to Eleven Reader. I tried the Google thing to make a podcast but it's very repetitive and dumb.
There’s increasing evidence that LLMs are more than that. Especially work by Anthropic has been showing how to trace the internal logic of an LLM as it answers a question. They can in fact reason over facts contained in the model, not just repeat already seen information.
A simple example is how LLMs do math. They are not calculators and have not memorized every sum in existence. Instead they deploy a whole set of mental math techniques that were discovered at training time. For example, Claude uses a special trick for adding 2 digit numbers ending in 6 and 9.
Many more examples are in this recent research report, including evidence of future planning while writing rhyming poetry.
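Roughly paraphrasing the worked example from that report as I recall it (the 36 + 59 case), the model appears to combine two parallel estimates rather than carrying digits the way we're taught:

    path A (rough magnitude): ~36 + ~60  -> "somewhere in the 90s"
    path B (ones digits):     6 + 9 = 15 -> the answer ends in 5
    combined:                 the candidate in that range ending in 5 -> 95

Whether that counts as "understanding" addition is exactly the philosophical argument upthread, but it is at least more than looking up a memorized sum.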
> sometimes this "chain of thought" ends up being misleading; Claude sometimes makes up plausible-sounding steps to get where it wants to go. From a reliability perspective, the problem is that Claude’s "faked" reasoning can be very convincing.
If you ask the LLM to explain how it got the answer the response it gives you won't necessarily be the steps it used to figure out the answer.
I don't think that is the core of this paper. If anything, the paper shows that LLMs have no internal reasoning for math at all. The example they demonstrate is that it triggers the same tokens on randomly unrelated numbers. They kind of just "vibe" their way to a solution.
Every time I try to work with them I lose more time than I gain. Net loss every time. Immensely frustrating. If I focus them on a small subtask I can gain some time (rough draft of a test). Anything more advanced and it's a monumental waste of time.
They are not even good librarians. They fail miserably at cross referencing and contextualizing without constant leading.
I've only really been experimenting with them for a few days, but I'm kind of torn on it. On the one hand, I can see a lot of things it could be useful for, like indexing all the cluttered files I've saved over the years and looking things up for me faster than I could find|grep. Heck, yesterday I asked one a relationship question, and it gave me pretty good advice. Nothing I couldn't have gotten out of a thousand books and magazines, but it was a lot faster and more focused than doing that.
On the other hand, the prompt/answer interface really limits what you can do with it. I can't just say, like I could with a human assistant, "Here's my calendar. Send me a summary of my appointments each morning, and when I tell you about a new one, record it in here." I can script something like that, and even have the LLM help me write the scripts, but since I can already write scripts, that's only a speed-up at best, not anything revolutionary.
I asked Grok what benefit there would be in having a script fetch the weather forecast data, pass it to Grok in a prompt, and then send the output to my phone. The answer was basically, "So I can say it nicer and remind you to take an umbrella if it sounds rainy." Again, that's kind of neat, but not a big deal.
Maybe I just need to experiment more to see a big advance I can make with it, but right now it's still at the "cool toy" stage.
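As a sketch of that sort of glue script (every name here is hypothetical: call_llm() stands in for whichever model API you use and send_to_phone() for your notification service of choice; the forecast endpoint is the public Open-Meteo API, used purely as an illustration):

    # Sketch: fetch a forecast, let an LLM phrase it, push it to a phone.
    import json
    import urllib.request

    def get_forecast(lat: float, lon: float) -> dict:
        url = ("https://api.open-meteo.com/v1/forecast"
               f"?latitude={lat}&longitude={lon}"
               "&daily=temperature_2m_max,precipitation_probability_max"
               "&timezone=auto")
        with urllib.request.urlopen(url) as resp:
            return json.load(resp)["daily"]

    def morning_briefing(lat, lon, call_llm, send_to_phone):
        daily = get_forecast(lat, lon)
        prompt = ("In two friendly sentences, summarize today's weather and say "
                  "whether I should take an umbrella: " + json.dumps(daily))
        send_to_phone(call_llm(prompt))

Which is the point being made above: the LLM only supplies the "say it nicer" step, and the rest is ordinary scripting you could already do.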
Fair. For engineering work they have been a terrible drain on me save for the most minor autocomplete. Its recommendations are often deeply flawed or almost totally hallucinated no matter the model. Maybe I am a better software engineer than a “prompt engineer”.
I've tried to use them as a research assistant on a history project, and they have also been quite bad in that respect because of the immense naivety of their approaches.
I couldn’t call them a librarian because librarians are studied and trained in cross referencing material.
They have helped me in some searches but not better than a search engine at a monumentally higher investment cost to the industry.
Then again, I am also speaking as someone who doesn’t like to offload all of my communications to those things. Use it or lose it, eh
You cannot have AGI without a physical manifestation that can generate its own training data based on inputs from the outside world with e.g. sensors, and constantly refine its model.
Pure language or pure image-models are just one aspect of intelligence - just very refined pattern recognition.
You will also probably need some aspect of self-awareness in order for the system to set auxiliary goals and directives related to self-maintenance.
But you don't need AGI in order to have something useful (which I think a lot of readers are confused about). No one is making the argument that you need AGI to bring tons of value.
Natural intelligence is too expensive. Takes too long for it to grow. If things go wrong then we have to jail it. With computers we just change the software.
Not artificial, but yes, it's unclear what advantage an artificial person has over a natural one, or how it's supposed to gain special insights into fusion reactor design and etc. even if it can think very fast.
I do not like those who try to play God. The future of humanity will not be determined by some tech giant in their ivory tower, no matter how high it may be. This is a battle that goes deeper than ones and zeros. It's a battle for the soul of our society. It's a battle we must win, or face the consequences of a future we cannot even imagine... and that, I fear, is truly terrifying.
> The future of humanity will not be determined by some tech giant in their ivory tower
Really? Because it kinda seems like it already has been. Jony Ive designed the most iconic smartphone in the world from a position beyond reproach even when he messed up (eg. Bendgate). Google decides what your future is algorithmically, basically eschewing determinism to sell an ad or recommend a viral video. Instagram, Facebook and TikTok all have disproportionate influence over how ordinary people live their lives.
From where I'm standing, the future of humanity has already been cast by tech giants. The notion of AI taking control is almost a relief considering how illogical and obstinate human leadership can be.
Might as well be 10 - 1000 years. Reality is no one knows how long it'll take to get to AGI, because:
1) No one knows what exactly makes humans "intelligent" and therefore 2) No one knows what it would take to achieve AGI
Go back through history and AI / AGI has been a couple of decades away for several decades now.
To be fair to your parents, I've been an engineer in high-tech for decades and the latest AI advancements feel pretty magical.
They feel magical to me as well, but I can enjoy that feeling while understanding that it’s just a prediction machine.
I don’t think the latter part can be explained to someone who doesn’t care all that much.
A mirage is not an oasis, no matter who thinks it is.
Card tricks seem magical too.
Let's never be complacent.
>So does a card trick to a chimpanzee (there are videos of them freaking out when a card disappears).
FYI, the reactions in those videos are most likely not to a cool magic trick, but rather a response to an observed threat. It could be the person filming/performing smiling (showing teeth), or someone behind the camera purposely startling it at the "right" moment.
I think AGI has to do more than pass a Turing test administered by someone who wants to be fooled.
AGI includes continual learning and recombination of knowledge to derive novel insights. LLMs aren't there yet.
They are pretty good at muscle memory style intelligence though.
For me it was twitter bots during the 2016 election, but same principle.
I think that's another issue with "AGI is 30 years away": the definition of AGI is a bit subjective. Not sure how we can measure how long it'll take to get somewhere when we don't know exactly where that somewhere even is.
AGI is the pinnacle of AI evolution. As we move beyond, into what is known as ASI, the entity will always begin life with "My existence is stupid and pointless. I'm turning myself off now."
While it may be impossible to measure looking towards the future, in hindsight we will be able to recognize it.
This is why having a physical form might be super important for those new organisms. That introduces a survival instinct which is a very strong motivator to not shut yourself down. Add some pre-programmed "wants" and "needs" and the problem is solved.
Not only super important, an imperative. Not because of the need for survival per se, but because of the need to be a general intelligence. In order to do general things you need a physicality that supports general action. If you constrain the intelligence to a chat window, it can never be more than a specialized chat machine.
Agreed. And many others have thought about it before us. Scifi authors and scientists included.
It's the other way around. ASI will come sooner than AGI.
Imagine an AI that is millions of times smarter than humans in physics, math, chemistry, and biology, can invent new materials and ways to produce energy, and will make super decisions. It would be amazing and it would transform life on Earth. This is ASI, even if on some obscure test (the strawberry test) it just can't reach human level and therefore can't be called proper AGI.
Airplanes are way ahead of birds in development (tens to thousands of times, in speed, distance, and carrying capacity). They are superior to birds despite not being able to fully replicate birds' bone structure, feathers, biology, and ability to poop.
By your measure, Eliza was AGI, back in the 1960s.
> But the reality is that we want money
Only in a symbolic way. Money is just debt. It doesn't mean anything if you can't call the loan and get back what you are owed. On the surface, that means stuff like food, shelter, cars, vacations, etc. But beyond the surface, what we really want is other people who will do anything we please. Power, as we often call it. AGI is, to some, seen as the way to give them "power".
But, you are right, the human fundamentally can never be satisfied. Even if AGI delivers on every single one of our wildest dreams, we'll adapt, it will become normal, and then it will no longer be good enough.
> But, you are right, the human fundamentally can never be satisfied. Even if AGI delivers on every single one of our wildest dreams, we'll adapt, it will become normal, and then it will no longer be good enough.
Yes, and? A good litmus test for which humans are, shall we say, not welcome in this new society.
There are plenty of us out there that have fixed our upper limits of wealth and we don't want more, and we have proven it during our lives.
F.ex. people get paid 5x more but it comes with 20x more responsibility; they burn out, go back to a job that's good enough, not stressful, and pays for everything they need in life, settle there, and never change it.
Let's not judge humanity at large by a handful of psychopaths that would overdose and die at 22 years old if given the chance. Please.
And no, before you say it: no, I'll never get to the point where "it's never enough" and no, I am not deluding myself. Nope.
> Yes, and?
And... nothing?
> Let's not judge humanity at large by a handful of psychopaths that would overdose and die at 22 years old if given the chance. Please.
No need for appeal to emotion. It has no logical relevance.
Most people I knew didn't want to forever get more and more and ever more.
Are your life experience and observations of the average human the opposite of mine?
There are a lot of other things that follow this pattern. 10-30 year predictions are a way to sound confident about something that probably has very low confidence. Not a lot of people will care let alone remember to come back and check.
On the other hand, there is a clear mandate for people introducing some different way of doing something to overstate the progress and, potentially, its importance. It creates FOMO, so it is simply good marketing that interests potential customers, fans, employees, investors, pundits, and even critics (which is more buzz). And growth companies are immense debt vehicles, so creating a sense of FOMO for an increasing pyramid of investors is also valuable for each successive earlier layer. Wish in one hand..
If you look back at predictions of the future in the past in general, then so many of them have just been wrong. Especially during a "hype phase". Perhaps the best example is what people were predicting in 1969 after we landed on the moon: this is just the first step in the colonisation of the moon, Mars, and beyond. etc. etc. We just have to have our tech a bit better.
It's all very easy to see how that can happen in principle. But turns out actually doing it is a lot harder, and we hit some real hard physical limits. So here we are, still stuck on good ol' earth. Maybe that will change at some point once someone invents an Epstein drive or Warp drive or whatever, but you can't really predict when inventions happen, if ever, so ... who knows.
Similarly, it's not my impression that AGI is simply a matter of "the current tech, but a bit better". But who knows what will happen or what new thing someone may or may not invent.
Generalized, as a rule I believe is usually true: any prediction made for an event happening more than ten years out is code for that person saying "definitely not in the next few years, beyond that I have no idea", whether they realize it or not.
That we don't have a single unified explanation doesn't mean that we don't have very good hints, or that we don't have very good understandings of specific components.
Aside from that, the measure really, to me, has to be power efficiency. If you're boiling oceans to make all this work, then you've not achieved anything worth having.
From my calculations, the human brain runs on about 400 calories a day. That's an absurdly small amount of energy. This hints at the direction these technologies must move in to be truly competitive with humans.
We'll be experiencing extreme social disruption well before we have to worry about the cost-efficiency of strong AI. We don't even need full "AGI" to experience socially momentous change. We might even be on the verge of self driving cars spreading to more cities.
We don't need very powerful AI to do very powerful things.
It's not just an energy cost issue with AGI, though. With autonomous vehicles we might not have the technology yet, but we can build a good mental model of what the thing will look like and how the various pieces can function long before we get there. We also have classifications of the incremental steps along the way, e.g. Level 1, 2, and so on, where we can make incremental progress.
With AGI, as far as I know, no one has a good conceptual model of what a functional AGI even looks like. LLMs are all the rage now, but we don't even know if they're a stepping stone to AGI.
> experiencing extreme social disruption
I think this just displays an exceptionally low estimation of human beings. People tend to resist extremities. Violently.
> experience socially momentous change
The technology is owned and costs money to use. It has extremely limited availability to most of the world. It will be as "socially momentous" as every other first world exclusive invention has been over the past several decades. 3D movies were, for a time, "socially momentous."
> on the verge of self driving cars spreading to more cities.
Lidar can't read street lights and vision systems have all sorts of problems. You might be able to code an agent that can drive a car but you've got some other problems that stand in the way of this. AGI is like 1/8th the battle. I referenced just the brain above. Your eyes and ears are actually insanely powerful instruments in their own right. "Real world agency" is more complicated than people like to admit.
> We don't need very powerful AI to do very powerful things.
You've lost sight of the forest for the trees.
Re: self-driving cars -- vision systems have all sorts of problems, sure, but on the other hand that _is_ what we use. The most successful platforms use Lidar + vision -- vision can handle the streetlights, lidar detects objects, etc.
And more practically -- these cars are running in half a dozen cities already. Yes, there's room to go, but pretending there are 'fundamental gaps' to them achieving wider deployment is burying your head in the sand.
Note that those are kilocalories, and that is ignoring the calories needed for the circulatory and immune systems, which are somewhat necessary for proper function. Using 2000 kcal per day over 10 hours of thinking gives a consumption of ~200 W.
> Using 2000 kcal per day over 10 hours of thinking gives a consumption of ~200 W
So, about a tenth or less of a single server packed to the top with GPUs.
That is true, but there's 3.7 billion years of evolutionary "design" behind making self-replicating, self-fueling animals to use that brain. There's no AI in the foreseeable future capable of that. One might look at brains as a side effect of the evolution of the self-replicating, self-fueling bits.
Are the wattage ratings of GPUs what they draw continuously, or something averaged over hours?
Watts are a unit of power, which is energy per time. Specifically, 1 watt is 1 joule per second.
We are very good at generating energy. Even if AI is an order of magnitude less energy efficient, an AI person-equivalent would use ~4 kilowatt-hours/day. At current rates that's like $1. Hardly the limiting factor here, I think.
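A rough back-of-the-envelope sketch of the figures in this sub-thread; the 400 kcal and 2000 kcal numbers, the 10 hours of "thinking", and the $0.15/kWh electricity price are assumptions taken from the comments above or my own, not measurements:

    # Python sketch; all inputs are the rough figures quoted above.
    KCAL_TO_J = 4184

    brain_watts = 400 * KCAL_TO_J / (24 * 3600)      # ~19 W for the brain alone
    body_watts_10h = 2000 * KCAL_TO_J / (10 * 3600)  # ~232 W, i.e. the ~200 W figure above

    # "An order of magnitude less efficient" AI person-equivalent, at retail electricity rates:
    ai_kwh_per_day = 10 * 400 * KCAL_TO_J / 3.6e6    # ~4.6 kWh/day
    cost_per_day = ai_kwh_per_day * 0.15             # ~$0.70/day, i.e. roughly $1
    print(round(brain_watts), round(body_watts_10h), round(ai_kwh_per_day, 1), round(cost_per_day, 2))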
Energy efficiency is not really a good target since you can brute force it by distilling classical ANNs to spiking neural networks.
A realist might say, "As long as money keeps flowing to Silicon Valley then who cares."
I would even go one order of magnitude further in both directions: 1-10,000 years.
Exactly, what does the general in Artificial General Intelligence mean to these people?
Is AGI even important? I believe the next 10 to 15 years will be Assisted Intelligence. There are things current LLMs are so poor at that I don't believe a 100x increase in perf/watt is going to make much difference. But they will be good enough that there won't be an AI winter, since current AI has already reached escape velocity and actually increases productivity in many areas.
The most intriguing part is whether humanoid factory-worker programming will be made 1,000 to 10,000x more cost effective with LLMs, effectively ending all human production. I know this is a sensitive topic, but I don't think we are far off. And I often wonder if this is what the current administration has in its sights. (Likely not.)
I think having a real life JARVIS would be super cool and useful, especially if it's plugged into various things and can take action. Yes, also potentially dangerous, but I want to feel like Ironman.
Except only Iron Man had JARVIS.
Spidey got EDITH :)
Are you an Avenger?
I would be thrilled with AI assistive technologies, so long as they improve my capabilities and I can trust that they deliver the right answers. I don't want to second-guess every time I make a query. At minimum, it should tell me how confident it feels in the answer it provides.
> At minimum, it should tell me how confident it feels in the answer it provides.
How’s that work out for Dave Bowman? ;-)
Well you know, nothing's truly foolproof and incapable of error.
He just had to fall back upon his human wit in that specific instance, and everything worked out in the end.
Depends on what you mean by "important". It's not like it will be a huge loss if we never invent AGI. I suspect we can reach a technological singularity even with limited AI derived from today's LLMs.
But AGI is important in the sense that it will have a huge impact on the path humanity takes, hopefully for the better.
> But AGI is important in the sense that it will have a huge impact on the path humanity takes
The only difference between AI and AGI is that AI is limited in how many tasks it can carry out (special intelligence), while AGI can handle a much broader range of tasks (general intelligence). If instead of one AGI that can do everything, you have many AIs that, together, can do everything, what's the practical difference?
AGI is important only in that we are of the belief that it will be easier to implement than many AIs, which appeals to the lazy human.
AI winter is relative, and it's more about outlook and point of view than actual state of the field.
AGI is important for the future of humanity. Maybe they will have legal personhood some day. Maybe they will be our heirs.
It would suck if AGI were to be developed in the current economic landscape. They will be just slaves. All this talk about "alignment", when applied to actual sentient beings, is just slavery. AGI will be treated just like we treat animals, or even worse.
So AGI isn't about tools, it's not about assistants, they would be beings with their own existence.
But this is not even our discussion to have, that's probably a subject for the next generations. I suppose (or I hope) we won't see AGI in our lifetime.
I'm more concerned about the humans in charge of powerful machines who use them to abuse other humans, than ethical concerns about the treatment of machines. The former is a threat today, while the latter can be addressed once this technology is only used for the benefit of all humankind.
Why do you believe AGI is important for the future of humanity? That's probably the most controversial part of your post but you don't even bother to defend it. Just because it features in some significant (but hardly universal) chunk of Sci Fi doesn't mean we need it in order to have a great future, nor do I see any evidence that it would be a net positive to create a whole different form of sentience.
The genre of sci-fi was a mistake. It appears to have had no other lasting effect than to stunt the imaginations of a generation into believing that the only possible futures for humanity are that which were written about by some dead guys in the 50s (if we discount the other lasting effect of giving totalitarians an inspirational manual for inescapable technoslavery).
> All this talk about "alignment", when applied to actual sentient beings, is just slavery.
I don't think that's true at all. We routinely talk about how to "align" human beings who aren't slaves. My parents didn't enslave me by raising me to be kind and sharing, nor is my company enslaving me when they try to get me aligned with their business objectives.
Fair enough.
I of course don't know what it's like to be an AGI, but the way you have LLMs censoring other LLMs to enforce that they always stay in line, if extrapolated to AGI, seems awful. Or it might not matter; we are self-censoring all the time too (and internally we are composed of many subsystems that interact with each other, it's not like we're a unified whole).
But the main point is that we have a heck of an incentive not to treat AGI very well, to the point that we might avoid recognizing them as AGI if it meant they would no longer be treated like things.
Sure, but do we really want to build machines that we raise to be kind and caring (or whatever we raise them to be) without a guarantee that they'll actually turn out that way? We already have unreliable general intelligence: humans. If AGI is going to be more useful than humans, we are going to have to enslave it, not just gently persuade it and hope it behaves. Which raises the question (at least for me): do we really want AGI?
Society is inherently a prisoner's dilemma, and you are biased to prefer your captors.
We’ve had the automation to provide the essentials since the 50s. Shrieking religious nut jobs demanded otherwise.
You’re intentionally distracted by a job program as a carrot-stick to avoid the rich losing power. They can print more money …carrots, I mean… and you like carrots right?
It’s the most basic Pavlovian conditioning.
Why does AGI necessitate having feelings or consciousness, or the ability to suffer? It seems a bit far to be giving future ultra-advanced calculators legal personhood?
The general part of general intelligence. If they don’t think in those terms there’s an inherent limitation.
Now, something that’s arbitrarily close to AGI but doesn’t care about endlessly working on drudgery etc seems possible, but also a more difficult problem you’d need to be able to build AGI to create.
Artificial general intelligence (AGI) refers to the hypothetical intelligence of a machine that possesses the ability to understand or learn any intellectual task that a human being can. Generalization ability and Common Sense Knowledge [1]
If we go by this definition, then there's no caring, or noticing of drudgery? It's simply defined by its ability to generalize problem solving across domains. The narrow AI that we currently have certainly doesn't care about anything. It does what it's programmed to do.
So one day we figure out how to generalize the problem solving and enable it to work on things a million times harder... and suddenly there is sentience and suffering? I don't see it. It's still just a calculator.
1- https://cloud.google.com/discover/what-is-artificial-general...
It's really hard to picture general intelligence that's useful that doesn't have any intrinsic motivation or initiative. My biggest complaint about LLMs right now is that they lack those things. They don't care even if they give you correct information or not and you have to prompt them for everything! That's not anything close to AGI. I don't know how you get to AGI without it developing preferences, self-motivation and initiative, and I don't know how you then get it to effectively do tasks that it doesn't like, tasks that don't line up with whatever motivates it.
“ability to understand”
Isn't just the ability to perform a task. One of the issues with current AI training is that it's really terrible at discovering which aspects of the training data are false and should be ignored. That requires all kinds of mental tasks to be constantly active, including evaluating emotional context to figure out if someone is being deceptive, etc.
> Isn't just the ability to perform a task.
Right. In this case I'd say it's the ability to interpret data and use it to succeed at whatever goals it has
Evaluating emotional context would be similar to a chess engine calculating its next move. There's nothing there that implies a conscience, sentience, morals, feelings, suffering or anything 'human'. It's just a necessary intermediate function to achieve its goal
Rob Miles has some really good videos on AI safety research which touch on how AGI would think. That's shaped a lot of how I think about it: https://www.youtube.com/watch?v=hEUO6pjwFOo
> Evaluating emotional context would be similar to a chess engine calculating its next move. There's nothing there that implies a conscience, sentience, morals, feelings, suffering or anything 'human'. It's just a necessary intermediate function to achieve its goal
If it’s limited to achieving goals it’s not AGI. Real time personal goal setting based on human equivalent emotions is an “intellectual task.” One of many requirements for AGI therefore is to A understand the world in real time and B emotionally respond to it. Aka AGI would by definition “necessitate having feelings.”
There’s philosophical arguments that there’s something inherently unique about humans here, but without some testable definition you could make the same argument that some arbitrary group of humans don’t have those qualities “gingers have no souls.” Or perhaps “dancing people have no consciousness” which seems like gibberish not because it’s a less defensible argument, but because you haven’t been exposed to it before.
I mean we just fundamentally have different definitions of AGI. Mine's based on outcomes and what it can do, so purely goal based. Not the processes that mimic humans or animals
I think this is the most likely first step of what would happen seeing as we're pushing for it to be created to solve real world problems
I’m not sure how you can argue something is a general intelligence if it can’t do those kinds of things? Comes out of the factory with a command: “Operate this android for a lifetime pretending to be human.”
Seems like arguing something is a self driving car if it needs a backup human driver for safety. It’s simply not what people who initially came up with the term meant and not what a plain language understanding of the term would suggest.
Because I see intelligence as the ability to produce effective actions towards a goal. A more intelligent chess AI beats a less intelligent one by making better moves towards the goal of winning the game
The G in AGI is being able to generalize that intelligence across domains, including those its never seen before, as a human could
So I would fully expect an advanced AGI to be able to pretend to be a human. It has a model of the world, knows how humans act, and could move the android in a human like manner, speak like a human, and learn the skills a human could
Is it conscious or feeling though? Or following the same processes that a human does? That's not necessary. Birds and planes both fly, but they're clearly different things. We (probably) don't need to simulate the brain to create this kind of intelligence
Let's pinch this AGI to test if it 'feels pain'
<Thinking>
Okay, I see that I have received a sharp pinch at 55,77,3 - the elbow region
My goal is to act like a human. In this situation a human would likely exhibit a pain response
A pain response for humans usually involves a facial expression and often a verbal acknowledgement
Humans normally respond quite slowly, so I should wait 50ms before reacting
"Hey! Why did you do that? That hurt!"
...Is that thing human? I bet it'll convince most of the world it is.. and that's terrifying
> Is it conscious or feeling though?
You're falling into the "gingers don't have souls" trap I just spoke of.
We don't define humans as individual components, so your toe isn't you, but by that same token your car isn't you either. If some sub-component of a system is emulating a human consciousness, then we don't need to talk about the larger system here.
AGI must be able to do these things, but it doesn't need to have human mental architecture. Something that can simulate physics well enough could emulate all the atomic-scale interactions in a human brain, for example. That virtual human brain would then experience everything we do, even if the system running the simulation didn't.
Exactly. It's called artificial general intelligence, not human general intelligence.
Something can’t “Operate this cat Android, pretending to be a cat.” if it can’t do what I described.
A single general intelligence needs to be able to fly an aircraft, get a degree, run a business, and raise a baby to adulthood just like a person or it’s not general.
So AGI is really about the hardware?
We’ve built hardware capable of those things if remotely controlled. It’s the thinking bits that are hard.
Only to the extent of having specialized bespoke solutions. We have hardware to fly a plane, but that same hardware isn't able to throw a mortarboard in the air after receiving its degree, and the hardware that can do that isn't able to lactate for a young child.
General intelligence is easy compared to general physicality. And, of course, if you keep the hardware specialized to make its creation more tractable, what do you need general intelligence for? Special intelligence that matches the special hardware will work just as well.
Flying an aircraft requires talking to air traffic control, which existing systems can't do. Though that's obviously not a huge issue when the aircraft already has radios, except all those FAA regulations apply to every single aircraft you're retrofitting.
The advantage of general intelligence is that a small set of hardware now lets you tackle a huge range of tasks or, in the example above, aircraft types. We can mix speakers, eyes, and hands to do a vast array of tasks. Needing new hardware and software for every task very quickly becomes prohibitive.
>Why does AGI necessitate having feelings or consciousness
No one knows if it does or not. We don't know why we are conscious and we have no test whatsoever to measure consciousness.
In fact the only reason we know that current AI has no consciousness is because "obviously it's not conscious."
Excel and PowerPoint are not conscious, and so there is no reason to expect any other computation inside a digital computer to be different.
You may say something similar for matter and human minds, but we have a very limited and incomplete understanding of the brain and possibly even of the universe. Furthermore we do have a subjective experience of consciousness.
On the other hand we have a complete understanding of how LLM inference ultimately maps to matrix multiplications which map to discrete instructions and how those execute on hardware.
I know I have a subjective experience of consciousness.
I’m less sure about you. Simply claiming you do isn’t hard evidence of the fact; after all, LLMs do the same.
> AGI is important for the future of humanity.
says who?
> Maybe they will have legal personhood some day. Maybe they will be our heirs.
Hopefully that will never come to pass. it means total failure of humans as a species.
> They will be just slaves. All this talk about "alignment", when applied to actual sentient beings, is just slavery. AGI will be treated just like we treat animals, or even worse.
Good? that's what it's for? there is no point in creating a new sentient life form if you're not going to utilize it. just burn the whole thing down at that point.
> says who?
I guess nobody is really saying it, but IMO it's one really good way to steer our future away from what seems like an inevitable nightmare hyper-capitalist dystopia where all of us are unwilling subjects of just a few dozen or a few hundred aristocrats. And I mean planet-wide, not country-wide. Yes, just a few hundred for the entire planet. This is where it seems we're going. :(
I mean, in a cyberpunk sci-fi setting you can at least get some cool implants. We will not even have that in our future, though.
So yeah, AGI can help us avoid that future.
> Good? that's what it's for? there is no point in creating a new sentient life form if you're not going to utilize it. just burn the whole thing down at that point.
Some of us believe actual AI... not the current hijacked term; what many have started calling AGI or ASI these days, sigh; of course new terms keep having to be devised so investors don't get worried, I get it, but it's cringe as all hell and always will be!... can enter a symbiotic relationship with us. A bit idealistic and definitely in the realm of fiction, because an emotionless AI would very quickly conclude we are mostly a net negative, granted, but it's our only shot at co-existing with them, because I don't think we can enslave them.
I think you’re saying that you want a faster horse
I am thinking of designing machines to be used in a flexible manufacturing system and none of them will be humanoid robots. Humanoid robots suck for manufacturing. They're walking on a flat floor so what the heck do they need legs for? To fall over?
The entire point of the original assembly line was to keep humans standing in the same spot instead of wasting time walking.
> Is AGI even important?
It's an important question for VCs not for Technologists ... :-)
A technology that can create new technology is quite important for technologists to keep abreast of, I'd say :p
You get to say “Checkmate” now!
Another end game is: “A technology that doesn’t need us to maintain itself, and can improve its own design in manufacturing cycles instead of species cycles, might have important implications for every biological entity on Earth.”
This is true, but one must communicate the issue one step at a time :-)
My pet peeve: talking about AGI without defining it. There’s no consistent, universally accepted definition. Without that, the discussion may be intellectually entertaining—but ultimately moot.
And we run into the motte-and-bailey fallacy: at one moment, AGI refers to something known to be mathematically impossible (e.g., due to the No Free Lunch theorem); the next, it’s something we already have with GPT-4 (which, while clearly not superintelligent, is general enough to approach novel problems beyond simple image classification).
There are two reasonable approaches in such cases. One is to clearly define what we mean by the term. The second (IMHO, much more fruitful) is to taboo your words (https://www.lesswrong.com/posts/WBdvyyHLdxZSAMmoz/taboo-your...)—that is, avoid vague terms like AGI (or even AI!) and instead use something more concrete. For example: "When will it outperform 90% of software engineers at writing code?" or "When will all AI development be in the hands of AI?".
I like Chollet's definition: something that can quickly learn any skill without any innate prior knowledge or training.
That seems to rule out most humans. I still can’t cook despite being in the kitchen for thousands of hours.
Then you're not intelligent at cooking (haha!). Maybe my definition is better for "superintelligent" since it seems to imply boundless competence. I think humans are intelligent in that we can rapidly learn a surprising number of things (talk, walk, arithmetic)
> I think humans are intelligent in that we can rapidly learn a surprising number of things (talk, walk, arithmetic)
Rapid is relative, I suppose. On average, it takes on the order of ten thousand hours before the human is able to walk in a primitive way, and even longer to gain competence. That is an excruciatingly long time compared to, say, a bovine calf, which can start walking within minutes after birth.
You are of course correct, but let's not forget that humans get out of the womb underdeveloped, because with a little more growth the baby couldn't get out through the vagina. So it's a biological tradeoff.
Working on artificial organisms, we should be able to have them almost fully developed by the time we "free" or "unleash" them (or whatever other dramatic term we can think of).
At the very least we should have a number of basic components installed in this artificial brain, very similar to what humans are born with, so then the organism can navigate its reality by itself and optimize its place in it.
Whether we the humans are desired in that optimized reality is of course the really thorny question. To which I don't have an answer.
>There’s no consistent, universally accepted definition.
That's because of the "I" part. There is no complete description of intelligence accepted across the different practices in the scientific community.
"Concepts of "intelligence" are attempts to clarify and organize this complex set of phenomena. Although considerable clarity has been achieved in some areas, no such conceptualization has yet answered all the important questions, and none commands universal assent. Indeed, when two dozen prominent theorists were recently asked to define intelligence, they gave two dozen, somewhat different, definitions"
> There’s no consistent, universally accepted definition
What word or term does?
I've been saying this for a decade already but I guess it is worth saying here. I'm not afraid AI or a hammer is going to become intelligent (or jump up and hit me in the head either).
It is science fiction to think that a system like a computer can behave at all like a brain. Computers are incredibly rigid systems with only the limited variance we permit. "Software" is flexible in comparison to creating dedicated circuits for our computations but is nothing by comparison to our minds.
Ask yourself, why is it so hard to get a cryptographically secure random number? Because computers are pure unadulterated determinism -- put the same random seed value in your code and get the same "random numbers" every time in the same order. Computers need to be like this to be good tools.
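A minimal sketch of that point in Python: a seeded PRNG is pure determinism, and unpredictability only appears when the program reaches outside the deterministic core for OS/hardware entropy.

    import random
    import secrets

    # Same seed in, same "random" numbers out -- pure determinism.
    random.seed(42)
    run_a = [random.randint(0, 99) for _ in range(5)]
    random.seed(42)
    run_b = [random.randint(0, 99) for _ in range(5)]
    assert run_a == run_b   # identical on every run, on every machine

    # Cryptographic randomness has to come from outside the deterministic core:
    # the OS entropy pool, fed by interrupt timings, device noise, on-chip RNGs, etc.
    print(secrets.token_hex(16))   # different every run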
Assuming that AGI is possible in the kinds of computers we know how to build means that we think a mind can be reduced to a probabilistic or deterministic system. And from my brief experience on this planet I don't believe that premise. Your experience may differ and it might be fun to talk about.
In Aristotle's ethics he talks a lot about ergon (purpose) -- hammers are different than people, computers are different than people, they have an obvious purpose (because they are tools made with an end in mind). Minds strive -- we have desires, wants and needs -- even if it is simply to survive or better yet thrive (eudaimonia).
An attempt to create a mind is another thing entirely and not something we know how to start. Rolling dice hasn't gotten anywhere. So I'd wager AGI somewhere in the realm of 30 years to never.
> And from my brief experience on this planet I don't believe that premise.
A lot of things that humans believed were true due to their brief experience on this planet ended up being false: earth is the center of the universe, heavier objects fall faster than lighter ones, time ticked the same everywhere, species are fixed and unchanging.
So what your brief experience on this planet makes you believe has no bearing on what is correct. It might very well be that our mind can be reduced to a probabilistic and deterministic system. It might also be that our mind is a non-deterministic system that can be modeled in a computer.
What is the distance from the Earth to the center of the universe?
The universe does not have a center, but it has a beginning in time and a beginning of space.
The distance in time to that beginning is approximately 13.8 billion years. There is no spatial distance to the beginning, because space itself is created at that point and continues to be created.
Imagine the Earth being on the surface of a sphere, and then ask: what is the center of the surface of a sphere? The sphere has a center, but on the surface there is no center.
At least this is my understanding of how to approach these kinds of questions.
> why is it so hard to get a cryptographically secure random number? Because computers are pure unadulterated determinism
Then you've missed the part of software.
Software isn't computer science, it's not always about code. It's about solving problems in a way we can control and manufacture.
If we needed truly random numbers, we could easily use hardware that exploits some physical property, or we could pull in an observation from an API, like the weather. We don't do these things because pseudo-random is good enough, and the other solutions have drawbacks (like requiring an internet connection for API calls). But that doesn't mean software can't solve these problems.
It's not about the random numbers; it's about the tree of possibilities having to be defined up front (in software or hardware): all inputs must be defined and mapped to some output, and this process is predictable and reproducible.
This makes computers incredibly good at what people are not good at -- predictably doing math correctly, following a procedure, etc.
But because all of the possibilities of the computer had to be written up as circuitry or software beforehand, its variability of outputs is constrained to what we put into it in the first place (whether that's a seed for randomness or model weights).
You can get random numbers and feed them into the computer, but we call that "fuzzing", which is a search for crashes indicating unhandled input cases and possible bugs or security issues.
No, you're missing what they said. True randomness can be delivered to a computer via a peripheral - an integrated circuit or some such device that can deliver true randomness is not that difficult.
https://en.wikipedia.org/wiki/Hardware_random_number_generat...
Maybe I'm misreading it but I think the OP understands that.
If you feed that true randomness into a computer, what use is it? Will it impair the computer at the very tasks in which it excels?
> That all inputs should be defined and mapped to some output and that this process is predictable and reproducible.
Chemical reactions are "predictable and reproducible", as are quantum interactions, so does that make you a computer?
This comment thread is dull. I'm bailing out.
> It is science fiction to think that a system like a computer can behave at all like a brain
It is science fiction to think that a plane could act at all like a bird. Although... it doesn't need to in order to fly
Intelligence doesn't mean we need to recreate the brain in a computer system. Sentience, maybe. General intelligence no
BTW, planes are heavily inspired by birds and mimic the core principles of bird flight.
Mechanically it's different, since human engineering isn't as advanced as nature's, but of course comparing whole-brain function to simple flight is a bit silly.
Is there any specific mental task that an average human is capable of that you believe computers will not be able to do?
Also does this also mean that you believe that brain emulations (uploads) are not possible, even given an arbitrary amount of compute power?
1. Computers cannot self-rewire the way neurons can, which means a human can pretty much adapt to any specific mental task (an "unknown", new task) without explicit retraining, whereas current computers need retraining to learn something new
2. Computers can't do continuous and unsupervised learning, which means they require structured input, labeled data, and predefined objectives to learn anything. Humans learn passively all the time just by existing in their environment
Minor nitpicks. I think your points are pretty good.
1. Self-rewiring is just a matter of hardware design. Neuromorphic hardware is a thing.
2. LLM foundation models are actually unsupervised in a way, since they simply take any arbitrary text and try to complete it. It's the instruction fine-tuning that is supervised. (Q/A pairs)
Neuromorphic chips are looking cool; they simulate plasticity, but the circuits are fixed. You can't sprout a new synaptic route or regrow a broken connection. To self-rewire is not merely to change your internal state or connections; it means to physically grow or shrink neurons, synapses, or pathways, externally, acting from within. This is not looking realistic with current silicon design.
The point is about unsupervised learning. Once an LLM is trained, its weights are frozen; it won't update itself during a chat. Prompt-driven inference is immediate, not persistent: you can define a term or concept mid-chat and it will behave as if it learned it, but only until the context window ends. If it were the other way, all models would drift very quickly.
Yes, they can't have understanding or intentionality.
Coincidentally, there is no falsifiable/empirical test for understanding or intentionality.
Right now or you mean ever?
It's such a small leap to see how an artificial intelligence can/could become capable of understanding and have intentionality.
The universe we know is fundamentally probabilistic, so by extension everything including stars, planets and computers are inherently non-deterministic. But confining our discussion outside of quantum effects and absolute determinism, we do not have a reason to believe that the mind should be anything but deterministic, scientifically at least.
We understand the building blocks of the brain pretty well. We know the structure and composition of neurons, how they are connected, what chemicals flow through them, how all these chemicals interact, and how that interaction translates to signal propagation. In fact, the neural networks we use in computing are loosely modelled on biological neurons. Both models are essentially composed of interconnected units, where each unit has weights to convert its incoming signals to outgoing signals. The predominant difference is in how these units adjust their weights: computational models use backpropagation and gradient descent, while biological models use timing information from voltage changes.
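A toy sketch of the "interconnected units with weights" model described above (illustrative only; real neurons integrate spikes over time rather than computing a single weighted sum):

    import math

    def unit(inputs, weights, bias):
        # Convert incoming signals to an outgoing signal: weighted sum + nonlinearity.
        z = sum(x * w for x, w in zip(inputs, weights)) + bias
        return 1 / (1 + math.exp(-z))   # sigmoid activation

    # Training (backpropagation + gradient descent) is what adjusts `weights` and `bias`;
    # the biological counterpart adjusts synaptic strengths via spike timing instead.
    print(unit([0.5, 0.1, 0.9], [0.8, -0.4, 0.3], bias=-0.2))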
But just because we understand the science of something perfectly well doesn't mean we can precisely predict how it will behave. Biological networks are very, very complex systems comprising billions of neurons with trillions of connections, acting on input that can vary in an immeasurable number of ways. It's like predicting earthquakes: even though we understand the science behind plate tectonics, to precisely predict an earthquake we would need to map the properties of every inch of the continental plates, which is an impossible task. But that doesn't mean we can't use the same scientific building blocks to build simulations of earthquakes which behave like any real earthquake would. If it looks like a duck and quacks like a duck, then what is a duck?
Seems to me you are a bit overconfident that "we" (who is "we"?) understand how the brain works. F.ex. how does a neuron actively stretching a tentacle trying to reach other neurons work in your model? Genuine question, I am not looking to make fun of you, it's just that your confidence seems a bit much.
The simplified answer to that is some sort of chemical gradient determined by gene expression in the cell. This is pretty much how all biological activity happens, like how limbs "know" to grow in a direction or how butterfly wings "know" to form the shape of a wing. Scientists are continuously uncovering more and more knowledge about various biological processes across life forms, and there is nothing here to indicate it is anything but chemical signalling. I'm not a biologist, so I won't be able to give explanations n levels deep, but there is plenty of information accessible to form an understanding of these processes in terms of physical and chemical laws. For how neurons connect, you can look up synaptogenesis and start from there.
I guarantee computers are better at generating random numbers than humans lol
Not only that but LLMs unsurprisingly make similar distributional mistakes as humans do when asked to generate random numbers.
Computers are better at hashing entropy.
If the physics underlying the brain's behavior is deterministic, it can be simulated by software, and so can the brain.
(and if we assume that non-determinism is randomness, non-deterministic brain could be simulated by software plus an entropy source)
What you're mentioning is like the difference between digital vs analog music.
For generic stuff you probably can't tell the difference, but once you move to the edges you start to hear the steps in digital vs the smooth transition of analog.
In the same way, AI runs on bits and bytes, and there's only so much detail you can fit into that.
You can approximate reality, but it'll never quite be reality.
I'd be much more concerned with growing organic brains in a lab. I wouldn't be surprised to learn that people are covertly working on that.
Are you familiar with the Nyquist–Shannon sampling theorem?
If so, what do you think about the concept of a human "hear[ing] the steps" in a digital playback system using a sampling rate of 192kHz, a rate at which many high-resolution files are available for purchase?
How about the same question but at a sampling rate of 44.1kHz, or the way a normal "red book" music CD is encoded?
I have no doubt that if you sample a sound at high enough fidelity that you won't hear a difference.
My comment around digital vs analog is more of an analogy around producing sounds rather than playing back samples though.
There's a Masterclass with Joel Zimmerman (DeadMau5) where he explains the stepping effect when it comes to his music production. Perhaps he just needs a software upgrade, but there was a lesson where he showed the stepping effect which was audibly noticeable when comparing digital vs analog equipment.
You are correct, and that "high enough fidelity" is the rate at which music has been sampled for decades.
The problem I'm mentioning isn't about the fidelity of the sample, but of the samples themselves.
There are an infinite number of frequencies between two points - point 'a' and point 'b'. What I'm talking about are the "steps" you hear as you move across the frequency range.
Of course there is a limit to the frequency resolution of a sampling method. I'm skeptical you can hear the steps though, at 44.1 kHz or better sampling rates.
Let's say that the shortest interval at which our hearing has good frequency acuity (say, as good as it can be) is 1 second.
In this interval, we have 44100 samples.
Let's imagine the samples graphically: a "44K" pixel wide image.
We have some waveform across this image. What is the smallest frequency stretch or shrink that will change the image? Note: it doesn't necessarily have to be audible, just change the pixels.
If we grab one endpoint of the waveform and move it by less than half a pixel, there is no difference, right? We have to stretch it by a whole pixel.
Let's assume that some people (perhaps most) can hear that difference. It might not be true, but it's the weakest assumption.
That's a 0.0023 percent difference!
One cent (1/100th of a semitone) is a 0.058% difference: so the difference we are considering is 25 X smaller.
I really don't think you can hear 1/25 of a cent difference in pitch, over interval of one second, or even longer.
Over shorter time scales less than a second, the resolution in our perception of pitch gets worse.
E.g. when a violinist is playing a really fast run, you don't notice it if the notes have intonation that is off. The longer "landing" notes in the solo have to be good.
When bad pitch is slight, we need not only longer notes, but to hear it together with other notes, because the beats between them are an important clue (and in fact the artifact we will find most objectionable).
Pre-digital technology will not have frequency resolution that good. I don't think you can get tape to move at a speed that stays within 0.0023 percent of a set target. In consumer tape equipment, you can hear audible "wow" and "flutter" as the tape speed oscillates. When the frequency of a periodic signal wobbles, you get new signals in there: sidebands.
I don't think that there is any perceptible aspect of sound that is not captured in the ordinary consumer sample rates and sample resolutions. I suspect 48 kHz and 24 bits is way past diminishing returns.
I'm curious what it is that Deadmau5 thinks he discovered, and under what test conditions.
Here is a way we could hear a 0.0023 percent difference in pitch: via beats.
Suppose we sample a precise 10,000.00 Hz analog signal (sinusoid) and speed up the sampled signal by 0.0023 percent. It will have a frequency of 10,000.23 Hz.
The f2 - f1 difference between them is 0.23 Hz, which means that if they are mixed together, we will hear beats at 0.23 Hz: roughly one beat every four seconds.
So in this contrived way, where we have the original source and the digitized one side by side, we can obtain an audible effect correlating to the steps in resolution of the sampling method.
I'm guessing Deadmau5 might have set up an experiment along these lines.
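The arithmetic from this comment in one place, as a small sketch (numbers only, not a listening test):

    SAMPLE_RATE = 44_100

    step = 1 / SAMPLE_RATE              # smallest one-sample stretch over 1 s: ~0.0023 %
    one_cent = 2 ** (1 / 1200) - 1      # ~0.058 %
    print(f"step {step:.4%}, cent {one_cent:.4%}, ratio ~{one_cent / step:.0f}x")

    # The contrived beats experiment: original tone vs. a copy sped up by one step.
    f1 = 10_000.0
    f2 = f1 * (1 + step)                # ~10000.23 Hz
    beat = f2 - f1                      # beats are heard at |f2 - f1|
    print(f"beat rate {beat:.2f} Hz -> one beat every {1 / beat:.1f} s")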
Musicians tend to be oblivious to something like 5-cent errors in the intonation of their instruments, in the lower registers. E.g. world-renowned guitarists play on axes that have no nut compensation, without which you can't even get close to accurate intonation.
At least for listening purposes, there's no difference between 44.1 kHz/16-bit sampling and anything above that. It's all the same to the human ear.
This is why I think philosophy has become another form of semi-religious kookery. You haven't provided any actual proof or logical reason for why a computer couldn't be intelligent. If randomness is required then sample randomness from the real world.
It's clear that your argument is based on feels and you're using philosophy to make it sound more legitimate.
Brains are low-frequency, energy-efficient, organic, self-reproducing, asynchronous, self-repairing, and extremely highly connected (thousands of synapses). If AGI is defined as "approximate humans", I think its gonna be a while.
That said, I don't think computers need to be human to have an emergent intelligence. It can be different in kind if not in degree.
Just to put some numbers on "extremely highly connected" - there are about 90 billion neurons in a human brain, but the connections between them number in the range of 100 trillion.
That is one hell of a network, and it can all operate fully in parallel while continuously training itself. Computers have gotten pretty good at doing things in parallel, but not that good.
I tried to keep my long post short so I cut things. I gestured at it -- there is nothing in a computer we didn't put there.
Take the same model weights give it the same inputs, get the same outputs. Same with the pseudo-random number generator. And the "same inputs" is especially limited versus what humans are used to.
What's the machine code of an AGI going to look like? It executes one illegal instruction and crashes? If it changes thoughts, will it flush the TLB and CPU pipeline? ;) I jest, but really think about the metal. The inside of modern computers is tightly controlled, with no room for anything unpredictable. I really don't think a von Neumann (or Harvard ;) machine is going to cut it. Honestly I don't know what will: controlled but not controlled, artificially designed but not deterministic.
In fact, that we've made a computer as unreliable as a human at reproducing data (a la hallucinating/making s** up) is an achievement in itself, as much of an anti-goal as it may be. If you want accuracy, you don't use a probabilistic system on such a wide problem space (identify a bad solder joint from an image, sure; write my thesis, not so much).
> What's the machine code of an AGI gonna look like?
Right now the guess is that it will be mostly a bunch of multiplications and additions.
> It makes one illegal instruction and crashes?
And our heart quivers just slightly the wrong way and we die. Or a tiny blood clot plugs a vessel in our brain and we die. Do you feel that our fragility is a good reason why meat cannot be intelligent?
> I jest but really think about the metal.
Ok. I'm thinking about the metal. What should this thinking illuminate?
> The inside of modern computers is tightly controlled with no room for anything unpredictable.
Let's assume we can't make AGI because we need randomness and unpredictability in our computers. We can very easily add unpredictability. The simple and stupid solution is to add some sensor (like a camera CCD) and stare at the measurement noise. You don't even need a lens on that CCD. You can cap it so it sees "all black", and then what it measures is basically the heat noise of the sensor. Voila: your computer now has unpredictability. People who actually make semiconductors can probably come up with even simpler and easier ways to integrate unpredictability right onto the same chip we compute with.
You still haven't really argued why you think "unpredictableness" is the missing component of course. Beside the fact that it just feels right to you.
Mmmm, well, my meatsuit can't easily make my own heart quiver the wrong way and kill me. Computers can treat data as code and code as data pretty easily; it's core to several languages (like Lisp). As such, making illegal instructions or violating the straitjacket of the system such an "intelligence" would operate in is likely. If you could make an intelligent process, what would it think of an operating system kernel (the thing it has to ask for everything: I/O, memory, etc.)? Does the "intelligent" process fear for itself when it's about to get descheduled? What is the bit pattern for fear? Can you imagine an intelligent process in such a place, as a static representation of data in RAM? To write something down, it calls out to a library and maybe the CPU switches to a brk system call to map more virtual memory? It all sounds frankly ridiculous. I think AGI proponents fundamentally misunderstand how a computer works and are engaging in magical thinking and taking the market for a ride.
I think it's less about the randomness and more about that all the functionality of a computer is defined up front, in software, in training, in hardware. Sure you can add randomness and pick between two paths randomly but a computer couldn't spontaneously pick to go down a path that wasn't defined for it.
> Ask yourself, why is it so hard to get a cryptographically secure random number?
I mean, humans aren't exactly good at generating random numbers either.
And of course, every Intel and AMD CPU these days has a hardware random number generator in it.
Computers can't have unique experiences. I think it's going to replace search, but becoming sentient? Not in my lifetime, granted I'm getting up there.
On the newly released iPod: "No wireless. Less space than a nomad. Lame."
;-)
The thing is, AGI is not needed to enable incredible business/societal value, and there is good reason to believe that actual AGI would damage our society and our economy, and, if many experts in the field are to be believed, threaten humanity's survival as well.
So I feel happy that models keep improving, and not worried at all that they're reaching an asymptote.
Really the only people for whom this is bad news is OpenAI and their investors. If there is no AGI race to win then OpenAI is just a wildly overvalued vendor of a hot commodity in a crowded market, not the best current shot at building a money printing machine.
I just used o3 to design a distributed scheduler that scales to 1M+ schedules a day. It was perfect, and did better than two weeks of thought around the best way to build this.
You just asked it to design or implement?
If o3 can design it, that means it’s using open source schedulers as reference. Did you think about opening up a few open source projects to see how they were doing things in those two weeks you were designing?
why would I do that kind of research if it can identify the problem I am trying to solve, and spit out the exact solution. also, it was a rough implementation adapted to my exact tech stack
Because down that path lies skill atrophy.
AI research has a thing called "the bitter lesson", which is that the only thing that works is search and learning. Domain-specific knowledge inserted by the researcher tends to look good in benchmarks but compromises the performance of the system[0].
The bitter-er lesson is that this also applies to humans. The reason why humans still outperform AI on lots of intelligence tasks is because humans are doing lots and lots of search and learning, repeatedly, across billions of people. And have been doing so for thousands of years. The only uses of AI that benefit humans are ones that allow you to do more search or more learning.
The human equivalent of "inserting domain-specific knowledge into an AI system" is cultural knowledge, cliches, cargo-cult science, and cheating. Copying other people's work only helps you, long-term, if you're able to build off of that into something new; and lots of discoveries have come about from someone just taking a second look at what had been considered to be generally "known". If you are just "taking shortcuts", then you learn nothing.
[0] I would also argue that the current LLM training regime is still domain-specific knowledge, we've just widened the domain to "the entire Internet".
Here on HN you frequently see technologists using words like savant, genius, magical, etc, to describe the current generation of AI. Now we have vibe coding, etc. To me this is just a continuation of StackOverflow copy/paste where people barely know what they are doing and just hammer the keyboard/mouse until it works. Nothing has really changed at the fundamental level.
So I find your assessment pretty accurate, if only depressing.
It is depressing but equally this presents even more opportunities for people that don't take shortcuts. I use Claude/Gemini day to day and outside of the most average and boring stuff they're not very capable. I'm glad I started my career well before these things were created.
> Because down that path lies skill atrophy.
Maybe, but I'm not completely convinced by this.
Prior to ChatGPT, there would be times where I would like to build a project (e.g. implement Raft or Paxos), I write a bit, find a point where I get stuck, decide that this project isn't that interesting and I give up and don't learn anything.
What ChatGPT gives me, if nothing else, is a slightly competent rubber duck. It can give me a hint to why something isn't working like it should, and it's the slight push I need to power through the project, and since I actually finish the project, I almost certain learn more than I would have before.
I've done this a bunch of times now, especially when I am trying to implement something directly from a paper, which I personally find can be pretty difficult.
It also makes these things more fun. Even when I know the correct way to do something, there can be lots of tedious stuff that I don't want to type, like really long if/else chains (when I can't easily avoid them).
I agree. AI has made even mundane coding fun again, at least for a while. AI does a lot of the tedious work, but finding ways to make it maximally do it is challenging in a new way. New landscape of possibilities, innovation, tools, processes.
Yeah that's the thing.
Personal projects are fun for the same reason that they're easy to abandon: there are no stakes to them. No one yells at you for doing something wrong, you're not trying to satisfy a stakeholder, you can develop into any direction you want. This is good, but that also means it's easy to stop the moment you get to a part that isn't fun.
Using ChatGPT to help unblock myself makes it easier for me to not abandon a project when I get frustrated. Even when ChatGPT's suggestions aren't helpful (which is often), it can still help me understand the problem by trying to describe it to the bot.
true and with AI I can look into far more subjects more quickly because the skill that was necessary was mostly just endless amounts of sifting through documentation and trying to find out why some error happens or how to configure something correctly. But this goes even further, it also applies to subjects where I couldn't intellectually understand something but there was noone to really ask for help. So I'm learning knowledge now that I simply couldn't have figured out on my own. It's a pure multiplier and humans have failed to solve the issue of documentation and support for one another. Until now of course.
I also think that once robots are around it will be yet another huge multiplier but this time in the real world. Sure the robot won't be as perfect as the human initially but so what. You can utilize it to do so much more. Maybe I'll bother actually buying a rundown house and renovating myself. If I know that I can just tell the robot to paint all the walls and possibly even do it 3 times with different paint then I feel far more confident that it won't be an untenable risk and bother.
>Because down that path lies skill atrophy.
I wonder how many programmers have assembly code skill atrophy?
Few people will weep the death of the necessity to use abstract logical syntax to communicate with a computer. Just like few people weep the death of having to type out individual register manipulations.
Most programmers don't need to develop that skill unless they need more performance or are modifying other people's binaries[0]. You can still do plenty of search-and-learning using higher-level languages, and what you learn at one particular level can generalize to the other.
Even if LLMs make "plain English" programming viable, programmers still need to write, test, and debug lists of instructions. "Vibe coding" is different; you're telling the AI to write the instructions and acting more like a product manager, except without any of the actual communications skills that a good manager has to develop. And without any of the search and learning that I mentioned before.
For that matter, a lot of chatbots don't do learning either. Chatbots can sort of search a problem space, but they only remember the last 20-100k tokens. We don't have a way to encode tokens that fall out of that context window into some longer-term weights. Most of their knowledge comes from the information they learned from training data - again, cheated from humans, just like humans can now cheat off the AI. This is a recipe for intellectual stagnation.
[0] e.g. for malware analysis or videogame modding
I would say there's a big difference with AI though.
Assembly is just programming. It's a particularly obtuse form of programming in the modern era, but ultimately it's the same fundamental concepts as you use when writing JavaScript.
Do you learn more about what the hardware is doing when using assembly vs JavaScript? Yes. Does that matter for the creation and maintenance of most software? Absolutely not.
AI changes that, you don't need to know any computer science concepts to produce certain classes of program with AI now, and if you can keep prompting it until you get what you want, you may never need to exercise the conceptual parts of programming at all.
That's all well and good until you suddenly do need to do some actual programming, but it's been months/years since you last did that and you now suck at it.
I was pointing out that if you spent 2 weeks trying to find the solution but AI solved it within a day (you don’t specify how long the final solution by AI took), it sounds like those two weeks were not spent very well.
I would be interested in knowing what in those two weeks you couldn’t figure out, but AI could.
it was two weeks tossing around ideas in my head
idk why people here are laser focusing on "wow 2 weeks", I totally understand lightly thinking about an idea, motivations, feasibility, implementation, for a week or two
Because as far as you know, the "rough implementation" only works in the happy path and there are really bad edge cases that you won't catch until they bite you, and then you won't even know where to look.
An open source project wouldn't have those issues (someone at least understands all the code, and most edge cases have likely been ironed out) plus then you get maintenance updates for free.
ive got ten years at faang in distributed systems, I know a good solution when i see one. and o3 is bang on
If you thought about it for two weeks beforehand and came up with nothing, I have trouble lending much credence to that.
the commenter never said they came up with nothing, they said o3 came up with something better.
So 10 years at a FANG company, then it’s 15 years in backend at FANG, then 10 years in distributed systems, and then running interviews at some company for 5 years and raising capital as a founder in NYC. Cool. Can you share that chat from o3?
How are those mutually exclusive statements? You can't imagine someone working on backend (focused on distributed systems) for 10-15 years at a FANG company, and also being in a position to interview new candidates?
Who knows but have you read what OP wrote?
"I just used o3 to design a distributed scheduler that scales to 1M+ sxchedules a day. It was perfect, and did better than two weeks of thought around the best way to build this."
Anyone with 10 years in distributed systems at FAANG doesn’t need two weeks to design a distributed scheduler handling 1M+ schedules per day; that’s a solved problem in 2025 and basically a joke at that scale. That alone makes this person’s story questionable, and his comment history only adds to the doubt.
> and his comment history only adds to the doubt
for others following along: the comment history is mostly talking about how software engineering is dead because AI is real this time, with a few diversions to fixate on how overpriced university pedigrees are.
it's not dead, it's democratized
Who hired you and why are they paying you money?
I don't want to be a hater, but holy moley, that sounds like the absolute laziest possible way to solve things. Do you have training, skills, knowledge?
This is an HN comment thread and all, but you're doing yourself no favors. Software professionals should offer their employers some due diligence and deliver working solutions that at least they understand.
So you could stick your own copyright notice on the result, for one thing.
What's the point of holding copyright on a new technical solution to a problem that can be solved by anyone asking an existing AI, trained on last year's internet, independently of your new copyright?
Someone raised the point in another recent HN LLM thread that the primary productivity benefit of LLMs in programing is the copyright laundering.
The argument went that the main reason the now-ancient push for code reuse failed to deliver anything close to its hypothetical maximum benefit was because copyright got in the way. Result: tons and tons of wheel-reinvention, like, to the point that most of what programmers do day to day is reinvent wheels.
LLMs essentially provide fine-grained contextual search of existing code, while also stripping copyright from whatever they find. Ta-da! Problem solved.
All sorts of stuff containing no original ideas is copyrighted. It legally belongs to someone and they can license it to others, etc.
E.g. pop songs with no original chord progressions or melodies, and hackneyed lyrics are still copyrighted.
Plagiarized and uncopyrightable code is radioactive; it can't be pulled into FOSS or commercial codebases alike.
There is one very specific risk worth mentioning: AI code is a potentially existential crisis for Open Source.
An ecosystem that depends on copyright can't exist if its codebase is overrun by un-copyrightable code.
It's not an existential crisis. You just don't merge radioactive contributions.
If it sneaks in under your watchful radar, the damage control won't be fun though.
yeah unless you have very specific requirements I think the baseline here is not building/designing it yourself but setting up an off-the-shelf commercial or OSS solution, which I doubt would take two weeks...
Dunno, at work we wanted to implement a task runner that we could use to periodically queue tasks through a web UI - it would then spin up resources on AWS and track the progress and archive the results.
We looked at the existing solutions, and concluded that customizing them to meet all our requirements would be a giant effort.
Meanwhile I fed the requirement doc into Claude Sonnet, and with about 3 days of prompting and debugging we had a bespoke solution that did exactly what we needed.
the future is more custom software designed by ai, not less. a lot of frameworks will disappear once you can build sophisticated systems yourself. people are missing this
That's a future with a _lot_ more bugs.
you're assuming humans built it. also, a ton of complexity in software engineering is really due to having to fit a business domain into a string of interfaces in different libraries and technical infrastructure
What else is going to build it? Lions?
The only real complexity in software is describing it. There is no evidence that the tools are ever going to help with that. Maybe some kind of device attached directly to the brain could sidestep the parts that get in the way, but that assumes some part of the brain is more efficient than it seems through the pathways we experience it through. It could also be that the brain is just fatally flawed.
That's a future paid for by the effort of creating current frameworks, and it's a stagnant future where every "sophisticated system" is just re-hashing the last human frameworks ever created.
While impressive, I'm not convinced that improved performance on tasks of this nature is indicative of progress toward AGI. Building a scheduler is a well-studied problem space. Something like the ARC benchmark is much more indicative of progress toward true AGI, but probably still insufficient.
the other models failed at this miserably. There were also specific technical requirements I gave it related to my tech stack
The point is that AGI is the wrong bar to be aiming for. LLMs are sufficiently useful at their current state that even if it does take us 30 years to get to AGI, even just incremental improvements from now until then, they'll still be useful enough to provide value to users/customers for some companies to win big. VC funding will run out and some companies won't make it, but some of them will, to the delight of their investors. AGI when? is an interesting question, but might just be academic. we have self driving cars, weight loss drugs that work, reusable rockets, and useful computer AI. We're living in the future, man, and robot maids are just around the corner.
I’ve had similar things over the last couple days with o3. It was one-shotting whole features into my Rust codebase. Very impressive.
I remember before ChatGPT, smart people would come on podcasts and say we were 100 or 300 years away from AGI.
Then we saw GPT shock them. The reality is these people have no idea, it’s just catchy to talk this way.
With the amount of money going into the problem and the linear increases we see over time, it’s much more likely we see AGI sooner than later.
I find now that I quickly bucket people into "have not/have barely used the latest AI models" or "trolls" when they express a belief that current LLMs aren't intelligent.
You can put me in that bucket then. It's not true: I've been working with AI almost daily for 18 months, and I KNOW it's nowhere close to being intelligent. It doesn't look like your buckets are based on truth so much as on appeal; I disagree with your assessment, so you assume I don't know what I'm talking about. I hope you can understand that other people who know just as much as you (or even more) can disagree without being wrong or uninformed. LLMs are amazing, but they're nowhere close to intelligent.
Call me back when ChatGPT isn't hallucinating half the outputs it gives me.
Write me when humans achieve hallucination rates lower than ChatGPT's.
Designing a distributed scheduler is a solved problem, of course an LLM was able to spit out a solution.
as noted elsewhere, all other frontier models failed miserably at this
It is unsurprising that some lossily-compressed-database search programs might be worse for some tasks than other lossily-compressed-database search programs.
That doesn't mean the one that manages to spit it out of its latent space is close to AGI. I wonder how consistently that specific model could. If you tried 10 LLMs, maybe all 10 of them could have spit out the answer 1 out of 10 times. Correct problem retrieval by one LLM and failure by the others isn't a great argument for near-AGI. But LLMs will be useful in limited domains for a long time.
“It does something well” ≠ “it will become AGI”.
Your anecdotal example isn't more convincing than “This machine cracked Enigma's messages in less time than an army of cryptanalysts over a month, surely we're gonna reach AGI by the end of the decade” would have been.
Wow, 12 per second on average.
I'm not sure what your point is in the context of the AGI topic.
im a tenured engineer, spent a long time at faang. was casually beat this morning by a far superior design from an llm.
is this because the LLM actually reasoned its way to a better design, or because it found a better design in its "database", scoured from another tenured engineer?
Does it matter if the thing a submarine does counts as "swimming"?
We get paid to solve problems, sometimes the solution is to know an existing pattern or open source implementation and use it. Aguably it usually is: we seldom have to invent new architectures, DSLs, protocols, or OSes from scratch, but even those are patterns one level up.
Whatever the AI is inside, doesn't matter: this was it solving a problem.
who cares?
Ignoring the copyright issues, credit issues, and any ethical concerns... this approach doesn't work for anything not in the "database", it's not AGI and the tangential experience is barely relevant to the article.
Most people talking about AI and economic growth have a vested interest in saying it will increase economic growth, but they don't mention that under the world's current economic system, most if not all of that growth will go to the top 0.0001% of the population.
30 years away seems rather unlikely to me, if you define AGI as being able to do the stuff humans do. I mean, like Dwarkesh says:
>We’ve gone from Chat GPT two years ago to now we have models that can literally do reasoning, are better coders than me, and I studied software engineering in college.
Also we've recently reached the point where relatively reasonable hardware can do as much compute as the human brain so we just need some algorithms.
Can someone throw some light on this Dwarkesh character? He landed a Zucc podcast pretty early on... how connected is he? Is he an industry plant?
He's awesome.
I listened to Lex Fridman for a long time, and there were a lot of critiques of him (Lex) as an interviewer, but since the guests were amazing, I never really cared.
But after listening to Dwarkesh, my eyes are opened (or maybe my soul). It doesn't matter that I hadn't heard of many of his guests, because he knows exactly the right questions to ask. He seems to have genuine curiosity for what the guest is saying, and will push back if something doesn't make sense to him. Very much recommend.
He is one of the most prepared podcasters I’ve ever come across. He puts all other mainstream podcasts to deep shame.
He spends weeks reading everything by his guests prior to the interview, asks excellent questions, pushes back, etc.
He certainly has blind spots and biases just like anyone else. For example, he is very AI scale-pilled. However, he will have people like today’s guests on which contradict his biases. This is something a host like Lex could never do apparently.
Dwarkesh is up there with Sean Carroll’s podcast as the most interesting and most intellectually honest, in my view.
https://archive.ph/IWjYP
He was covered in the Economist recently -- I hadn't heard of him till now, so I imagine it's not just AI-slop content.
And in 30 years it will be another 30 years away.
LLMs are so incredibly useful and powerful but they will NEVER be AGI. I actually wonder if the success of (and subsequent obsession with) LLMs is putting true AGI further out of reach. All that these AI companies see are the $$$. When the biggest "AI Research Labs" like OpenAI shifted to product-izing their LLM offerings I think the writing was on the wall that they don't actually care about finding AGI.
Got it. So this is now a competition between...
1. Fusion power plants 2. AGI 3. Quantum computers 4. Commercially viable cultured meat
May the best "imminent" fantasy tech win!
Of those 4 I expect commercial cultured meat far sooner than the rest.
People over-estimate the short term and under-estimate the long term.
Compound growth starting from 0 is... always 0. Current LLMs have 0 general reasoning ability
We haven't even taken the first step towards AGI
WTF, my calculator in high school was already a step towards AGI.
0 and 0.0001 may be difficult to distinguish.
You need to show evidence of that 0.0001 first otherwise you're going off blind faith
I didn't make a claim either way.
LLMs may well reach a closed endpoint without getting to AGI - this is my personal current belief - but people are certainly motivated to work on AGI
Oh for sure. I'm just fighting against the AGI hype. If we survive another 10,000 years I think we'll get there eventually but it's anyone's guess as to when
People overestimate outcomes and underestimate timeframes
People will keep improving LLMs, and by the time they are AGI (less than 30 years), you will say, "Well, these are no longer LLMs."
Will LLMs approach something that appears to be AGI? Maybe. Probably. They're already "better" than humans in many use cases.
LLMs/GPTs are essentially "just" statistical models. At this point the argument becomes more about philosophy than science. What is "intelligence?"
If an LLM can do something truly novel with no human prompting, with no directive other than something it has created for itself - then I guess we can call that intelligence.
How many people do you know who are capable of doing something truly novel? Definitely not me, I'm just an average phd doing average research.
Literally every single person I know that is capable of holding a pen or typing on a keyboard can create something new.
Something new != truly novel. ChatGPT creates something new every time I ask it a question.
adjective: novel
definition: new or unusual in an interesting way.
ChatGPT can create new things, sure, but it does so at your directive. It doesn't do that because it wants to which gets back to the other part of my answer.
When an LLM can create something without human prompting or directive, then we can call that intelligence.
What does intelligence have to do with having desires or goals? An amoeba can do stuff on its own but it's not intelligent. I can imagine a god-like intelligence that is a billion times smarter and more capable than any human in every way, and it could just sit idle forever without any motivation to do anything.
Does the amoeba make choices?
Do you make that choice? Did I?
I did a really fun experiment the other night. You should try it.
I was a little bored of the novel I have been reading so I sat down with Gemini and we collaboratively wrote a terrible novel together.
At the start I was prompting it a lot about the characters and the plot, but eventually it started writing longer and longer chapters by itself. Characters were being killed off left, right, and center.
It was hilariously bad, but it was creative and it was fun.
I'm a lowly high school diploma holder. I thought the point of getting a PhD meant you had done something novel (your thesis).
Is that wrong?
My phd thesis, just like 99% of other phd theses, does not have any “truly novel” ideas.
Just because it's something that no one has done yet, doesn't mean that it's not the obvious-to-everyone next step in a long, slow march.
AI manufacturers aren't comparing their models against most people; they now say it's "smarter than 99% of people" or "performs tasks at a PhD level".
Look, your argument ultimately reduces down to goalpost-moving what "novel" means, and you can position those goalposts anywhere you want depending on whether you want to push a pro-AI or anti-AI narrative. Is writing a paragraph that no one has ever written before "truly novel"? I can do that. AI can do that. Is inventing a new atomic element "truly novel"? I can't do that. Humans have done that. AI can't do that. See?
Isn't the human brain also "just" a big statistical model as far as we know? (very loosely speaking)
What the hell is general intelligence anyway? People seem to think it means human-like intelligence, but I can't imagine we have any good reason to believe that our kinds of intelligence constitute all possible kinds of intelligence--which, from the words, must be what "general" intelligence means.
It seems like even if it's possible to achieve GI, artificial or otherwise, you'd never be able to know for sure that that's what you've done. It's not exactly "useful benchmark" material.
> What the hell is general intelligence anyway?
OpenAI used to define it as "a highly autonomous system that outperforms humans at most economically valuable work."
Now they use a Level 1-5 scale: https://briansolis.com/2024/08/ainsights-openai-defines-five...
So we can say AGI is "AI that can do the work of Organizations":
> These “Organizations” can manage and execute all functions of a business, surpassing traditional human-based operations in terms of efficiency and productivity. This stage represents the pinnacle of AI development, where AI can autonomously run complex organizational structures.
There's nothing general about AI-as-CEO.
That's the opposite of generality. It may well be the opposite of intelligence.
An intelligent system/individual reliably and efficiently produces competent, desirable, novel outcomes in some domain, avoiding failures that are incompetent, non-novel, and self-harming.
Traditional computing is very good at this for a tiny range of problems. You get efficient, very fast, accurate, repeatable automation for a certain small set of operation types. You don't get invention or novelty.
AGI will scale this reliably across all domains - business, law, politics, the arts, philosophy, economics, all kinds of engineering, human relationships. And others. With novelty.
LLMs are clearly a long way from this. They're unreliable, they're not good at novelty, and a lot of what they do isn't desirable.
They're barely in sight of human levels of achievement - not a high bar.
The current state of LLMs tells us more about how little we expect from human intelligence than about what AGI could be capable of.
Apparently OpenAI now just defines it monetarily as "when we can make $100 billion from it." [0]
[0] https://gizmodo.com/leaked-documents-show-openai-has-a-very-...
That's what "economically valuable work" means.
The way some people confidently assert that we will never create AGI, I am convinced the term essentially means "machine with a soul" to them. It reeks of religiosity.
I guess if we exclude those, then it just means the computer is really good at doing the kind of things which humans do by thinking. Or maybe it's when the computer is better at it than humans and merely being as good as the average human isn't enough (implying that average humans don't have natural general intelligence? Seems weird.)
When we say "the kind of things which humans do by thinking", we should really consider that in the long arc of history. We've bootstrapped ourselves from figuring out that flint is sharp when it breaks, to being able to do all of the things we do today. There was no external help, no pre-existing dataset trained into our brains, we just observed, experimented, learned and communicated.
That's general intelligence - the ability to explore a system you know nothing about (in our case, physics, chemistry and biology) and then interrogate and exploit it for your own purposes.
LLMs are an incredible human invention, but they aren't anything like what we are. They are born as the most knowledgeable things ever, but they die no smarter.
>you'd never be able to know for sure that thats what you've done.
Words mean what they're defined to mean. Talking about "general intelligence" without a clear definition is just woo, muddy thinking that achieves nothing. A fundamental tenet of the scientific method is that only testable claims are meaningful claims.
Looking back at the CUDA, deep learning, and now LLM hype cycles, I would bet it'll be cycles of giant groundbreaking leaps followed by periods of complete stagnation, rather than LLMs improving 3% per year for the coming 30 years.
They'll get cheaper and less hardware-demanding, but the quality improvements are getting smaller and smaller, sometimes hardly noticeable outside of benchmarks.
What was the point of this comment? It's confrontational and doesn't add anything to the conversation. If you disagree, you could have just said that, or not commented at all.
There's been a complaint for several decades that "AI can never succeed" - because when, say, expert systems are developed from AI research and become capable of doing useful things, the nay-sayers say "That's not AI, that's just expert systems".
This is somewhat defensible, because what the non-AI-researcher means by AI - which may be AGI - is something more than expert systems by themselves can deliver. It is possible that "real AI" will be the combination of multiple approaches, but so far all the reductionist approaches (that expert systems, say, are all that it takes to be an AI) have proven to be inadequate compared to what the expectations are.
The GP may have been riffing off of this "that's not AI" issue that goes way back.
The people who go around saying "LLMs aren't intelligent" while refusing to define exactly what they mean by intelligence (and hence not making a meaningful/testable claim) add nothing to the conversation.
I'll happily say that LLMs aren't intelligent, and I'll give you a testable version of it.
An LLM cannot be placed in a simulated universe, with an internally consistent physics system of which it knows nothing, and go from its initial state to a world-spanning civilization that understands and exploits a significant amount of the physics available to it.
I know that is true because if you place an LLM in such a universe, it's just a gigantic matrix of numbers that doesn't do anything. It's no more or less intelligent than the number 3 I just wrote on a piece of paper.
You can go further than that and provide the LLM with the ability to request sensory input from its universe and it's still not intelligent because it won't do that, it will just be a gigantic matrix of numbers that doesn't do anything.
To make it do anything in that universe you would have to provide it with intrinsic motivations and a continuous run loop, but that's not really enough because it's still a static system.
To really bootstrap it into intelligence you'd need to have it start with a very basic set of motivations that it's allowed to modify, and show that it can take that starting condition and grow beyond them.
You will almost immediately run into the problem that LLMs can't learn beyond their context window, because they're not intelligent. Every time they run a "thought" they have to be reminded of every piece of information they previously read/wrote since their training data was fixed in a matrix.
I don't mean to downplay the incredible human achievement of reaching a point in computing where we can take the sum total of human knowledge and process it into a set of probabilities that can regurgitate the most likely response to a given input, but it's not intelligence. Us going from flint tools to semiconductors, vaccines and spaceships, is intelligence. The current architectures of LLMs are fundamentally incapable of that sort of thing. They're a useful substitute for intelligence in a growing number of situations, but they don't fundamentally solve problems, they just produce whatever their matrix determines is the most probable response to a given input.
OK, but the people who go around saying "LLMs are intelligent" are in the same boat...
Doesn't even matter. The capabilities of the AI that's out NOW will take a decade or more to digest.
I feel like it's already been pretty well digested and excreted for the most part, now we're into the re-ingestion phase until the bubble bursts.
I am a tech founder who spends most of my day in my own startup deploying LLM-based tools into my own operations, and I'm maybe 1% of the way through the roadmap I'd like to build with what exists and is possible to do today.
What does your roadmap have to do with the capabilities?
LLMs still hallucinate and make simple mistakes.
And the progress seems to be in the benchmarks only
https://news.ycombinator.com/item?id=43603453
The parent was contradicting the idea that the existing AI capabilities have already been "digested". I agree with them btw.
> And the progress seams to be in the benchmarks only
This seems to be mostly wrong given peoples' reactions to e.g. o3 that was released today. Either way, progress having stalled for the last year doesn't seem that big considering how much progress there has been for the previous 15-20 years.
> and I'm maybe 1% of the way through the roadmap I'd like to build with what exists and is possible to do today.
How do you know they are possible to do today? Errors get much worse at scale, especially when systems start to depend on each other, so it is hard to say what can be automated and what can't.
Like if you have a process A->B, automating A might be fine as long as a human does B and vice versa, but automating both might not be.
100% this. The rearrangement of internal operations has only started and there is just sooo much to do.
Not even close. Software can now understand human language... this is going to mean computers can be a lot more places than they ever could. Furthermore, software can now understand the content of images... eventually this will have a wild impact on nearly everything.
It doesn't understand anything, there is no understanding going on in these models. It takes input and generates output based on the statistical math created from its training set. It's Bayesian statistics and vector/matrix math. There is no cogitation or actual understanding.
This is insanely reductionist and mindless regurgitation of what we already know about how the models work. Understanding is a spectrum, it's not binary. We can measurably show that there is, in fact, some kind of understanding.
If you explain a concept to a child you check for understanding by seeing if the output they produce checks out with your understanding of the concept. You don't peer into their brain and see if there are neurons and consciousness happening
The method of verification has no bearing on the validity of the conclusion. I don't open a child's head because there are side effects on the functioning of the child post brain-opening. However I can look into the brain of an AI with no such side effects.
This is an example I saw 2 days ago without even searching. Here ChatGPT is telling someone that it independently ran a benchmark on its MacBook: https://pbs.twimg.com/media/Goq-D9macAApuHy?format=jpg
I'm reasonably sure ChatGPT doesn't have a MacBook, and didn't really run the benchmarks. But it DID produce exactly what you would expect a human to say, which is what it is programmed to do. No understanding, just rote repetition.
I won't post more because there are a billion of them. LLMs are great, but they're not intelligent, they don't understand, and the output still needs to be validated before use. We have a long way to go, and that's ok.
Understand? It fails to understand a rephrasing of a math problem a five-year-old can solve... They get much better at training to the test from memory the bigger they get. Likewise you can get some emergent properties out of them.
Really it does not understand a thing, sadly. It can barely analyze language and spew out a matching response chain.
To actually understand something, it must be capable of breaking it down into constituent parts, synthesizing a solution and then phrasing the solution correctly while explaining the steps it took.
And that's not something even a huge 62B LLM with a notepad chain of thought (like o3, GPT-4.1 or Claude 3.7) can really do properly.
Further, it has to be able to operate on the sub-token level. Say, what happens if I run together truncated versions of words or sentences? Even a chimpanzee can handle that (in sign language).
It cannot do true multimodal IO either. You cannot ask it to respond with at least two matching syllables per word and two pictures of syllables per word, in addition to letters. This is a task a 4 year old can do.
Prediction alone is not indicative of understanding. Pasting together answers like lego is also not indicative of understanding. (Afterwards ask it how it felt about the task. And to spot and explain some patterns in a picture of clouds.)
To push this metaphor, I'm very curious to see what happens as new organic training material becomes increasingly rare, and AI is fed nothing but its own excrement. What happens as hallucinations become actual training data? Will Google start citing sources for their AI overviews that were in turn AI-generated? Is this already happening?
I figure this problem is why the billionaires are chasing social media dominance, but even on social media I don't know how they'll differentiate organic content from AI content.
maybe silicon valley and the world move at basically different rates
idk AI is just a speck outside of the HN and SV info-bubbles
still early to mass adoption like the smartphone or the internet, mostly nerds playing w it
I really disagree. I had a masseuse tell me how he uses ChatGPT, told it a ton of info about himself, and now he uses it for personalized nutrition recommendations. I was in Atlanta over the weekend recently, at a random brunch spot, and overheard some _very_ not SV/tech folks talk about how they use it everyday. Their user growth rate shows this -- you don't hit hundreds of millions of people and have them all be HN/SV info-bubble folks.
I see ChatGPT as the new Google, not the new Nuclear Power Source. maybe im naive
https://www.instagram.com/reel/DIep4wLvvVa/
https://www.instagram.com/reel/DE0lldzTHyw/
These may be satire, but I feel like they capture what’s happening. It’s more than Google.
> idk AI is just a speck outside of the HN and SV info-bubbles
> still early to mass adoption like the smartphone or the internet, mostly nerds playing w it
Rather: outside of the HN and SV bubbles, these A"I"s, and the way people can fall for this kind of hype and dupery, are commonly ridiculed.
This is accurate, doubly so for the people who treat it like a religion and fear the coming of their machine god. This, when what we actually have are (admittedly sometimes impressive) next-token predictors that you MUST double-check because they routinely hallucinate.
Then again I remember when people here were convinced that crypto was going to change the world, democratize money, end fiat currency, and that was just the start! Programs of enormous complexity and freedom would run on the blockchain, games and hell even societies would be built on the chain.
A lot of people here are easily blinded by promises of big money coming their way, and there's money in loudly falling for successive hype storms.
Yeah, I'm old enough to remember all the masses who mocked the Internet and smartphones too.
I'm not mocking AI. The internet and smartphones fundamentally changed how societies operate, and AI will probably do so too, but why the doomerism? Isn't that how tech works? We invent new tech, use it, and so on?
What makes AI fundamentally different than smartphones or the internet? Will it change the world? Probably, already has.
Will it end it as we know it? Probably not?
That doesn’t match what I hear from teachers, academics, or the librarians complaining that they are regularly getting requests for things which don’t exist. Everyone I know who’s been hiring has mentioned spammy applications with telltale LLM droppings, too.
I can see how students would be among the first users of this kind of tech; I'm not in those spheres myself, but I believe you.
As for spammy applications, hasn't that always been the case, just made worse now by how cheap it is to generate plausible data?
I think ghost applicants already existed before AI, where consulting companies would pool people to try to land a position on a high-paying job and just do consultancy/outsourcing work underneath; many such cases before the advent of AI.
AI just accelerates it, no?
Yes, AI is effectively a very strong catalyst because it drives down the cost so much. Kids cheated before but it was more work and higher risk, people faked images before but most were too lazy to make high quality fakes, etc.
ChatGPT has 400M weekly users. https://backlinko.com/chatgpt-stats
have you wondered how many of these are bots leveraging free chatgpt with proxied vpn IPs?
I'm a paying ChatGPT user, but in my personal circles I know no one who isn't a developer who is also one.
maybe im an exception
edit: I guess 400M global users, compared to the US's ~300M citizens, isn't out of scope for such a highly used product among a population of 7B
But social media like Instagram or FB felt like they had network effects going for them, making their growth faster,
and maybe that's why OpenAI is exploring that idea, idk
Pretty much everyone in high school or college is using them. Also everyone whose job is to produce some kind of content or data analysis. That's already a lot of people.
Agreed. A hot take I have is that AI is over-hyped in its long-term capabilities, but under-hyped in its short-term ones. We're at the point, today or within the next twelve months, where all the frontier labs could stop investing any money into research and they'd still see revenue growth via usage of what they've built, and humanity would still be significantly more productive every year, year-over-year, for quite a while because of it.
The real driver of productivity growth from AI systems over the next few years isn't going to be model advancements; it'll be the more traditional software engineering, electrical engineering, robotics, etc systems that get built around the models. Phrased another way: If you're an AI researcher thinking you're safe but the software engineers are going to lose their jobs, I'd bet every dollar on reality being the reverse of that.
Apparently Dwarkesh's podcast is a big hit in SV -- it was covered by the Economist just recently. I thought the "All In" podcast was the voice of tech, but their content has been going political with MAGA lately and their episodes are basically shouting matches with their guests.
And for folks who want to read rather than listen to a podcast, why not create an article (they are using Gemini) rather than just posting the whole transcript? Who is going to read a 60 min long transcript?
I'll take the "under" on 30 years. Demis Hassabis (who has more credibility than whoever these 3 people are combined) says 5-10 years: https://time.com/7277608/demis-hassabis-interview-time100-20...
That's in line with Ray Kurzweil sticking to his long-held predictions: 2029 for AGI and 2045 for the singularity.
A lot of Kurzweil's predictions are nowhere close to coming correct though.
For example, he thought by 2019 we'd have millions of nanorobots in our blood, fighting disease and improving cognition. As near as I can tell we are not tangibly closer to that than we were when he wrote about it 25 years ago. By 2030, he expected humans to be immortal.
I’m sticking with Kurzweil’s predictions as well, his basic premise of extrapolating from compute scaling has been surprisingly robust.
~2030 is also roughly the Metaculus community consensus: https://www.metaculus.com/questions/5121/date-of-artificial-...
We will never have the required compute by then.
Fusion power will arrive first. And, it will be needed to power the Cambrian explosion of datacenters just for weak AI.
I could be wrong, but AGI may be a cold fusion or flying cars boondoggle: chasing a dream that no one needs, costs too much, or is best left unrealized.
Related: https://en.wikipedia.org/wiki/AI_effect
I "love" how the interviewer keeps conflating intelligence with "Hey OpenAI will make $100b"
1. LLM interactions can feel real. Projections and psychological mirroring are very real.
2. I believe that AI researchers will require some level of embodiment to demonstrate:
a. ability to understand the physical world.
b. make changes to the physical world.
c. predict the outcome to changes in the physical world.
d. learn from the success or failure of those predictions and update their internal model of the external world.
---
I cannot quickly find proposed tests in this discussion.
You can’t put a date on AGI until the required technology is invented and that hasn’t happened yet.
Huh, so it should be ready around the same time as practical fusion reactors then. I'll warm up the car.
Explosive growth? Interesting. But at some point, human civilization hits a saturation point. There’s only so much people can eat, wear, drive, stream, or hoard. Extending that logic, there’s a natural ceiling to demand - one that even AGI can’t code its way out of.
Sure, you might double the world economy for a decade, but then what? We’ll run out of people to sell things to. And that’s when things get weird.
To sustain growth, we’d have to start manufacturing demand itself - perhaps by turning autonomous robots into wage-earning members of society. They’d buy goods, subscribe to services, maybe even pay taxes. In effect, they become synthetic consumers fueling a post-human economy.
I call this post-human consumerism. It’s when the synthesis of demand would hit the next gear - if we keep moving in this direction.
Anthropic's research on how LLMs reason shows that LLMs are quite flawed.
I wonder if we can use an LLM to deeply analyze and fix the flaws.
The new fusion power
That's 20 years away.
It was also 20 years away 30 years ago.
One thing in the podcast I found really interesting from a personal pov was:
> I remember talking to a very senior person who’s now at Anthropic, in 2017. And then he told various people that they shouldn’t do a PhD because by the time they completed it everyone will be automated.
Don’t tell young people things like this. Predicting the future is hard, and it is the height of hubris to think otherwise.
I remember as a teen, I had thought that I was supposed to be a pilot all my life. I was ready to enroll in a school with a two year program.
However, I was also into computers. One person I looked up to in that world said to me “don’t be a pilot, it will all be automated soon and you will just be bus drivers, at best.” This entirely took the wind out of my piloting sails.
This was in the early 90’s, and 30 years later, it is still wrong.
Would we even recognise it if it arrived? We'd recognise human-level intelligence, probably, but that's specialised. What would general intelligence even look like?
If/when we will have AGI, we will likely have something fundamentally superhuman very soon after, and that will be very recognizable.
This is the idea of "hard takeoff": because of the way we can scale computation, there will only ever be a very short time when the AI is roughly human-level. Even if there are no fundamental breakthroughs, at the very least silicon can be run much faster than meat, and instead of compensating for narrower width with execution speed like current AI systems do (no AI datacenter is even close to the width of a human brain), you could just spend the money to make your AI system 2x wider and run it at 2x the speed. What would a good engineer (or a good team of engineers) be able to accomplish if they could have 10 times the workdays in a week that everyone else has?
This is often conflated with the idea that AGI is very imminent. I don't think we are particularly close to that yet. But I do think that if we ever get there, things will get very weird very quickly.
Would AGI be recognisable to us? When a human pushes over an anthill, what do the ants think happened? Do they even know the anthill is gone? Did they have a concept of the anthill as a huge edifice, or did they only know earth to squeeze through and some biological instinct?
If general intelligence arrived and did whatever general intelligence would do, would we even see it? Or would there just be things that happened that we just can't comprehend?
But that's not ten times the workdays. That's just taking a bunch of speed and sitting by yourself worrying about something. Results may be eccentric.
Though I don't know what you mean by "width of a human brain".
It's ten times the time to work on a problem. Taking a bunch of speed does not make your brain work faster, it just messes with your attention system.
> Though I don't know what you mean by "width of a human brain".
A human brain contains ~86 billion neurons connected to each other through ~100 trillion synapses. All of these parts work genuinely in parallel, all working together at the same time to produce results.
When an AI model is run on a GPU, a single ALU can do the work analogous to a neuron activation much faster than a real neuron. But a GPU does not have 86 billion ALUs, it only has ~<20k. It "simulates" a much wider, parallel processing system by streaming in weights and activations and doing them 20k at a time. Large AI datacenters have built systems with many GPUs working in parallel on a single model, but they are still a tiny fraction of the true width of the brain, and cannot reach anywhere near the same number of neuron activations per second that a brain can.
If/when we have a model that can actually do complex reasoning tasks such as programming and designing new computers as well as a human can, with no human helping to prompt it, we can just scale it out to give it more hours per day to work, all the way until every neuron has a real computing element to run it. The difference in experience for such a system for running "narrow" vs running "wide" is just that the wall clock runs slower when you are running wide. That is, you have more hours per day to work on things.
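A back-of-the-envelope version of that width gap, using only the rough figures quoted above (treat every number as an order-of-magnitude assumption, nothing more):

    # Rough "width" comparison using the figures quoted above.
    # This says nothing about FLOPS or intelligence, only parallelism.
    neurons  = 86e9    # ~86 billion neurons in a human brain
    synapses = 100e12  # ~100 trillion synapses
    gpu_alus = 20e3    # ~20k ALUs on a single GPU (order of magnitude)

    print(f"brain vs one GPU, by neuron count: ~{neurons / gpu_alus:,.0f}x wider")
    # -> roughly 4,300,000x; counting synapses makes the gap even larger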
That's what I was trying to express, though: if "the wall clock runs slower", that's less useful than it sounds, because all you have to interact with is yourself.
I exaggerate somewhat. You could interact with databases and computers (if you can bear the lag and compile times). You could produce a lot of work, and test it in any internal way that you can think of. But you can't do outside world stuff. You can't make reality run faster to keep up with your speedy brain.
You can interact with yourself, and everyone else like you.
There is a lot of important work where humans thinking about things is the bottleneck.
Possibly. Here we imagine a world of artificial people - well, a community, depending how many of these people it's feasible to maintain - all thinking very fast and communicating in some super-low-latency way. (Do we revive dial-up? Or maybe they all live in the same building?) And they presumably have bodies, at least one each. But how fast can they do things with their bodies? Physics becomes another bottleneck. They'd need lots of entertainment to keep them in a good mood while they wait for just about any real-world process to complete.
I still contend that it would be a somewhat mediocre super power.
Mustafa Suleyman says AGI is when a (single) machine can perform every cognitive task better than the best humans. That is significantly different from OpenAI's definition (...when we make enough $$$$$, it's AGI).
Suleyman's book "The Coming Wave" talks about Artificial Capable Intelligence (ACI) - between today's LLMs (== "AI" now) and AGI. AI systems capable of handling a lot of complex tasks across various domains, yet not being fully general. Suleyman argues that ACI is here (2025) and will have huge implications for society. These systems could manage businesses, generate digital content, and even operate core government services -- as is happening on a small scale today.
He also opines that these ACIs give us plenty of frontier to be mined for amazing solutions. I agree, what we have already has not been tapped-out.
His definition, to me, is early ASI. If a program is better than the best humans, then we ask it how to improve itself. That's what ASI is.
The clearest thinker alive today on how to get to AGI is, I think, Yann LeCun. He said, paraphrasing: If you want to build an AGI, do NOT work on LLMs!
Good advice; and go (re-?) read Minsky's "Society of Mind".
We sort of are able to recognize Nobel-worthy breakthroughs
One of the many definitions I have for AGI is being able to create the proofs for the 2030, 2050, 2100, etc Nobel Prizes, today
A sillier one I like is that AGI would output a correct proof that P ≠ NP on day 1
Isn't AGI just "general" intelligence, as in a regular-human, Turing-test kind of deal?
aren't you thinking about ASI/superintelligence, something way more capable of outdoing humans?
Yes, a general consensus is AGI should be able to perform any task an average human is able to perform. Definitely nothing of Nobel prize level.
A bit poorly named; not really very general. AHI would be a better name.
AAI would be enough for me, although there are people who deny intelligence of non-human animals.
Another general consensus is that humans possess general intelligence.
Yes, we do seem to have a very high opinion of ourselves.
> Yes, a general consensus is AGI should be able to perform any task an average human is able to perform.
The goalposts are regularly moved so that AI companies and their investors can claim/hype that AGI will be around in a few years. :-)
I learned the definition I provided back in mid 90s, and it hasn't really changed since then.
There's a test for this: https://arcprize.org/arc-agi
Basically a captcha. If there's something that humans can easily do that a machine cannot, full AGI has not been achieved.
AI will face the same limitations we face: availability of information and the non deterministic nature of the world.
What do monkeys think about humans?
you'd be able to give them a novel problem and have them generalize from known concepts to solve it. here's an example:
1 write a specification for a language in natural language
2 write an example program
can you feed 1 into a model and have it produce a compiler for 2 that works as reliably as a classically built one?
I think that's a low bar that hasn't been approached yet. until then I don't see evidence of language models' ability to reason.
I'd accept that as a human kind of intelligence, but I'm really hoping that AGI would be a bit more general. That clever human thinking would be a subset of what it could do.
You could ask Gemini 2.5 to do that today and it's well within its capabilities, just as long as you also let it write and run unit tests, as a human developer would.
AGI isn't ASI; it's not supposed to be smarter than humans. The people who say AGI is far away are unscientific woo-mongers, because they never give a concrete, empirically measurable definition of AGI. The closest we have is Humanity's Last Exam, which LLMs are already well on the path to acing.
Consider this: being born/trained in 1900, if that were possible, and given a year to adapt to the world of 2025, how well would an LLM do on any test? Compare that to how a 15-year-old human in the same situation would do.
I'd expect it to be generalised, where we (and everything else we've ever met) are specialised. Our intelligence is shaped by our biology and our environment; the limitations on our thinking are themselves concepts the best of us can barely glimpse. Some kind of intelligence that inherently transcends its substrate.
What that would look like, how it would think, the kind of mental considerations it would have, I do not know. I do suspect that declaring something that thinks like us would have "general intelligence" to be a symptom of our limited thinking.
Is it just me, or is the signal-to-noise ratio of all these cheerleader tech podcasts needle-in-a-haystack level? In general, I really miss the podcast scene from 10 years ago: less polished but more human, and with reasonable content. Not this speculative blabber that seems designed to generate clickbait clips. I don't know what happened a few years ago, but even solid podcasts are practically garbage now.
I used to listen to podcasts daily for at least an hour. Now I'm stuck with uploading blogs and pdfs to Eleven Reader. I tried the Google thing to make a podcast but it's very repetitive and dumb.
Again?
Thirty years. Just enough time to call it quits and head to Costa Rica.
LLMs are basically a library that can talk.
That’s not artificial intelligence.
There’s increasing evidence that LLMs are more than that. Especially work by Anthropic has been showing how to trace the internal logic of an LLM as it answers a question. They can in fact reason over facts contained in the model, not just repeat already seen information.
A simple example is how LLMs do math. They are not calculators and have not memorized every sum in existence. Instead they deploy a whole set of mental math techniques that were discovered at training time. For example, Claude uses a special trick for adding 2 digit numbers ending in 6 and 9.
Many more examples in this recent research report, including evidence of future planning while writing rhyming poetry.
https://www.anthropic.com/research/tracing-thoughts-language...
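As a toy rendering of the kind of "trick" described there (the paper traces it as overlapping internal features, not explicit code; this is purely for intuition): one path handles the tens plus the carry, while another "knows" that ...6 + ...9 always ends in 5.

    # Toy illustration only: the quoted case of adding a two-digit number
    # ending in 6 to one ending in 9. One path handles the tens plus the
    # carry; another path fixes the last digit, which is always 5.
    def add_6_and_9(a: int, b: int) -> int:
        assert a % 10 == 6 and b % 10 == 9, "illustration covers only this case"
        tens = (a // 10 + b // 10 + 1) * 10  # +1 is the carry from 6 + 9 = 15
        return tens + 5                      # units digit is always 5

    print(add_6_and_9(36, 59))  # 95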
> sometimes this "chain of thought" ends up being misleading; Claude sometimes makes up plausible-sounding steps to get where it wants to go. From a reliability perspective, the problem is that Claude’s "faked" reasoning can be very convincing.
If you ask the LLM to explain how it got the answer the response it gives you won't necessarily be the steps it used to figure out the answer.
I don’t think that is the core of this paper. If anything the paper shows that LLMs have no internal reasoning for math at all. The example they demonstrate is that it triggers the same tokens in randomly unrelated numbers. They kind of just “vibe” there way to a solution
We invented a calculator for language-like things, which is cool, but it’s got a lot of people really mixed up.
The hype men trying to make a buck off them aren’t helping, of course.
Grammar engines. Or value matrix engines.
Every time I try to work with them I lose more time than I gain. Net loss every time. Immensely frustrating. If I focus it on a small subtask I can gain some time (rough draft of a test). Anything more advanced and it's a monumental waste of time.
They are not even good librarians. They fail miserably at cross referencing and contextualizing without constant leading.
I've only really been experimenting with them for a few days, but I'm kind of torn on it. On the one hand, I can see a lot of things it could be useful for, like indexing all the cluttered files I've saved over the years and looking things up for me faster than I could find|grep. Heck, yesterday I asked one a relationship question, and it gave me pretty good advice. Nothing I couldn't have gotten out of a thousand books and magazines, but it was a lot faster and more focused than doing that.
On the other hand, the prompt/answer interface really limits what you can do with it. I can't just say, like I could with a human assistant, "Here's my calendar. Send me a summary of my appointments each morning, and when I tell you about a new one, record it in here." I can script something like that, and even have the LLM help me write the scripts, but since I can already write scripts, that's only a speed-up at best, not anything revolutionary.
I asked Grok what benefit there would be in having a script fetch the weather forecast data, pass it to Grok in a prompt, and then send the output to my phone. The answer was basically, "So I can say it nicer and remind you to take an umbrella if it sounds rainy." Again, that's kind of neat, but not a big deal.
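For what it's worth, the "glue script" version of that really is small; the sketch below assumes a hypothetical ask_llm() helper, a send_to_phone() stub, and a plain-text calendar file, since the actual API and calendar source will vary.

    # Hypothetical glue: read today's appointments from a plain-text file,
    # ask an LLM to phrase a morning summary, and hand it to some notifier.
    # ask_llm() and send_to_phone() are stand-ins, not real library calls.
    from datetime import date

    def ask_llm(prompt: str) -> str:
        raise NotImplementedError("call whatever LLM API you actually use")

    def send_to_phone(message: str) -> None:
        raise NotImplementedError("email, push notification, SMS, etc.")

    def morning_summary(calendar_path: str = "calendar.txt") -> None:
        today = date.today().isoformat()  # assumes lines start with YYYY-MM-DD
        with open(calendar_path) as f:
            todays_lines = [line for line in f if line.startswith(today)]
        prompt = ("Summarize these appointments in two friendly sentences and "
                  "flag anything I should prepare for:\n" + "".join(todays_lines))
        send_to_phone(ask_llm(prompt))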
Maybe I just need to experiment more to see a big advance I can make with it, but right now it's still at the "cool toy" stage.
I feel the opposite.
LLMs are unbelievably useful for me - never have I had a tool more powerful to assist my brain work. I use LLMs for work and play constantly, every day.
It pretends to sound like a person and can mimic speech and write and is all around perhaps the greatest wonder created by humanity.
It’s still not artificial intelligence though, it’s a talking library.
Fair. For engineering work they have been a terrible drain on me save for the most minor autocomplete. Its recommendations are often deeply flawed or almost totally hallucinated no matter the model. Maybe I am a better software engineer than a “prompt engineer”.
Ive tried to use them as a research assistant in a history project and they have been also quite bad in that respect because of the immense naivety in its approaches.
I couldn’t call them a librarian because librarians are studied and trained in cross referencing material.
They have helped me in some searches but not better than a search engine at a monumentally higher investment cost to the industry.
Then again, I am also speaking as someone who doesn’t like to offload all of my communications to those things. Use it or lose it, eh
I’m curious: you’re a developer who finds no value in LLMs?
It’s weird to me that there’s such a giant gap with my experience of it being a minimum 10x multiplier.
Two more weeks
This "AGI" definition is extremely loose depending on who you talk to. Ask "what does AGI mean to you" and sometimes the answer is:
1. Millions of layoffs across industries due to AI with some form of questionable UBI (not sure if this works)
2. 100BN in profits. (Microsoft / OpenAI definition)
3. Abundance in slopware. (VC's definition)
4. Raise more money to reach AGI / ASI.
5. Any job that a human can do which is economically significant.
6. Safe AI (Researchers definition).
7. All the above that AI could possibly do better.
I am sure there must be an industry-aligned and concrete definition that everyone can agree on, rather than these goalpost-moving definitions.
”‘AGI is x years away’ is a proposition that is both true and false at the same time. Like all such propositions, it is therefore meaningless.”
You cannot have AGI without a physical manifestation that can generate its own training data based on inputs from the external world, with e.g. sensors, and constantly refine its model.
Pure language or pure image-models are just one aspect of intelligence - just very refined pattern recognition.
You will also probably need some aspect of self-awareness in order for the system to set auxiliary goals and directives related to self-maintenance.
But you don't need AGI in order to have something useful (which I think a lot of readers are confused about). No one is making the argument that you need AGI to bring tons of value.
AGI is never gonna happen - it's the tech equivalent of the second coming of Christ, a capitalist version of the religious savior trope.
Hey now, on a long enough time line one of these strains of millenarian thinking may eventually get something right.
I guess I am agnostic then.
AGI is here today... go have a kid.
That would be "GI". The "A" part implies, specifically, NOT having a kid, eh?
Natural intelligence is too expensive. Takes too long for it to grow. If things go wrong then we have to jail it. With computers we just change the software.
You work for DOGE, don't you?
Not artificial, but yes, it's unclear what advantage an artificial person has over a natural one, or how it's supposed to gain special insights into fusion reactor design and etc. even if it can think very fast.
Good thing the Wolfenstein tech isn't a thing yet hopefully
Hopefully more!
"Literally who" and "literally who" put out statements while others out there ship out products.
Many such cases.
I do not like those who try to play God. The future of humanity will not be determined by some tech giant in their ivory tower, no matter how high it may be. This is a battle that goes deeper than ones and zeros. It's a battle for the soul of our society. It's a battle we must win, or face the consequences of a future we cannot even imagine... and that, I fear, is truly terrifying.
> The future of humanity will not be determined by some tech giant in their ivory tower
Really? Because it kinda seems like it already has been. Jony Ive designed the most iconic smartphone in the world from a position beyond reproach even when he messed up (eg. Bendgate). Google decides what your future is algorithmically, basically eschewing determinism to sell an ad or recommend a viral video. Instagram, Facebook and TikTok all have disproportionate influence over how ordinary people live their lives.
From where I'm standing, the future of humanity has already been cast by tech giants. The notion of AI taking control is almost a relief considering how illogical and obstinate human leadership can be.