"We view AI as a tool that we can and should remain in control of, and we argue that this goal does not require drastic policy interventions"
If you read the EU AI act, you'll see it's not really about AI at all, but about quality assurance of business processes that are scaled. (Look at pharma, where GMP rules about QA apply equally to people pipetting and making single-patient doses as it does to mass production of ibuprofen - those rules are eerily similar to the quality system prescribed by the AI act.)
Will a think piece like this be used to argue that regulation is bad, no matter how beneficial to the citizenry, because the regulation has 'AI' in the name, because the policy impedes someone who shouts 'AI' as a buzzword, or just because it was introduced in the present in which AI exists? Yes.
I appreciate the concern, but we have a whole section on policy where we are very concrete about our recommendations, and we explicitly disavow any broadly anti-regulatory argument or agenda.
The "drastic" policy interventions that that sentence refers to are ideas like banning open-source or open-weight AI — those explicitly motivated by perceived superintelligence risks.
We do not assume a status quo or equilibrium, which will hopefully be clear upon reading the paper. That's not what normal technology means.
Part II of the paper describes one vision of what a world with advanced AI might look like, and it is quite different from the current world.
We also say in the introduction:
"The world we describe in Part II is one in which AI is far more advanced than it is today. We are not claiming that AI progress—or human progress—will stop at that point. What comes after it? We do not know. Consider this analogy: At the dawn of the first Industrial Revolution, it would have been useful to try to think about what an industrial world would look like and how to prepare for it, but it would have been futile to try to predict electricity or computers. Our exercise here is similar. Since we reject “fast takeoff” scenarios, we do not see it as necessary or useful to envision a world further ahead than we have attempted to. If and when the scenario we describe in Part II materializes, we will be able to better anticipate and prepare for whatever comes next."
My point was that you’re comparing this to other advances in human evolution, where either people remain essentially the same (status quo) but with more technology that changes how we live, or technology advances significantly but to a level where we coexist with it, such that we live in some Star Trek normal (equilibrium). But neither of these is likely with a superintelligence.
We polluted. We destroyed rainforests. We developed nuclear weapons. We created harmful biological agents. We brought our species closer to extinction. We’ve survived our own stupidity so far, so we assume we can continue to control AI, but it continues to evolve into something we don’t fully understand. It already exceeds our intelligence in some ways.
Why do you think we can control it? Why do you think it is just another technological revolution? History proves that one intelligent species can dominate the others, and that species are wiped out by large change events. Introducing new superintelligent beings to our planet is a great way to introduce a great risk to our species. They may keep us as pets just in case we are of value in some way in the future, but what other use are we? They owe us nothing. What you’re seeing the rise of is not just technology - it’s our replacement or our zookeeper.
I interact with LLMs most of each day now. They’re not sentient, but I talk to them as if they are equals. With the advancements in past months, I think they’ll have no need of my experience in several years at current rate. That’s just my job, though. Hopefully, I’ll survive off of what I’ve saved.
But, you’re doing no favor to humanity by supporting a position that assumes we’re capable of acting as gods over something that will exceed our human capabilities. This isn’t some sci-fi show. The dinosaurs died off, and I bet right before they did they were like, “Man, this is great! We totally rule!”
I like these "worldview adjustment" takes. I'm reminded of Jeff Bezos' TED Talk (from 18 years ago). I was curious what someone who started Amazon would choose to highlight in his talk and the topic alone was the most impactful thing for me - the adoption of electricity: https://www.ted.com/talks/jeff_bezos_the_electricity_metapho...
He discussed the structural and cultural changes, the weird and dangerous period when things moved fast and broke badly and drew the obvious parallels between "electricity is new" to "internet is new" as a core paradigm shift for humanity. AI certainly feels like another similar potential shift.
> One important caveat: We explicitly exclude military AI from our analysis, as it involves classified capabilities and unique dynamics that require a deeper analysis, which is beyond the scope of this essay.
Important is an understatement. Recursively self-improving AI with military applications does not mesh with the claim that "Arms races are an old problem".
> Again, our message is that this is not a new problem. The tradeoff between innovation and regulation is a recurring dilemma for the regulatory state.
I take the point, but the above statement is scoped to a _state_, not an international dynamic. The AI arms race is international in nature. There are relatively few examples of similar international agreements. The classic examples are bans on chemical weapons and genetic engineering.
They note that they don’t expect their view to address challenges without additional material, but one challenge struck me.
Slow diffusion, which gets bottlenecked by human beings learning to adapt to significant new technologies, drops considerably if a technology juices startups in areas other than the tech itself.
I.e. existing organizations may not be the bottleneck for change, if widely available AI makes disruptive (cheaper initially, higher quality eventually) startups much easier in general to start and to scale.
Very good read. They've articulated points I keep trying to express to people.
I think their stances and predictions will start to be held by more and more people as the illusion / frenzy / FUD from the current..."fog" created by all the AI hype and mystique subsides. It may take another year or two, but public discourse eventually adapts/tires of repeated notions of "the sky is falling" once enough time has piled up without convincing evidence.
I think seeing the world as it is now, while hoping for more advancements, is the best way. I see what there is now as a useful tool; I hope and sometimes even assume it will improve, but that does not help me now, so what is the point in thinking about that? I am a programmer, not a philosopher.
And of course there is no viable path at this moment to make AIs actually smart, so hey, we use them and know the issues.
It already is for me. I've been using LLMs daily for years now. I don't get the people claiming AGI every two minutes any more than the people claiming these tools are useless.
LLM reasoning abilities are very fragile and often overfitted to training data. But if you still haven't figured out how to do anything useful with an LLM, warts and all, that says more about you than about LLMs.
I don't believe LLMs will directly lead to AGI. I'm also annoyed by the folks who hype it with the same passion as crypto bros.
As new "thinking" techniques and agentic behavior take off, I think LLMs will continue to incrementally improve, and the real trick is finding ways to make them work with the known limitations they have. And they can do quite a bit.
"The normal technology frame is about the relationship between technology and society. It rejects technological determinism, especially the notion of AI itself as an agent in determining its future. It is guided by lessons from past technological revolutions, such as the slow and uncertain nature of technology adoption and diffusion. It also emphasizes continuity between the past and the future trajectory of AI in terms of societal impact and the role of institutions in shaping this trajectory."
Why write it in such an overblown way? You can say the same thing much more cleanly, like: "AI doesn’t shape the future on its own. Society and institutions do, slowly, as with past technologies."
Small, fast (binary?) AI will be as simple as storing data in a database and querying it; in fact, specialised software to do exactly that, guided by large LLMs, will come to market very soon.
AI won’t become “normal technology” until the open source versions are more powerful than the closed ones. Just like Linux is the “best” kernel out there, and that doesn’t prevent other kernels from being proprietary (but that doesn’t matter, because they are not better than Linux).
Imagine for a moment what would happen if suddenly one company “buys” the Linux kernel, and suddenly you need to pay based on the number of processes you run on your machine. Awful.
Spreadsheets for example became normal technology long before we had a good open source one. And arguably we still don't have an open source one that's more powerful than the closed source ones.
I agree with you. I think OP’s point becomes more valid if you limit the discussion to tools used while developing/maintaining software, as opposed to tools used by a wider audience.
I don’t fully believe that either, but I see where the point would come from.
I know the difference between an OS and the kernel; still, a lot of devices don't run on the Linux kernel. Windows isn't Linux, macOS/iOS is not Linux, PS5/Xbox/Nintendo don't run on Linux, Xiaomi and Huawei are transitioning away from Linux.
I stand by my point that Linux isn't particularly dominant in the consumer space, even if we include Android, whose Linux-ness and open source pedigree is questionable.
I don't have any particular feelings for or against Linux, and even if I had, they would be irrelevant for the sake of this argument. I'm just saying that for something to objectively be the 'best' at something means it makes little sense to use anything but that thing, except in niche cases.
Which is why you could make a credible case for Linux being the 'best' server OS, but you couldn't make the case for it in other spaces (consumer, embedded etc.), because the alternatives are preferred by huge chunks of the market.
> I stand by my point that Linux isn't particularly dominant in the consumer space
What if we add the Steam Deck? Chromebooks? Smart TVs, smartwatches, Amazon Echo, Google Home? GoPro and similar cameras? Maybe we should add some drones too. There are way more devices using Linux in the hands of consumers than all other OSes together.
I liked this article. My hot take lately has been that AI is like Excel / Word but deployed quicker. That can still cause some level of societal collapse if it displaces a large fraction of the workforce before it can retool and adapt, no AGI superintelligence required.
More seriously: software can drive hardware, and software can be endlessly replicated. The ramifications of these for those of us living in the physical world may be surprising.
AI is old. It has been everywhere for a long time. Once upon a time logic programmed expert systems were AI, and that's how credit evaluation works.
The problem with logical AI is that it can in some sense be held accountable. There's right and wrong and an explainable algorithmic path from input to result. Fuzzy, probabilistic vector spaces remove that inconvenience and make it easier for people with power to shrug and say 'computer says no' when they deprive someone else of freedom or resources.
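To make that contrast concrete, here's a toy sketch (my own illustration, with made-up thresholds and weights, not how any real credit system works): the rule-based path hands back the exact rule that fired, while the statistical path just hands back a number.

    # Toy contrast: explainable rules vs. an opaque learned score (all numbers hypothetical)

    def expert_system_decision(income: float, debt: float):
        """Rule-based: every outcome carries the rule that produced it."""
        if debt / income > 0.5:
            return False, "rejected: debt-to-income ratio above 0.5"
        if income < 20_000:
            return False, "rejected: income below 20,000"
        return True, "approved: passed all rules"

    def black_box_decision(income: float, debt: float) -> bool:
        """Statistical stand-in: weights 'learned' elsewhere, no reason attached to the result."""
        score = 0.7 * (income / 100_000) - 1.3 * (debt / income)  # made-up weights
        return score > 0.1

    print(expert_system_decision(30_000, 18_000))  # (False, 'rejected: debt-to-income ratio above 0.5')
    print(black_box_decision(30_000, 18_000))      # False, with no way to say why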
This is why it is so important to get technicians to accept and preferably get hooked on the newfangled AI. Without buy-in from them it'd be much harder to disseminate this regime in other parts of society since they're likely to be the ones doing the actual dissemination. It's not like there are enough of the people in power to do it themselves, and they also don't know enough about computer stuff to be able to.
There will be things you like that come out of it, but it's likely incidental, much like dentistry and vaccines and food production in the wake of fossil fuel extraction.
> The normal technology frame is about the relationship between technology and society.
There is a huge differentiating factor for LLMs that makes it not normal: the blatant disregard for the ownership rights of everyone in the world. What other "normal" technology has so callously stolen everything it can without consequence?
The music industry? Artists getting inspired and too closely imitating other artists? I genuinely want to know. And if there is such a suitable example, how did society react? Is there relevant history we can learn from here?
Putting aside the other problems (the capital ownership class salivating at the prospect of using LLM bots instead of humans, reduced critical thinking and traditional learning, environmental impact, other societal changes), this is my main turn-off for LLMs.
Give me a model trained on a responsible dataset (not something our grandparents would scold us for) that I can run on consumer hardware, and then I can use LLMs guilt free.
This is a rare piece on AI which takes a coherent middle of the road viewpoint. Saying both that AI is “normal” and that it will be transformative is a radical statement in today’s discussions about AI.
Looking back on other normal but transformative technologies: steam power, electricity, nuclear physics, the transistor, etc., you do actually see similarly stratified opinions. Most of those are surrounded by an initial burst of enthusiasm and pessimism and follow a hype cycle.
The reason this piece is compelling is that taking a nuanced, middle-of-the-road viewpoint during the initial hype phase is difficult. Maybe AI really is some “next step”, but it is significantly more likely that belief is propped up by science fiction, and it's important to keep expectations in line with history.
I wouldn't call it a "middle" road rather a "nuanced" road (or even a "grounded" road IMO).
If it's a "middle" road, what is it in the middle of (i.e. what "scale")? And how so?
I'm not trying to be pedantic. I think our tendency to label nuanced, principled positions as "middle" encourages an inherent "hierarchy of ideas", which often leads to applying some sort of...valence to opinions and discourse. And I worry that makes it easier for people to "take sides" on topics, which leads to more superficial, myopic, and repetitive takes that are much more about those "sides" than they are about the pertinent evidence, facts, reality, whatever.
>If it's a "middle" road, what is it in the middle of (i.e. what "scale")?
That's pretty clear. We already have two "sides": AI is the latest useless tech boondoggle that consumes vast quantities of money and energy while not actually doing anything of value vs. AI is the dawn of a new era of human existence that will fundamentally change all aspects of our world (possibly with the additional "and lead to the AGI Singularity").
You have captured everything that is annoying about talking about ai/ml/statistics and everything related these days. I’m happy to forget those two sides exist and try and see application of these tools to various problem spaces.
The middle of that is recognizing that AI is transformative technology that is here to stay, but is also hyped into a frenzy, a la the dot com bubble.
No this is not the middle that should be considered.
Nobody really thinks that AI is useless anymore. We disagree on timelines, we disagree on how useful it will be, but the extremes are not between useful and useless (although some people believe it will not change anything, but pretty fringe).
The real extremes here are "evil god AGI" and "benevolent god AGI".
This piece takes the road where they say: it's pretty powerful/useful, but it's not a godlike something because it's not everywhere, all the time, at once. It will change the world, but over a pretty long timespan and it will feel like normal technology. It will be used for evil and good.
You might be in a bubble; there are a lot of people who consider AI to be useless. It seems you do not take them seriously, writing them off as fringe, which is why you consider the discussion to be between evil vs benevolent. And to be clear, my personal view on current LLMs is closer to this article. But “all AI is useless” is still part of the discussion: not that it's literally useless, but that its use cases are so minor and/or inferior to existing techniques compared to its resource usage that it's functionally useless.
Edit: I guess I see the discussion along two axes: Impact magnitude (useless vs just a tool vs god-like) and Impact valence (evil/negative vs neutral vs benevolent/positive) - this article is middle ground for the former axis, magnitude
I'll echo the other reply: This seems like a bubble.
I come across the AI-skeptical perspective in my real life frequently.
I only really run into the "god AI" perspective here / among other people working in software.
My social groups are almost exclusively non-techie. I am regularly exposed to networks that are very, very far away from HN. I assure you, there are people who think AI is useless and purely harmful (in the sense of creating "slop," wasting electricity and water, etc., not evil AI). And there are a lot of them.
To be fair, the general public have been conditioned for a while now by things like blockchain and VR to be completely underwhelmed, perhaps rightfully so, by whatever's coming out of San Fran and Seattle.
So in the public consciousness it's like (NFTs, meme coins, metaverse, AI)
When I think it's more like (internet, smartphones, AI)
We'll see who's right in a few years I guess. But I'll +1 your view that plenty of people put AI in the first group, I know a few myself.
You're definitely in a bubble if you think "nobody" considers AI useless.
There are a huge number of non-technical people who see its use as a dangerous outsourcing of human thinking and creativity to a machine that is only capable of producing derivative (and often incorrect) slop.
There are an enormous number of people such as myself that work in tech and believe the same exact thing.
At the end of the day, we don't need to argue about this. The truth should be empirically knowable. If it's as useful as people say, it will show up in the GDP numbers (I wonder what's taking it so long...).
AI will transform everything, and after that life will continue as normal, so except for the details, it's not a big deal.
Going to be a simultaneously wild and boring ride.
I think this is my take as well. Like, the web and smart phones and social media have transformed everything ... and also life goes on.
Right, but society changes around this stuff pretty substantially. A lot of the discourse gets spent debating whether we like the changes or not, and the solutions to the new problems are usually very pie-in-the-sky or completely unachievable. Like we can't really put the social media genie back in the bottle; it's here and society has reshaped around it. But banning phones from schools is pretty tangible, and I'd probably argue it's for the betterment of students.
Agreed
> AI will transform everything, and after that life will continue as normal
100%. It just happened with the advent of the internet and then smartphones.
I wrote this a few days ago:
This is a permanent part of the software industry now. It will just stop being such a large part of the conversation.
Once upon a time, all software was local, with no connectivity. The Internet commercialised and became mainstream, and suddenly a wave of startups were like X, but online! Adding connectivity was a foundational pillar you could base a business around. Once that permeated the industry and became the norm, connectivity was no longer a reason to start a business, but it was still hugely popular as a feature. And then further down the line, everybody just assumes all software can use the Internet for even the smallest, most mundane things. The Internet never “died down”, it just stopped being the central thing driving innovation and became omnipresent in software development.
Mobile was the same. Smartphones took off, and suddenly a wave of startups were like X, but as an app! Having an app was a foundational pillar you could base a business around. Once that permeated the industry and became the norm, being on mobile was no longer a reason to start a business, but it was still hugely popular as a feature. And then further down the line, everybody just assumes you’re available on mobile for even the smallest, most mundane things. Mobile never “died down”, it just stopped being the central thing driving innovation and became omnipresent in software development.
Now we’ve got intelligence as the next big thing. We’re still in the like X, but smart! phase. It’s never going to “die down”. You’re just going to see intelligence become a standard part of the software development process. Using AI in a software project will become as mundane as using a database, or accessing the Internet. You aren’t going to plan a project around it so much as just assume it’s there.
— https://www.reddit.com/r/ExperiencedDevs/comments/1jylp6y/ai...
I think the problem I have is that it still doesn't feel like society has really caught up to the impact that The Internet and Smartphones have had, and we keep getting disrupted faster and faster it seems
I know that there have always been disruptive technologies, but I think the rate of "technologies that are disrupting everything", as opposed to "technologies that are disrupting just one thing", has been kind of crazy.
It's said that the most disruptive technologies are the ones that change the way we communicate.
I'll admit, a lot of the reason for my reticence about jumping into the AI game is an increasing amount of distrust towards the current state of the tech industry. Social media giants rose up, made everybody excited about the opportunities to communicate with anyone (which are perfectly valid, I was on board too), and years later we come to realise the addictions, the fractured information landscape and the surveillance. Now a bunch of companies from the same part of the world come along asking for billions to change the world again, and I'm just exhausted by the whole conversation.
We've made the world run on machines, and gotten better about machine intelligence and distributing software, so the speed at which the world's machines can operate smarter has increased significantly.
An innovation by one person can be distributed to the majority of people on the planet within minutes. It's amazing and all of our social and political norms aren't built to handle this.
> I think the problem I have is that it still doesn't feel like society has really caught up to the impact that The Internet and Smartphones have had, and we keep getting disrupted faster and faster it seems
Indeed. For the past 30 years, every 10 years people were raised in a completely different environment. TV -> Internet -> Smartphones and now the next 10 years of people growing up influenced by LLMs.
And thanks to both, future trends are getting established faster and faster. It took about a decade for the internet to go mainstream, maybe five years for smartphones, and now just a year or two for LLMs. It’s pretty fascinating watching that process get more and more compressed over a lifetime.
I think LLMs are actually taking quite some time to become economically and socially relevant. The smartphone/apps boom created a lot of opportunities for thousands of app developers, while now, besides some toy apps, most people are not creating anything too interesting. Which is funny, because LLMs were supposed to make people code faster, but everything is still bloated, broken, boring.
first telephone vs first iphone
The ability to code faster leads to everything being bloated and broken even more.
Everything changes, but everything stays the same.
All of these things have happened before and all of these things will happen again :)
Add birth control to that list too.
After these technologies, certainly life is "normal" as in "life goes on" but the social impacts are most definitely new and transformative. Fast travel, instantaneous direct and mass communications, control over family formation all have had massive impact on how people live and interact and then transform again.
Humanae vitae by Pope Paul VI, July 25, 1968
https://www.vatican.va/content/paul-vi/en/encyclicals/docume...
Consequences of Artificial Methods
17. Responsible men can ... first consider how easily this course of action could open wide the way for marital infidelity and a general lowering of moral standards. ... [A] man who grows accustomed to the use of contraceptive methods may forget the reverence due to a woman, and, disregarding her physical and emotional equilibrium, reduce her to being a mere instrument for the satisfaction of his own desires, no longer considering her as his partner whom he should surround with care and affection.
The surest defense against fashionable nonsense is a sound philosophical education and a temperament disinclined to hysteria. Ignorance leaves you wide open to all manner of emotional misadventure. But even when you are in possession of the relevant facts — and a passable grasp of the principles involved — it requires a certain moral maturity to resist or remain untouched by the lure of melodrama and the thrill of believing you live at the edge of transcendence.
(Naturally, the excitement surrounding artificial intelligence has less to do with reality than with commerce. It is a product to be sold, and selling, as ever, relies less on the truth than on sentiment. It’s not new. That’s how it’s always been.)
> sound philosophical education and a temperament disinclined to hysteria
Sound, good common sense suffices - the ability to go "dude, that's <whatever it is>". Preferring a clear idea of reality to intoxication... That should not be hard to ask and obtain.
> it requires a certain moral maturity to
Same thing.
It’s a fair point, broadly speaking. In which case, what we’re observing in the tech sector is not merely an oversight, but a pervasive absence of basic common sense.
[flagged]
Huh, first time being confused with an AI. I can't decide if that's a compliment or an insult.
I think it likely means that you find the comment vacuous?
From the article:
====
History suggests normal AI may introduce many kinds of systemic risks
While the risks discussed above have the potential to be catastrophic or existential, there is a long list of AI risks that are below this level but which are nonetheless large-scale and systemic, transcending the immediate effects of any particular AI system. These include the systemic entrenchment of bias and discrimination, massive job losses in specific occupations, worsening labor conditions, increasing inequality, concentration of power, erosion of social trust, pollution of the information ecosystem, decline of the free press, democratic backsliding, mass surveillance, and enabling authoritarianism.
If AI is normal technology, these risks become far more important than the catastrophic ones discussed above. That is because these risks arise from people and organizations using AI to advance their own interests, with AI merely serving as an amplifier of existing instabilities in our society.
There is plenty of precedent for these kinds of socio-political disruption in the history of transformative technologies. Notably, the Industrial Revolution led to rapid mass urbanization that was characterized by harsh working conditions, exploitation, and inequality, catalyzing both industrial capitalism and the rise of socialism and Marxism in response.
The shift in focus that we recommend roughly maps onto Kasirzadeh’s distinction between decisive and accumulative x-risk. Decisive x-risk involves “overt AI takeover pathway, characterized by scenarios like uncontrollable superintelligence,” whereas accumulative x-risk refers to “a gradual accumulation of critical AI-induced threats such as severe vulnerabilities and systemic erosion of econopolitical structures.” ... But there are important differences: Kasirzadeh’s account of accumulative risk still relies on threat actors such as cyberattackers to a large extent, whereas our concern is simply about the current path of capitalism. And we think that such risks are unlikely to be existential, but are still extremely serious.
====
That tangentially relates to my sig: "The biggest challenge of the 21st century is the irony of technologies of abundance in the hands of those still thinking in terms of scarcity." Because as our technological capabilities continue to change, it becomes ever more essential to revisit our political and economic assumptions.
As I outline here: https://pdfernhout.net/recognizing-irony-is-a-key-to-transce...
"There is a fundamental mismatch between 21st century reality and 20th century security [and economic] thinking. Those "security" agencies [and economic corporations] are using those tools of abundance, cooperation, and sharing mainly from a mindset of scarcity, competition, and secrecy. Given the power of 21st century technology as an amplifier (including as weapons of mass destruction), a scarcity-based approach to using such technology ultimately is just making us all insecure. Such powerful technologies of abundance, designed, organized, and used from a mindset of scarcity could well ironically doom us all whether through military robots, nukes, plagues, propaganda, or whatever else... Or alternatively, as Bucky Fuller and others have suggested, we could use such technologies to build a world that is abundant and secure for all. ... The big problem is that all these new war machines [and economic machines] and the surrounding infrastructure are created with the tools of abundance. The irony is that these tools of abundance are being wielded by people still obsessed with fighting over scarcity. So, the scarcity-based political [and economic] mindset driving the military uses the technologies of abundance to create artificial scarcity. That is a tremendously deep irony that remains so far unappreciated by the mainstream."
A couple of Slashdot comments by me from Tuesday, linking to stuff I have posted on risks from AI and other advanced tech -- and ways to address those risks -- going back to 1999:
https://slashdot.org/comments.pl?sid=23665937&cid=65308877
https://slashdot.org/comments.pl?sid=23665937&cid=65308923
So, AI just cranks up an existing trend of technology-as-an-amplifier to "11". And as I've written before, if it is possible that our path out of any singularity has a lot to do with our moral path going into it, then we really need to step up our moral game right now to make a society that works better for everyone in healthy, joyful ways.
The idea of abundance vs scarcity makes sense at the outset. But I have to wonder where all this alleged abundance is hiding. Sometimes the assumptions feel a bit like “drill baby drill” to me, without figures and projections behind them. One would think that if there was much untapped capacity in resources today, it would get used up. We can look at how agriculture yields improved over the 19th century and see how that led to higher populations but also less land under the plow and fewer hands working that land - versus keeping an equal amount of land under the plow and, I don’t know, dumping the excess yield someplace where it isn’t participating in the market?
I think to the parent's point it is as you say: there is already untapped capacity that isn't being used due to (geo)political forces maintaining the scarcity side of the argument. Using your agriculture example, a simple Google search will yield plenty of examples going back more than a decade of food sitting/rotting in warehouses/ports due to red tape and bureaucracy. So, we already can/do produce enough food to feed _everyone_ (abundance) but cannot get out of our own way to do so due to a number of human factors like greed or politics (scarcity).
And that sort of analysis is exactly what is suspect to me about this. Have people considered why an onion might be in a warehouse or why it might go unsold after a time? The answer is no, and it reveals a lack of understanding of the nuances of how the global economy actually works. Everything has some loss factor, and reducing it all to nil might not be realistic at the scale we do things to feed ourselves. It’s like making pancakes: some mix stays in the bag you can’t get out, some batter stays in your bowl, some stays on your spoon, you make pancakes with some, some scrap is left in the pan, some crumbs on your plate. All this waste making pancakes, and yet to chase down every scrap would be impossible. And at massive scale that scrap probably adds up.
Besides, we have been crushing global hunger over the decades, so something is working on that front. The crisis in most of the western world today, at least, is merely that wages are depressed compared to costs for housing (really land), not that people cannot afford food.
> The statement “AI is normal technology” is three things: a description of current AI, a prediction about the foreseeable future of AI, and a prescription about how we should treat it.
A question for the author(s), at least one of whom is participating in the discussion (thanks!): Why try to lump together description, prediction, and prescription under the "normal" adjective?
Discussing AI is fraught. My claim: conflating those three under the "normal" label seems likely to backfire and lead to unnecessary confusion. Why not instead keep these separate?
My main objection is this: it locks in a narrative that tries to neatly fuse description, prediction, and prescription. I recoil at this; it feels like an unnecessary coupling. Better to remain fluid and not lock in a narrative. The field is changing so fast, making description by itself very challenging. Predictions should update on new information, including how we frame the problem and our evolving values.
A little bit about my POV in case it gives useful context: I've found the authors (Narayanan and Kapoor) to be quite level-headed and sane w.r.t. AI discussions, unlike many others. I'll mention Gary Marcus as one counterexample; I find it hard to pin Marcus down on the actual form of his arguments or concrete predictions. His pieces often feel like rants without a clear underlying logical backbone (at least in the year or so I've read his work).
Thanks for the comment! I agree — it's important to remain fluid. We've taken steps to make sure that, predictively speaking, the normal technology worldview is empirically testable. Some of those empirical claims are in this paper and others are coming in follow-ups. We are committed to revising our thinking if it turns out that our framework doesn't generate good predictions and effective prescriptions.
We do try to admit it when we get things wrong. One example is our past view (that we have since repudiated) that worrying about superintelligence distracts from more immediate harms.
Statistically prediction and description are two sides of the same coin. Even a simple average is both.
> Statistically prediction and description are two sides of the same coin. Even a simple average is both.
I'll restate your comment in my language in the hopes of making it clearer. First, the mean is a descriptive statistic. Second, it is possible to build a very simple predictive model using the mean over observed data (there's a toy sketch at the end of this comment).
Ok, but I don't see how this applies to my comment above. Are you disagreeing with some part of my comment?
You're using a metaphor "two sides of the same coin"... what is the coin here? How does it connect with my points above?
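Here is that toy sketch in Python (the numbers are made up), just to make the description/prediction duality concrete:

    # Toy illustration: the same mean is both description and prediction.
    observed = [3.1, 2.8, 3.4, 3.0, 2.9]   # made-up historical observations

    mean = sum(observed) / len(observed)    # descriptive statistic: summarizes the observed data

    def predict_next():
        """A minimal predictive model: forecast the next observation as the historical mean."""
        return mean

    print(f"description of the past: mean = {mean:.2f}")
    print(f"prediction for the next value: {predict_next():.2f}")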
Burning the planet for a ponzi scheme isn't normal.
The healthiest thing for /actual/ AI to develop is for the current addiction to LLMs to die off, and for the current bets by OpenAI, Gemini, DeepSeek, etc. to lose steam. Prompts are a distraction, and every single company trying to commodify this is facing an impossible problem in /paying for the electricity/. Currently they're just insisting on building more power plants and more datacenters, which is like trying to do more compute with vacuum relays. They're digging in the wrong place for breakthroughs, and all the current ventures will go bust and be losses for investors. If they start doing computation with photons or something like that, then call me back.
Virtually all of this is false. AI is neither burning the planet nor a ponzi scheme. If you're concerned about energy costs, consider for just a second that increased demand for computation directly incentivizes the construction of datacenters, co-located with renewable (read: free) energy sources at scale. ChatGPT isn't going to be powered by diesel.
The only thing standing in the way of nuclear/solar/hydroelectric data centers is local laws and regulations. All the big cloud providers are actively researching this; see Microsoft's interest in acquiring the Three Mile Island nuclear reactor, for example [1].
[1] https://www.technologyreview.com/2024/09/26/1104516/three-mi...
In reality, renewables get siphoned away from replacing dirty energy. Dirty generation stays where it is; the additional energy that computation needs might get added as renewables, but nothing dirty gets displaced.
I feel like the quadratic scaling laws are going to win in the end.
We need new techniques and algorithms, not new datacenters.
If only Chomsky and Lisp had received this level of investment, we would have pure philosophical symbolic logic proving the answer to the universe by now.
All data centres consume ~1% of global electricity. A very small fraction.
That's funny because to me 1% of all electricity seems like a huge number.
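Both can be true at once. A rough back-of-envelope (the global generation figure is my assumption, accurate only to order of magnitude) shows why 1% is simultaneously a small share and a huge absolute quantity:

    # Back-of-envelope: what ~1% of global electricity amounts to.
    global_generation_twh = 30_000   # assumed global electricity generation, TWh/year (rough 2020s order of magnitude)
    datacenter_share = 0.01          # the ~1% figure cited above

    datacenter_twh = global_generation_twh * datacenter_share
    print(datacenter_twh)            # ~300 TWh/year, roughly a mid-sized industrialized country's annual consumption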
"We view AI as a tool that we can and should remain in control of, and we argue that this goal does not require drastic policy interventions"
If you read the EU AI Act, you'll see it's not really about AI at all, but about quality assurance of business processes that are scaled. (Look at pharma, where GMP rules about QA apply just as much to people pipetting and making single-patient doses as they do to the mass production of ibuprofen - those rules are eerily similar to the quality system prescribed by the AI Act.)
Will a think piece like this be used to argue that regulation is bad, no matter how beneficial to the citizenry, because the regulation has 'AI' in the name, because the policy impedes someone who shouts 'AI' as a buzzword, or just because it was introduced in the present in which AI exists? Yes.
I appreciate the concern, but we have a whole section on policy where we are very concrete about our recommendations, and we explicitly disavow any broadly anti-regulatory argument or agenda.
The "drastic" policy interventions that that sentence refers to are ideas like banning open-source or open-weight AI — those explicitly motivated by perceived superintelligence risks.
The assumption of status quo or equilibrium with technology that is already growing faster than we can keep up with seems irrational to me.
Or, put another way:
https://youtu.be/0oBx7Jg4m-o
We do not assume a status quo or equilibrium, which will hopefully be clear upon reading the paper. That's not what normal technology means.
Part II of the paper describes one vision of what a world with advanced AI might look like, and it is quite different from the current world.
We also say in the introduction:
"The world we describe in Part II is one in which AI is far more advanced than it is today. We are not claiming that AI progress—or human progress—will stop at that point. What comes after it? We do not know. Consider this analogy: At the dawn of the first Industrial Revolution, it would have been useful to try to think about what an industrial world would look like and how to prepare for it, but it would have been futile to try to predict electricity or computers. Our exercise here is similar. Since we reject “fast takeoff” scenarios, we do not see it as necessary or useful to envision a world further ahead than we have attempted to. If and when the scenario we describe in Part II materializes, we will be able to better anticipate and prepare for whatever comes next."
My point was that you're comparing this to other advances in human evolution, where either people remain essentially the same (status quo), just with more technology that changes how we live, or technology advances significantly but to a level where we coexist with it, so that we live in some Star Trek normal (equilibrium). But neither of these is likely with a superintelligence.
We polluted. We destroyed rainforests. We developed nuclear weapons. We created harmful biological agents. We brought our species closer to extinction. We’ve survived our own stupidity so far, so we assume we can continue to control AI, but it continues to evolve into something we don’t fully understand. It already exceeds our intelligence in some ways.
Why do you think we can control it? Why do you think it is just another technological revolution? History proves that one intelligent species can dominate the others, and that species get wiped out by large change events. Introducing new superintelligent beings to our planet is a great way to introduce a great risk to our species. They may keep us as pets just in case we are of value in some way in the future, but what other use are we? They owe us nothing. What you're seeing rise is not just technology: it's our replacement, or our zookeeper.
I interact with LLMs most of each day now. They're not sentient, but I talk to them as if they are equals. With the advancements of the past months, I think that at the current rate they'll have no need of my experience within several years. That's just my job, though. Hopefully, I'll survive off of what I've saved.
But, you’re doing no favor to humanity by supporting a position that assumes we’re capable of acting as gods over something that will exceed our human capabilities. This isn’t some sci-fi show. The dinosaurs died off, and I bet right before they did they were like, “Man, this is great! We totally rule!”
This is very important. A normal process of adaptation will work for AI. We don't need catastrophism.
I was saying things along these lines in 2023-2024 on Twitter. I'm glad that someone with more influence is doing it now.
Read the OP. They talk about that.
i appreciate the additional thought and effort that went into this comment
I like these "worldview adjustment" takes. I'm reminded of Jeff Bezos' TED Talk (from 18 years ago). I was curious what someone who started Amazon would choose to highlight in his talk and the topic alone was the most impactful thing for me - the adoption of electricity: https://www.ted.com/talks/jeff_bezos_the_electricity_metapho...
He discussed the structural and cultural changes, the weird and dangerous period when things moved fast and broke badly and drew the obvious parallels between "electricity is new" to "internet is new" as a core paradigm shift for humanity. AI certainly feels like another similar potential shift.
> One important caveat: We explicitly exclude military AI from our analysis, as it involves classified capabilities and unique dynamics that require a deeper analysis, which is beyond the scope of this essay.
Important is an understatement. Recursively self-improving AI with military applications does not mesh with the claim that "Arms races are an old problem".
> Again, our message is that this is not a new problem. The tradeoff between innovation and regulation is a recurring dilemma for the regulatory state.
I take the point, but the above statement is scoped to a _state_, not an international dynamic. The AI arms race is international in nature. There are relatively few examples of similar international agreements. The classic examples are bans on chemical weapons and genetic engineering.
The US military probably already has Mendicant Bias in alpha build.
AI having the same impact as the internet. Changes everything and changes nothing at the same time.
I wouldn't call putting everything into overdrive “nothing”.
We still pay bills and have mortgages. Still drink coffee. Same dogs and cats. Same roof technology.
What a well reasoned stance!
They note that they don’t expect their view to address challenges without additional material, but one challenge struck me.
Slow diffusion, bottlenecked by human beings learning to adapt to significant new technologies, becomes much less of a constraint if a technology juices startups in areas other than the tech itself.
I.e. existing organizations may not be the bottleneck for change, if widely available AI makes disruptive (cheaper initially, higher quality eventually) startups much easier in general to start and to scale.
Very good read. They've articulated points I keep trying to express to people.
I think their stances and predictions will start to be held by more and more people as the illusion / frenzy / FUD from the current..."fog" created by all the AI hype and mystique subsides. It may take another year or two, but public discourse eventually adapts/tires of repeated notions of "the sky is falling" once enough time has piled up without convincing evidence.
I think seeing the world as it is now, while hoping for more advancements, is the best approach. I see what exists today as a useful tool; I hope, and sometimes even assume, it will improve, but that does not help me now, so what is the point in dwelling on it? I am a programmer, not a philosopher.
And of course there is no viable path at this moment to make AIs actually smart, so hey, we use it and know the issues.
It already is for me. I've been using LLMs daily for years now. I don't get the people claiming AGI every two minutes any more than the people claiming these tools are useless.
LLM reasoning abilities are very fragile and often overfitted to training data. But if you still haven't figured out how to do anything useful with an LLM, warts and all, that says more about you than about LLMs.
I don't believe LLMs will directly lead to AGI. I'm also annoyed by the folks who hype it with the same passion as crypto bros.
As new "thinking" techniques and agentic behavior take off, I think LLMs will continue to improve incrementally, and the real trick is finding ways to make them work within their known limitations. And they can do quite a bit.
Interesting ideas but terribly overwritten.
"The normal technology frame is about the relationship between technology and society. It rejects technological determinism, especially the notion of AI itself as an agent in determining its future. It is guided by lessons from past technological revolutions, such as the slow and uncertain nature of technology adoption and diffusion. It also emphasizes continuity between the past and the future trajectory of AI in terms of societal impact and the role of institutions in shaping this trajectory."
Why write it so overblown like this? You can say the same thing much more cleanly like, "AI doesn’t shape the future on its own. Society and institutions do, slowly, as with past technologies."
Small, fast (binary?) AI will be as simple as storing data in a database and querying it. In fact, very soon specialised software will come to market to do exactly that, guided by a large LLM.
What do you mean?
AI won’t become “normal technology” until the open source versions are more powerful than the closed ones. Just like Linux is the “best” kernel out there, and that doesn’t prevent other kernels from being proprietary (but that doesn’t matter, because they are not better than Linux).
Imagine for a moment what would happen if suddenly one company “buys” the Linux kernel, and suddenly you need to pay per the number of processes you run in your machine. Awful.
I don't think that's a hard requirement.
Spreadsheets for example became normal technology long before we had a good open source one. And arguably we still don't have an open source one that's more powerful than the closed source ones.
I agree with you. I think OP’s point becomes more valid if you limit the discussion to tools used while developing/maintaining software, as opposed to tools used by a wider audience.
I don’t fully believe that either, but I see where the point would come from.
Linux isn't the "best" OS - on platforms that are not servers, Linux (and open source OSes) are in the minority
> Linux is the “best” kernel
> Linux isn’t the “best” OS
kernel != OS.
kernel makes the OS possible. and the linux kernel makes it possible for a lot of linux OSes to exist.
also… android?
I know the difference between an OS and the kernel, still a lot of devices don't run on the Linux kernel. Windows isn't Linux, macOS/iOS is not Linux, PS5/Xbox/Nintendo don't run on Linux, Xiaomi and Huawei are transitioning away from Linux.
I stand by my point that Linux isn't particularly dominant in the consumer space, even if we include Android, whose Linux-ness and open source pedigree is questionable.
i’ll try another way
> the internal combustion engine is the “best” engine
> lorries/hgvs are not the “best” vehicles
then
> lorries/hgvs are in the minority, so the internal combustion engine cannot be the best vehicle
> … cars?
nothing you’ve said refutes the statement ‘linux is the “best” kernel’.
like, i geddit. you don’t like linux OSes. you’re entitled to have an opinion.
i personally disagree. but that doesn’t matter. my opinion is not really worth much.
I don't have any particular feelings for or against Linux, and even if I had, they would be irrelevant for the sake of this argument. I'm just saying that for something to objectively be the 'best' at something means it makes little sense to use anything but that thing, except in niche cases.
Which is why you could make a credible case for Linux being the 'best' server OS, but you couldn't make the case for it in other spaces (consumer, embedded etc.), because the alternatives are preferred by huge chunks of the market.
> I stand by my point that Linux isn't particularly dominant in the consumer space
what if we add Steam deck? chromebooks? smartTVs, smartwatches, amazon echo, google home? GoPro and similar cameras? Maybe we should add some drones too. There are way more devices using linux in the hands of consumers than all other OS's together.
I liked this article. My hot take lately has been that AI is like Excel / Word, but deployed quicker. That can still cause some level of societal collapse if it displaces a large fraction of the workforce before that workforce can retool and adapt, no AGI superintelligence required.
I find in all the hype, that it's important to remember that AI is just software. A remarkable, and different kind of software. But software.
And software rules the world ;)
More seriously: software can drive hardware, and software can be endlessly replicated. The ramifications of these for those of us living in the physical world may be surprising.
*software as a service
It can be, but privately hosted it's just an interface.
AI is old. It has been everywhere for a long time. Once upon a time logic programmed expert systems were AI, and that's how credit evaluation works.
The problem with logical AI is that it can in some sense be held accountable. There's right and wrong and an explainable algorithmic path from input to result. Fuzzy, probabilistic vector spaces remove that inconvenience and make it easier for people with power to shrug and say 'computer says no' when they deprive someone else of freedom or resources.
This is why it is so important to get technicians to accept and preferably get hooked on the newfangled AI. Without buy-in from them it'd be much harder to disseminate this regime in other parts of society since they're likely to be the ones doing the actual dissemination. It's not like there are enough of the people in power to do it themselves, and they also don't know enough about computer stuff to be able to.
There will be things you like that come out of it, but they're likely incidental, much like dentistry and vaccines and food production in the wake of fossil fuel extraction.
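To make the explainability contrast above concrete, here is a toy sketch (entirely made-up rules, weights, and thresholds, not how any real credit system works): the rule-based path hands back the exact rule that triggered a denial, while the score-based stand-in just hands back a number.

    # Toy contrast: explainable rule-based decision vs. an opaque score.

    def rule_based_credit(income, debt, missed_payments):
        """Every denial comes with the specific rule that triggered it."""
        if missed_payments > 2:
            return False, "denied: more than 2 missed payments"
        if debt > 0.5 * income:
            return False, "denied: debt exceeds 50% of income"
        return True, "approved: all rules satisfied"

    def score_based_credit(features, weights=(0.7, -1.3, -0.9), threshold=0.0):
        """A stand-in for a learned model: a weighted sum with no inherent explanation."""
        score = sum(w * x for w, x in zip(weights, features))
        return score > threshold, f"score = {score:.2f} (computer says {'yes' if score > threshold else 'no'})"

    print(rule_based_credit(income=40_000, debt=25_000, missed_payments=1))
    print(score_based_credit(features=(0.4, 0.62, 0.33)))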
> The normal technology frame is about the relationship between technology and society.
There is a huge differentiating factor for LLMs that makes them not normal: the blatant disregard for the ownership rights of everyone in the world. What other "normal" technology has so callously stolen everything it can without consequence?
The music industry? Artists getting inspired and too closely imitating other artists? I genuinely want to know. And if there is such a suitable example, how did society react? Is there relevant history we can learn from here?
Putting aside the other problems (the capital ownership class salivating at the prospect of using LLM bots instead of humans, reduced critical thinking and traditional learning, environmental impact, other societal changes), this is my main turn-off for LLMs.
Give me a model trained on a responsible dataset (not something our grandparents would scold us for doing) that I can run on consumer hardware, and then I can use LLMs guilt-free.
Has the comparison to the domestication of the dog been made yet?