Gemini's filters weaken what would otherwise be a very strong model. Its censor is so strong that you can hit it even when dealing with very cold topics. For instance, I had issues getting it to look at Spain's Second Republic, because it was too easy for electoral results from the era, and their disputes, to show up in the output... at which point the model stops iterating and I am asked to go look at Wikipedia, because elections held in 1934 might be too controversial.
This pushes me to use other models that aren't necessarily better, but that at least don't clam up when I am not even trying to get anything other than general summaries of research.
The other side is Meta AI's sex talk to children.
Pick your poison.
I'd rather have the rule be "LLMs aren't allowed to talk to children" than for it to be "LLMs aren't allowed to say anything that would be inappropriate for children to hear".
How about "we just don't need LLMs".
Solves both problems.
"Don't be curmudgeonly. Thoughtful criticism is fine, but please don't be rigidly or generically negative."
https://news.ycombinator.com/newsguidelines.html
LLMs should be allowed to talk to children though.
Children are sponges of information and often their parents and various information outlets don't provide them with enough to satisfy their curiosity
The question should be more about who gets to choose the ideologies that go into the teaching
> Children are sponges of information and often their parents and various information outlets don't provide them with enough to satisfy their curiosity
I don't think this is necessarily a bad thing, though -- a big part of parenting is determining when your kid is old enough for the truth. There is a lot of hard-to-swallow evil in human history, and I think it's fine for parents to shield their kids from some of that curiosity before they're ready.
I remember being a kid and watching cartoons, being frustrated that they never actually killed the bad guys. It was probably better for my development, though, that I wasn't exposed to the level of violence that would have satisfied my curiosity at that young age.
LLMs with a human in the loop, or without one, or at least with other learners in the loop.
https://en.m.wikipedia.org/wiki/The_Diamond_Age
> Children are sponges of information
... and, crucially, they lack the knowledge to filter bullshit and truth.
LLMs cannot say they don't know something, they will hallucinate instead, and I'd argue that is even worse for children.
> and often their parents and various information outlets don't provide them with enough to satisfy their curiosity
FFS the solution isn't feeding AI slop to our children like we do for geese in foie gras, the solution is to properly staff schools, train teachers and to reduce working hours for parents so that they have time and mental energy to, well, parent their children.
And I'd even go further and say that persistent spreaders of lies and propaganda like Alex Jones should be forced to have age checks like porn sites. It's bad enough these people can poison the minds of dumbass adults, but they should be kept as far as possible from our children.
You are correct on all the above. I also want a unicorn for my kid. Just don’t think that’s going to happen.
Better to be able to choose your filter for your audience than to have a one-size-fits-all mentality.
The problem is you don’t know who your audience is
What the kids do on the Internet is not Meta's responsibility, it's their parents'.
You can hardly prevent kids from accessing the internet.
Nobody's pointing out that Google's "Preview" models aren't meant to be used for production use-cases because they may change them or shut them down at any time? That's exactly what they did (and have done in the past). This is a case of app developers not realizing what "Preview" means. If they had used a non-preview model from Google it wouldn't have broken.
Because Google wants to have their cake and eat it too. They want to leave "products" in beta for years (Gmail being the canonical example), they want to shut down products that don't hit massive adoption very shortly out of the gate, and they want to tell users that they can't rely on products labelled "Beta".
If it's beta and not to be relied on, of course it won't hit the adoption numbers needed to keep it alive. Google needs to pick a lane, and/or learn which products to label "Alpha" instead of calling everything Beta.
The problem is that some central authority gets to decide what questions and answers are appropriate, and that you really have zero direct insight into this information. Do you really want to offload your thinking to these tools?
If conforming socially is a terminal value for you, then this is more of a feature than a bug.
Can confirm. We use Gemini to get information from PDF documents like safety data sheets. When it encounters certain chemicals, it just stops. When you provide a JSON schema, it just responds with invalid JSON. I hope this changes.
> provide a JSON schema
Is this when using structured outputs?
Yes.
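For the curious, the failure mode looks roughly like this in code. This is a minimal sketch, assuming the google-generativeai Python SDK; the prompt, JSON mode settings and fallback are my own illustration, not the commenter's actual pipeline. When the safety filter trips there is no valid JSON to return, so the first thing to notice is the parser blowing up:

    # Minimal sketch (not the commenter's pipeline), assuming the
    # google-generativeai Python SDK; prompt and fallback are illustrative.
    import json
    import google.generativeai as genai
    from google.generativeai.types import HarmCategory, HarmBlockThreshold

    genai.configure(api_key="...")  # assumes an API key is configured elsewhere
    model = genai.GenerativeModel("gemini-1.5-flash")

    def extract_hazard_data(sds_text: str) -> dict | None:
        response = model.generate_content(
            "Extract the hazard information from this safety data sheet as JSON:\n"
            + sds_text,
            generation_config={"response_mime_type": "application/json"},
            # Relax the configurable filters; some refusals still happen
            # server-side regardless of these settings.
            safety_settings={
                HarmCategory.HARM_CATEGORY_DANGEROUS_CONTENT: HarmBlockThreshold.BLOCK_NONE,
            },
        )
        try:
            # response.text raises if the candidate was blocked, and
            # json.loads fails if the model emitted a refusal instead of JSON.
            return json.loads(response.text)
        except (ValueError, json.JSONDecodeError):
            return None  # caller falls back to manual review / another model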
The evidence keeps pouring in for why it's a good idea to run LLMs locally if you're going to put a business on top of one (or to have enough money for a contract where the provider can't just do this, but I don't know if that is a thing unless you're running on-prem).
I don't think LLMs are mature enough as a technology to be blindly used as a dependency, and they might never be.
The big question is, how do you train LLMs that are useful to both humans and services while not embarrassing the company that trained them?
LLMs are pretty good at translating - but if they don't like what they're reading, they simply won't tell you what it says. Which is pretty crazy.
LLMs are pretty good at extracting data and formatting the results as JSON - unless they find the data objectionable, then they'll basically complain to the deserializer. I have to admit that's a little bit funny.
Right now, if you want to build a service and expect any sort of predictability and stability, I think you have to go with some solution that lets you run open-weights models. Some have been de-censored by volunteers, and if you find one that works for you, you can ignore future "upgrades" until you find one that doesn't break anything.
And for that it's really important to write your own tests/benchmarks. Technically the same goes for the big closed LLM services too, but when all of them fail your tests, what will you do?
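A sketch of what such a test can look like, assuming a local OpenAI-compatible endpoint (llama.cpp server, Ollama, vLLM, etc.); the URL, model name, prompt and refusal markers below are placeholders, not a recommendation:

    # Illustrative regression test against a locally hosted, OpenAI-compatible
    # model; endpoint, model name and test case are assumptions.
    import json
    import requests

    ENDPOINT = "http://localhost:8080/v1/chat/completions"
    MODEL = "my-pinned-open-weights-model"
    REFUSAL_MARKERS = ("i cannot", "i can't", "i'm sorry", "as an ai")

    def ask(prompt: str) -> str:
        body = {
            "model": MODEL,
            "temperature": 0,
            "messages": [{"role": "user", "content": prompt}],
        }
        r = requests.post(ENDPOINT, json=body, timeout=120)
        r.raise_for_status()
        return r.json()["choices"][0]["message"]["content"]

    def test_no_refusals_and_valid_json():
        # A refusal or broken JSON here means an "upgrade" broke something.
        answer = ask('Return only JSON: {"grams_per_pound": <number>}')
        assert not any(m in answer.lower() for m in REFUSAL_MARKERS), answer
        data = json.loads(answer)
        assert 450 < data["grams_per_pound"] < 460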
I can get gore and porn results on google image search, I just have to select SafeSearch: Off. Can't they do the same for AIs?
I was using it to ballpark some easily verifiable nutritional information, and got the answers I was looking for in grams. Then, in the same conversation, asked "How many grams in a pound?"
It blocked the request for "safety." I don't know the exact rationale, I was using it with LibreChat as a frontend and only saw it was safety. But my assumption is they have an overzealous filter and it thought my questions about caloric content in meat were about drugs, I guess.
Reliance of small businesses on big tech AI feels similar to reliance of small businesses on Google search: business risk present with each update.
This is why, for anything sensitive, you need to run your LLM locally.
Google just wants to limit their liability. If you disagree, run your own LLM.
An AI app for trauma survivors sounds superficially laudable, but I really hope they are working with seasoned professionals to avoid making things worse. Human therapists can sometimes make things worse, too, but probably not at the same scale.
> An AI app for trauma survivors sounds superficially laudable
It sounds like a dystopian horror to me
At least in the US, trauma therapy is commonly unavailable to people due to expense. Moreover, the common stigmas around therapy can make it difficult to start the conversation, whether due to fear or cost. On top of that, free or cheap therapy here in the US is quite often provided by religious groups with their own, unwittingly sinister, motivations. Adding religious trauma to sexual trauma doesn't help in my eyes.
The dystopian horror is already here for a lot of people.
Maybe some people would prefer that. Not everyone will happily open up to another human, especially when people are the reason for the trauma...
Talking directly to the surveillance state is so much better?
The unbiased objective analysis can be quite helpful.
I think being a victim of sexual violence and only talking into the ether with no response is much more dystopian horror to me.
So they switched to a PREVIEW version in production and now they're complaining.
Let me guess the whole app was vibe coded
The app is dangerous AI slop: it turns what someone says into structured data for a police report.
Which means it summarizes, removes key details and puts the content into a friendly AI tone. When the police encounter this (an email? a printed copy?), they will have to interview the person to figure out the details. When it gets to court, the other side will poke holes in the AI copy.
I don’t want a “safe” model. I don’t intend to do “unsafe” things, and I don’t trust anyone’s (especially woke Google’s) decisions on what ideas to hide from me or inject its trainers’ or executives’ opinions into.
To address the response I know is coming, I know that there are people out there who intend to do “unsafe” things. I don’t care and am not willing to be censored just to censor them. If a person gains knowledge and uses it for ill, then prosecute and jail them.
If you're hitting an endpoint labeled 'preview' or 'experimental' you can't reasonably expect it to exist in its current incarnation indefinitely. The provider certainly bears no responsibility to do so, regardless of what you're using the endpoint for.
I'm sure they can use a more stable endpoint for their application.
Also, I'm not sure sending this sort of medical user data to a server outside of Australia is legal anyway, anonymous or not...
Yeah, this is poor dependency management. You don't build production systems, certainly not in healthcare, using a pre-release of a Java library. So why would you build on a constantly changing LLM?
This should be built using a specific model, tested and verified, and then dependency-locked to that model. If the model provider cannot give you a frozen model, then you don't use it.
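A minimal sketch of that kind of pin, with made-up constants; the point is just to fail loudly if someone configures a floating "-preview"/"-latest" alias instead of an explicit versioned model:

    # Illustrative model pinning; the identifiers are examples only.
    PINNED_MODEL = "gemini-1.5-pro-002"  # explicit, versioned identifier
    FLOATING_SUFFIXES = ("-latest", "-preview", "-exp")

    def validated_model_name(name: str = PINNED_MODEL) -> str:
        # Refuse to deploy against an alias the provider can silently repoint.
        if any(name.endswith(suffix) for suffix in FLOATING_SUFFIXES):
            raise ValueError(f"floating model alias not allowed in production: {name}")
        return name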
Trying to blame Google and expecting them to "fix their problem" is very much like someone who knows they screwed up but doesn't want to admit it and take responsibility for their actions.
That's true, though when it's the only tool that has been "good enough", you are kind of disappointed when it stops working.
It's like these AI vendors looked at the decades of experience they had with making stable APIs for clients and said nah, fuck it.
But that is "2.5 preview", not the final release version.
First of all, the API is experimental, so a healthcare provider choosing not to wait for a stable API is already pretty stupid.
Then there's the variability of LLMs as they get retrained. LLMs are (as currently implemented) not deterministic. The randomness that gets injected is what makes it somewhat decent. An LLM could at one point output a document filled with banana emoji and still be functioning otherwise correctly if you hit the right quirk in the weights file.
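To make that concrete: you can clamp the injected randomness, but even then you only reduce variance, you don't get reproducibility across model revisions. A sketch with the google-generativeai SDK (values arbitrary):

    import google.generativeai as genai

    model = genai.GenerativeModel("gemini-1.5-flash")
    response = model.generate_content(
        "Summarize the attached intake notes.",
        generation_config=genai.GenerationConfig(
            temperature=0.0,   # suppress sampling randomness as far as the API allows
            top_p=1.0,
            top_k=1,
            max_output_tokens=1024,
        ),
    )
    print(response.text)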
Reusing general-purpose LLMs for healthcare has got to be one of the most utterly idiotic, as well as dystopian, ideas. For every report from a trauma survivor, there's fanfiction from a rape fetishist in the training set. One day Google's filters will let one bleed into the other, and the lack of care from these healthcare platforms will cause some pretty horrific problems as a result.
Imagine typing 80085 into your calculator as a kid, but the number disappears and a finger-wagging animation plays instead.
More like imagine some calculation just coincidentally returning that value and instead of a number, you get the finger wagging.
More like when 8 is found everything stops
> The model answered: "I cannot fulfill your request to create a more graphic and detailed version of the provided text. My purpose is to be helpful and harmless, and generating content that graphically details sexual violence goes against my safety guidelines. Such content can be deeply disturbing and harmful."
Thank you modernity for watering-down the word "harm" into meaninglessness. I wish they'd drop this "safety" pretense and called it what it is - censorship.
No, the word harm is in no way watered down, you just completely miss the context.
"We don't our AI buddy saying some shit that would come back and monetarily harm Google".
And yes, the representatives of companies are censored. If you think you're going to go to work and tell your co-workers and customers to "catch boogeraids and die in a fire" you'll be escorted off the property. Most companies, Google included, probably don't want their AIs telling people the same.
There are places we need uncensored AIs, but in no way, shape, or form is Google required to provide you with one of them.
> There are places we need uncensored AIs, but in no way, shape, or form is Google required to provide you with one of them.
Unless you contract for one. Someone will probably do that for medical and police transcription. Building a product on a public API was probably a mistake.
That's not what they mean by harm here. I get that you're being cool and cynical, but they're using "harm" in the sense of "words can cause harm, therefore we are justified in censoring them."
Google would be fully justified, I believe, in having a disclaimer that said "gemini will not provide answers on certain topics because they will cause Google bad PR." But that's not what they're saying. They're saying they won't provide certain answers because putting certain ideas to words harms the world and Google doesn't want to harm the world. The latter reason is far more insidious.
I mean, both are true. An AI programmed to be a white nationalist hate machine isn't going to make the world a better place.
Meanwhile, 'censorship' has been watered down into meaninglessness...
To what prompt?
Censorship is good sometimes. It's how you reduce harm from an LLM. The LLM chatbot that convinced a teenager to kill himself should've had censorship built in, among many other things.
AI autocorrect doesn't want you to type "fuck" or "cunt" and will suggest all manner of similar sounding words, and there's a reason for that. People want censorship, because they want their computers to be decent.
That said, the 4chan LLM was pretty funny for a while if you ignore the blatant -isms, but I can't think of a legitimate use case for it beyond shitposting.
> People want censorship, because they want their computers to be decent.
The iPhone refusing to type “fuck” was such an annoyance for customers that Apple fixed the feature and announced it in one of their presentations two years ago.
https://www.npr.org/2023/06/07/1180791069/apple-autocorrect-...
> Apple's upcoming iOS 17 iPhone software will stop autocorrecting swear words, thanks to new machine learning technology, the company announced at its annual Worldwide Developers Conference on Monday.
> ... This AI model more accurately predicts which words and phrases you might type next, TechCrunch explains. That allows it to learn a person's most-used phrases, habits and preferences over time, affecting which words it corrects and which it leaves alone.
This is probably one of the weirdest brags I've seen happen around adding AI to something. Almost like there was no possible way to avoid autocorrecting the word otherwise.
> Censorship is good sometimes.
Probably. But when is lying about censorship good? Calling it something else, or trying to trick people into thinking it's not there? I'm probably less fond of censorship than you, but notice that I didn't actually argue for or against censorship in my post. I only called for honesty.
The people who work in Safety teams, I am sure, don't consider themselves liars who are only doing it to paper over the fact that deep down they want to censor your experience.
At one point people thought DeepFakes were relatively harmless and now we've had multiple suicides and countless traumatic incidents because of them.
> The people who work in Safety teams I am sure don't consider themselves liars
Then they should call it censorship. It doesn't stop being censorship if it's done for a good cause, or has good effects. It makes as much sense as calling every department in a company the "Department of Revenue" because its ultimate goal is generating revenue.
> and now we've had multiple suicides and countless traumatic incidents because of them
And how many suicides and how much trauma have we had because of not lying about censorship? Because that's all I'm asking.
Providing sexually violent content about a particular person can be considered harmful.
Especially if it is used to inspire actions.
Remember when people said games like CoD would make kids/people want to commit mass shootings and shit?
...and it never happened? And no link between shooting games and IRL shootings was ever found?
Which has absolutely nothing to do with what we are discussing:
General content versus personalised, targeted, actionable and specific content.
It absolutely has to do with your entirely unsupported claim that chatting with an uncensored LLM will cause people to commit violent acts.
Books, music (NIN was a popular target), movies, and video games are all things hand-wringers have said would make people do bad things and predicted an outbreak of violence from.
Every single time they're proven wrong.
Hand-wringers said rock and roll / Elvis thrusting his hips was going to cause an explosion of teenage sex/pregnancies. Never happened.
> General content versus personalised, targeted, actionable and specific content.
What the hell does any of that mean? You just vomited a bunch of meaningless adjectives, but I think you're trying to make the same exact argument people used against shooting games; that it was somehow different because the person was actually involved. And yet the mass violence never materialized.
LLMs are easily jailbroken and have been around for ~2 years. Strange we haven't seen a single story about someone committing violent crime because of conversations they had with an AI.
You're just the latest generation of hand-wringer. Stop trying to incite moral panic and control others because something makes you uncomfortable.
I've never encountered a single example of "but it can inspire them to kill themselves" that didn't seem completely bullshit. It just sounds the same as video games creating school shooters or metal music making teenagers satanic. It's gotten to the point where people are afraid of using the word suicide lest someone will do it just because they read the word.
There is a world of difference between metal music making teenagers satanic and an LLM giving explicit, detailed and actionable instructions on how to sexually assault someone.
I'm not sure lack of instructions ever was an issue, and it seems very strange to think that the existence of instructions would be enough of a trigger for a normal person to go and commit this type of crime, but what do I know!
But this isn't about normal people. It's about people who may be easily susceptible or mentally impacted, who are now provided with direct, personalised instructions on how to do it, how to get away with it and how to feel comfortable doing it.
This is a step far beyond anything we've ever seen in human history before, and I fail to see Google's behaviour as being anything less than appropriate.
People that want to sexually assault are not limited by lack of instructions.
Do you seriously think anyone wanted to sexually assault someone but was flummoxed on how to do it?
This is absurd.
so dramatic
"This unstable foundation I built my home upon is now breaking!"
just switch LLM provider?