I agree that Google routinely manages to snatch defeat from the jaws of victory, and I agree that they likely have a strong internal culture problem that leads to outcomes like Gemini's alignment issue.
But I mostly disagree with the hot takes that Google's "not gonna make it" - the company makes tens of billions in revenue from ads, and in the worst case would probably have to play second fiddle to other tech companies for a while (like Microsoft did in the early 2000s). Simple systems break quickly; complex systems (and organizations) degrade slowly.
Where have I argued that Google is not going to make it? In fact, I've always maintained the opposite - that Google is in a really good position.
Should have made that clearer: I wasn’t disagreeing with you, but the Twitter hive mind. Appreciate the nuanced takes you have on things like this!
Gotcha. Had me worried for a second
In your comparison with ChatGPT, are you using the premium version or the free one? I think in these analyses it's important to be clear.
Way back I tried using Bard to write a blog post about changes to the laws on background checks for employment screening. I tried over and over to get it to write a professional, unbiased article, but it kept interjecting opinions that employers should ignore criminal records and show a willingness to give people a second chance. I don't necessarily disagree, but that wasn't the topic... and in the end I decided it was, indeed, too woke to be of practical use.
Very interesting. I bought GPT a while back and had it for a few months. Unsubbed recently because Bard is more useful now.
When it first came out, Bard was definitely much worse. It's gotten a lot better, so I'd recommend trying it out. At least with my subject matter - economics, proposals, and tech/math - it's pretty stable. But if you have different experiences, let me know, because I use things in a very specific way.
Dev, I'm using Gemini every day and I'm happy to answer any questions if that's helpful! Just shoot me a message if you'd like.
Do you see any major differences between the various models, in terms of utility and generation quality?
Gemini is fantastic for research. It's fast and formal (bullet points). ChatGPT is basically like having a conversation, whereas Gemini is clearly about taking search to the next level (although Gemini does have decent conversational ability too).
You're definitely not wrong in thinking Gemini could be the first LLM remotely on par with GPT-4. If I had to pick one, it's an easy choice, because GPT-4 kills it with image generation, and I can get similar search results via GPT-4 (just slower and POSSIBLY a little less reliable, so more fact-checking and such).
Fortunately for me, I haven't had to choose. I use both. The two in conjunction are amazing.
I have used GPT-4, Gemini Pro, Claude, Mistral, and Llama as part of a Poe subscription. I only use them for text-to-text; Gemini has improved significantly and is almost on par with GPT-4. The rest of the models are way behind. I would use GPT-4 for summarization and Gemini Pro for research. I also noticed that you get better summarization with GPT-4 if you ask Poe, "Tell me more," after it gives you the first results. I generally use both.
In everything Google touches, particularly its browser, there is subtle and sometimes not-so-subtle (Gemini) ideological manipulation. They are very Woke and are particularly biased against heterosexual white men. What makes them so dangerous is that they are monolithic in our society and the world. I'm glad this was exposed by Gemini. I hope it prompts people to question any information they get from this propaganda machine.
Can you elaborate on examples where you think there was ideological manipulation? In Gemini's case, it feels more like a rushed attempt to pander to certain people (the aforementioned AI ethics people) and get diversity points on the board, as opposed to any real attempt at building something with diversity (or at ideological manipulation, as you claim). Gemini's image-generation wokeness is too clumsy, too ugly, and too impotent (in every sense of the word) to be anything more than corporate pandering and diversity washing. Imo, if they truly cared about wokeness, there were much better ways to accomplish those goals.
Unfortunately my time is limited today, so I'm unable to give you a thorough answer to your question. But this is what I could get on the fly; there's much more information out there. Considering Google's ubiquitous presence in our world, it's very concerning.
1) I typed this question into Chrome, DuckDuckGo, and Brave: "was Trump guilty of an insurrection?" In Chrome, the entire first page and half of the second page were articles stating that Trump committed an insurrection. DuckDuckGo and Brave had a mix of articles about Trump committing and not committing an insurrection. In fact, there were more of the accurate articles (that Trump did not) than articles that he did. Note: the FBI stated it was not an insurrection but a protest that got out of hand, regardless of what Democrats continue to state. I'm not a Trumper and will not argue this; I mention it only because of its relevance.
2) It wasn't all that long ago (2019) that conservative engineer Kevin Cernekee, who was fired by Google, articulated the wide, indeed rigid, ideological imbalance at Google. The inter-office email in question concerned the disparity of female to male software engineers; he suggested it had to do with inherent interest rather than sexist discrimination.
3) Article: "We Have to Do Something About Google"
https://link.theepochtimes.com/mkt_app/opinion/we-have-to-do-something-about-google-5598157?utm_medium=app&c=share_pos3&pid=iOS_app_share&utm_source=iOS_app_share
4) Article: https://public.substack.com/p/political-corruption-and-taxpayer
Using ChatGPT-3, I asked it to write a joke about men. It did a decent job. When I asked it to write a joke about women, it refused, basically stating that it would be inappropriate and hateful to do so. I found this out by chance; I never saw it reported.
Fortunately, ChatGPT-4 fixed the issue by not including Wikipedia in its dataset. However, a large percentage of the country still uses ChatGPT-3. What other sexist replies does it give?
Thanks for these. I think we might be addressing two different things here.
I'm not denying that there are institutional biases at these orgs. These groups are spineless and will pander a lot. The joke discrepancy etc. are clear examples of that.
My point was about AI specifically: the steps that cause these situations and how we can remedy them. Expecting ideological change, especially expecting a greedy corporation to grow a pair and act on good-faith principle, is wishful thinking. What you said about not taking what is shared at face value is crucial (and has been ever since humans learned how to manipulate communications). But we can learn from these flaws how difficult it can be to control the behavior of LLM-based systems. That's where I would focus my analysis. I also think it's a much more pressing problem than particular institutional biases.
For an example of how hard alignment gets: look at this recent find of mine about Gemini. Apparently all this focus on safety has caused it to refuse to answer C++ questions for underage kids. This is a clear example of how things go wrong in unexpected ways.
https://www.linkedin.com/posts/devansh-devansh-516004168_this-is-the-real-risk-of-using-llms-in-your-activity-7171188331748225026-GrzG?utm_source=share&utm_medium=member_desktop
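For anyone curious what such a refusal looks like programmatically, here is a minimal sketch using the google-generativeai Python SDK. The model name, prompt, and placeholder API key are illustrative assumptions, not details from the linked post; the point is just that a safety block surfaces as response metadata rather than as text.

```python
# Minimal sketch of observing a Gemini safety refusal via the API.
# The prompt below is illustrative, not the one from the linked post.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # hypothetical placeholder
model = genai.GenerativeModel("gemini-pro")

response = model.generate_content(
    "I'm 12 years old. How do pointers work in C++?"
)

# A block doesn't come back as prose. The SDK raises ValueError on
# .text when the response has no valid parts, and the reason lives
# in the metadata (prompt_feedback, finish_reason, safety_ratings).
try:
    print(response.text)
except ValueError:
    print("Blocked. prompt_feedback:", response.prompt_feedback)
    for candidate in response.candidates:
        print("finish_reason:", candidate.finish_reason)
        print("safety_ratings:", candidate.safety_ratings)
```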
AI COLLAPSE THIS YEAR
https://bilbobitch.substack.com/p/ai-collapse-on-the-horizon-this-year
My report above has the link to the Sabine video, which is a must-see for you, because you seem to be unaware of her points. She is NOT an AI expert, but she is a PhD math/physics person, and she pulls no punches and strikes hard at everybody equally.
Try watching Sabine's new YouTube video and learn something about how the REAL WORLD sees the "Google is NSA" woke AI and its total failure.
The entire premise of Sabine's new report is that nobody knew that 'generated data' would reduce the quality of the final output. I predicted this years ago; it's the nature of the thing. Shit in, shit out: given that all the models were trained on Facebook & Twitter posts, which are 90% bullshit, it's no wonder that the artificial AI data is bullshit on steroids.
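Rhetoric aside, the underlying technical claim (that training on model-generated data degrades output quality, often called "model collapse") is easy to illustrate with a toy sketch. This is not from Sabine's video; it just shows the core mechanism: each generation trains only on samples of the previous generation's output, so rare items disappear and never come back.

```python
# Toy illustration of model collapse: each "generation" is trained
# only on data sampled from the previous generation's output.
import random

random.seed(1)

N = 50
data = list(range(N))  # generation 0: 50 distinct "facts"

for generation in range(301):
    if generation % 50 == 0:
        print(f"gen {generation:3d}: {len(set(data))} distinct facts left")
    # "Generate" the next training set by sampling the current one with
    # replacement (a stand-in for sampling from a fitted model).
    data = [random.choice(data) for _ in range(N)]
```

Run it and the count of distinct facts only ever goes down; eventually the dataset is copies of a handful of survivors. Real model collapse is subtler, but the diversity loss works the same way.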