And the quality of the AI output sucks. I was recently looking up the positive sign convention for yaw, pitch, and roll in aircraft. For az and yaw I got reasonable results from the AI, but when I looked at pitch and el, every result was about elevator pitches. Even when I spelled out “elevation” it insisted on elevator pitches. I scroll past the AI results as a matter of principle, but I usually glance at them so I have something specific to complain about when people ask why I am so virulently anti-AI.
The other day I tried to have it help me with a programming task on a personal project. I am an experienced programmer, but I only “get by” in Python (typically by looking up the documentation for the standard library). I thought, “OK. This is it. I will ask Llama 3.3 and GPT-4 for help.”
That shit literally set me back a weekend. It gave me such bad approaches and answers (bad in ways I could recognize, given the aforementioned programming experience and comp sci degree) that I got confused about writing Python. Had I just done what I usually do, which is look up the documentation and use my brain, I would have finished my weekend task a whole weekend sooner.
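For anyone who landed here with the same question the AI kept botching: the standard aerospace convention uses right-handed body axes (x forward, y out the right wing, z down) with an intrinsic yaw-pitch-roll (z-y′-x″) rotation sequence, so positive yaw is nose right, positive pitch is nose up, and positive roll is right wing down. A minimal sketch of that convention (the function name is just illustrative):

```python
import numpy as np

def dcm_body_to_ned(yaw, pitch, roll):
    """Direction cosine matrix for the aerospace yaw-pitch-roll (z-y'-x'')
    sequence, mapping body-frame vectors into NED (north-east-down).
    Angles in radians; positive yaw = nose right, positive pitch = nose up,
    positive roll = right wing down."""
    cy, sy = np.cos(yaw), np.sin(yaw)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cr, sr = np.cos(roll), np.sin(roll)
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])   # yaw about z
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])   # pitch about y
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])   # roll about x
    return Rz @ Ry @ Rx

# Sanity check: 10 degrees of positive pitch should tilt the nose (body +x)
# upward, i.e. give it a negative z (down) component in NED.
nose = dcm_body_to_ned(0.0, np.radians(10.0), 0.0) @ np.array([1.0, 0.0, 0.0])
```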
It scares me to think what people are doing to themselves by relying on this, especially if they’re novices.
Same here. There’s a lot of denial going on, but LLMs are not good for anything that requires factual information, and they likely never will be, on account of just being statistical models of language. Summarizing long text where correctness isn’t an issue is really one of the only places where I still think they are good.
Search? Not if you want anything factual with citations.
Code? Fuck no. They constantly produce poor-quality code that may depend on non-existent libraries or functionality. More time is spent debugging than writing, and it leaves the dev with a poor understanding of what the code actually does and how to optimize or extend it.
Generating literary smut? Well, it’s not going to do as good a job as a person who can create something completely novel, but it can be passable without much likely harm to authors (I’d classify it as a tier below erotic fan fiction).
We’re going to be entering a golden age of hacks in the next 5 years, I’m calling it now. All this copy-pasted bad ChatGPT code is going to be used in ways that generate security holes the likes of which we’ve never seen before.
AI is useful for basic, mundane tasks and that’s about it. Trying to force it to be some sort of Uber search engine is such a bad idea.
I recently started as a graphic designer despite knowing absolutely nothing about it, so I am constantly searching how to do stuff in the Adobe suite at work. Half the time Google’s AI can’t even keep “Cmd” and “Ctrl” straight, telling me to use “Cmd+Shift+H” on Windows or “Ctrl+Shift+H” on Mac. I don’t even know how it botches that, but it does it about 25% of the time.
Yeah, that’s a bad example of what to use AI for, at least right now. You’re going to get bad results with that question.
It’s good for things, if you pay.
I don’t want to ask AI. Google automatically gives me AI search results that are piss poor. Those useless results still use energy to generate.
And these results are taken at face value by a shocking number of people. I’ve gotten into niche academic arguments where someone just copy-pasted the AI’s completely hallucinated response as “evidence.”
I experimented with using AI to generate basic quizzes for students on concepts like atomic theory or conservation of energy, but maybe 2 out of 20 questions it came up with were in any way accurate or useful. Even when it’s not making shit up entirely, the information is so shallow as to be useless.