• NocturnalEngineer@lemmy.world · 4 minutes ago

      I don’t hate all AI, it certainly has its uses in selected applications when used correctly…

      What I hate is the massive push from big tech to force it into every possible crevice regardless of suitability, the sheer amount of AI slop it’s generating, the social media manipulation spinning it as a positive, the massive invasion of privacy they demand to use their services, the blatant copyright infringement underpinning it all, and the vast amounts of energy & resources it consumes.

      People forget LLMs are just statistical models. They have no factual understanding of what they’re producing. So why should we be allowing them in an educational context?

  • JeremyHuntQW12@lemmy.world · 2 hours ago

    In terms of grade school, essays and projects were of marginal or nil educational value, and they won’t be missed.

    Until the last 20 years, 100% of the grade for medicine was by exams.

  • Dasus@lemmy.world · 5 hours ago

    Well that disqualifies 95% of the doctors I’ve had the pleasure of being the patient of in Finland.

    It’s not just LLMs they’re addicted to, it’s bureaucracy.

  • McDropout@lemmy.world · 7 hours ago

    It’s funny how everyone is against students using AI to get summaries of texts, PDFs, etc., which I totally get.

    But during my time in med school, I never got my exam papers back (ever!). The exam was a test where I had to prove I had enough knowledge, but an exam should also be allowed to show me where my weaknesses are so I could work on them. But no, we never got our papers back. And this extends beyond med school: exams like the USMLE are long and tiring, and at the end of the day we just want a pass, another hurdle to jump over.

    We criticize students a lot (rightfully so), but we don’t criticize the system, where students only study because there is an exam, not because they are particularly interested in the topic at hand.

    A lot of topics in medicine that I found interesting got dropped because I had to sit for other examinations.

    • lightsblinken@lemmy.world · 5 hours ago

      Because doing that enables pulling together 100% correct answer sets, which leads to cheating? Having an exam review where you get to see the answers but not keep the paper might be one way to do this?

  • Obinice@lemmy.world · 4 hours ago

    We weren’t verifying things with our own eyes before AI came along either. We were reading Wikipedia, textbooks, journals, attending lectures, etc., and accepting what we were told as fact (through the lens of critical thinking, and checking it against other hopefully-true facts as best we can, etc. etc.).

    I’m a Relaxed Empiricist, I suppose :P Bill Bailey knew what he was talking about.

    • drspawndisaster@sh.itjust.works · 5 minutes ago

      All of those have (more or less) strict rules imposed on them to ensure the end recipient is getting reliable information; in the case of journals, that includes being able to follow information back to the actual methodology and the data that came out of it.

      Generative AI has the express intention of jumbling its training data to create something “new” that only has to sound right. A better comparison for AI would be typing a set of words into a search engine and picking the first few links that you see, not reading scientific journals.

  • jsomae@lemmy.ml · 7 hours ago

    Okay but I use AI with great concern for truth, evidence, and verification. In fact, I think it has sharpened my ability to double-check things.

    My philosophy: use AI in situations where a high error rate is tolerable, or where it’s easier to validate an answer than to posit one.

    There is a much better reason not to use AI – it weakens one’s ability to posit an answer to a query in the first place. It’s hard to think critically if you’re not thinking at all to begin with.

  • Eugene V. Debs' Ghost@lemmy.dbzer0.com · 6 hours ago

    My hot take on students graduating college using AI is this: if a subject can be passed using ChatGPT, then it’s a trash subject. If a whole course can be passed using ChatGPT, then it’s a trash course.

    It’s not that difficult to put together a course that cannot be completed using AI. All you need to do is give a sh!t about the subject you’re teaching. What if the teacher, instead of assignments, had everyone sit down in a room at the end of the semester and put together the essay on the spot, based on what they’d learned so far? No phones, no internet, just paper, pencil, and you. Those using ChatGPT would never pass that course.

    As damaging as AI can be, I think it also exposes a lot of systemic issues with education. Students feeling the need to complete assignments using AI could do so for a number of reasons:

    • students feel like the task is pointless busywork, in which case a) they are correct, or b) the teacher did not properly explain the task’s benefit to them.

    • students just aren’t interested in learning, either because a) the subject is pointless filler (I’ve been there before), or b) the course is badly designed, to the point where even a rote algorithm can complete it, or c) said students shouldn’t be in college in the first place.

    Higher education should be a place of learning for those who want to further their knowledge, profession, and so on. However, right now college is treated as a mandatory rite of passage into the world of work. It doesn’t matter how meaningless the course or how little you’ve actually learned; for many people, having a degree is absolutely necessary to find a job. I think that’s bullcrap.

    If you don’t want students graduating with ChatGPT, then design your courses properly, cut the filler from the curriculum, and make sure the only people enrolled are those actually interested in what is being taught.

    • Oniononon@sopuli.xyz · 50 minutes ago

      You get out of a course what you put into it. Throughout my degrees I’ve seen people from the same course either climb the career ladder to great heights or fail a job interview and end up working a McJob.

      No matter the course, there will always be some students who will find ingenious ways to waste it.

    • BigPotato@lemmy.world · 5 hours ago

      Your ‘design courses properly’ argument loses all steam when you realize there has to be an intro-level course for everything. Show me math that a computer can’t do but a human can. Show me a famous poem that doesn’t have pages of literary critique written about it. “Oh, if your course involves Shakespeare it’s obviously trash.”

      The “AI” is trained on human writing, so of course it can find a C-average answer to an intro-level question. A fucking degree doesn’t need to be based on cutting-edge research - you need a standard to grade against anyway. You don’t know things until you learn them, and not everyone learns the same things at the same time. Of course an AI trained on all written works within… the Internet is going to be able to pass an intro-level course. Or do we just start students with a capstone in theoretical physics?

      • jmf@lemm.ee · 4 hours ago

        AI is not going to change these courses at all. These intro courses have always had all their answers all over the internet, long before AI showed up; at least at my university they did. If students want to cheat themselves out of those classes, they could before AI and will continue to do so after. There will always be students willing to use those easier intro courses to better themselves.

        • Eugene V. Debs' Ghost@lemmy.dbzer0.com · 2 hours ago

          These intro courses have always had all their answers all over the internet, long before AI showed up; at least at my university they did.

          I took a political science class in 2018 that had questions the professor wrote in 2010.

          And he often asked for the questions to be answered before we covered the material in class. So sometimes I’d go, “What the fuck is he referencing? This wasn’t covered. It’s not in my notes.”

          And then I’d just look up the question and find that someone had already posted the answers back in 2014.

    • andros_rex@lemmy.world · 5 hours ago

      The problem is that professors and teachers are being forced to dumb down material. The university gets money from students attending, and you can’t fail them all. It goes along with that “college is mandatory” aspect.

      It’s even worse at the high school level. They put students who weren’t capable of doing freshman algebra in my advanced physics class. I had to reorient the entire class around “conceptual/project-based learning” because it was clearly my fault when they failed my tests. (And they couldn’t be bothered to turn in the projects either.)

      To fail a student, I had to have the parents sign a contract agreeing to let them fail.

      • Eugene V. Debs' Ghost@lemmy.dbzer0.com · 2 hours ago

        Yes, if people aren’t interested in the class, or if the schooling system fails the teacher or the student, they’re going to fail the class.

        That’s not the fault of new “AI” tools; that’s the fault of (in America) decades of underfunding the education system and saying it’s good to be ignorant.

        I’m sorry you’ve had a hard time as a teacher. I’m sure you’re passionate and interested in your subject. A good math teacher really explores the concepts beyond “this is using exponents with fractions” and dives into the topic.

        I do say this as someone who had awful math teachers, and as a dyscalculic person. They made a subject I already had a hard time understanding boring and uninteresting.

  • SoftestSapphic@lemmy.world · 13 hours ago

    The moment we change school to be about learning, instead of making it the requirement for employment, we will see students prioritize learning over “just getting through it to get the degree.”

    • TFO Winder@lemmy.ml · 12 hours ago

      Well, in the case of medical practitioners, it would be stupid to let someone practice without a proper degree.

      Capitalism is ruining schools: people now use them as a qualification requirement rather than as centers of learning and skill development.

      • medgremlin@midwest.social · 11 hours ago

        As a medical student, I can unfortunately report that some of my classmates use ChatGPT to generate summaries of things instead of reading the material directly. I get into arguments with those people whenever I see them.

        • Bio bronk@lemmy.world · 10 hours ago

          Generating summaries with context, truth grounding, and review is much better than just freeballing questions at it.

            • Bio bronk@lemmy.world · 7 hours ago

              Yeah, that’s why you give it examples of how to summarize. But I’m a machine learning engineer, so maybe it helps that I know how to use it as a tool.
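
              For example, a grounded few-shot summarization call might look something like this sketch (the local endpoint, model name, and example text are all illustrative assumptions, not any specific product’s setup):

              ```python
              # Sketch: few-shot, grounded summarization against any
              # OpenAI-compatible endpoint (a local server is assumed here).
              from openai import OpenAI

              client = OpenAI(base_url="http://localhost:8080/v1", api_key="unused")

              # One worked example showing the shape of summary we want back:
              EXAMPLE_TEXT = "Beta-blockers lower heart rate, reducing myocardial oxygen demand."
              EXAMPLE_SUMMARY = "- Mechanism: lower HR -> lower O2 demand (key exam point)"

              def summarize(source_text: str) -> str:
                  messages = [
                      {"role": "system",
                       "content": "Summarize ONLY from the provided text. "
                                  "Omit anything not present in it; flag uncertainty."},
                      # The worked example teaches the model the desired format:
                      {"role": "user", "content": f"Summarize:\n{EXAMPLE_TEXT}"},
                      {"role": "assistant", "content": EXAMPLE_SUMMARY},
                      # The actual request, grounded in the source text:
                      {"role": "user", "content": f"Summarize:\n{source_text}"},
                  ]
                  resp = client.chat.completions.create(model="local-model", messages=messages)
                  return resp.choices[0].message.content
              ```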

              • TFO Winder@lemmy.ml · 1 hour ago

                Off topic, since you mentioned you are an ML engineer:

                How hard is it to train a GPT at home with limited resources?

                For example, I have a custom use case and limited data. I am a software developer proficient in Python, but my experience is in REST frameworks and web development.

                It would be great if you could guide me on training at a small scale, locally.

                Any guides or resources would be really helpful.

                I am basically planning hobby projects where I can train on my own data, such as my chats with others, and then build functions on top. I own a small business and we take a lot of orders on WhatsApp - around 100 active chats per month, with each chat having 50-500 messages. It might be small data for an LLM, but I want to explore the capabilities.

                I saw there are many approaches, like fine-tuning, one-shot models, etc., but I didn’t find a good resource that actually explains how to do things.

              • medgremlin@midwest.social · 7 hours ago

                It doesn’t know which things are the key points that make or break a diagnosis and which are just ancillary information. There’s no way for it to know unless you already know and tell it, at which point, why bother?

                • Bio bronk@lemmy.world · 5 hours ago

                  You can tell it, because what you’re learning has already been learned; you are not the first person to learn it. Just show it examples from previous texts, or tell it what should be important based on how your professor tests you.

                  These are not hard things to do. It’s autocomplete; show it how to teach you.

  • TankovayaDiviziya@lemmy.world · 10 hours ago

    This reasoning applies to everything. For example, the tariff rates the Trump admin imposed on various countries and territories were very likely based on a response from ChatGPT.

  • MystikIncarnate@lemmy.ca · 11 hours ago

    I’ve said it before and I’ll say it again: the only thing AI can, or should, be used for in the current era is templating… I suppose things that don’t require truth or accuracy are fine too, but yeah.

    You can use AI to build the framework of an article, report, story, publication, assignment, etc., to get some words on paper to start from. Every fact, declaration, or reference needs to be treated as false information unless otherwise proven, and most of the work will need to be rewritten. It’s there to provide, more or less, a structure to start from, and you do the rest.

    When I did essays and the like in school, I didn’t have AI to lean on, and the hardest part of doing any essay was… How the fuck do I start this thing? I knew what I wanted to say, I knew how I wanted to say it, but the initial declarations and wording to “break the ice”, so to speak, always gave me issues.

    It’s shit like that where AI can help.

    Take everything AI gives you with a gigantic asterisk: any and all information is liable to be false. Do your own research.

    Given how fast things are moving in science, technology, medicine, etc., and how that’s transforming how we work, what you know now matters less than what you can figure out. That’s what the youth need to be taught: how to figure that shit out for themselves, do the research, and verify the findings. Once you know how to do that, you’ll be able to adapt to almost any job you can comprehend at a high level; it’s just a matter of time, patience, research, and learning. That said, some occupations have little to no margin for error, and there my thought process inverts: train long and hard before you start doing the job… stuff like doctors, who can literally kill patients if they don’t know what they don’t know, or nuclear power plant techs. Stuff like that.

    • GoofSchmoofer@lemmy.world · 10 hours ago

      When I did essays and the like in school, I didn’t have AI to lean on, and the hardest part of doing any essay was… How the fuck do I start this thing?

      I think that this is a big part of education and learning, though. When you have to stare at a blank screen (or paper) and wonder “How the fuck do I start?”, when you have to brainstorm, write shit down 50 times, edit, delete, start over - I think that process alone makes you appreciate good writing and how difficult it can be.

      My opinion is that when you skip that step you skip a big part of the creative process.

      • Retrograde@lemmy.world · 10 hours ago

        It’s arguably the biggest part of the creative process, even - the foundational structure, that is.

      • MystikIncarnate@lemmy.ca · 10 hours ago

        That’s a fair argument. I don’t refute it.

        I only wish I’d had some coaching when it was my turn, to help me through that. I figured it out eventually, but still. I wish.

      • 𞋴𝛂𝛋𝛆@lemmy.world · 7 hours ago

        Was the best part of agrarian subsistence turning the earth by hand? Should we return to that? A person learns more, and is more productive, if they can talk out an issue. Having someone to bounce ideas off of is a good thing, and asking someone to do it for you has always been a thing. Individualized learning has long been the secret of academic success for the children of the super rich: just pay a professor to tutor the individual child. AI is the democratization of that advantage. A person can explain what they do not know and get a direct answer. Even with a small model that I know is often wrong, forming the questions in conversation often leads me to correct answers and to what I do not know. It is far faster and more efficient than anything I have experienced elsewhere in life.

        It takes time to learn how to use the tool. I’m sure there were lots of people making stupid patterns with a plow at first too when it was new.

        The creative process is about the results it produces, not how long one spent in frustration. Gatekeeping because of the time you wasted is Luddism or plain sadism.

        Use open-weights models running on enthusiast-level hardware you control. Inference providers are junk, and they are the source of most of the problems with ignorant people on both sides of the issue. Use llama.cpp and a 70B or larger quantized model with emacs and gptel. Then you are free, as in a citizen in a democracy with autonomy.
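
        A minimal sketch of that setup (the model path, port, and names are illustrative assumptions; llama-server is llama.cpp’s bundled OpenAI-compatible server, and gptel can be pointed at the same endpoint):

        ```python
        # Sketch: query a local quantized model served by llama.cpp.
        # Start the server first, e.g. (path and port are illustrative):
        #   llama-server -m ./models/llama-3.3-70B-instruct.Q4_K_M.gguf --port 8080
        # gptel in emacs can then use http://localhost:8080 as its backend.
        from openai import OpenAI

        client = OpenAI(base_url="http://localhost:8080/v1", api_key="unused")

        resp = client.chat.completions.create(
            model="local",  # llama.cpp serves the loaded model; this name is not checked
            messages=[{"role": "user", "content": "Outline org-mode capture templates."}],
        )
        print(resp.choices[0].message.content)
        ```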

        • GoofSchmoofer@lemmy.world · 7 hours ago

          You’re right: giving people the option to bounce questions off others, or off an AI, can be helpful. But I don’t think that is the same as asking someone (or something) to do the work for you and then editing it.

          The creative process is about the results it produces, not how long one spent in frustration

          This I disagree on. A process is not a result. You get a result from the process, and sometimes it’s what you want, but often it isn’t. This is especially true for beginners. To get the results you want from a process, you have to work through all parts of it, including the frustrating ones. Getting through the frustrating parts makes you a better creator, and I would argue it makes the final result more satisfying, because you worked hard to get it right.

          • 𞋴𝛂𝛋𝛆@lemmy.world · 6 hours ago

            I think this is where fundamental differences in functional thought come into play. I abstract on a level where I can dwell on a subject for a few weeks and be productive; beyond that, I get bored and lose interest to monotony. I need to spend that time unfettered. Primary school was a poor fit for me because I have excellent comprehension and self-awareness. I tend to get hung up on very specific things that I need answered right away, and I need that connected flow to stay unbroken as much as possible, so I find it deeply frustrating to conglomerate information. I don’t memorize anything; I need information to be intuitively grounded. I strongly believe that if a person is unable to explain a subject intuitively, they do not understand what they are talking about. Information without this intuitive connection has no long-term value, because it cannot be retained outside of constant use unless a person has total recall memory. I do not have such a gift, so I do not care to pretend otherwise.

            If I am in a class where I do not make the needed intuitive connection, no new information is useful to me, so having an entity that can get me past that challenge immediately is a priceless advantage for someone like me. I find no value in repetition; I only find value in application and across broad connecting spaces. I know that many people are very different in this respect, but also that I am not special or unique: my life and learning experience are shared by a significant and relevant part of the population. It is okay to be different. Every stereotype and simplification hurts someone, and the only way to avoid hurting people as much as possible is to be liberal and withhold judgment in all possible cases. AI is a tool. Like any tool, it can be beneficial when used well and harmful in other contexts.

            For instance, a base model is pretty bad at telling me what to do in emacs, but it is good at using a database to parse the help documentation and show me relevant information. When I am lonely from involuntary social isolation due to physical disability, I can spin up someone to talk to. When I am frustrated by interactions with other people, I can simulate them or discuss how I feel in depth. When I have some random idea or question, I can talk about it right away. With emacs and org-mode, I can turn that into markdown-like notes with gptel. It can create detailed plans and hierarchical notes, and I can prompt within the tree to build out ideas and create and link documents. I don’t have to keep track of it all; I just ask the agent questions and it pulls up the relevant buffers. As they say, org-mode in emacs is like a second brain for planning and notes. With gptel, it becomes far more accessible to break through complexity, both within Linux/emacs and within whatever subject one is interested in pursuing. There is much to be said for having an entity that can understand what the individual’s needs are and address them directly. I respect it if you need a little bit of masochism to stay engaged; I like my share as a hardcore cyclist, no judgement. However, that learning experience is not universal to everyone. I quickly lose interest and motivation in that circumstance. I expect to understand the subject as it is explained by someone who truly understands what they are talking about. I’m really good at that kind of focused comprehension, and very sensitive to poor-quality educators who do not know the information they are paid to share.

    • Doctor_Satan@lemm.ee · 7 hours ago

      There’s an application where I think LLMs would be great, because accuracy doesn’t matter: video games. Take a game like Cyberpunk 2077, and have all the NPCs’ speech and interactions run on various fine-tuned LLMs, with different LoRA-based restrictions depending on character type. Random gang members would have a lot of latitude to talk shit, start fights, commit low-level crimes, etc., without getting repetitive. But for a major character like Judy, the model would be a little more strictly controlled: she would know to go in a certain direction story-wise, but the variables for getting from A to B would be much more open.

      This would eliminate the very limited scripted conversation options which don’t seem to have much effect on the story. It could also give NPCs their own motivations with actual goals, and they could even keep dynamically creating side quests and mini-missions for you. It would make the city seem a lot more “alive”, rather than people just milling about aimlessly, with bad guys spawning in preprogrammed places at predictable times. It would offer nearly infinite replayability.

      I know nothing about programming or game production, but I feel like this would be a legit use of AI. Though I’m sure it would take massive amounts of computing power, just based on my limited knowledge of how LLMs work.
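
      To make the idea concrete, the core loop might reduce to something like this toy sketch (all names, prompts, and the local endpoint are hypothetical; a real engine would swap fine-tuned/LoRA weights per character type rather than relying on prompt-level restrictions alone):

      ```python
      # Toy sketch: per-NPC constraints expressed as different system prompts.
      # Everything here is a placeholder, not an actual game integration.
      from dataclasses import dataclass
      from openai import OpenAI

      client = OpenAI(base_url="http://localhost:8080/v1", api_key="unused")

      @dataclass
      class NPC:
          name: str
          system_prompt: str  # encodes how tightly the character is scripted

          def reply(self, player_line: str) -> str:
              resp = client.chat.completions.create(
                  model="npc-model",  # hypothetical fine-tuned model
                  messages=[
                      {"role": "system", "content": self.system_prompt},
                      {"role": "user", "content": player_line},
                  ],
              )
              return resp.choices[0].message.content

      # A throwaway gang member gets wide latitude...
      ganger = NPC("Maelstrom Thug",
                   "You are a volatile gang member. Improvise freely.")
      # ...while a major character is pinned to story beats.
      judy = NPC("Judy",
                 "You are Judy. Steer every conversation toward the current "
                 "quest objective and never contradict established plot.")

      print(ganger.reply("Got a problem, choom?"))
      ```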

  • TheDoozer@lemmy.world · 7 hours ago

    A good use I’ve seen for AI (particularly ChatGPT) is employee reviews and awards (military). A lot of my coworkers (and subordinates) have used it, and it’s generally a good way to fluff up the wording for people who don’t write fluffy things for a living (we work on helicopters; our writing is very technical, specific, and generally follows a pre-established template).

    I prefer reading the specifics and can fill in the fluff myself, but higher-ups tend to want “how it benefitted the service” and the terminology from the rubric worked in.

    I don’t use it, because I’m good at writing that stuff - not because it’s my job, but because I’ve always been into writing. I don’t expect every mechanic to be the same, though, so having things like ChatGPT can make an otherwise onerous (albeit necessary) task more palatable.