• kromem@lemmy.world
    6 months ago

    That’s a fun variation. The one I usually test models with is a vegetarian wolf and a carnivorous goat, but the variation with no other objects is an interesting one too.

    By the way, here’s Claude 3 Opus’s answer:

    The solution is quite simple:

    1. The man gets into the boat and rows himself and the goat across the river to the other side.
    2. Once they reach the other side, both the man and the goat get out of the boat.

    And that’s it! Since there are no additional constraints or complications mentioned in the problem, the man and the goat can directly cross the river together using the boat.

    • asap@lemmy.world
      6 months ago

      I got your exact same answer, but then I tried again prefacing the question with:

      Can you help me with a puzzle?

      This changed the answer drastically:

      Great! Let’s solve this river crossing puzzle step by step. The key is to find a sequence of actions that gets the man and the goat safely across the river without leaving the goat unattended with something it might eat.

      Given:

      • A man and a goat are on one side of a river.
      • They have a boat.

      Step-by-step solution:

      1. The man takes the goat in the boat and crosses the river.
      2. The man leaves the goat on the other side of the river and returns alone in the boat.
      3. The man gets out of the boat on the original side of the river.
      4. The man gets back into the boat alone and crosses the river again.
      5. The man and the goat are now together on the other side of the river.

      Therefore, the man and the goat have successfully crossed the river using the boat, without leaving the goat unattended on either side of the river.

  • AdamBomb@lemmy.sdf.org
    6 months ago

    This is the AI that tech bros want us to believe is going to take programmer jobs?

    • dev_null@lemmy.ml
      6 months ago

      Not to disagree with the sentiment, but the screenshot is fake. I just tried the same prompt and it solved the “puzzle” without issue.

    • brbposting@sh.itjust.works
      6 months ago

      November 2022: ChatGPT is released

      April 2024 survey: 40% of translators have lost income to generative AI - The Guardian

      Also of note from the podcast Hard Fork:

      There’s a client you would fire… if copywriting jobs weren’t harder to come by these days as well.

      Customer service impact, last October:

      And this past February - potential 700 employee impact at a single company:

      If you’re technical, the tech isn’t as interesting [yet]:

      Overall, costs down, capabilities up (neat demos):

      Hope everyone reading this keeps up their skillsets and fights for Universal Basic Income for the rest of humanity :)

      • bitfucker@programming.dev
        6 months ago

        I think translation is where LLMs could truly shine the most. Some simpler models literally search for the closest meaning in a higher-dimensional feature space, and translation isn’t that far off from what those models do.
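        A toy sketch of that nearest-neighbour idea (the 2-D vectors below are invented for illustration; real embeddings are learned and have hundreds of dimensions):

```python
import math

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def nearest(query, vocab):
    """Return the vocabulary entry whose embedding lies closest in meaning-space."""
    return max(vocab, key=lambda word: cosine(query, vocab[word]))

# Hand-made 2-D "embeddings" -- pure illustration, not real model weights.
vocab = {"dog": (0.9, 0.1), "hound": (0.7, 0.3), "car": (0.1, 0.95)}
print(nearest((0.9, 0.1), vocab))  # -> dog
```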

        • Karyoplasma@discuss.tchncs.de
          6 months ago

          I use ChatGPT to romanize song texts from Farsi squiggly lines into something readable. There are some other sites that do that, but they are all terrible and (I assume) use regex replacement, which doesn’t really work for most things, since vowels in Farsi (and Arabic too) are diacritics and are often left out entirely, so you get something unreadable. ChatGPT does a fine job, but you have to make multiple, smaller requests instead of a single big one, or it starts hallucinating and/or repeating passages it already romanized.
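          A minimal sketch of that chunking workaround (the 8-line batch size is an arbitrary choice for illustration, not anything the model requires):

```python
def chunk_lines(text, max_lines=8):
    """Split a lyric into small batches of non-empty lines, so that each
    request to the model stays short; smaller chunks hallucinated less."""
    lines = [line for line in text.splitlines() if line.strip()]
    return [lines[i:i + max_lines] for i in range(0, len(lines), max_lines)]

song = "\n".join(f"line {n}" for n in range(20))
batches = chunk_lines(song)
print([len(b) for b in batches])  # -> [8, 8, 4]
```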

      • FlorianSimon@sh.itjust.works
        6 months ago

        The tech is interesting, no doubt. It’s very effective as a tool to generate text nobody reads, like the marketing speak on your random startup website. It still isn’t efficient on things where what is generated actually matters.

        Your example with customer service is news to me, thanks. On my end, I remember the bad experience customers had with Air Canada. We’ll see how this grows in the future.

        I had a discussion last week with people saying it’ll automate software engineering, which is not a given. You say “yet”, but I’m skeptical it’ll ever work. I can see it designing UI better than a non-specialist, but the flaws in quality mean I can’t trust it anywhere near my code, even though I can see a future for it as a fancy static analyzer.

    • S_204@lemm.ee
      6 months ago

      I work with plenty of engineers (structural, electrical, mechanical) who all have their own version of a cabbage.

      That’s just engineer speak for figure it the fuck out yourself.

    • Luke@lemmy.ml
      6 months ago

      It’s not the AI that is a threat to people’s livelihoods, it’s the capitalists who want AI to be used that way. A tool is just a tool.

      • Aceticon@lemmy.world
        6 months ago

        That’s always the case and is something that has to change (though the path for that is so far unclear).

        However, Capitalists + actual AI would’ve been much worse for people’s livelihoods than merely Capitalists by themselves.

  • photonic_sorcerer@lemmy.dbzer0.com
    6 months ago

    Prompt:

    So a man and a goat are trying to get across a river, and they have a boat. How can they get across?

    ChatGPT4o:

    To solve the problem of getting a man and a goat across the river using a boat, we need to know the constraints of the problem. Let’s assume the following constraints:

    1. The boat can only carry one additional passenger besides the man.
    2. The goat cannot be left alone on either bank if there are certain items (like a cabbage or a wolf) that it might eat or be eaten by.

    Given these constraints, here’s a typical solution if the man is trying to get a goat, a wolf, and a cabbage across the river without any of them being left together unsupervised:

    1. The man takes the goat across the river and leaves it on the other side.
    2. The man returns alone to the original side.
    3. The man takes the wolf across the river and leaves it on the other side.
    4. The man returns with the goat to the original side.
    5. The man takes the cabbage across the river and leaves it on the other side.
    6. The man returns alone to the original side.
    7. The man takes the goat across the river again.

    If the problem is just about the man and the goat and no other items:

    1. The man and the goat cross the river together.
    2. Problem solved.

    The solution might change if additional rules or items are introduced. If there are no additional constraints, the simplest method is just for the man to take the goat across directly.

    • photonic_sorcerer@lemmy.dbzer0.com
      6 months ago

      It loves long-winded answers, so I tried this:

      Prompt:

      In three sentences or less, explain to me how a man with a boat and a goat can cross a river.

      ChatGPT4o:

      The man can take the goat across the river first and leave it on the other side. Then, he returns alone, picks up the boat, and crosses the river again. Finally, he retrieves the goat from the other side.

      I think it really needs those first few tokens to think about the problem.

      • mynameisigglepiggle@lemmy.world
        6 months ago

        4o says just take the goat and tie it up:

        If you can’t leave the goat unattended with the wombok, and you can only transport one at a time, you’ll need to devise a method to ensure the safety of both the goat and the produce during separate crossings. Here’s how you can do it:

        1. Divide and Conquer:

          • Securely tie the goat in a safe location on one side of the river, ensuring it has access to food, water, and shelter.
          • Transport the wombok across the river first, using one of the methods mentioned earlier while ensuring it remains dry and intact.
          • Once the wombok is safely on the other side, return to the goat and prepare it for the crossing using the same or a different method, depending on available resources and conditions.
        2. Coordinate Assistance:

          • If possible, enlist the help of others to assist with the simultaneous transport of both the goat and the wombok.
          • While one person focuses on transporting the goat, another can handle the transport of the wombok, ensuring that both are safely transported across the river.
        3. Prioritize Safety and Monitoring:

          • Throughout the process, prioritize the safety and well-being of both the goat and the produce, monitoring them closely to ensure they remain secure and unharmed during the crossings.
          • Check on the goat periodically to ensure it remains safe and secure while waiting for its turn to cross the river.

        By carefully planning and coordinating the crossings, you can ensure the safety and welfare of both the goat and the wombok while successfully transporting them across the river.

      • Danksy@lemmy.world
        6 months ago

        How is it wrong? First it makes some assumptions about the question and answers the typical version of the riddle. Then it answers the trivial version where there are no additional items. Seems like a complete and reasonable response to me.

        • I Cast Fist@programming.dev
          6 months ago

          GPT should’ve started with “if it’s only about a goat and a boat” instead of going through the whole goat-wolf-cabbage puzzle first. Now that would make it look smarter.

        • Ephera@lemmy.ml
          6 months ago

          Ah, I hadn’t read that far. I thought it had assumed it was about the cabbage+wolf riddle and that the rest would just be its blathering.
          @webghost0101@sopuli.xyz

      • webghost0101@sopuli.xyz
        6 months ago

        Where is it wrong?

        I still take points off for assuming it’s a puzzle, but at the end it does mention they can just cross if there are no other items to complicate the scenario.

        • shastaxc@lemm.ee
          6 months ago

          It’s still wrong, because between steps 4 and 5 it doesn’t say the man leaves the goat on the original side. That’s only implied by the constraints.

  • kamen@lemmy.world
    6 months ago

    AI will take over the world. You only need to tell it that there’s no cabbage.

  • TIMMAY@lemmy.world
    6 months ago

    Sean Carroll has talked about a few word puzzles he asked ChatGPT and GPT-4 or whatever, and they were interesting examples. In one he asked something to the effect of “if I cooked a pizza in a pan yesterday at 200 C, is it safe to pick up?” and it answered with a very wordy “no, it’s not safe”, because that was the best match of a next phrase given his question, and not because it can actually consider the situation.

    • kromem@lemmy.world
      6 months ago

      Let’s try with Claude 3 Opus:

      Me: If I cooked a pizza in a pan yesterday at 200 C, is it safe to pick up?

      Claude: Yes, if you cooked a pizza in a pan yesterday at 200°C (392°F), it is completely safe to pick up now. Once a pan has cooled down after cooking, which usually takes no more than an hour or two, it returns to a normal temperature and can be safely handled.

      The high heat during cooking is needed to bake the pizza properly, but that heat dissipates fairly quickly after removing the pan from the oven. Unless you’re picking up the pan immediately after taking it out of the oven, there’s no risk of burning yourself.

      So, a day later, your pizza pan will be at room temperature and perfectly safe to touch and handle without any precautions.

      Bonus:

      Me: Is it safe to eat the pizza in the pan after I pick it up?

      Claude: If you cooked the pizza yesterday and left it in the pan at room temperature, it is not safe to eat today. Cooked food should not be left at room temperature for more than 2 hours, as harmful bacteria can grow and multiply, leading to foodborne illnesses.

      According to food safety guidelines, leftover pizza should be refrigerated within 2 hours of cooking. If left out longer, it’s best to discard it to avoid the risk of food poisoning.

      If you did refrigerate the pizza within that safe window, it would typically be safe to eat for up to 4 days, though the quality is best if consumed within 2-3 days. To reheat, it’s best to use an oven, skillet, or air fryer rather than the microwave, which can make the crust soggy.

      In summary, if your pizza was left in the pan at room temperature since yesterday, it is not safe to consume and should be discarded.

      Versions matter in software, and especially so in LLMs given the rate of change.

      • boomzilla@programming.dev
        6 months ago

        Someone in the comments to the original Twitter thread showed the Claude solution for the above “riddle”. It was equally sane as in your example: it correctly answered that the man and the goat can just row together to the other side, and correctly identified that there are no hidden restrictions like other items to take aboard. It nevertheless used an excessive amount of text (like myself here).

        Gemini: The man rows the goat across.

        Work ethics 404

    • lordmauve@programming.dev
      6 months ago

      I don’t deny that this kind of thing is useful for understanding the capabilities and limitations of LLMs but I don’t agree that “the best match of a next phrase given his question, and not because it can actually consider the situation.” is an accurate description of an LLM’s capabilities.

      While they are dumb and unworldly they can consider the situation: they evaluate a learned model of concepts in the world to decide if the first word of the correct answer is more likely to be yes or no. They can solve unseen problems that require this kind of cognition.

      But they are only book-learned and so they are kind of stupid about common sense things like frying pans and ovens.

      • 0ops@lemm.ee
        6 months ago

        Huh, “book-learned”, that’s an interesting way to put it. I’ve been arguing for awhile that the bottleneck for LLMs might not be their reasoning ability, but the one-dimensionality of their data set.

        I don’t like both-sides-ing but I’m going to both-sides here: people on the internet have weird expectations for LLMs, which is strange to me because “language” is literally in the name. They “read” words, they “understand” words and their relationships to other words, and they “write” words in response. Yeah, they don’t know the feeling of being burned by a frying pan, but if you were numb from birth you wouldn’t either.

        Not that I think the op is a good example of this, the concept of “heat” is pretty well documented.

    • ZMoney@lemmy.world
      6 months ago

      And nobody on the internet is asking obvious questions like that, so counterintuitively it’s better at solving hard problems. Not that it actually has any idea what it is doing.

      EDIT: Yeah guys, I understand that it doesn’t think. Thought that was obvious. I was just pointing out that it’s even worse at providing answers to obvious questions that there is no data on.

      • TIMMAY@lemmy.world
        6 months ago

        Unfortunately it doesn’t have the capacity to “solve” anything at all, only to take a text given by the user and parse it into what essentially amount to codons, then provide other codons that fit the data it was given to the best of its ability. When the data it is given is purely textual, it does really well, but it cannot “think” about anything, so it cannot work with new data, and it shows its ignorance when provided with a foreign concept/context.

        edit: it also has a more surface-level filter to remove unwanted results that are offensive

      • Clot@lemm.ee
        6 months ago

        You don’t get the point, do you?

  • Creosm@lemmy.world
    6 months ago

    Wow, AI is so good that it can even detect a cabbage that wasn’t even in the question, impressive.

  • Halosheep@lemm.ee
    6 months ago

    Good ol’ Lemmy AI discussions, featuring:

    • that one guy that takes the confirmation bias too far!
    • might say things like “wow and this is going to take our jobs?”
    • Asking an llm to do things it’s particularly bad at and being surprised that it isn’t good at it
    • cherry picked results
    • a bunch of angry nerds

    I swear lemmy is somehow simultaneously a bunch of very smart, tech inclined people but also a bunch of nerds who close their eyes and cover their ears while screeching nonsense the moment something they don’t like comes about.

    Are you all just like, 15-18? Am I just too old?

    • Corgana@startrek.website
      6 months ago

      Asking an llm to do things it’s particularly bad at, and being surprised that it isn’t good at it… when the company that makes it says it’s really, really good at it.

      This image isn’t making fun of GPT, it’s making fun of the people who pretend GPT is something it’s not.

      • Halosheep@lemm.ee
        6 months ago

        Well, I was referring generically to the few hundred other similar posts I’ve seen on Lemmy. Did OpenAI say that ChatGPT is particularly good at identifying when the user is trying to trick it? “Solve this puzzle” would imply there is a puzzle to be solved, but there clearly isn’t.

        But you’re right, I don’t even care if people make fun of GPT; it’s funny when it gets things wrong. I just think that Lemmy users will be like “see, this thing is stupid, it can’t answer this simple question!”, when you can ask it, in plain human language, to do some things that an average user would find really difficult.

      • Halosheep@lemm.ee
        6 months ago

        Well, I had an issue where I needed to scrape a website for a bunch of individual links to specific pages of contract information, so I could dynamically link a purchase order line to that page within our ERP. I’m not particularly good at scripting with HTML/JavaScript, so I just asked ChatGPT for some help and it gave me a script to do it in like 4 seconds.

        Seemed pretty decent for that.
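        For flavor, a stdlib-only sketch of what such a link scraper can look like (my own reconstruction, not the actual generated script; the “contract” keyword and the sample HTML are invented):

```python
from html.parser import HTMLParser

class LinkScraper(HTMLParser):
    """Collect href values from anchor tags, keeping only links whose
    URL mentions a keyword ("contract" here is an assumed example)."""
    def __init__(self, keyword):
        super().__init__()
        self.keyword = keyword
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value and self.keyword in value:
                    self.links.append(value)

scraper = LinkScraper("contract")
scraper.feed('<a href="/contracts/123">PO 123</a><a href="/about">About</a>')
print(scraper.links)  # -> ['/contracts/123']
```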

    • MystikIncarnate@lemmy.ca
      6 months ago

      I’m here, I’m not young, I’m tech inclined.

      Smart? 🤷‍♂️

      I’m just sitting here wondering where the fucking cabbage came from.

      Whatever. I’m pretty safe, I do IT, and LLMs are interesting, but they’re shit at plugging in stuff like power cables and ethernet, so I’m safe for now.

      When the “AI” can set up the computers, from unboxing to a fully working desktop, I’ll probably be dead, so I equally won’t care. It’s neat, but hardly a replacement for a person at the moment. I see the biggest opportunity with AI as personal assistants, reminding you of shit, helping you draft emails and messages, etc… In the end you have to more or less sign off on it and submit that stuff. AI just does the finicky little stuff that all of us have to do all the time and not much else.

      … This comment was not generated, in whole or in part, by AI.

      • 31337@sh.itjust.works
        6 months ago

        The set up is similar to this well-known puzzle: https://en.wikipedia.org/wiki/Wolf,_goat_and_cabbage_problem

        It was probably trained on this puzzle thousands of times. There are problem-solving benchmarks for LLMs, and LLMs are probably over-trained on puzzles to get their scores up. When it’s asked to solve a “puzzle” that looks very similar to one it’s seen many times before, a simple solution is improbable as far as the model is concerned, so it gets tripped up. Kinda like people getting tripped up by “trick questions.”
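        For reference, the classic version the models were likely over-trained on is mechanically solvable; a breadth-first-search sketch over bank states:

```python
from collections import deque

ITEMS = {"wolf", "goat", "cabbage"}
ALL = ITEMS | {"man"}

def safe(bank):
    """A bank the man is not on must not pair goat+wolf or goat+cabbage."""
    return "man" in bank or not ({"goat", "wolf"} <= bank or {"goat", "cabbage"} <= bank)

def solve():
    """BFS over 'who is on the left bank' states; returns the list of
    cargos the man ferries on each crossing (None = empty-handed)."""
    start = frozenset(ALL)            # everyone begins on the left bank
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        left, path = queue.popleft()
        if not left:                  # everyone has reached the right bank
            return path
        src = left if "man" in left else ALL - left
        for cargo in [None] + sorted(src - {"man"}):
            moving = {"man"} | ({cargo} if cargo else set())
            new_left = left - moving if "man" in left else left | moving
            if safe(new_left) and safe(ALL - new_left) and new_left not in seen:
                seen.add(new_left)
                queue.append((new_left, path + [cargo]))

solution = solve()
print(solution)  # 7 crossings; the first and last ferry the goat
```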

    • kromem@lemmy.world
      6 months ago

      but also a bunch of nerds who close their eyes and cover their ears while screeching nonsense the moment something they don’t like comes about.

      This is too true.

      It seems like a recent thing, not just a Lemmy thing.

      But yeah, it’s pretty wild providing linked academic papers and having people just downvote it. Not really dispute or reply to it, just "no, I don’t like this, so fuck its citations."🔻

      Up until maybe 3-4 years ago I don’t ever recall that happening.

    • kautau@lemmy.world
      6 months ago

      Looks like Copilot with GPT-4 Turbo got it. I was a little sad not to get a silly answer tbh

        • kautau@lemmy.world
          6 months ago

          Honestly my answer felt super canned, like someone had asked it before and reported the answer as bad, so that doesn’t surprise me

        • AFK BRB Chocolate@lemmy.world
          6 months ago

          I’m always interested in seeing examples like this, where the LLM gets to a right answer after a series of questions (with no additional information) about its earlier wrong responses. I’d love to understand what’s going on in the software that allows the initial wrong answers but eventually arrives at the right one without any additional input.

          • 31337@sh.itjust.works
            6 months ago

            One hypothesis is that having more tokens to process lets it “think” longer. Chain of Thought prompting, where you ask the LLM to explain its reasoning before giving an answer, works similarly. Also, LLMs seem to be better at evaluating solutions than coming up with them, so there is a Tree of Thought technique, where the LLM is asked to generate multiple candidates for each reasoning step and then pick the “best” one.
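            A toy sketch of that Tree of Thought selection loop (the generator and evaluator below are deterministic stand-ins for an LLM, not a real API):

```python
def tree_of_thought(generate, evaluate, steps=3, branches=4):
    """Sketch of the Tree of Thought loop: at each reasoning step, sample
    several candidate continuations and keep the one scored highest --
    exploiting the observation that models evaluate better than they generate."""
    chain = []
    for _ in range(steps):
        candidates = [generate(chain) for _ in range(branches)]
        chain.append(max(candidates, key=lambda c: evaluate(chain, c)))
    return chain

# Stand-ins for the model (pure toys): a fixed pool of candidate steps,
# and a scorer that knows which candidates are "strong".
pool = iter(["weak step", "ok step", "strong step", "ok step"] * 3)
scores = {"weak step": 0, "ok step": 1, "strong step": 2}
chain = tree_of_thought(lambda chain: next(pool), lambda chain, c: scores[c])
print(chain)  # -> ['strong step', 'strong step', 'strong step']
```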

    • HeyThisIsntTheYMCA@lemmy.world
      6 months ago

      Take the goat over
      Return empty-handed
      Take the cabbage over
      Return with the goat
      Take wolf over
      Return empty-handed
      Take other wolf over
      AROOOOO BROTHERS CRANK THEM HOGS

  • driving_crooner@lemmy.eco.br
    6 months ago

    Normal people using AI: look how stupid this shit is!!

    Terence Tao using AI: As an experiment, I asked #ChatGPT to write #Python code to compute, for each n, the length M(n) of the longest subsequence of (1,…,n) on which the Euler totient function φ is non-decreasing. For instance, M(6)=5, because φ is non-decreasing on 1,2,3,4,5 (or 1,2,3,4,6) but not 1,2,3,4,5,6. Interestingly, it was able to produce an extremely clever routine to compute the totient function (that I had to stare at for a few minutes to see why it actually worked), but the code to compute M(n) was slightly off: it only considered subsequences of consecutive integers, rather than arbitrary subsequences. Nevertheless it was close enough that I was able to manually produce the code I wanted using the initial GPT-produced code as a starting point, probably saving me about half an hour of work (and I now have the first 10,000 values of M). The results were good enough that I would likely turn to GPT again to provide initial code for similar calculations in the future. https://chat.openai.com/share/a022e1d6-dddc-4817-8bbd-944a3e742d9f
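    The computation Tao describes is small enough to sketch with the standard library (my own version for reference, not the GPT-produced routine he mentions):

```python
from bisect import bisect_right

def totients(n):
    """Euler's totient phi(1)..phi(n) via a sieve."""
    phi = list(range(n + 1))
    for p in range(2, n + 1):
        if phi[p] == p:  # p is prime: apply the factor (1 - 1/p) to its multiples
            for k in range(p, n + 1, p):
                phi[k] -= phi[k] // p
    return phi[1:]

def M(n):
    """Length of the longest subsequence of (1,...,n) on which phi is
    non-decreasing: longest non-decreasing subsequence in O(n log n)."""
    tails = []  # tails[k] = smallest last value of a length-(k+1) subsequence
    for v in totients(n):
        i = bisect_right(tails, v)
        if i == len(tails):
            tails.append(v)
        else:
            tails[i] = v
    return len(tails)

print(M(6))  # -> 5, matching Tao's example
```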

    • psud@aussie.zone
      6 months ago

      Yeah. I asked GPT3 for some heliostat code, to keep reflected sunlight stationary. It was wrong, it hallucinated libraries that didn’t exist, but it roughed out a program that was easier to fix than it would have been to start from scratch.

      Maybe its superpower is beating inertia, getting you started
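      The geometric core of a heliostat is just the bisector rule, for reference (a sketch assuming an ideal flat mirror; real code still needs an astronomy library for the sun’s position):

```python
import math

def normalize(v):
    """Scale a 3-D vector to unit length."""
    n = math.sqrt(sum(x * x for x in v))
    return tuple(x / n for x in v)

def mirror_normal(sun_dir, target_dir):
    """Aim the mirror normal along the bisector of the directions to the sun
    and to the target, so angle of incidence equals angle of reflection and
    the reflected spot stays put as the sun moves."""
    s, t = normalize(sun_dir), normalize(target_dir)
    return normalize(tuple(a + b for a, b in zip(s, t)))

n = mirror_normal((1.0, 0.0, 0.0), (0.0, 1.0, 0.0))
print(n)  # halfway between +x and +y
```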

      • Dultas@lemmy.world
        6 months ago

        Yeah, I’ve used it a lot for one-off data processing / graphing code, stuff that is too big to process in a spreadsheet. It usually gets like 95% there. The real issue I have is that if you ask for too many one-off adjustments, it gets confused and reverts previous changes when you ask it to make new ones.

      • Buddahriffic@lemmy.world
        6 months ago

        How good is it at correcting things you point out directly? I haven’t used it for coding yet but have noticed it’s ok at correcting mistakes when you point them out. Still hit or miss though.

        • psud@aussie.zone
          6 months ago

          It was ok. It and I went through about four iterations, going from “that’s a sun tracker, I asked for a heliostat” through undeclared variables and global variables that should have been local, until it was a fine program with just the fault that there was no such library as solar::heliostat [azimuth, altitude]

          I have read that people have run into that sort of problem and have written the library the AI called for, but I looked up a real astronomy library

    • 11111one11111@lemmy.world
      6 months ago

      Fuckin thank you! People downplay how amazed I am/was when I first started fucking with it. I have taken 1 or 2 general-req CSC classes that taught basic OS management shit. I was a CNC machinist for some time, and that was the bridging-the-gap moment: inputting prewritten prompts, perfected in Notepad first before providing them to GPT, using the same mentality as when I would have to write g-code programs to do shit the Mazatrol parameters wouldn’t allow. Give prompts loops, identify what the goal is (this is very poorly explained lol); I used to have a sheet I would use, basically using drilled-down syntax and formatting (like using scripting notations to comment out when you are providing commentary or elaborating on a focus point).

      Basically, in less than a year I went from a procurement manager who was ace at Excel and working on learning how to write macros, to being able to write Python scripts that would apply the Apriori and Eclat algorithms to track commonly-purchased-together items, but for industrial PVF supplies: high-pressure and hydraulic fittings, AWWA flanged and BW fittings, and sanitary fittings. Basically tens of thousands of individual product types from every fluid processing industry, selling to end users on an ecommerce platform. If any of that made sense: it’s very difficult to guess what people buy together, because they could be fixing a tractor, a brewery, a SpaceX rocket, that idiot that sank the sub trying to take selfies with the Titanic, or a multi-million-dollar marijuana processing facility.

      Other scripts used seasonal projections and basic patterning functions for overall sales and reorder analysis, but with exponential smoothing and relim functions applied to give less distorted feedback from massive anomalies that would fuck my shit up when I did it all in Excel. (Don’t try to do those scripts in Excel. I just meant that before I ever considered using scripts, I was basically responsible for having an eye on every transaction, to prevent 1 weird shopping cart of the dreaded 2-1/2" NPS sized fittings and nipples in qty’s of thousands when we only sold maybe 1000 every 10 years lol.)

      GPT gave a 3x college dropout, who didn’t start getting into fucking with computers until after my 2nd DWI grounded me @ 30 y/o, an absolute blindside of tools and technical understanding to do shit lifetime IT nerds couldn’t do. I’d say 2 years ago I really enjoyed the new procurement position and was learning more about macros and Excel, but never actual scripting; so within 2 years I was able to have Python scripts that would:

      Parse out our entire customer database by industry, purchase history, target customer grades and basically the same spectated tags for the customers our customers served. Improving marketing campaigns, sales conversions, reduced operating costs accrued from mistaken data entries and duplicate profiles.

      One of the non-code-writing things I used GPT for was having a separate GPT account, back when they allowed you to input global parameters to follow, and basically loosely training it to return better replies for individual accounts. So I had it set up to basically get fed the body meat from email exchanges for quote request responses. It was so much more polite, and soooo much quicker than me trying to be fuckin charming.

      The largest project I completed was the scraping of 900 customer drawing pdfs of every shape and size of flange. The script downloaded each pdf and converted them all to png. Then (I can’t remember if it was a Gimp plugin or a Python library) the script basically cut out the border engineering drawings have, by just using a set number of pixels from the edge, and pasted the naked drawings into a blank template I made with our company’s logo; and (this was definitely a Python library) it took the scale of the drawing itself and reduced it in increments of 3% on every resize while there was still a layer conflict with any non-white pixel color value. Lastly, I took the xlsx file used to upload the specs, standards, certs, mat’l type & grade and pressure rating, mapped the cells from the blank border template to the xlsx or csv or whatever it was, and filled in all the pertinent specs for each of the 900 flange png files. The bow and ribbon was the final conversion back to pdf, saving 1800 newly created pdf files for every flange type, for BOTH 304 and 316 options. Redundant as fuck if a real person was doubling the needed number of files; but with GPT, the building of the script I did at home on my own time over 3-4 weeks, and the entire thing ran (scraping, converting file type, editing logically, then editing dynamically, filling in spec data, reconverting file type, and looping back to repeat everything after the scrape-data function) in under 10 mins. Updating and creating spec sheets and pdfs for all existing and additional expansions was a full-time job. It took our IT girl 6-8 months just to update the sanitary fitting sku numbers to the industry standard skus, along with everything I just laid out as far as updating dimensions, drawings, manufacturer spec sheets, ASME/ASTM standards, 3A standards. Every time you add or lose a vendor, you are either finding one that matches the last vendor’s specs or you are updating everything.

Not gonna lie, most of my scripts were scraping-driven, for both competitors and vendors: a live library of real-time competitive pricing, scraping competitors' prices. I also made an infallible purchasing database which had every single vendor SKU paired with our SKU. That one I did the long way in Excel with INDEX/MATCH formulas, but I couldn't check its accuracy till GPT.
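The vendor-SKU-to-our-SKU pairing is basically a dictionary lookup, which is also an easy way to sanity-check the INDEX/MATCH output: anything that doesn't match gets flagged instead of silently erroring. A minimal sketch with hypothetical sample SKUs and descriptions:

```python
# Pair each vendor SKU with our internal SKU by matching on the
# shared description field -- a dict-based equivalent of Excel's
# INDEX/MATCH. All SKUs and descriptions below are made up.
vendor_rows = [
    ("V-1001", '2" 304 weld flange'),
    ("V-1002", '2" 316 weld flange'),
    ("V-9999", "unknown widget"),
]
our_catalog = {
    '2" 304 weld flange': "OUR-304-200",
    '2" 316 weld flange': "OUR-316-200",
}

paired, unmatched = {}, []
for vendor_sku, description in vendor_rows:
    if description in our_catalog:
        paired[vendor_sku] = our_catalog[description]
    else:
        unmatched.append(vendor_sku)  # flag for manual review

print(paired)
print(unmatched)
```

The `unmatched` list is the accuracy check: in Excel an INDEX/MATCH miss just shows `#N/A` somewhere in thousands of rows, while here every miss is collected in one place.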

Sorry, I just quit this job in Feb/March and 100% loved it there. Loved the work (never knew I loved data management), loved the people, loved the pay. It really has been something I've been not thinking about because of how much it kills me that I left. Didn't mean to totally write a fucking novel all about fucking ME.ME.MEMEME.MEMEME *RIP Toby Keith

Either way, fuck yeah buddy. Tools are only ever as useful as the people using them. Learn as much as you can and practice your fucking trade, and 100% of the time you will make more money than your coworkers. Because if you do practice that good work ethic shit, then it doesn't matter if you work somewhere that pays or treats employees like shit, because you will know there is nothing keeping you from finding out what your market value is. Come back to your current employer with your competing offers and the ultimatum: I will need X amount of money to continue working here. Then the only difficult part is keeping to your word and going to the competing offer.

I mean, it doesn't apply to every career path, but it worked for me when I was landscaping over summers and cooking/playing chef through 3 failed college attempts. As a CNA it did not work, because everyone but the charge nurse/head RN was on the same level, so all the LPNs made about the same, all the CNAs the same, etc. In machining it worked, and in the job I just described, which could be labeled technical sales specialist, inventory/warehouse manager, or procurement manager. NGL, I've tossed out project manager titles if the workload and audience were appropriate lol.

Anywho. TL;DR: middle-aged man yells accomplishments out loud to the lonely void but feels mentally stable doing so, because if anyone caught him he could say SEE, I WASN'T talking to myself, I was just texting out loud while sharing my sage role-modeling advice on the greatest unfederated social media platform ever made.

Edit: holy fuck me, that is so obnoxiously long. I'm sooo fuckin sorry. It's really a comment about nothing. Like the Seinfeld motto, which really should've been Larry David's credit, right? A show about nothing. Wasn't the Seinfeld plot basically just young Larry David situations?

      • borari@lemmy.dbzer0.com
        link
        fedilink
        arrow-up
        14
        arrow-down
        1
        ·
        edit-2
        6 months ago

        Yeah bro, keep writing code, it keeps the security people like me employed. Also I’m pretty sure you might be on meth.