• 0 Posts
  • 2.27K Comments
Joined 2 years ago
Cake day: June 16th, 2023

  • jj4211@lemmy.world to Fuck AI@lemmy.world · Efficency · 6 hours ago · 4 points

    Half of the participants don’t contribute anything, but they need to look busy, so they lengthen the call with inane banter.

    Most of my emails have nothing to do with me, but everyone is CCing me on stuff, just in case I might be relevant somehow. Particularly since someone made a convenient distribution list that includes 300 people, and people send to it all the time. Someone I’ve never heard of on the other side of the world is going to be out sick, and I get an email. The automated test for some project I have nothing to do with failed again last night, and I get an email. Every morning I am greeted with about 100 emails that came in overnight. Even in the handful of threads where I have some relevance, the conversation goes off topic, and I have no idea whether the new message is relevant to me until I read it.

    Corporate communication is just screwed.


  • jj4211@lemmy.world to Fuck AI@lemmy.world · Efficency · 6 hours ago · +3/−2

    I’m with you on a lot of, even most, developers at a company making things worse rather than better.

    However, if for some reason a webinar is only going to be “live” with no recording provided, and further it may be a pointless session you don’t need but work mandates, then I would be firing off whatever recording/transcription/summarization they allow me. Like my employer has mandated that every employee, regardless of job, attend 60 hours of AI webinars in the year, to give the illusion of being in tune with AI without bothering to actually have a plan. Mostly it’s been people rambling without anything actionable, trying to sound smart; absolutely every bit of it has been superficial, and the speakers at best have toyed with prompts and read articles saying Nvidia GPUs are useful. Not one of them has so much as run a local model. There’s nothing in these 60 mandated hours that will do anything but waste time.

    Even for mandatory “all hands” where we can’t ask questions but I at least want to know what they are thinking, I’ll get a recording and watch it at 2x speed.


  • If that concern were actually relevant and not merely an excuse, it would have some merit: mob justice does fail to consider innocence, and that’s seen repeatedly. Merely an excuse seems likely, but…

    The thing I could imagine is the list being something like a contact list, with Epstein treating celebrity contact information as a status symbol. If the list is a ledger of otherwise off-book transactions, or a list of people complete with blackmail material, then I can’t see how they could even try to make the argument about innocents caught up in the list.

    Either way, if they have such a list, put aside releasing it for a moment: where are the legal system’s enforcement actions?


  • The issue here is that we’ve gone well into sharply exponential expenditure of resources for diminishing gains, there’s a lot of good theory predicting that the breakthroughs we have seen are about tapped out, and there’s no good way to anticipate when a further breakthrough might happen; it could be real soon or another few decades off.

    I anticipate a pullback of resources invested and a settling for some middle ground where it is absolutely useful/good enough to have the current state of the art: mostly wrong, but very quick when it’s right, with relatively acceptable consequences for the mistakes. Perhaps society will get used to the sorts of things it fails at and reduce how much we try to make LLMs play in that 70%-wrong sort of use case.

    I see LLMs as replacing first-line support, maybe escalating to a human when actual stakes arise for a call (issuing a warranty replacement, a usage scenario that actually has serious consequences, a customer demanding human escalation after recognizing they are falling through the AI cracks without the AI figuring out to escalate). I expect to rarely ever see “stock photography” used again. I expect animation to employ AI at least for backgrounds like “generic forest that no one is going to actively look at, but it must be plausibly forest”. I expect it to augment software developers, but not to enable a generic manager to code up whatever he might imagine. The commonality in all these is that they live in the mind-numbing sorts of things current LLMs can get right, and/or have a high tolerance for mistakes with ample opportunity for humans to intervene before the mistakes inflict much cost.



  • I’ve found that as an ambient code completion facility it’s… interesting, but I don’t know if it’s useful or not…

    So on average, it’s totally wrong about 80% of the time, 19% of the time the first line or two is useful (either correct or close enough to fix), and 1% of the time it seems to actually fill in a substantial portion in a roughly acceptable way.

    It’s exceedingly frustrating and annoying, but I’m not sure I can call it a net loss in time.

    So reviewing a suggestion for relevance, and cutting it off or editing it, adds time to my workflow. Let’s say that on average, for a given suggestion, I will spend 5% more time deciding whether to trash it, use it, or amend it versus not having a suggestion to evaluate in the first place. If I’m 500% faster for the 20% of scenarios where the suggestion is useful, then I come out ahead overall, though I’m annoyed 80% of the time. My guess as to whether a suggestion is even worth looking at improves with context: if I’m filling in a pretty boilerplate thing (e.g., taking some variables and starting to write out argument parsing), then it has a high chance of a substantial match. If I’m doing something even vaguely esoteric, I just ignore the suggestions popping up.

    However, the 20% is still a problem, since I’m maybe too lazy and complacent, and spending the 100 milliseconds glancing at one word that looks right in review will sometimes fail me, compared to spending the 2-3 seconds it takes to type that same word out by hand.

    That 20% success rate, allowing me to fix up what’s useful and dispose of the rest, works for code completion, but prompt-driven tasks seem to be so much worse for me that it’s hard to imagine them being better than the trouble they bring.
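    The break-even arithmetic above can be sketched numerically. This is a minimal model using the comment’s rough numbers (20% useful suggestions, 5% review overhead on the rest, and reading “500% faster” as the task taking one sixth the time); the function name and exact model are my own simplification, not anything from the original:

```python
# Hypothetical break-even model for ambient code completion.
# Numbers are the rough estimates from the comment above.

def expected_relative_time(p_useful=0.20, review_overhead=0.05, speedup=6.0):
    """Expected task time with suggestions enabled, relative to a baseline of 1.0.

    Useless suggestions (the other 1 - p_useful) still cost the full task
    plus the review overhead; useful ones finish the task `speedup` times faster.
    """
    wasted = (1.0 - p_useful) * (1.0 + review_overhead)  # 80% of the time: 1.05x
    saved = p_useful * (1.0 / speedup)                   # 20% of the time: ~0.17x
    return wasted + saved

print(round(expected_relative_time(), 3))  # → 0.873
```

    Under those assumptions the expected time comes out around 0.87 of baseline, i.e., a modest net win despite most suggestions being trash — which matches the “not sure I can call it a net loss” framing.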



  • As someone who tries to keep the vague numbers in mind, it would be strange to me as well, but I suspect a large number of people don’t really try to keep even vague numbers in mind about how many people are around or how many people could realistically reside in a place like NYC.

    They track the rough oversimplifications. Like “barely anyone lives in the middle of the country”, and every TV show they see in the US either has a bunch of background people in NYC or LA, or is in the middle of nowhere with a town seemingly made up of mere dozens of people. They might know that “millions” live in the US and also, “millions” live in NYC, so same “ballpark” if they aren’t keeping track of the specifics. They’d probably believe 10 million in NYC and 50 million nationwide.

    This is presuming they bother to follow through on the specific math rather than merely roughly throwing out a percentage.