but, as demonstrated by the summary in this thread, it doesn’t provide remotely accurate information.
its summary of this article doesn’t actually convey anything at all about what this text is saying, but it does refer to conceptually related things which aren’t even mentioned in the text (eg “Monte Carlo Tree Search”?!).
Don’t you think that enabling someone to “get information right away” is actually the opposite of helpful when they don’t realize it’s entirely incorrect information, especially when it appears plausibly-correct-enough that it prevents someone from reading the link and getting the actual information?
i think the tldr bot as an abstract concept is a fundamentally harmful thing, given the state of LLMs today. i strongly encourage you to read the linked article (and/or the paper it is referring to) if you haven’t yet.
“Fundamentally” itself contradicts any appeal to current conditions.
Frankly, I just see that you have proven a point that no one (including me) is calling into question: HuggingChat, ChatGPT, and other models may provide false information that can look plausibly correct. Since the bot is just a client for HuggingChat and nothing more, it inherits all of that.
The future of the bot here depends on the test results: if we have 4-5 posts per day and I am manually checking the responses, then as long as reviewing them doesn’t become a burden, it will stay here, increasing engagement.
Am I correct in assuming you have not read the paper On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? 🦜? As the authors point out there, their critique applies to GPT-4 just as well as to what came before it.
I am far from alone in being confident that current conditions will not improve to the point that a summarization bot in a public forum will be non-harmful anytime in the near future.
The time required to manually audit its responses thoroughly is substantial, and will almost always result in finding problems. Can you point me to a single example output from it so far that you believe to not contain any inaccuracies or misrepresentations?
Your point in this chat, despite the words you use, is that you are simply trying to forbid me from conducting my own test within my community and drawing my own conclusions. I totally understand your position, and I knew that you are consistent in fighting tldr bots on Lemmy (this is not the first bot you have ordered killed). You are not opening my eyes to how AI works and its drawbacks; at the same time, you are ignoring the motives behind this test and the test itself, even though the community sidebar clearly states that the bot has been added in test mode. This should make it clear to anyone that it could be removed if the test is unsuccessful.
I cannot imagine what kind of result you are expecting.
Your point in this chat, despite the words you use, is that you are simply trying to forbid me from conducting my own test within my community and drawing my own conclusions.
I’m not trying to forbid you from playing with a bot in one community that you’re the mod of. I’m trying to convince you that it isn’t a worthwhile experiment, in case you decide that it is worth deploying elsewhere.
I totally understand your position, and I knew that you are consistent in fighting tldr bots on Lemmy (this is not the first bot you have ordered killed).
The one tldr bot I banned was posting total nonsense in worldnews, and iirc I only banned it from that one community? (Not sure, I might’ve banned it from the site…)
I think I briefly commented negatively about a second one, but I didn’t take the time to make a strong argument against it like I’ve done here. I’m doing it here because, while we don’t know each other, I appreciate your posts on Lemmy and I figured I might be able to change your mind.
You are not opening my eyes to how AI works and its drawbacks; at the same time, you are ignoring the motives behind this test and the test itself
I thought I was acknowledging your motivations, while respectfully explaining why I think they’re misguided.
even though the community sidebar clearly states that the bot has been added in test mode. This should make it clear to anyone that it could be removed if the test is unsuccessful.
I do appreciate the message in the sidebar.
I cannot imagine what kind of result you are expecting.
I was hoping to change your mind about this being a worthwhile endeavor. I’m sorry I appear to have failed, and to have irritated you in the process :/
No worries. I don’t have any illusions about AI and the capacity of a tldr bot, but I appreciate your efforts. As I said, this bot is just testing my hypothesis at a particular place and time, and it was never meant to be used outside of this.
@cypherpunks@lemmy.ml So, at the end of the day you are right: even though the bot helped bring life to a new community, it is a stochastic parrot. Its mission is over.