Maybe non-judgemental chatbots are a feature only at higher paid tiers.
It’s rather hilarious that the service is the one throwing the brakes on. I wonder if it’s done because of public pushback, or because some internal limiter kicks in when the synthesis drops below some certainty threshold. Still funny tho.
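For what it’s worth, here’s a minimal sketch of what such a certainty-floor limiter could look like (all names, the threshold value, and the refusal text are hypothetical, not any vendor’s actual implementation):

```python
# Pure speculation: a toy "certainty limiter" that swaps in a refusal when
# the model's average token log-probability falls below a floor. The cutoff
# and the refusal message are made up for illustration.
from dataclasses import dataclass

@dataclass
class Completion:
    text: str
    mean_logprob: float  # average per-token log-probability, always <= 0

CERTAINTY_FLOOR = -1.5  # hypothetical cutoff a vendor might tune

def maybe_refuse(completion: Completion) -> str:
    """Pass the completion through, or refuse if the model was too unsure."""
    if completion.mean_logprob < CERTAINTY_FLOOR:
        return "I'm not confident enough in this to continue. Try a smaller step."
    return completion.text
```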
Haven’t used Cursor, but I don’t see why an LLM wouldn’t just randomly do that.
A lot of LLMs and models built on this approach do blow up once they go past their context window length (and in similar strain cases), yeah, but I wonder if this is them deliberately doing it because of that, or for other reasons entirely.
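If it is the window-length thing, the usual mitigation is just trimming old turns before each request; a rough sketch, where the 4-characters-per-token estimate is a crude rule of thumb rather than any real tokenizer, and the budget is made up:

```python
# Rough sketch of keeping a chat history inside a context window by dropping
# the oldest turns first. Character-count-based token estimate is approximate.
def estimate_tokens(text: str) -> int:
    return max(1, len(text) // 4)

def fit_to_window(turns: list[str], budget: int = 8192) -> list[str]:
    """Return the most recent turns whose estimated total fits the budget."""
    kept: list[str] = []
    used = 0
    for turn in reversed(turns):  # walk newest first
        cost = estimate_tokens(turn)
        if used + cost > budget:
            break
        kept.append(turn)
        used += cost
    return list(reversed(kept))  # restore chronological order
```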
I could also see this being done to “lower liability” (a question that’s going to start coming up more and more as all the long-known issues with these things get amplified by more and more dipshits over-relying on them).