Well it has to go somewhere, you can’t just take in water forever with nowhere for it to go. So either it’s non-potable water being returned to its source, or it’s closed loop. In either case, it’s not really a problem.
It doesn’t use water in the sense that it is consuming it. It “uses” water in the sense that it is temporarily in a datacenter, gets a little hot, and then leaves the datacenter. I don’t even think a lot of datacenters use actual drinking water, instead taking water directly from a river, warming it slightly, and putting it back in said river.
Not to say I like AI, or think it’s a good thing. But this phrase that’s been going around just bugs me, because it’s really misleading. We should be focused on the ridiculous amount of energy it consumes, not the water it temporarily uses.
The answer to this question is quite simple: Google (excluding the Pixel line) isn’t making the actual phones, just the software. The actual manufacturers (Samsung, Motorola, Huawei, etc.) are taking Google’s OS and putting it on their phones. This case mostly hinges on Google’s behavior being monopolistic toward them, not toward the end consumer.
On the other hand, Apple makes both the OS and the hardware; there’s no manufacturer they’re forcing the App Store on, so the same rules don’t apply here.
As far as I’m aware, all donors are public, and have been for quite some time. Anyone can look up, down to the dollar, how much a major politician took from each contributor.
Sure, but the difference here was that all those companies were offering something different. Some had better results than others, a better UI, more accuracy in certain niches, etc. But 99% of AI companies now are effectively just reselling the OpenAI API. They aren’t making an effort to differentiate themselves at all. It’s as if Google were the only shop in town, and everyone bought all their search data and algorithms to slap their logo on. That’s simply not sustainable at anywhere near its current scale. This won’t be a 3-5 year decline, it’ll be a 2 month crash.
This. I am so tired of hearing “the wheels of justice turn slowly”. If justice isn’t able to address a problem for so long that the perpetrators are allowed to continue perpetrating the same behavior an entire election cycle later, the justice system has failed, straight up. This is unacceptable.
Discord isn’t social media. With platforms like Facebook, you’re still paying for all your storage, just not with money. There are ads all over the platform, and all your content is data mined and sold to advertisers. Discord doesn’t data mine (to my knowledge) OR run ads. Would you prefer a higher limit at the cost of having ads all over the interface? The AWS bill has to get paid somehow; nothing is free.
This was my core point. I don’t consider a business raising prices or gating features as a direct result of those features increasing their costs to be “enshittification”. Stickers, custom emojis, etc. cost Discord essentially nothing to provide; making those paid is enshittification. But if the feature itself costs the business actual money to provide, does everyone just expect them to eat that cost forever, in a lot of cases for absolutely no revenue from the users?
Calling out businesses for not giving away stuff that costs them money just doesn’t make sense to me. Why is it expected of Discord that they pay to store all your large files? A lot of “freemium” services like Gmail recoup some of that money by mining your email for data they can sell to advertisers, or by eating the cost in an attempt to lock you into an ecosystem where you’ll spend money. Storing files on Discord is neither of those things.
Don’t get me wrong, a lot of services are enshittifying, making their services worse so you spend more money with them. But adjusting your quotas and pricing to reflect your real-world cost of business is not that. To frame it as though you are entitled to free compute and resources from companies that don’t owe you anything comes off as just that: entitled. The cloud isn’t free. If you want to use a service, you should pay for it if you can.
I don’t see this as enshittification. It’s a real thing that’s happening, but raw storage is expensive, and they pay for it directly. Unlike artificially limited features that are “free” for them to provide, this one genuinely isn’t; it’s not even really discounted for them on the backend. They’re likely just paying for a series of S3 buckets.
This news is from over a month ago, and conditions have materially and dramatically changed since its publication. Regardless of the intent, posting this without noting a critical detail (its age) is at best incredibly misleading, and at worst intentionally subversive.
I can totally believe that it detects AI generated content 99% of the time, that’s trivial. What I really wanna know is the false positive rate. If I write a program that flags everything, it’d have a 100% hit rate. It’d also, however, have a crazy high false positive rate.
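To make that concrete, here’s a toy Python sketch (the 99/100 dataset split is made up) showing how a flag-everything “detector” scores on both metrics:

```python
# Toy dataset: True = AI-generated, False = human-written (made-up split).
samples = [True] * 99 + [False] * 100

def flag_everything(sample: bool) -> bool:
    # The "detector": calls literally everything AI-generated.
    return True

predictions = [flag_everything(s) for s in samples]

# Hit rate (true positive rate): fraction of AI samples that got flagged.
hit_rate = sum(p for p, is_ai in zip(predictions, samples) if is_ai) / sum(samples)

# False positive rate: fraction of human samples that got wrongly flagged.
fpr = sum(p for p, is_ai in zip(predictions, samples) if not is_ai) / samples.count(False)

print(f"hit rate: {hit_rate:.0%}, false positive rate: {fpr:.0%}")
# -> hit rate: 100%, false positive rate: 100%
```

A detection rate on its own tells you nothing; you need both numbers before the 99% claim means anything.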
Hey, so most of us are against this nonsense; Republicans haven’t won the popular vote in well over a decade. We don’t really have a say in the matter, and haven’t for a pretty long time.
Curved screens are often significantly harder (and more expensive, even at independent shops) to replace than standard flat screens.
It’s not just random jitter, it also likely adds context, including the device you’re using, other recent queries, and your relative location (like what state you’re in).
I don’t work for Google, but I am somewhat close to a major AI product, and it’s pretty much the industry standard to give the model some contextual info in addition to your query. It’s also generally not “one model”, but a set of models run in sequence, with the LLM (think ChatGPT) only employed at the end to generate a paragraph from a conclusion and evidence found by a previous model.
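To illustrate just the context-injection part, here’s a minimal Python sketch; the field names, the `QueryContext` type, and the `build_prompt` helper are entirely my own invention, not anything from a real pipeline:

```python
from dataclasses import dataclass, field

@dataclass
class QueryContext:
    # Hypothetical fields; real systems differ in what they attach.
    device: str                                   # e.g. "Android phone"
    region: str                                   # coarse location, e.g. "US-CA"
    recent_queries: list[str] = field(default_factory=list)

def build_prompt(query: str, ctx: QueryContext) -> str:
    # Earlier pipeline stages (retrieval, ranking, etc.) would have run
    # already; this just packages their context for the final LLM call.
    return (
        f"[device: {ctx.device}] [region: {ctx.region}]\n"
        f"[recent queries: {'; '.join(ctx.recent_queries)}]\n"
        f"User query: {query}"
    )

print(build_prompt("best pizza near me",
                   QueryContext("Android phone", "US-NY", ["pizza dough recipe"])))
```

Real systems are far more involved (retrieval, ranking, safety filters), but the principle is the same: the LLM at the end never sees your query in isolation.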
Relaying a key signal 20 feet when you know the key is there, like when you’re home, isn’t too tricky. But I would propose that relaying a signal across hundreds of feet, like in a busy mall or store, when you’re not even sure the owner is there, is quite another thing. You can also require that the key fob is in the car before starting. There’s also a technology Google has been using for a while now where one device (the car) emits a constant ultrasonic signal for the other device (the key) to pick up, to determine whether they’re close to each other. Something that can be done through clothing, but not easily relayed.
Potentially better idea: add a gyroscope to the key fob, and stop broadcasting after the fob has been perfectly still for some threshold. That way, when you set it down inside, it can’t be relayed; but if it’s in your pocket, it won’t remain perfectly still, and will keep transmitting. You could also add an IR blaster to detect if you set it down in the car. Battery life would start to become a bigger issue, but I think solutions to these problems could be engineered.
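A minimal sketch of the idle-timeout logic I’m imagining, in Python for readability (the threshold, timeout, and hardware hooks are all made up):

```python
import time

STILL_THRESHOLD = 0.02   # hypothetical gyro magnitude below which we call it "still"
IDLE_TIMEOUT_S = 60.0    # stop broadcasting after a minute of stillness

def fob_loop(read_motion, set_broadcasting):
    """read_motion() -> float gyro motion magnitude (hypothetical driver hook);
    set_broadcasting(bool) turns the fob's radio on or off."""
    last_movement = time.monotonic()
    while True:
        if read_motion() > STILL_THRESHOLD:
            last_movement = time.monotonic()
        # Only broadcast while the fob has moved recently: in a pocket it
        # jiggles constantly, but on a counter it goes quiet and can't be relayed.
        set_broadcasting(time.monotonic() - last_movement < IDLE_TIMEOUT_S)
        time.sleep(0.1)
```

The 0.1 s polling and 60 s timeout are arbitrary; a real fob would probably use a hardware interrupt from the motion sensor instead of polling, exactly because of the battery concern.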
Indexing and lookups on datasets as big as the ones companies like Google and Amazon run also take trillions of operations to complete, especially when you take into account the constant reindexing that needs to be done. In some cases, encoding data into a neural network is actually cheaper than storing the data itself. You can see this in practice with Gaussian splatting point-cloud capture, where networks are trained to guide points in the cloud at runtime, rather than storing the positions of trillions of points over time.
I firmly believe it will slow down significantly. My prediction for the future is that there will be a much bigger focus on a few “base” models that will be tweaked slightly for different roles, rather than “from the ground up” retraining like we see now. The industry is already starting to move in that direction.
While I agree in principle, one thing I’d like to clarify is that TRAINING is super energy-intensive; once the network is trained, it’s more or less static. Actually using the network doesn’t take dramatically more energy than any other indexed database lookup.
There are multiple other browser startups in development that are not Chromium-based, like Ladybird (which is completely independent) and Zen Browser (which started as a FF fork).