So was all this bloat inevitable as hardware got better, or is there a way to go back? It feels like a ripoff that our computers are 1000x better but they’re maybe 10x faster once all the shitty software is taken into consideration.
I have a few suggestions:

1. Better education. Don’t scare people who are learning programming away from the lower-level stuff, especially now that people are getting scared even of type declarations, not just of pointers (which I was fearmongered about in college, where they told me Java is the future).

2. Better portable APIs. Thanks to WebAssembly, we could easily have something that runs both in a web browser and as a native desktop app, except instead we get whole browsers shipped to run said applications. I’d been thinking about such a project, but then I remembered my iota project (a D-native replacement for SDL/SFML/GLFW, minus the bloat of bundling standard-library features) and stopped thinking about it immediately, since even that much smaller project already causes me too much headache. (Does anyone have a handy guide to the Win32 API? I have trouble getting certain messages produced, like the input language change ones, and I don’t know whether I glossed over some function that enables them and just isn’t mentioned in the documentation for the input language change event codes. A guess at the usual gotcha is sketched below.)
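Not a full Win32 guide, but for the input language case specifically: as far as I can tell from the docs, WM_INPUTLANGCHANGE is only generated after WM_INPUTLANGCHANGEREQUEST has been handed to DefWindowProc (which is what actually performs the layout switch), and the request only goes to the window that has keyboard focus. So a window procedure that swallows the request never sees the change notification at all. A minimal sketch of what I mean, in plain C with a bare message loop, nothing iota-specific and untested:

```c
/* input_lang_probe.c -- minimal sketch (assumption: plain Win32 message
 * loop, no toolkit involved). Shows where WM_INPUTLANGCHANGE arrives. */
#include <windows.h>
#include <stdio.h>

static LRESULT CALLBACK WndProc(HWND hwnd, UINT msg, WPARAM wp, LPARAM lp)
{
    switch (msg) {
    case WM_INPUTLANGCHANGEREQUEST:
        /* Don't swallow this: DefWindowProc accepts the change and the
         * system then sends WM_INPUTLANGCHANGE. Returning 0 here is a
         * common way to never receive the change notification. */
        return DefWindowProcW(hwnd, msg, wp, lp);

    case WM_INPUTLANGCHANGE:
        /* lp: input locale identifier (HKL) of the new layout,
         * wp: character set of the new locale. */
        printf("input language changed: HKL=%p, charset=%lu\n",
               (void *)lp, (unsigned long)wp);
        fflush(stdout);
        /* Pass it on so first-level child windows get notified too. */
        return DefWindowProcW(hwnd, msg, wp, lp);

    case WM_DESTROY:
        PostQuitMessage(0);
        return 0;
    }
    return DefWindowProcW(hwnd, msg, wp, lp);
}

int main(void)
{
    WNDCLASSW wc = {0};
    wc.lpfnWndProc   = WndProc;
    wc.hInstance     = GetModuleHandleW(NULL);
    wc.lpszClassName = L"LangProbe";
    RegisterClassW(&wc);

    /* The request message goes to the focused window, so keep it visible
     * and focused while switching layouts. */
    CreateWindowW(L"LangProbe", L"input language probe",
                  WS_OVERLAPPEDWINDOW | WS_VISIBLE,
                  CW_USEDEFAULT, CW_USEDEFAULT, 400, 200,
                  NULL, NULL, wc.hInstance, NULL);

    MSG msg;
    while (GetMessageW(&msg, NULL, 0, 0) > 0) {
        TranslateMessage(&msg);
        DispatchMessageW(&msg);
    }
    return 0;
}
```

Build it as a console program so the printf output is visible, give the window focus, and switch layouts with the usual hotkey to see whether the message shows up.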
You know, I haven’t worked on a super big project, but I feel like every time I’ve gotten a type error in a static language it’s pointed to something wrong with my underlying reasoning.
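For a toy illustration of what I mean (the types and names here are made up, not from any project mentioned in this thread): once the units live in the types, the compiler rejects the call whose reasoning is wrong, not just the one with a typo.

```c
/* Wrapping units in distinct struct types turns a unit mix-up --
 * a reasoning error, not a typo -- into a compile-time type error. */
#include <stdio.h>

typedef struct { double value; } Seconds;
typedef struct { double value; } Milliseconds;

static Seconds from_millis(Milliseconds ms)
{
    return (Seconds){ ms.value / 1000.0 };
}

static void sleep_for(Seconds s)
{
    printf("sleeping %.3f s\n", s.value);
}

int main(void)
{
    Milliseconds timeout = { 1500.0 };

    /* sleep_for(timeout);               <-- rejected by the compiler: the
     *                                       caller's units were wrong     */
    sleep_for(from_millis(timeout));  /* compiles, and the reasoning is fixed */
    return 0;
}
```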
Nope. The bloat is there mainly because it makes the job easier for the devs.
In the short run, yes. In the long run it just produces a bunch of coders who are afraid of type declarations, because they were scared away from them with the “what if you have to choose?” tagline, which makes turning back to the proper way of doing things that much harder.
Can you talk more about this? I’ve never heard that tagline and can’t figure out what it’s supposed to mean.
Just from context, I’m guessing it means that you might declare something as one type and then need to use it as another type later, and dynamically typed languages are sold as not having that problem.
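If that’s what it means, then in a statically typed language the “choice” mostly amounts to declaring the possibilities up front. A rough, made-up C sketch of the same idea, assuming that really is what the tagline is getting at:

```c
/* If a field might later need to hold either a number or a string, a
 * statically typed language makes you say so up front, e.g. with a
 * tagged union, instead of just reassigning the variable. */
#include <stdio.h>

typedef enum { VAL_INT, VAL_TEXT } ValueKind;

typedef struct {
    ValueKind kind;
    union {
        int         number;
        const char *text;
    } as;
} Value;

static void print_value(Value v)
{
    if (v.kind == VAL_INT)
        printf("%d\n", v.as.number);
    else
        printf("%s\n", v.as.text);
}

int main(void)
{
    Value v = { VAL_INT, { .number = 42 } };
    print_value(v);

    /* "Retyping" it later means changing the tag, not the declaration. */
    v.kind = VAL_TEXT;
    v.as.text = "forty-two";
    print_value(v);
    return 0;
}
```

The dynamic-language pitch is that you skip the declaration entirely and just reassign; the counterargument is that the tag forces every reader of the value to handle both cases.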
I was thinking about this a bit. Does that mean you can develop a piece of software much more cheaply now? My fear is that the companies writing the software get maybe a 10% discount from shipping bloat, while their clients wind up using 10,000% of the resources and are so used to it that they don’t complain.
It’s not really inevitable; it’s just a consequence of the fact that developers can get away with being lazy because the hardware can cope with it.