Yeah I agree, although I hate the use of the name "AI". It's marketing nonsense driven by a gold rush to both sell companies to shareholders and get consumers interested.
The stuff we're seeing, like ChatGPT, is not AI; it's extremely powerful and impressive language-based models, and current systems have an impressive ability to "remix" content into new content. But the technology isn't ready for the mass market yet - it's too inaccurate, yet it's being rushed into search engines out of fear of being left behind.
There are also huge issues around the data used to "train" these systems. "Train" is a misnomer - that data is taken, stored, and pulled upon actively and constantly. Who owns the data? Whose data has been used? Whose artworks, for example, are used as the engine driving generative art AIs? When human users create and share content, it is being copied and stored, or constantly accessed, to drive the AI. User content across the internet is being used to drive these systems, and yet it's not the original content makers who benefit; it's the AI firms and the tech firms who are opaquely selling and hoarding our data and content for commercial gain.
People don't understand how much data has been essentially stolen from us to drive this technology. Look at the Reddit saga - one driver for locking down the API was to lock out AI companies - not to protect your data or content, but purely so Reddit could monetise it. We are the products when we use social media; it used to be that we were sold to advertisers, but now the content we share has been commandeered by social media and tech giants and is being used to drive "AI". And because it's complex and technical, we're being screwed over en masse to line the pockets of tech company executives and shareholders.