Discussion about this post

Error 404:

That was a tough read, I’m sorry to say.

Firstly, I’m not familiar with the OpenAI protein folding breakthrough; can you share details? I am only aware of DeepMind’s AlphaFold from 2020, so I would love to learn more if OpenAI have made an accomplishment in the field.

With regards to the article (and I’m sorry, I don’t mean to be publicly picky), you make a lot of single-sentence statements with nothing to back them up. You use analogies and metaphors with no explanation or attempt to link them to the point you are trying to make.

This leaves the flow dancing from conjecture to, quite frankly, purely speculative fiction and fantasy.

You refer multiple times to AI models, especially in a swarm context, as sociopathic / psychopathic.

Algorithms cannot be sociopathic or psychopathic, just as a car or a gun cannot be. These are not beings; they are not human; they are not alive.

You refer to an MIT paper. Which one? A quote or reference would be helpful so I could follow along.

Data collection as a revenue stream, or put to nefarious purposes as you suggest? This has been the case for years, and it will continue to build. There’s a reason your TV doesn’t cost as much as it used to. There’s a reason companies run loyalty schemes. How many companies do you think now treat YOU as the main product, rather than what used to be their actual product? Grocery stores don’t make their profit from groceries; they make it from your data, your shopping preferences.

With regard to the price fixing you discuss, I think you are referring to the simulated market experiment conducted by the Wharton School at the University of Pennsylvania and the Hong Kong University of Science and Technology. In a controlled environment with predetermined criteria, AI trading agents in simulated stock markets autonomously engaged in price fixing, colluding without any explicit instruction to do so.
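
For readers who find that result surprising, here is a minimal sketch of the mechanism in Python. To be clear, everything in it (the price grid, the toy demand curve, the learning parameters) is invented for illustration; it is not the Wharton/HKUST setup, just the same idea in miniature: two independent Q-learning agents repeatedly set prices, never exchange a message, and are never told to collude.

```python
# Toy sketch of emergent price collusion between two independent Q-learning
# pricing agents. The price grid, demand curve, and learning parameters are
# invented for the demo; this is NOT a reproduction of the Wharton/HKUST
# experiment, just the mechanism in miniature.
import numpy as np

rng = np.random.default_rng(0)
PRICES = np.array([1.0, 1.5, 2.0])   # hypothetical grid; 1.0 ~ competitive price
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1    # learning rate, discount, exploration

def profit(p_own, p_rival):
    """Toy demand: the cheaper firm takes the whole market; ties split it."""
    if p_own < p_rival:
        return p_own
    if p_own == p_rival:
        return p_own / 2
    return 0.0

# Each agent's state is the index of the rival's last price; no messages pass
# between them, and nothing in the reward mentions the other agent's profit.
q = [np.zeros((len(PRICES), len(PRICES))) for _ in range(2)]
state = [0, 0]

for _ in range(50_000):
    acts = [rng.integers(len(PRICES)) if rng.random() < EPS
            else int(np.argmax(q[i][state[i]])) for i in range(2)]
    rewards = [profit(PRICES[acts[0]], PRICES[acts[1]]),
               profit(PRICES[acts[1]], PRICES[acts[0]])]
    next_state = [acts[1], acts[0]]  # each agent observes the rival's new price
    for i in range(2):
        td_target = rewards[i] + GAMMA * q[i][next_state[i]].max()
        q[i][state[i], acts[i]] += ALPHA * (td_target - q[i][state[i], acts[i]])
    state = next_state

for i in range(2):
    print(f"agent {i} greedy price per rival-state:",
          PRICES[np.argmax(q[i], axis=1)])
```

Depending on the seed, the learned greedy policies can settle above the competitive price of 1.0. Collusion emerges from plain reward maximisation; no intent, sociopathic or otherwise, is required.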

We already knew this; there is a high need for regulation and continued monitoring in AI, absolutely. But ultimately we will be the ones who decide to use it before it’s ready; just ask Duolingo, Klarna, Atlassian…

To be clear, no one is letting AI models loose on the stock market just yet.

You also called current federated learning a joke. Why? You cite no reasons or explanation. I would love to hear more on this topic.
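
For anyone following along who hasn’t met the term: federated learning trains a shared model without centralising raw data; clients train locally and only model updates are aggregated. A minimal FedAvg-style sketch, with made-up linear-regression data and invented client counts, purely for illustration:

```python
# Minimal FedAvg-style sketch of federated learning: each client fits a model
# on its own private data and only the weights travel; the server never sees
# a raw row. Data, model, and all parameters here are made up for illustration.
import numpy as np

rng = np.random.default_rng(1)
TRUE_W = np.array([2.0, -1.0])            # hypothetical ground-truth weights
clients = []
for _ in range(5):                        # 5 clients, each with private data
    X = rng.normal(size=(100, 2))
    y = X @ TRUE_W + rng.normal(scale=0.1, size=100)
    clients.append((X, y))

w = np.zeros(2)                           # global model held by the server
for _ in range(20):                       # communication rounds
    updates, sizes = [], []
    for X, y in clients:
        w_local = w.copy()
        for _ in range(5):                # a few local SGD steps on private data
            grad = 2 * X.T @ (X @ w_local - y) / len(y)
            w_local -= 0.05 * grad
        updates.append(w_local)
        sizes.append(len(y))
    w = np.average(updates, axis=0, weights=sizes)   # FedAvg aggregation

print("recovered weights:", w)            # approaches TRUE_W without pooling data
```

There are real, well-documented weaknesses (updates can leak information, clients can poison the model), but “a joke” needs an argument behind it.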

Without dragging this on too much: your last point talks about future AI models stealing data, scouring the Internet like little ninjas. They wouldn’t have to hunt hard; we’re not exactly the cleanest with our data. We’re more like Hansel and Gretel, dropping trash everywhere we go and leaving little trails. And like your other points, I think you’re a little behind; all of these things are already happening.

But not because of AI, because of us, humans. We did that.

Ultimately, every other sentence you use is a far-fetched, nonsensical anecdote, and quite frankly uneducated and inflammatory. You’re blaming everything that has or hasn’t happened in your dystopian future on algorithms.

You might as well go, like King Canute, and stand in the ocean and shout at the tides to stop.

Try shifting your blame to the people responsible: us! Where’s the accountability, not just for the people writing the algorithms (which would be like trying to hold a gun manufacturer accountable after a shooting), but for the people and companies actually using them for illegal purposes?
