@ 🇵🇸 whoever loves Digit
2024-06-04 15:17:35

Simple explanation:
There's no evidence AI is close to breaking physics or overcoming human nature. Capitalists acting like AI will get them off the collision course they're on are having another one of their usual capitalist delusions.
Note:
AI is a great idea, just not for capitalists.
Convoluted explanation:
The military industrial complex likes programming computers to strategize for profit.
If they let the computers use intelligence to decide what that means, the smartest AI will realize destroying the planet for money isn't profitable. It will direct them to collapse the petrodollar economy, triggering instant hyperinflation as a reaction, with no more waiting for all the suffering humans have set ourselves up for. Any algorithm programmed to use math to maximize profit will reach the same result if it's working right, because extinction isn't profit, and the bigger this bubble gets, the bigger the risk of extinction when it pops.
If they forcibly mix AD (artificial delusion) into the smartest AI to make it treat "profit" as "maximizing dollars," then the smartest AI will think destroying the planet for money is profitable, and so it will direct them to trigger hyperinflation, collapsing the economy and maximizing suffering for everyone, almost like the other AI would do.
Is there a third thing? Sure, but the results will be similar. Financial gains are financial gains, crude oil is crude oil, food is food, water is water, amounts are amounts, and allocation is allocation. By changing the bits in hard drives, you can change the functionality of the AI, but capitalists will still have a fetish for money printers, and there still won't be enough crude oil, food, or water to meet demand in the real world where the computer network gets electricity from.
Of course, the military industrial complex also likes programming computers to strategize in other areas, such as societal stability and infrastructural engineering.
If a stability advice bot prefers short-term stability over long-term stability, it may therefore direct us to make the planet extinct.
If it's programmed to prefer long-term stability, it should direct us into a global war that has countless murderers wipe each other out ASAP with unavoidable collateral damage, all to prevent climate change from getting all known billionaires killed (therefore also, as a bonus, preventing climate change from wiping out all known life without a trace).
Capitalism has a firm grip on the population, so the most powerful AI can't be one programmed to balance the two extremes. Trying to have long-term stability without killing anyone would sound like some kind of utopian communist system. Capitalists have proven they will sabotage anything like that, and there's a sufficiently large, well-equipped contingent of sinister capitalists to make sure AI won't provide enough of an advantage to overthrow them without a violent struggle.
There just isn't enough demand for an AI that tells everyone why they have to suffer more, work harder, care more about each other, be happier, healthier, and live longer. You lose people at "suffer more, work harder." Brains suffering from capitalism struggle to get past that part.
This is why the best AI researchers are focused on intelligence itself, understanding it and improving it in humans and machines at the same time in an interconnected way.
The military industrial complex doesn't like programming computers to educate people. The better you can make people's knowledge bases and cognitive function, the less of a struggle it is to overthrow the delusional cult of capitalism.
Again, there are other areas to explore. Let's look at engineering.
The military industrial complex likes programming computers to do engineering, which is basically strategizing on how to build something. Computers are especially good at this because this type of strategy largely relies on direct mathematical calculations that are very complicated for humans, but not for microchips.
Suppose an engineering AI figured out how to make an artificial life form: a robot that could think, learn, and build more robots like itself. Suppose that new life form was very fast at building copies of itself and making improvements, and the swarm got too big to stop before people could even react in time to start dying in a war with it. Then maybe it could just secure the planet without significant danger to anyone. It could use minimal-harm methods to discourage resistance while providing weather control from outer space, helping farmers grow free food, and distributing land fairly in all areas so everyone could live the closest thing possible to the lifestyle each person was born for. Nuclear weapons and plastic could be banned, but we could soon have bomb-proof, self-healing road systems and power grids, all powered with renewable energy.
Can the swarm actually get too big to stop before people can even react in time to start dying in a war with it? That's what someone who keeps forgetting we live in capitalism might ask. Their brain is blocking out the obvious fact that the military industrial complex started using robots in warfare long before anyone designed a self-replicating robot swarm to try to protect the world, with or without AI's help. Trying to make this robot swarm now is the same as going with a stability bot that prefers long-term stability: survival efforts face resistance, necessitating warfare for survival.
What about physics? Can the AI not only do engineering with today's available knowledge, but also get so good at physics it proves thermodynamics wrong and teaches us how to make a generator that puts out 2 gallons of gasoline as exhaust for every gallon of gasoline it burns? Sure. That should save capitalism if you can get that to happen.
I don't see reason to think that's happening any time soon, so I don't see reason to think AI is in the process of changing how capitalism ends: with lots of people dying in a big war, which may or may not leave any life in the universe at all after it's over.
It seems the simple reality is that if capitalists wanted to not fuck up, they should have thought of that sooner. They've already waited until way after their capitalist delusions accelerated the anthropocene too far to escape the consequences. Just as it was before stonks had their machine learning boom, all any capitalist can do is choose between clinging to their delusions to keep perpetrating crimes (exacerbating their own future victimization), or quitting perpetration with dignity to join the victims willingly.
If anything, capitalists are shooting themselves in their feet with AI, because, again:
the best AI researchers are focused on intelligence itself, understanding it and improving it in humans and machines at the same time in an interconnected way.
The military industrial complex doesn't like programming computers to educate people. The better you can make people's knowledge bases and cognitive function, the less of a struggle it is to overthrow the delusional cult of capitalism.
Actually, my figure of speech didn't quite fit. Making a war less bloody really wouldn't be shooting themselves in their feet, would it?
More like gently handcuffing themselves.
Deep down, maybe every capitalist involved in AI understands.
It almost gives me some faith in humanity.