@ Nostr Chess Guy
2024-01-15 13:04:21
I do not believe we will see that happening anytime soon. While I very much agree that we have seen an explosion in narrow AI, and have gotten some very useful tools from it, that does not make me think GAI is any more likely anytime soon.
Historically, throwing a lot of data at a problem and hoping something sticks is not a new idea; in fact, it was one of the first ideas in AI. Alan Turing himself wanted to do this back at the start of modern computing. He had two problems: he did not have the compute power to make it feasible, and he did not have an overwhelming amount of data to use, so it simply was not possible for him. Neither of these is a problem today, but expecting a computer to stumble on intelligence through statistics sounds like a very silly idea to me. Nor do I believe that if enough compute power gets collected together, the result will be intelligence.
Philosophy
Let me tell you why I do not believe in GAI, and I might even go so far as to say it is impossible. Philosophically, there are many different schools believing many different things, each trying to understand the world from a different starting point. There is no problem with this, but I will take just two schools to highlight a difference that will explain (I hope) why some people can believe that GAI is possible, while I reject the idea out of hand.
Materialism vs Idealism
If I were to guess, I believe that most if not all current AI researchers are firmly in the materialist camp. Materialism might not sound wrong; on a first casual inspection, much of it seems rather solid. Yet I think it is complete BS. I might have always believed in idealism and only realized it upon reading "materialism is baloney". Well, I have not read the full book as of yet, but I think I have found the view I find more reasonable.
The worldview of a materialist is that the world is essentially material, and that there is an objective world outside our brains. Our brains are the only thing giving us consciousness, and indeed a mind. Now, if this were true, it should not be so difficult to make a machine that could rival us; okay, we might not yet have the compute power. But think about it: does this sound reasonable? Our brains are not actually that big. Are they really just so much better than computers? They might be more efficient than computers, but is it really just our brains doing all this? I find that hard to believe.
In an idealist worldview, the world is not objective, and our brains do not make us; rather, we are part of a collective unconscious. Our brains are not so much generating our minds as filtering the collective unconscious. This means that making a GAI would require giving it a connection to the collective unconscious, and that is not so easily done.
As you can surely see, from my point of view creating a functioning GAI is actually impossible. Now, I am not saying that my view of the world is entirely correct.
This was just a short long-form piece. I might put more of my thoughts down in writing if this is appreciated. Thank you for reading.