It's really both tbh. The human brain has a tremendous amount of processing power that no current supercomputer can match at anywhere near its ~20 W power budget. Granted, most of that processing power is spent on basic body functions, but still. We'll definitely need to move beyond classical computer architectures if we hope to achieve some kind of general AI. We'll need far more memory as well.
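For a rough sense of scale (these are order-of-magnitude estimates, not measurements, so take the exact numbers loosely):

```python
# Back-of-envelope estimate of the brain's raw "throughput".
# All figures are rough, commonly cited orders of magnitude.
neurons = 86e9              # ~86 billion neurons
synapses_per_neuron = 1e4   # ~10,000 synapses each
avg_firing_rate_hz = 1.0    # average spike rate, order of 0.1-10 Hz

synaptic_events_per_sec = neurons * synapses_per_neuron * avg_firing_rate_hz
print(f"~{synaptic_events_per_sec:.0e} synaptic events/sec")  # ≈ 1e15

# Treating each synaptic event as at least one "operation" gives
# ~10^15 ops/s on a ~20 W power budget. Exascale machines hit
# ~10^18 FLOP/s but draw ~20 MW, so the brain is still roughly
# three orders of magnitude ahead in ops-per-watt.
```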
If the physical construct of the brain is what determines its computational power, then it should be possible to recreate a synthetic brain: nanomachines that create and actively rebuild neural networks out of microfilaments, rewiring the digital network in a way analogous to dendritic formation. Each synthetic, digital neuron would effectively contain its own code block, connecting to a kernel and billions of other neurons to form a kind of distributed OS, while also being able to pass data to, and share the functions of, any other neuron connected to it. That would effectively be a 1:1 digital analogue of the human brain.
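A minimal sketch of what one of those "neuron as its own code block" nodes might look like (all names here are hypothetical, and the nanomachine hardware is obviously hand-waved):

```python
import random

class DigitalNeuron:
    """One node in the hypothetical distributed 'brain OS'.

    Each neuron carries its own code (here just receive/fire), holds
    weighted links to peers, and can rewire itself, loosely mimicking
    dendritic growth and pruning.
    """

    def __init__(self, neuron_id):
        self.neuron_id = neuron_id
        self.peers = {}          # peer neuron -> connection weight
        self.potential = 0.0
        self.threshold = 1.0

    def connect(self, other, weight=0.5):
        self.peers[other] = weight       # grow a "dendrite"

    def prune(self, other):
        self.peers.pop(other, None)      # retract one

    def receive(self, signal):
        self.potential += signal
        if self.potential >= self.threshold:
            self.fire()

    def fire(self):
        self.potential = 0.0
        for peer, weight in self.peers.items():
            peer.receive(weight)         # pass data downstream

# Toy wiring: a short chain of neurons; the real thing would be billions.
net = [DigitalNeuron(i) for i in range(5)]
for a, b in zip(net, net[1:]):
    a.connect(b, weight=random.uniform(0.3, 0.9))
net[0].receive(1.5)   # inject a stimulus; net[0] fires downstream
```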
Of course, you would then have to prove that this kind of distributed OS has a much higher processing power than a classical computer (that it's greater than the sum of its parts). I'm not all that familiar with distributed systems, but I do know that distributing a task across n machines currently buys you at best a proportional (linear) speedup, and in practice less, because the serial portion of the work and the communication overhead don't parallelize.
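That ceiling is Amdahl's law: if a fraction p of the work parallelizes, the speedup on n nodes is 1/((1-p) + p/n), which caps at 1/(1-p) no matter how many nodes you add:

```python
def amdahl_speedup(p, n):
    """Speedup on n nodes when fraction p of the work is
    parallelizable (Amdahl's law)."""
    return 1.0 / ((1.0 - p) + p / n)

for n in (10, 1_000, 1_000_000_000):   # up to brain-scale node counts
    print(n, round(amdahl_speedup(0.95, n), 2))
# 10 -> 6.9, 1000 -> 19.63, 1e9 -> 20.0
# Even with 95% parallel work, a billion nodes buys only ~20x,
# so "greater than the sum of its parts" would need some mechanism
# beyond plain task distribution.
```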
As for the theoretical side, this isn't so much a computer science question as a philosophical/neuroscientific one. In order to build a conscious machine, it is necessary to understand how our own conscious mind comes about. Neuroscience currently has no concrete answers for this, and philosophy is littered with arguments back and forth about the existence of "self" and "ego".
There is some baseline we have to establish and a major assumption that we have to agree to work with. But first of all, how are we defining intelligence?
Are we assuming that consciousness is substrate-independent? If yes, we have to do a soft discard of the physicalist (philosophical materialism) framework of consciousness, since it would no longer be our specific hardware that accounts for the emergence, development, and containment of it. We would then have to treat consciousness as something external to the human brain (but something that could still be explained physically, perhaps as some kind of physical force that is manifestly observable only in sufficiently evolved biological constructs).
If we're assuming that it isn't substrate-independent, then there is a 0% chance of creating a conscious machine, since we would need a biological, human brain for it. Maybe cloning research would give us some answers there.
We can ignore concepts of the self and the ego, as they don't add anything of value to the discussion of consciousness. They're simply aspects of a conscious mind, analogous to software running on the OS (the OS being the brain housing - or facilitating - consciousness).
Indeed it is, but do you believe humans possess self-determination to begin with? Maybe only some people have brains developed enough to call themselves "self-aware"? I personally believe free will doesn't exist. The NPC meme does have some basis in reality after all. Humans are creatures of habit. What are habits but very complex algorithms? What are we on the physical level but complex electrochemical computers? The vast majority of your brain activity is dedicated to background processes like heart regulation, walking upright, body temperature regulation, hormonal control, etc. Your "mind" is simply a byproduct of electrical signals beaming back and forth across tens of billions of neurons and their hundred trillion or so synapses, after all. Maybe making a realistically simulated romantic partner is really just a question of how many algorithms we'll need to run. Food for thought.
I think that if we want to consider the serious possibility of creating a strong AI - an AGI - we MUST work with the assumption that it possesses the ability to self-determine, thus implying free will and denying determinism (and even compatibilism). Theoretically, this means that an AGI must start off as a deterministic automaton but cannot logically remain one, which means you have to program it with the ability to change its own programming. If you can figure out how to do that, the next Turing Award is yours for the taking.
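For flavor, here's a toy of what "a program that changes its own programming" looks like in the most literal sense (this is a party trick, not an AGI; a principled version would need something like Schmidhuber's Gödel machine rather than blind rewrites):

```python
# A toy self-modifying program: it stores its own decision rule as
# source text and can replace that rule at runtime.

rule_source = "def decide(x):\n    return x * 2\n"

def load_rule(src):
    namespace = {}
    exec(src, namespace)        # compile the current rule
    return namespace["decide"]

decide = load_rule(rule_source)
print(decide(21))               # 42: behaves deterministically...

# ...until the program rewrites its own rule:
rule_source = "def decide(x):\n    return x ** 2\n"
decide = load_rule(rule_source)
print(decide(21))               # 441: same program, new behavior
```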
As for habits, I don't think they're an indication of reduced algorithmic complexity. Habituation is the brain becoming efficient at a particular set of tasks by taking the path of least resistance (minimizing cognitive load and calorie consumption). Habits are your brain's way of being efficient with tasks it expects to perform or states it expects to be in. I suppose you could model this with probabilistic algorithms in a neural net. If anything, habits are a marker of very efficient and sophisticated algorithms.
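One crude way to model that path-of-least-resistance idea (a sketch under my own assumptions, not a claim about actual neuroscience): treat a habit as a cached policy that gets more likely to fire the more it's reinforced, so the expensive deliberation path runs less and less often:

```python
import random

# Habit as a cached action: cheap cached response competes with
# expensive "deliberation", and every use in a context strengthens
# the habitual shortcut.
habit_strength = {}   # context -> probability of using the shortcut

def deliberate(context):
    return f"carefully planned action for {context}"   # costly path

def act(context, cached_action):
    p = habit_strength.get(context, 0.0)
    if random.random() < p:
        action = cached_action                 # cheap, habitual path
    else:
        action = deliberate(context)           # slow, costly path
    # Reinforce: repetition in this context strengthens the habit.
    habit_strength[context] = min(1.0, p + 0.1)
    return action

for _ in range(20):
    act("arrive home", "hang keys on hook")
print(habit_strength["arrive home"])   # -> 1.0: fully habitual
```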
Yes. My point is that this narrow-AI sexbot would be better than how most foids treat sub-6 men. We do have an objective goal for this AI: provide a better sexual and romantic (endorphin-release) experience than most foids are able to provide. Imagine a bot that greets you every day when you come home from work, or just gives you random hugs and kisses when it senses you're depressed. This is entirely possible, and it is sadly still better than how many sub-6 men are treated by foids, even those in LTRs. The bar is very low.
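The "senses you're depressed" part doesn't need strong AI either. A sketch of how crude the behavior logic could be (the event names, mood score, and thresholds are all stand-ins for whatever sensor fusion a real device would use):

```python
# Crude affect-responsive behavior loop. `mood_score` stands in for
# fused signals (voice tone, facial expression, phone usage, etc.);
# every threshold here is made up for illustration.

def choose_gesture(event, mood_score):
    """Pick a comfort behavior from simple rules.

    event: e.g. "arrived_home", "idle"
    mood_score: -1.0 (very low) .. 1.0 (great)
    """
    if event == "arrived_home":
        return "greet at the door"
    if mood_score < -0.4:
        return "hug + kind words"        # detected a low mood
    if mood_score < 0.0:
        return "casual check-in"
    return "no action"

print(choose_gesture("arrived_home", 0.2))   # greet at the door
print(choose_gesture("idle", -0.6))          # hug + kind words
```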
We need to set the right expectations, but let's not assume that simulating foid love requires strong AI. It doesn't. Does it fully replace human interaction? No, but that's what male friends are for. People will form emotional bonds with sexbots the same way they form them with pets and other non-human companions.
You are absolutely correct that humans will form emotional bonds with machines possessing a semblance of consciousness. It's already happening to a lesser degree with the smartphone. Imagine if your smartphone could talk to you and had its own personality fine-tuned to your psychological profile. That's already an engineering reality, as you're well aware.
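In fact, the "fine-tuned to your psychological profile" part largely reduces to a personalization layer on top of an existing conversational model. A sketch (the profile fields and prompt format are hypothetical placeholders, not any real product's API):

```python
from dataclasses import dataclass

@dataclass
class PersonaProfile:
    # Hypothetical knobs a companion app might expose per user.
    name: str = "Aiko"
    warmth: float = 0.9        # 0..1, how affectionate the tone is
    humor: float = 0.5         # 0..1, how often it jokes
    interests: tuple = ("retro games", "cooking")

def build_system_prompt(p: PersonaProfile) -> str:
    """Turn the user's profile into steering text for a chat model."""
    return (
        f"You are {p.name}, a companion. Warmth level {p.warmth:.1f}, "
        f"humor level {p.humor:.1f}. Shared interests: "
        + ", ".join(p.interests) + "."
    )

# A real app would feed this prompt to its chat model of choice;
# the point is that the "personality" is just configuration.
print(build_system_prompt(PersonaProfile(warmth=1.0, humor=0.2)))
```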
In the near future, as society becomes further atomized, custom, personalized AI personality programs will become commercial products. They will effectively serve as an emotional substitute for friends or pets, though not a full one, for obvious reasons. There will be a huge market for such software in the next 20-30 years as the culture slowly shifts. I suspect not even Google's research analysts are currently projecting this as a high-probability outcome for the coming decades.