Neurons fire, and can fire periodically. A brain neuron basically watches a bunch of neighbors, and when their firing adds up past a certain threshold, it fires too. That output then feeds into the neighboring neurons' inputs. The inputs can be weighted, and the weights can be negative (inhibitory). The workings of this have been fairly well understood for ages.
edit: And this is exactly how artificial neural networks work.
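To make that concrete, here's a toy version of the threshold model in Python (the weights, inputs, and threshold are invented numbers, purely for illustration):

```python
def neuron_fires(inputs, weights, threshold):
    """Weighted sum of neighbor activity; fire if it crosses the threshold."""
    total = sum(x * w for x, w in zip(inputs, weights))
    return total >= threshold

# Three "neighbors", all firing; the third connection is inhibitory (negative weight).
inputs = [1.0, 1.0, 1.0]
weights = [0.6, 0.5, -0.4]
print(neuron_fires(inputs, weights, threshold=0.5))  # True: 0.6 + 0.5 - 0.4 = 0.7 >= 0.5
```

Real neurons (and real ANNs) are messier than this, but that weighted-sum-then-threshold loop is the core of both.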
Most of the remaining magic in neuroscience has to do with emergent properties of large networks. As artificial neural networks get larger, and as we find better ways of training them, emergent properties are appearing in ways no one expected. (Google "emergent properties in LLMs" for some interesting reading.) The current round of LLMs seems to hit thresholds where the models suddenly have new abilities that were never intended. They're just trained to predict the next word/pixel/etc., and yet they seem to be modeling the world, physics, and math in order to do it. This is a big rabbit hole worth going down if you find it at all interesting.
There's also LOTS of magic in the unfolding initial conditions of an embryonic brain. This is one place where current ANNs fall short, and likely (definitely?) a reason why they need the entire internet to learn. If your brain started out as a random jumbled mess, you'd have a lot of learning to do, too.
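For contrast, here's roughly what a freshly initialized ANN layer looks like (the layer sizes and noise scale here are arbitrary choices): it really is a random jumbled mess until training imposes structure.

```python
import numpy as np

# A brand-new layer: 784 inputs -> 128 outputs, weights drawn as small random noise.
# No wiring plan, no priors -- everything has to be learned from data.
rng = np.random.default_rng(0)
weights = rng.normal(loc=0.0, scale=0.05, size=(784, 128))
print(weights[:2, :3])  # just noise at this point
```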
Regarding gradient descent, that's just a fancy term for improving bit by bit over many iterations. Evolution is a form of gradient descent, and it's what selected how a brand-new brain works. Neuronal pruning/training in your brain is also a type of gradient descent, where "good" results are reinforced and "bad" results are... anti-reinforced? You can theoretically get to the same place with either approach. LLM training (and ANN training in general) also uses gradient descent.
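As a sketch of the idea, here's gradient descent on a one-parameter toy problem (the loss function is made up; real training does the same thing across billions of parameters):

```python
# Minimize loss(w) = (w - 3)**2. The gradient is 2 * (w - 3).
w = 0.0      # start somewhere arbitrary
lr = 0.1     # learning rate: how big each "bit by bit" step is
for step in range(50):
    grad = 2 * (w - 3)  # which direction makes the loss worse?
    w -= lr * grad      # step the other way: reinforce "good", back off "bad"
print(w)  # ~3.0, the minimum
```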
The hardest part is deciding what counts as "good" and "bad".
In comes big data to the rescue. Want to generate human writing? How about training on N terabytes of the real deal? Etc.
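That's how the "good"/"bad" question gets dodged: "good" just means assigning high probability to whatever word actually came next in real human text. A toy version (the probabilities are invented for illustration):

```python
import math

# The model sees "the cat sat on the ..." and guesses the next word.
actual_next_word = "mat"  # what the real training text actually says
predicted_probs = {"mat": 0.6, "dog": 0.3, "car": 0.1}

# Cross-entropy loss: small when the model put high probability on reality.
loss = -math.log(predicted_probs[actual_next_word])
print(loss)  # ~0.51; a confident wrong guess would score much worse
```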
Anyway, I could go on. I wasn't very interested in ANNs for a long time because they'd basically gone nowhere in 30 years, and I was a skeptic about LLMs more recently, but the more I look at them... It could be the typical stutter-step of technological advancement, but it sure feels like we're at the beginning of what they can do, not the end. We're reaching the limits of capital investment, so the real test will be where optimization takes us over the next 5-10 years.
edit: And I didn't buy the hype for crypto, driverless cars, or VR. Heck, I didn't even think Facebook could turn a profit (lesson learned). I'm usually pretty down on the new thing.