AI ain't actually very good at doing logic per se. What it is good at is pattern recognition and extrapolation of recognized patterns. That's why AI can mimic art styles: it's able to extract the commonalities between artworks in the same style. Advanced mathematics, on the other hand, is an interesting mix of creativity and playing a game. Just like in, say, chess, there are legal and illegal manipulations one is allowed to perform. Yet proving new theorems requires one to navigate through the maze in a way no one has managed theretofore. Language models such as the famed ChatGPT are barely able to play a dozen legal chess moves in a row, let alone come up with coherent mathematical proofs. You might reasonably argue that a specialized AI such as Stockfish is better at chess than any human, so what's to stop AI from conquering mathematics? Well, a chess game seldom lasts more than a gross of moves, but a mathematical proof, if you break it all the way down, easily consists of an unfathomable number of legal moves. Having an AI explore the vast space of all possible sequences of valid moves is exceedingly unlikely to lead anywhere useful. None of this is to say that AI won't one day be able to do mathematics, but it's probably gonna be one of the harder things for it to master.
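To give a rough sense of the size gap being argued here, a back-of-the-envelope sketch: a search tree with branching factor b and depth d has on the order of b^d leaf sequences. The chess figures below (branching factor around 35, games around 80 plies) are standard rough estimates; the proof-side numbers are purely illustrative assumptions, not measurements.

```python
# Rough comparison of naive search-space sizes (illustrative numbers only).

# Chess: roughly 35 legal moves per position, roughly 80 plies per game.
chess_sequences = 35 ** 80

# A fully formalized proof: assume (hypothetically) 20,000 elementary steps,
# with even a modest 10 applicable inference rules at each step.
proof_sequences = 10 ** 20_000

# Even with a far smaller branching factor, the sheer depth of a
# fully broken-down proof makes its space astronomically larger.
print(proof_sequences > chess_sequences)
print(len(str(chess_sequences)))   # chess: a number with ~124 digits
print(len(str(proof_sequences)))   # proof: a number with ~20,001 digits
```

The point of the sketch is only that depth dominates: brute-force enumeration that is already hopeless at chess scale becomes absurd at proof scale.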
With the actual knowledge accrued? Not much. But with the aptitude for numbers and the analytical thinking skills required to get such a degree? Most quantitative jobs should be within such a person's purview.
If it ever gets to that in my lifetime, I will. That said, most mathematicians are soys in my experience.