This is my pessimistic worst-case scenario for where the AI boom will lead in the medium to long term (barring a full-blown Skynet scenario, of course).
First, many activities that are now done by humans will obligatorily be done by AI. Imagine you are a publisher/software company/newspaper/PR firm/support line/... Why would you pay a difficult and expensive author/programmer/journalist/PR person/worker/... when you can get essentially the same thing much cheaper and with much less hassle from an AI?
Then, many services that are now small and separate will be handled by one AI system. Email, web browsing, online orders, text editing, etc. are all already starting to include AI assistants. These will grow together and, by sheer mass, replace the single-use components. Once that happens, maintaining separate back-ends and their UIs will become too expensive, so AI will be the *only* way to do all these things.
But this isn't even the worst effect, I think. The biggest problem will be that for all the things AI can do faster and cheaper than humans, the incentive to learn them will disappear. Why spend years learning how to write a good novel, code a web app, or diagnose a patient, when the chances of getting paid (or even appreciated) for applying any of these skills are pretty low? Since maintaining expertise in an area depends on a critical mass of practitioners, there's a good chance a lot of it will simply be lost to humans.
And of course someone will own and run all of these AI systems. In all likelihood that won't be the government or a charitable not-for-profit.
We will then have a population that has handed over much of what makes us human to AI, that receives most of its news and information either directly from AI or from sources whose prioritization is controlled by AI, and that has largely given up on critical thinking. And all these systems, now effectively in control as well as irreplaceable, will be in the hands of a handful of international corporations.
'90s cyberpunk will look like a romantic humanist dream in comparison.
As I said, this is a pessimistic worst-case scenario. In reality things most likely won't go that smoothly. Chances seem good that the AI bubble will pop very soon, setting the whole process back by years at least. A lot has also been said about how the prevalence of AI-generated content makes training new/better AI a lot harder. And, so far, AI is still basically very fancy pattern matching (as far as I know), which limits how far it can go. Besides, chances are good that in ten or twenty years AI will be the least of our worries anyway.
Still, I think the problem remains that if an AI that can do all these things is achievable with moderate effort, our current economic system will make it very difficult to avoid going down the path outlined above. I would be very happy to be wrong, though.