@JaQueen 404s are a natural part of the web. By themselves, they are not inherently an issue.
As I said higher up, a lot of what you need to do is in the details.
Crawl budget isn’t a concern until you’re into the high tens/hundreds of thousands (or millions) of URLs, so at 3-5k it’s not something I’d be concerned about (from a crawl budget standpoint).
As for domain authority, I’m not sure which definition you’re using when you mention that, but regardless of the definition, crawl and indexing controls won’t impact any sense of authority relative to the domain.
As for noindex + robots.txt, I wouldn’t pair those together (at least at the outset). The funny thing about noindex is that the URL carrying it has to be crawled for the directive to be seen and take effect. If you block that URL in robots.txt, the crawler never fetches the page, so the noindex never works.
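To make that order of operations concrete, here’s a rough sketch of what a compliant crawler does, using only Python’s stdlib. The example.com URLs are placeholders, not your site:

```python
# Sketch of why robots.txt blocking defeats noindex: a compliant crawler
# consults robots.txt first and never fetches a disallowed URL, so any
# noindex tag on that page is never seen.
from urllib import robotparser, request

robots = robotparser.RobotFileParser()
robots.set_url("https://example.com/robots.txt")  # placeholder domain
robots.read()

url = "https://example.com/some-thin-page"  # placeholder URL

if not robots.can_fetch("Googlebot", url):
    # Crawl is disallowed: the page body (and its noindex tag) is never
    # downloaded, so the noindex directive can't take effect.
    print("Blocked by robots.txt - noindex on this page will never be seen")
else:
    # Only when crawling is allowed does the crawler read the HTML and
    # discover a <meta name="robots" content="noindex"> tag.
    html = request.urlopen(url).read().decode("utf-8", errors="ignore")
    has_noindex = 'name="robots"' in html and "noindex" in html
    print("Crawled; noindex present:", has_noindex)
```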
Again, details here matter, so I can’t give direct answers to exactly how to proceed in your situation.
Lastly, re: your q to @Tony McCreath, 5xx errors indicate a problem with the server/website, so search engines will start to limit/throttle crawling if they see them consistently over a period of time. That applies to the website in general if they’re seeing consistent 5xx errors over time.
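If you want to sanity-check whether you’re in that “consistent 5xx over time” territory, a quick pass over your server access logs is usually enough. A minimal sketch, assuming a standard combined log format where the status code is the 9th field and the log path is a placeholder (adjust both for your setup):

```python
# Count 5xx responses per day from an access log to see whether server
# errors are an occasional blip or a consistent pattern crawlers would
# react to. Assumes combined log format, e.g.:
# 1.2.3.4 - - [10/Oct/2024:13:55:36 +0000] "GET /page HTTP/1.1" 503 512 ...
from collections import Counter

errors_per_day = Counter()
requests_per_day = Counter()

with open("access.log") as log:  # placeholder path
    for line in log:
        parts = line.split()
        if len(parts) < 9:
            continue
        day = parts[3].lstrip("[").split(":")[0]  # e.g. 10/Oct/2024
        status = parts[8]
        requests_per_day[day] += 1
        if status.startswith("5"):
            errors_per_day[day] += 1

for day in sorted(requests_per_day):
    total = requests_per_day[day]
    errs = errors_per_day[day]
    print(f"{day}: {errs}/{total} responses were 5xx ({errs / total:.1%})")
```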
Again, there’s nuance to this, but it can be taken as a general expected outcome :)