One of the most common questions I get is about the resources consumed by Search. Using Continuous Crawl, many crawl processes, frequent crawl schedules, big change sets in the content sources, and so on all drive the resources consumed by Search higher and higher. Beyond the well-known optimization and scale-out techniques, the question I hear very often is: how can the resources consumed by noderunner.exe be limited?

The first part of my answer: please plan your resources in production! Search is often seen as "something" that works "behind the scenes", but it can spring some very bad surprises… I'm sure you don't want to see your production farm consuming 99% of the available memory while crawling… So please, plan first. Here is some help: Scale search for performance and availability in SharePoint Server 2013.

But you might have some dev or demo environments, for sure, where you cannot have more than one crawl...
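
For such a dev or demo box, one commonly cited tweak is capping the memory of the search node processes via the memoryLimitMegabytes attribute in noderunner.exe.config. The sketch below is only an illustration of that kind of edit, not a recommendation for production: the file path and attribute name are assumptions based on a default SharePoint Server 2013 install, the 250 MB value is an arbitrary example, and editing this file is generally considered unsupported outside dev/demo scenarios.

```python
# Minimal sketch (dev/demo only): cap the memory used by SharePoint 2013
# search node processes by setting memoryLimitMegabytes in
# noderunner.exe.config. Path and element name are assumptions for a
# default install; back up the file and restart the Search Host Controller
# service afterwards.
import shutil
import xml.etree.ElementTree as ET

CONFIG_PATH = (r"C:\Program Files\Microsoft Office Servers\15.0"
               r"\Search\Runtime\1.0\noderunner.exe.config")
MEMORY_LIMIT_MB = "250"  # 0 means unlimited; pick a cap that suits the box


def cap_noderunner_memory(config_path: str, limit_mb: str) -> None:
    # Keep a backup copy before touching the config.
    shutil.copyfile(config_path, config_path + ".bak")
    tree = ET.parse(config_path)
    root = tree.getroot()
    # The memory limit is assumed to live on a <nodeRunnerSettings> element.
    node = root.find(".//nodeRunnerSettings")
    if node is None:
        raise RuntimeError("nodeRunnerSettings element not found")
    node.set("memoryLimitMegabytes", limit_mb)
    tree.write(config_path, encoding="utf-8", xml_declaration=True)


if __name__ == "__main__":
    cap_noderunner_memory(CONFIG_PATH, MEMORY_LIMIT_MB)
```

Setting the cap too low can make the node processes recycle constantly, so on a shared dev machine it is worth watching the Search Host Controller after the change.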