One way to accomplish this is by creating a performance baseline, which you can compare your configuration changes against to determine whether they had the desired effect. To create a baseline, start with a default configuration and run a representative job — one similar to the jobs you plan to run once the cluster is in production. After the job completes, check its history to see how long it took; repeat the run several times and average the durations so that a single outlier doesn't skew the baseline.
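As a minimal sketch of the averaging step, the durations below are hypothetical values you might pull from the job history (in seconds):

```python
# Hypothetical run times, in seconds, collected from several baseline runs
baseline_runs_sec = [342, 358, 351, 347]

# Average the runs to get a stable baseline
baseline_avg = sum(baseline_runs_sec) / len(baseline_runs_sec)
print(f"Baseline average: {baseline_avg:.1f} s")

# Later, compare a run made after a configuration change
new_run_sec = 318
improvement = (baseline_avg - new_run_sec) / baseline_avg * 100
print(f"Change vs. baseline: {improvement:+.1f}%")
```

A positive percentage here means the change shortened the run time relative to the baseline.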
When you launch a job, the system creates two files: a job configuration XML file and a job status file. The job status file is useful for tracking the effect of your configuration changes because it contains status information and runtime metrics. You can view job status data with the hadoop job command. Append the -list all option if you don't know the job's ID.
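As a sketch, assuming the classic MapReduce CLI (the hadoop job subcommand; newer releases offer mapred job with the same options), finding a job's ID and pulling its status might look like this. The job ID shown is a placeholder:

```shell
# List all jobs, including completed ones, to find the job ID
hadoop job -list all

# Show status, completion percentage, and counters for a specific job
hadoop job -status job_201901011200_0001
```

The counters reported by -status (such as elapsed time and bytes read/written) are the runtime metrics worth recording for your baseline comparison.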
To read the entire article, please click on https://searchdatacenter.techtarget.com/tip/Hadoop-cluster-configuration-best-practices-streamline-workflows