The "game playing", i.e. the behaviours that develop to meet the measures, can render the measurement completely pointless.
An example is an organisation that needs to collate attendance data and berates its staff for not doing so by close of play that day, when in reality the following morning wouldn't really be an issue. So what happens? The staff fill in the attendance in advance, just in case they forget to do it later (especially as they tire towards the end of the day), so they don't get "told off". This undermines the actual quality of the data.
The measurement goal seems to be achieved but, oh dear, the data is probably no good now: attendance is always flagged as present by default, and if someone forgets to overwrite that default with the correct information, a false record is presented.
A bit of patience, communication and empathy for working practices would have produced better data quality, and timeliness to boot. A bit of process evaluation might have helped in designing more appropriate measures; but that requires some structural thinking!
Changing the measure to allow 48 hours for the data to be entered would have been a sensible compromise, resulting in less game playing and, ultimately, timely data of good quality.
The person receiving the stats probably thinks his or her management has been excellent (tick: "am I not wonderful?") but in reality the statistics obscure the truth: the behaviours have bucked the system because the consequences of the measures haven't been thought through properly!