Authors such as Adams, Khan, Hafiz and Raeside (1) suggest methods of data collection depending on the situation, warning of possible threats to the validity and reliability of the data collected. Whatever method of data collection is chosen (observation, experimentation, survey, interview, diary method, case study, data storage, triangulation), there are several assumptions that need to be considered from the beginning (1): the challenges arising from the nature of the research and from the level of detail the researcher wants to reach, and then from the time and budget available, so careful consideration and planning of data collection is required. There are some common principles, for example: eliminate human error as far as possible, analyse all useful data rather than only the data that seem to fit the theory, and run multiple tests to check for possible errors.

Collecting data is crucial in many different fields of business interest, e.g. from competitor evaluation to building a model for the estimation of pipe prices before meeting the supplier for the final negotiation. For example, a first strategy adopted by the bid and proposal department for evaluating the piping price impact is to take the raw-material steel price and add a certain percentage that accounts for the total cost of ownership. A second strategy considers the different elements that compose the final price, starting from the source data rather than estimating a percentage only. This is one of the key elements: Bebell, O’Dwyer, Russel and Hoffmann (2) studied the importance of technology in recent years in helping researchers evaluate and verify data availability and validity, for example by triangulating the same data. In any case, quantitative methods alone do not contextualise the situation, considering for example the market situation, the human ability to build business relationships, …
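The two estimation strategies described above can be sketched as follows. This is a minimal illustration only: the markup percentage, cost components, and figures are hypothetical assumptions, not values taken from the text.

```python
def price_markup(raw_material_cost: float, tco_markup: float = 0.30) -> float:
    """Strategy 1: raw-material steel price plus a percentage
    covering total cost of ownership (the 30% markup is illustrative)."""
    return raw_material_cost * (1 + tco_markup)


def price_bottom_up(components: dict) -> float:
    """Strategy 2: sum the individual cost elements that compose
    the final price (component names below are illustrative)."""
    return sum(components.values())


# Hypothetical figures, e.g. cost per metre of pipe
estimate_1 = price_markup(100.0)
estimate_2 = price_bottom_up({
    "raw material": 100.0,
    "fabrication": 18.0,
    "coating": 7.0,
    "logistics": 5.0,
})
print(estimate_1, estimate_2)
```

The second strategy requires more source data but makes each cost driver visible, which is precisely why identifying data sources early matters.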
3.1 Source of data
The world is full of data and opinions; the advent of technology and the internet allows users all over the world, for those who have access, to reach the web, a source of millions of articles, opinions, papers, studies, … According to Bebell, O’Dwyer, Russel and Hoffmann (2), who examined the use of laptops and the internet by learners and scholars, in both cases about 50% or more used technology to carry out initial research and to deliver instruction.
The central IT organization in a statistical agency has a very important role in Web-based data collection, since a data collection system has two very broad components: an electronic questionnaire, and everything else associated with moving that electronic questionnaire to and from a respondent, including systems and security considerations (3).
Since the best results are obtained when the questionnaire, interview, survey, … is focused as closely as possible on the topic of the research and on participants who know the topic well, the source(s) of data have to be identified from the beginning, ideally during the data collection planning stage. In doing so, the researcher optimises his or her time, avoiding having to source data each time it is needed. The researcher also has to avoid ambiguity and misunderstanding in the questions, which lead to invalid responses. Otherwise, the questionnaires received, even if duly filled in, may not be very useful because they do not meet the requirements, and the target of the research cannot be reached.
In fact, the response rate can turn out to be too low to be acceptable, and people may decide not to respond because they do not understand the questions. An initial investment of time in planning the work avoids creating questionnaires that are inefficient for the researcher. When we face a questionnaire and do not know what it is talking about, the first reaction is to leave it blank or to give confused answers.
For these reasons, random sampling techniques and stratified random sampling techniques, integrated with a pre-test, are crucial in order to avoid bias, the big enemy...
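The stratified approach mentioned above can be sketched as follows: the population of potential respondents is split into strata, and a simple random sample is drawn from each one. The stratum names and sample sizes here are hypothetical assumptions for illustration.

```python
import random


def stratified_sample(population, key, n_per_stratum, seed=42):
    """Draw a simple random sample of up to n_per_stratum items from
    each stratum, where key(item) identifies the stratum of an item."""
    rng = random.Random(seed)  # fixed seed for a reproducible example
    strata = {}
    for item in population:
        strata.setdefault(key(item), []).append(item)
    sample = []
    for members in strata.values():
        k = min(n_per_stratum, len(members))
        sample.extend(rng.sample(members, k))
    return sample


# Hypothetical respondents stratified by department
respondents = [
    {"id": 1, "dept": "bid"}, {"id": 2, "dept": "bid"},
    {"id": 3, "dept": "engineering"}, {"id": 4, "dept": "engineering"},
    {"id": 5, "dept": "procurement"},
]
picked = stratified_sample(respondents, key=lambda r: r["dept"],
                           n_per_stratum=1)
print([r["dept"] for r in picked])
```

Drawing from each stratum guarantees that no relevant subgroup is left out of the sample, which is the kind of unfairness pure random sampling can produce by chance in small samples.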