Q1. (a) ‘Statistics is the backbone of decision-making’. Comment.

Ans. (a) Owing to advanced communication networks, rapid changes in consumer behaviour, the varied expectations of a wide range of consumers and new market openings, modern managers face the difficult task of making quick and appropriate decisions. They therefore need to rely more on quantitative techniques such as mathematical models, statistics, operations research and econometrics.

Decision making is a key part of our day-to-day life. Even when we wish to purchase a television, we like to know the price, quality, durability and maintainability of various brands and models before buying one. In this scenario we are collecting data and making an optimum decision; in other words, we are using Statistics. Similarly, if a company wishes to introduce a new product, it has to collect data on market potential, consumer preferences, availability of raw materials and the feasibility of producing the product. Hence, data collection is the backbone of any decision-making process.

Many organizations find themselves data-rich but poor at drawing information from that data. It is therefore important to develop the ability to extract meaningful information from raw data in order to make better decisions, and Statistics plays an important role in this.

Statistics is broadly divided into two main categories, illustrated in the figure below: descriptive statistics and inferential statistics.
• Descriptive Statistics: Descriptive statistics is used to present a general description of data summarized quantitatively. This is especially useful in clinical research when communicating the results of experiments.
• Inferential Statistics: Inferential statistics is used to draw valid inferences from data, which helps managers and professionals make effective decisions.
Statistical methods such as estimation, prediction and hypothesis testing belong to inferential statistics. From the collected sample data, researchers draw conclusions about the characteristics of the larger population from which the samples were taken. So we can say that ‘Statistics is the backbone of decision-making’.
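The two branches can be shown side by side in a short sketch. The demand figures below are made up for illustration, and the 95% interval uses the normal approximation with z = 1.96 (an assumption, not part of the original answer):

```python
import statistics
import math

# Hypothetical sample: monthly demand for a product in 12 stores
# (illustrative numbers, not real data).
demand = [42, 55, 48, 51, 60, 47, 53, 49, 58, 45, 52, 50]

# Descriptive statistics: summarise the sample itself.
mean = statistics.mean(demand)
stdev = statistics.stdev(demand)          # sample standard deviation

# Inferential statistics: estimate the *population* mean demand with a
# rough 95% confidence interval (normal approximation, z = 1.96).
n = len(demand)
margin = 1.96 * stdev / math.sqrt(n)
ci = (mean - margin, mean + margin)
```

The first two lines of computation merely describe the sample; the interval goes further and makes a claim about the whole population, which is what distinguishes inference from description.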

Q1.(b) ‘Statistics is as good as the user’. Comment.
Ans.

Statistics is used for various purposes. It is used to simplify mass data and to make comparisons easier. It is also used to bring out trends and tendencies in the data, as well as the hidden relations between variables. All of this makes decision making much easier. Let us look at each function of Statistics in detail.

1. Statistics simplifies mass data

The use of statistical concepts helps in the simplification of complex data. Using statistical concepts, managers can make decisions more easily. Statistical methods help reduce the complexity of data and, consequently, aid in understanding any huge mass of data.

2. Statistics makes comparison easier

Without statistical methods and concepts, data cannot be collected and compared easily. Statistics helps us compare data collected from different sources. Grand totals, measures of central tendency, measures of dispersion, graphs and diagrams, and coefficients of correlation all provide ample scope for comparison.

3. Statistics brings out trends and tendencies in the data

After data is collected, it is easy to analyse the trends and tendencies in the data using the various concepts of Statistics.

4. Statistics brings out the hidden relations between variables

Statistical analysis helps in drawing inferences from data and brings out the hidden relations between variables.

5. Decision making becomes easier

With the proper application of Statistics and statistical software packages to the collected data, managers can take effective decisions, which can increase the profits of a business. Seeing all these functions, we can say ‘Statistics is as...

...of 1000 flights and the proportions of the three routes in the sample. He divides the flights into sub-groups such as satisfaction, refreshments and departure time, and then selects proportionally to highlight specific subgroups within the population. Mr Kwok used this sampling method because it may reduce the cost per observation in the survey and it also increases the accuracy attainable at a given cost.
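Proportional (stratified) selection of the kind described can be sketched in a few lines. The flight IDs and the population of 10,000 flights below are hypothetical; only the 500/300/200 allocation out of 1000 mirrors the sample sizes reported in Table 1:

```python
import random

random.seed(42)  # reproducible illustration

# Hypothetical sampling frame: flight IDs grouped by route, with
# population shares matching the 5:3:2 split implied by Table 1.
frame = {
    "Route 1": [f"R1-{i}" for i in range(5000)],
    "Route 2": [f"R2-{i}" for i in range(3000)],
    "Route 3": [f"R3-{i}" for i in range(2000)],
}
total = sum(len(flights) for flights in frame.values())  # 10000
sample_size = 1000

# Proportional allocation: each stratum (route) contributes flights in
# proportion to its share of the population, drawn at random without
# replacement within the stratum.
sample = {
    route: random.sample(flights, round(sample_size * len(flights) / total))
    for route, flights in frame.items()
}
```

With this allocation the 1000 sampled flights split 500/300/200 across the routes, exactly the N values in the table that follows.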
TABLE 1: Data Summaries of Three Routes

                        Route 1                   Route 2                    Route 3
Fitted distribution     Normal(88.532, 5.07943)   Normal(97.1033, 5.04488)   Normal(107.15, 5.15367)

Summary Statistics
Mean                    88.532                    97.103333                  107.15
Std Dev                 5.0794269                 5.0448811                  5.1536687
Std Err Mean            0.2271589                 0.2912663                  0.3644194
Upper 95% Mean          88.978306                 97.676525                  107.86862
Lower 95% Mean          88.085694                 96.530142                  106.43138
N                       500                       300                        200
Sum                     44266                     29131                      21430
From the table above, the total number of passengers for route 1 is 44,266, for route 2 it is 29,131 and for route 3 it is 21,430; the total number of passengers across the three routes is 94,827.
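As a quick sanity check, the Mean, N and Sum rows of Table 1 are mutually consistent (Sum = Mean × N), and the route sums do add up to the grand total quoted. A short illustrative script, with the figures copied from the table:

```python
# Consistency check on Table 1: for each route, Sum should equal
# Mean x N (all figures copied from the table above).
routes = {
    "Route 1": {"mean": 88.532,    "n": 500, "sum": 44266},
    "Route 2": {"mean": 97.103333, "n": 300, "sum": 29131},
    "Route 3": {"mean": 107.15,    "n": 200, "sum": 21430},
}

for stats in routes.values():
    # round() absorbs the rounding of the reported means
    assert round(stats["mean"] * stats["n"]) == stats["sum"]

grand_total = sum(s["sum"] for s in routes.values())
```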
Although route 1 has the highest number of passengers and flights, it has the lowest mean of the three routes. From...

...Statistics for Business and Economics
Personal pre-assignment
1.9
What is a representative sample? What is its value?
A representative sample is a subset of a population of interest that exhibits the typical characteristics of that population. The most common way to meet this criterion is the simple random sample, which consists of units selected randomly, i.e. the sample is selected from the population in such a way that every possible sample of that size has an equal chance of being selected.
The value of a representative sample is that it lets us observe and analyse populations in a more economical and less time-consuming way, because observations made on the basis of the representative sample can reliably depict the characteristics of the population we are interested in, without the need to observe every unit in the population. That is why a representative sample is the best way to analyse relatively large populations.
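The simple random sample just described is one call in Python's standard library; a minimal sketch with a hypothetical population of customer IDs:

```python
import random

random.seed(1)  # reproducible illustration

# Hypothetical population of 10,000 customer IDs.
population = list(range(10000))

# random.sample draws without replacement; every subset of size 100
# has the same chance of being chosen, which is exactly the
# equal-chance property that makes a simple random sample
# representative.
sample = random.sample(population, 100)
```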
1.16
The experimental unit is the quarterback (QB) of the National Football League. In this case we have a sample of 331 QBs drafted between 1970 and 2007.
The types of variables in this sample are as follows:
* Draft pick is quantitative data (here I would like to clarify that I do not know the rules of the NFL; thinking logically, I came to the conclusion that this draft is most probably composed according...

...Change -
1. Explain the scales of measurement in detail, giving examples:
Data has been classified into four scales of measurement so that it can be easily interpreted universally.
The scale is chosen depending on the information that the data is intending to represent.
The four scales of measurement of data are nominal, ordinal, interval, and ratio.
Each plays a different, yet very important, role in the world of statistics.
a) Nominal scale
- It is the lowest level in the scales of measurement.
- It is a way of grouping observations in which any numbers used are simply labels or identifiers.
- The categories do not put subjects in any particular order: there is no logical basis for ranking the answers in each category.
Example: asking individuals in a room about their marital status:

  Status:  Married  Single  Divorced  Other
  No.:     20       15      25        2

There is no basis to state that ‘Divorced’ ranks higher or lower than the other categories.
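For nominal data like this, counting per category is the only meaningful numeric operation; a small sketch, with the category counts taken from the example above:

```python
from collections import Counter

# Nominal responses (from the marital-status example): the labels
# identify categories but carry no order or magnitude.
responses = (["Married"] * 20 + ["Single"] * 15
             + ["Divorced"] * 25 + ["Other"] * 2)

counts = Counter(responses)
# Counting per category is valid; a comparison like
# "Married > Single" has no numeric meaning for nominal data.
```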
b) Ordinal Scale
- In this scale, there is order in the form of ranking.
- There is no way of knowing the size of the differences between data points, only that one is higher/greater than another.
Example: carrying out research in a given locality on socio-economic class:

  Class:  Low income earners  Middle income earners  High income earners  Other
  No.:    30                  15                     5                    0

Not only do we not know the sizes of the differences between classes, we do not know anyone's actual income.
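The order-but-not-distance property of ordinal data can be made concrete with a hypothetical numeric coding of the classes above:

```python
# Hypothetical ordinal coding of socio-economic class: the codes
# preserve ranking but say nothing about the size of the gaps.
income_class = {"Low": 1, "Middle": 2, "High": 3}

# Valid: ordering comparisons.
higher = income_class["High"] > income_class["Low"]

# Not meaningful: differences. (3 - 1 = 2) does NOT mean the income
# gap between High and Low is twice the gap between Middle and Low;
# the codes only rank the classes.
```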
c) Interval Scale
- Interval scales keep the same ranking as ordinal scales but also indicate the size of the differences between data points.
-The...

...techniques.
Firstly we look at data analysis. This approach starts with data that are manipulated or processed
into information that is valuable to decision making. The processing and manipulation of raw
data into meaningful information are the heart of data analysis. Data analysis includes data
description, data inference, the search for relationships in data and dealing with uncertainty
which in turn includes measuring uncertainty and modelling uncertainty explicitly.
In addition to data analysis, other decision making techniques are discussed. These techniques
include decision analysis, project scheduling and network models.
Chapter 1 illustrates a number of ways to summarise the information in data sets, also known as
descriptive statistics. It includes graphical and tabular summaries, as well as summary measures
such as means, medians and standard deviations.
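Summary measures of the kind Chapter 1 describes can be computed directly with Python's standard statistics module; a minimal sketch with made-up daily sales figures:

```python
import statistics

# Hypothetical data set: daily sales figures (illustrative only).
sales = [12, 15, 11, 19, 14, 15, 13, 18, 16, 15]

# The three summary measures named in the text: mean, median and
# (sample) standard deviation.
summary = {
    "mean": statistics.mean(sales),
    "median": statistics.median(sales),
    "stdev": statistics.stdev(sales),
}
```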
Uncertainty is a key aspect of most business problems. To deal with uncertainty, we need a basic
understanding of probability. Chapter 2 covers basic rules of probability and in Chapter 3 we
discuss the important concept of probability distributions in some generality.
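The basic probability rules that Chapter 2 covers can be illustrated on a fair six-sided die (my example, not the book's); Fraction keeps the probabilities exact:

```python
from fractions import Fraction

# Sample space of a fair six-sided die and two events.
outcomes = {1, 2, 3, 4, 5, 6}
even = {2, 4, 6}
high = {4, 5, 6}   # "greater than 3"

def p(event):
    # Classical probability: favourable outcomes / total outcomes.
    return Fraction(len(event), len(outcomes))

# Addition rule: P(A or B) = P(A) + P(B) - P(A and B)
p_union = p(even) + p(high) - p(even & high)
assert p_union == p(even | high)   # matches direct counting

# Complement rule: P(not A) = 1 - P(A)
p_not_even = 1 - p(even)
```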
In Chapter 4 we discuss statistical inference (estimation), where the basic problem is to estimate
one or more characteristics of a population. Since it is too expensive to obtain the population
information, we instead select a sample from the population and then use the information in the
sample to infer the...

...complements regular mathematics, and therefore both are tested in primary schools. Mathematics is the written application of operations. It teaches students to think clearly, reason well and strategize effectively. Mental mathematics is the ability to use mathematical skills to solve problems mentally. The marks scored by pupils generate statistics, which teachers use to analyse a student's performance and to develop theories explaining differences in performance.
The Standard 3 class is where the transition from junior to senior level occurs, and where teachers expect the shift from concrete to abstract thinking to have taken place.
A common theory among many primary school teachers is: ‘Students perform better in Mathematics than in mental math. Mental math is something that has to be developed and involves critical thinking. It requires quick thinking, and the student must solve the problem mentally, whereas in regular mathematics the problem can be solved visually. Therefore, teachers should take these factors into consideration when testing and marking students in these areas.’
In this study, the marks of 30 students of a Standard 3 class of San Fernando Boys’ Government School will be analysed to determine whether this theory holds.
DATA COLLECTION METHODS
Mathematics and mental mathematics marks of term 1 of the class of 2013 were obtained from a Standard 3 teacher of San Fernando Boys’...

...
LANGLEY HIGH SCHOOL
2013 AP STATISTICS SUMMER ASSIGNMENT
Welcome to AP Statistics!
You have selected a course unlike any other math course. The purpose of this Summer Assignment is to:
1. Give you information on what to expect, and how this course is different from other math courses.
2. Refresh your knowledge on statistics topics that you should know prior to this course.
3. Give you a chance to demonstrate your ability to analyze data and write conclusions.
The Assignment is divided into 8 parts. We start with an interesting study in “counterintuitive” statistics called
Simpson’s Paradox. The next 6 parts provide a refresher on topics of graphing and describing data that you
should already know, with some practice. The last part gives you a chance to see how we can identify and deal
with “problems” we may encounter as we try to collect data and make conclusions from data.
1. Simpson’s Paradox – A special situation where you may draw two very different conclusions from the
same set of data.
2. Quantitative & Categorical Data – Two different types of data.
3. Categorical Graphs – Bar graphs & pie charts.
4. “Center” & “Spread” for Quantitative Data – Mean, median, standard deviation, etc.
5. Quantitative Graphs – DotPlots, StemPlots, Histograms, BoxPlots, Ogives.
6. Describing Quantitative Graphs – Shape, center, spread, symmetric vs. skewed, etc.
7....

...and 1.2 was relatively big, around 17%.
For stock 3, the median was 0.0108 and the mean was 0.028598; the loss on the investment can be as large as -0.577, and the gain can be as large as 0.4816. The risk of a loss was only about 2%, and the chance of a gain between 0 and 0.8 was almost 96%.
Of these three stocks, stock 3 had the lowest risk of loss, while stock 2 could lose big and gain big. Stocks 1 and 3 had a narrow range of returns, while the distribution of returns for stock 2 is a little flatter than the other two.
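The kind of risk comparison made above rests on simple summary measures; a sketch with made-up monthly returns (the actual assignment data are not reproduced here):

```python
import statistics

# Hypothetical monthly returns for one stock (made-up values, for
# illustration only; not the assignment's data).
returns = [0.0607, -0.031, 0.120, 0.045, -0.5085, 0.210, 0.088, 0.019]

desc = {
    "mean": statistics.mean(returns),
    "median": statistics.median(returns),
    "stdev": statistics.stdev(returns),
    "min": min(returns),
    "max": max(returns),
}
# A large spread (stdev, and the min-to-max range) signals higher
# risk; a small spread signals a narrow range of returns. This is
# the basis on which stock 2 is judged riskier than stocks 1 and 3.
```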
Justify with descriptive statistics and the histogram tools:
First, use descriptive statistics to summarise the key features of stock 1, stock 2 and stock 3 respectively.
All the available output options were chosen, and labels in the first row were selected, as shown in Appendix Figure 4.1 (descriptive statistics).
The output is as follows:
Figure 1.1: Descriptive Statistics for Stocks 1, 2 and 3

                      Stock 1     Stock 2     Stock 3
Mean                  0.028631    0.096413    0.028598
Standard Error        0.030729    0.049218    0.022666
Median                0.0607      0.0511      0.0108
Mode                  -0.5085     -0.8652     -0.3641
Standard Deviation    0.305747    0.489717    0.22552
Sample Variance       0.093481    0.239822    0.050859
Kurtosis              -1.02456...

Organization of Terms

Experimental Design
  Descriptive
  Inferential
  Population: Parameter
  Sample: Random, Bias, Statistic

Types of
  Variables: Qualitative, Quantitative, Independent, Dependent
  Measurement scales: Nominal, Ordinal, Interval, Ratio
  Graphs: Bar Graph, Histogram, Box plot, Scatterplot

Measures of
  Center: Mean, Median, Mode
  Spread: Range, Variance, Standard deviation
  Shape: Skewness, Kurtosis

Tests of
  Association: Correlation, Regression (Slope, y-intercept)
  Inference: Central Limit Theorem, Chi-Square, t-test (Independent samples, Correlated samples), Analysis-of-Variance
Glossary of Terms
Statistics - a set of concepts, rules, and procedures that help us to:
organize numerical information in the form of tables, graphs, and charts;
understand statistical techniques underlying decisions that affect our lives and well-being; and
make informed decisions.
Data - facts, observations, and information that come from investigations.
Measurement data (sometimes called quantitative data) - the result of using some instrument to measure something (e.g., test score, weight).
Categorical data (also referred to as frequency or qualitative data) - things are grouped according to some common property(ies) and the number of members of each group is recorded (e.g., males/females, vehicle type).
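The measurement/categorical distinction can be made concrete in code; a small sketch with hypothetical records (field names are my own, for illustration):

```python
# Hypothetical records mixing measurement and categorical data.
records = [
    {"weight_kg": 70.5, "vehicle": "car",   "sex": "M"},
    {"weight_kg": 61.2, "vehicle": "truck", "sex": "F"},
    {"weight_kg": 82.0, "vehicle": "car",   "sex": "M"},
]

# Measurement (quantitative) data supports arithmetic:
mean_weight = sum(r["weight_kg"] for r in records) / len(records)

# Categorical (qualitative) data supports only counting per group:
car_count = sum(1 for r in records if r["vehicle"] == "car")
```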
Variable - property of an object or event that can take on different values. For example, college major...